The dataset schema (string length statistics per field):

| field | type | min length | max length |
| --- | --- | --- | --- |
| id | string | 10 | 10 |
| title | string | 7 | 231 |
| abstract | string | 3 | 2.43k |
| authors | string | 5 | 21.5k |
| published_date | string | 20 | 20 |
| link | string | 33 | 34 |
| markdown | string | 133 | 1.92M |
2305.02370
Exceptional points in single open acoustic resonator due to the symmetry breaking
Exceptional points (EPs) have been widely studied in quantum mechanics, condensed matter physics, optics and photonics. However, their potential in acoustics has only recently been recognized due to the rapid development of acoustic metamaterials. This paper proposes a method for achieving EP conditions in acoustic resonators by lowering their symmetry and enabling resonant mode interaction. The formation of EPs is predicted through direct numerical simulation supported by coupled mode theory and resonant state expansion. These findings have significant implications for the design and optimization of acoustic metamaterials for applications such as acoustic sensing and noise reduction.
Vladimir Igoshin, Mariia Tsimokha, Anastasia Nikitina, Mihail Petrov, Ivan Toftul, Kristina Frizyuk
2023-05-03T18:15:20Z
http://arxiv.org/abs/2305.02370v1
# Exceptional points in single open acoustic resonator due to the symmetry breaking

###### Abstract

Exceptional points (EPs) have been widely studied in quantum mechanics, condensed matter physics, optics and photonics. However, their potential in acoustics has only recently been recognized due to the rapid development of acoustic metamaterials. This paper proposes a method for achieving EP conditions in acoustic resonators by lowering their symmetry and enabling resonant mode interaction. The formation of EPs is predicted through direct numerical simulation supported by coupled mode theory and resonant state expansion. These findings have significant implications for the design and optimization of acoustic metamaterials for applications such as acoustic sensing and noise reduction.

## I Introduction

Acoustic metamaterials are a promising class of materials that offer unique capabilities for tailoring the properties of sound waves [1; 2; 3] as well as for mechanical manipulation [4; 5; 6]. While resonances play a central role in acoustic metamaterials, many physical effects and mechanisms remain hidden. EPs are the points in the parameter space where the eigenvalues of the system become degenerate and the eigenvectors coalesce, leading to a non-diagonalizable Jordan block formation [7; 8; 9; 10]. The spectral singularities related to EPs are highly sensitive to the system parameters, which makes them promising for sensing applications [11; 12]. Despite the progress in this area, the problem of EP appearance in single acoustic resonators still requires thorough study, and it is addressed in this paper. EPs occur only in non-Hermitian systems [10], and they are often mentioned in the context of PT-symmetric systems [13; 14; 8; 15] observed in electronics [16], optics and photonics [11; 13; 15], and recently in acoustics [17; 18]. However, reaching PT-symmetry requires particular engineering of gain and loss in acoustical systems. Alternatively, EPs can be observed in open resonators as a special class of non-Hermitian systems, which has been extensively studied in optics and photonics [15; 19; 20]. One of the possible mechanisms of EP formation is based on _breaking the symmetry of the resonator_ [21]. While this is not the only approach, we leave the other methods beyond the scope of the current work and refer the reader to Refs. [8; 22]. However, despite extensive research on EPs in optics, there has been little work on EPs in the acoustics domain. In this work, we show that by engineering the shape of a resonator, and through the related symmetry breaking, one can enable mode coupling mechanisms leading to the formation of an EP condition, as schematically shown in Fig. 1: in a system with perturbed symmetry, two initially non-interacting modes transform into two different modes through a degenerate state. This transitional state appears to be an EP state. We improve and adapt the powerful method of multipolar analysis [23; 24; 25] to _(i)_ predict the occurrence of EPs as a result of a particular symmetry breaking and _(ii)_ gain a deeper understanding of the mode interaction within a coupled mode theory and the resonant state expansion (RSE) method. Exploring the physics behind EP formation in acoustic resonators may unlock novel methods for the design and optimization of acoustic metamaterials for a wide range of applications such as sound focusing [26], optomechanics [27], sensing [28; 29; 30; 31], noise insulation [32], and seismic cloaking [33; 34].
By addressing the important questions surrounding the physics of acoustic metamaterials, we may open the door to new and exciting opportunities in acoustics and materials science. This work is organised as follows. In Section II we build a simple model based on the linear acoustic equations and discuss a simple mechanism of the EP appearance.

Figure 1: The main idea of this work. A symmetry breaking perturbation merges two groups of modes, induced by symmetry considerations, into a single group; this is shown at the bottom of the figure. By changing the amplitude of the symmetry breaking perturbation we can tune the coupling strength. As shown at the top of the figure, this allows us to observe the transition from a crossing of energy terms to an avoided crossing. This transition can be accompanied by an EP characterized by the coalescence of eigenspaces.
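The crossing-to-avoided-crossing transition described above can be illustrated with a toy coupled mode calculation. The following is a minimal Python sketch, not the paper's resonator model: the two-mode non-Hermitian Hamiltonian, the resonance frequency `w0`, and the loss rates `g1`, `g2` are assumed for illustration, with the coupling `k` standing in for the symmetry-breaking amplitude.

```python
import numpy as np

# Toy two-mode coupled-mode Hamiltonian (illustrative values, not the paper's
# resonator model): equal resonance frequency w0, unequal losses g1 != g2,
# coupling k standing in for the symmetry-breaking amplitude.
w0, g1, g2 = 1.0, 0.02, 0.10
k_ep = abs(g1 - g2) / 2          # coupling at which the two eigenvalues coalesce

for k in (0.5 * k_ep, k_ep, 2.0 * k_ep):
    H = np.array([[w0 - 1j * g1, k],
                  [k, w0 - 1j * g2]])
    ev = np.linalg.eigvals(H)
    print(f"k/k_EP = {k / k_ep:.1f}: |eigenvalue splitting| = {abs(ev[0] - ev[1]):.4f}")
```

Below `k_ep` the splitting of the two complex eigenvalues is purely imaginary (the real parts of the eigenfrequencies cross); above it the splitting is real (avoided crossing); at `k_ep` both eigenvalues and eigenvectors coalesce, which is precisely the EP scenario sketched in Fig. 1.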
2306.14042
Existence Criteria for Lipschitz Selections of Set-Valued Mappings in ${\bf R}^2$
Let $F$ be a set-valued mapping which to each point $x$ of a metric space $({\mathcal M},\rho)$ assigns a convex closed set $F(x)\subset{\bf R}^2$. We present several constructive criteria for the existence of a Lipschitz selection of $F$, i.e., a Lipschitz mapping $f:{\mathcal M}\to{\bf R}^2$ such that $f(x)\in F(x)$ for every $x\in{\mathcal M}$. The geometric methods we develop to prove these criteria provide efficient algorithms for constructing nearly optimal Lipschitz selections and computing the order of magnitude of their Lipschitz seminorms.
Pavel Shvartsman
2023-06-24T19:34:32Z
http://arxiv.org/abs/2306.14042v1
###### Abstract

Let \(F\) be a set-valued mapping which to each point \(x\) of a metric space \((\mathcal{M},\rho)\) assigns a convex closed set \(F(x)\subset\mathbf{R}^{2}\). We present several constructive criteria for the existence of a Lipschitz selection of \(F\), i.e., a Lipschitz mapping \(f:\mathcal{M}\to\mathbf{R}^{2}\) such that \(f(x)\in F(x)\) for every \(x\in\mathcal{M}\). The geometric methods we develop to prove these criteria provide efficient algorithms for constructing nearly optimal Lipschitz selections and computing the order of magnitude of their Lipschitz seminorms.

**Existence Criteria for Lipschitz Selections of Set-Valued Mappings in \(\mathbf{R}^{2}\)**

By Pavel Shvartsman

_Department of Mathematics, Technion - Israel Institute of Technology, 32000 Haifa, Israel_

_e-mail: [email protected]_

###### Contents

* 1 Introduction.
* 2 Notation and preliminaries.
* 2.1 Background notation.
* 2.2 Rectangles and rectangular hulls.
* 2.3 Rectangles: intersections, neighborhoods and selections.
* 3 The key theorem: Lipschitz selections and rectangular hulls.
* 4 Proof of the key theorem: the final step.
* 5 Lipschitz selection criteria in the two dimensional case.
* 5.1 Constructive criteria for Lipschitz selections: proofs.
* 5.2 Criteria for Lipschitz selections in terms of intersections of sets.
* 6 Projection Algorithm for nearly optimal Lipschitz selections.
* 6.1 The \(\vec{\lambda}\)-Projection Algorithm.
* 6.2 Projection Algorithms and a solution to the second main problem.
* 6.3 The constant \(\Lambda_{\mathcal{R}}(F)\) and other related constants.
* 6.4 Lipschitz selections of polygon-set valued mappings.
* 7 Lipschitz selections and iterations of balanced refinements.
* 7.1 The Stabilization Principle for balanced refinements of set-valued mappings.
* 7.2 The Iterative Algorithm for set-valued mappings.
* References

**1. Introduction.**

Let \(\mathfrak{M}=({\cal M},\rho)\) be a _pseudometric space_, i.e., suppose that the "distance function" \(\rho:{\cal M}\times{\cal M}\to[0,+\infty]\) satisfies \[\rho(x,x)=0,\ \rho(x,y)=\rho(y,x),\ \ \mbox{and}\ \ \rho(x,y)\leq\rho(x,z)+\rho(z,y)\] for all \(x,y,z\in{\cal M}\). Note that \(\rho(x,y)=0\) may hold with \(x\neq y\), and \(\rho(x,y)\) may be \(+\infty\).

By \(\mbox{Lip}({\cal M})\) we denote the space of all Lipschitz mappings from \({\cal M}\) into \({\bf R}^{2}\) equipped with the Lipschitz seminorm \[\|f\|_{\mbox{\scriptsize Lip}({\cal M})}=\inf\{\,\lambda\geq 0:\|f(x)-f(y)\|\leq\lambda\,\rho(x,y)\ \ \mbox{for all}\ \ x,y\in{\cal M}\}.\] Hereafter \(\|\cdot\|\) denotes the uniform norm in \({\bf R}^{2}\), i.e., \[\|x\|=\max\{|x_{1}|,|x_{2}|\}\ \ \ \mbox{for}\ \ \ x=(x_{1},x_{2})\in{\bf R}^{2}.\]

Let \(F\) be a set-valued mapping which to each element \(x\in{\cal M}\) assigns a non-empty convex closed set \(F(x)\subset{\bf R}^{2}\). A _selection_ of \(F\) is a map \(f:{\cal M}\to{\bf R}^{2}\) such that \(f(x)\in F(x)\) for all \(x\in{\cal M}\). A selection \(f\) is said to be Lipschitz if \(f\in\mbox{Lip}({\cal M})\). We introduce the quantity \(|F|_{\mathfrak{M}}\) by letting \[|F|_{\mathfrak{M}}=\inf\{\,\|f\|_{\mbox{\scriptsize Lip}({\cal M})}:f\ \ \mbox{is a Lipschitz selection of}\ \ F\} \tag{1.1}\] whenever \(F\) has a Lipschitz selection, and we set \(|F|_{\mathfrak{M}}=+\infty\) otherwise.

Let \(\mathfrak{T}\) be a family of convex closed subsets of \({\bf R}^{2}\).
In this paper, we study two main problems related to the calculation of the quantity \(|F|_{\mathfrak{M}}\) and the construction of a nearly optimal Lipschitz selection of \(F\), respectively. Here is the first of these problems.

**Problem 1.1**: _Find an explicit constructive formula for the order of magnitude of the quantity \(|F|_{\mathfrak{M}}\) where \(F:{\cal M}\to\mathfrak{T}\) is an arbitrary set-valued mapping._

More specifically, we would like to find an _explicit_ formula for computing the value of \(|F|_{\mathfrak{M}}\) (up to an absolute positive constant) which exploits only the pseudometric \(\rho\) and certain geometric characteristics of the sets \(F(x)\in\mathfrak{T}\), \(x\in{\cal M}\). By "order of magnitude" we mean the following: two numbers \(A,B\geq 0\) are said to have "the same order of magnitude" provided that \(cA\leq B\leq CA\), with absolute positive constants \(c\) and \(C\). To "compute the order of magnitude of \(A\)" is to compute a number \(B\) such that \(A\) and \(B\) have the same order of magnitude.

Let us formulate the second main problem.

**Problem 1.2**: _Find a constructive algorithm which, given a set-valued mapping \(F:{\cal M}\to\mathfrak{T}\), assigns to it a nearly optimal Lipschitz selection \(f\) of \(F\)._

Thus, we are looking for an efficient constructive algorithm which produces a selection \(f\) of \(F\) such that \(\|f\|_{\mbox{\scriptsize Lip}({\cal M})}\leq\gamma\,|F|_{\mathfrak{M}}\) where \(\gamma\geq 1\) is an absolute constant. We expect that this algorithm uses only the pseudometric \(\rho\) and the geometrical parameters which determine the mapping \(F\).

In this paper, the family \(\mathfrak{T}\) is one of the following families of convex sets:

(i) \(\mathfrak{T}=\operatorname{Conv}(\mathbf{R}^{2})\) where \[\operatorname{Conv}(\mathbf{R}^{2})=\{C\subset\mathbf{R}^{2}:C\ \ \text{is non-empty convex and closed}\}; \tag{1.2}\]

(ii) \(\mathfrak{T}=\mathcal{K}(\mathbf{R}^{2})\), where \[\mathcal{K}(\mathbf{R}^{2})=\{C\subset\mathbf{R}^{2}:C\ \ \text{is non-empty convex and compact}\}; \tag{1.3}\]

(iii) \(\mathfrak{T}=\mathcal{HP}(\mathbf{R}^{2})\) where \[\mathcal{HP}(\mathbf{R}^{2})=\{H\subset\mathbf{R}^{2}:H\ \ \text{is a closed half-plane}\}. \tag{1.4}\]

In Theorem 1.7 and Theorem 1.8 below, we present two solutions to Problem 1.1 by exhibiting two different explicit formulae for the order of magnitude of the quantity \(|F|_{\mathfrak{M}}\) where \(F\) is an arbitrary set-valued mapping from \(\mathcal{M}\) into the family \(\mathfrak{T}=\mathcal{K}(\mathbf{R}^{2})\). These formulae are expressed in terms of the diameters of the four-point subsets of \(\mathcal{M}\) and the angles between the supporting half-planes of the sets \(F(x)\), \(x\in\mathcal{M}\).

Problems 1.1 and 1.2 are special cases of the general Lipschitz selection problem, which studies the existence and properties of Lipschitz selections of set-valued mappings from (pseudo)metric spaces into various families of convex subsets of Banach spaces. The Lipschitz selection problem may be regarded as a search for a Lipschitz mapping that agrees approximately with data. There is an extensive literature devoted to different aspects of the Lipschitz and the related smooth selection problems. Among the multitude of results known so far we mention those in the papers and monographs [1, 3, 4, 12, 15, 17, 20, 21, 24, 25, 26, 28, 29, 30, 31]. We refer the reader to all of these works and the references therein for numerous results and techniques concerning this topic.
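When \(\mathcal{M}\) is finite and \(\mathfrak{T}=\mathcal{HP}(\mathbf{R}^{2})\), Problems 1.1 and 1.2 have an obvious brute-force counterpart: in the uniform norm, both the membership and the Lipschitz constraints are linear, so \(|F|_{\mathfrak{M}}\) is the value of a linear program. The sketch below (Python with NumPy/SciPy assumed; the function name and the encoding are my own illustration, not the Projection or Iterative Algorithm studied later in the paper) makes this explicit.

```python
import numpy as np
from scipy.optimize import linprog

def lipschitz_selection_lp(rho, normals, alphas):
    """|F|_M for F(x_k) = {a in R^2 : <a, n_k> + alpha_k <= 0} on a finite
    pseudometric space with distance matrix rho, via one linear program."""
    m = len(rho)
    nvar = 2 * m + 1                        # f(x_0), ..., f(x_{m-1}), then lam
    A, b = [], []
    for k in range(m):                      # membership: <f(x_k), n_k> <= -alpha_k
        row = np.zeros(nvar)
        row[2 * k], row[2 * k + 1] = normals[k]
        A.append(row); b.append(-alphas[k])
    for k in range(m):                      # |f_i(x_k) - f_i(x_l)| <= lam * rho[k][l]
        for l in range(k + 1, m):
            for i in (0, 1):
                for s in (1.0, -1.0):
                    row = np.zeros(nvar)
                    row[2 * k + i], row[2 * l + i] = s, -s
                    row[-1] = -rho[k][l]
                    A.append(row); b.append(0.0)
    c = np.zeros(nvar); c[-1] = 1.0         # minimize lam
    res = linprog(c, A_ub=np.vstack(A), b_ub=np.array(b),
                  bounds=[(None, None)] * (2 * m) + [(0.0, None)])
    return res.x[-1] if res.success else float("inf")
```

For instance, with two points at distance \(1\), opposite normals \((1,0)\) and \((-1,0)\), and \(\alpha\equiv 1\), the half-planes are \(\{a_{1}\leq-1\}\) and \(\{a_{1}\geq 1\}\), and the LP returns \(2\), in line with condition \((\bigstar 1)\) of Theorem 1.4 below. The LP has \(O(m^{2})\) constraints, so it only illustrates the problem statement; the point of the paper is to replace such generic machinery with explicit geometric formulas and near-optimal algorithms.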
The Lipschitz selection problem has attracted great interest in recent years, mainly due to its close connections with the classical _Whitney Extension Problem_ [35]: _Given a positive integer \(m\) and a function \(f\) defined on a closed subset of \(\mathbf{R}^{n}\), how can one tell whether \(f\) extends to a \(C^{m}\)-function on all of \(\mathbf{R}^{n}\)?_ Over the years (since 1934) this problem has attracted a lot of attention, and there is an extensive literature devoted to this problem and its analogues for various spaces of smooth functions. For a detailed account of the history of extension and restriction problems for \(m\)-smooth functions, and various references related to this topic, we refer the reader to [5, 6, 9, 10, 11, 13, 14, 19].

As an example, let us illustrate the connection between the Lipschitz selection problem and the Whitney problem for the space \(C^{2}(\mathbf{R}^{n})\). In [6, 27, 29] we show that the Whitney problem for the restrictions of \(C^{2}\)-functions to _finite_ subsets of \(\mathbf{R}^{n}\) can be reduced to a certain Lipschitz selection problem for _affine-set valued_ mappings. A solution to this special case of the Lipschitz selection problem given in [28, 29, 30] led us to an interesting property of the restrictions of \(C^{2}\)-functions, called the _Finiteness Principle_ by C. Fefferman [10] (for the general case of \(C^{m}\)-spaces). This principle enables us to reduce the Whitney problem for \(C^{2}(\mathbf{R}^{n})\)-restrictions to _arbitrary finite subsets_ of \(\mathbf{R}^{n}\) to a similar problem but for \(C^{2}(\mathbf{R}^{n})\)-restrictions to _finite sets consisting of at most \(k^{\#}=3\cdot 2^{n-1}\) points_. See [27]. In [9], C. Fefferman showed that this version of the Finiteness Principle holds for the space \(C^{m}(\mathbf{R}^{n})\) for arbitrary \(m,n\geq 1\) with a certain constant \(k^{\#}=k^{\#}(m,n)\) depending only on \(m\) and \(n\). Furthermore, in [29] we solved Problem 1.1 for the special case of the _line-set valued_ mappings in \(\mathbf{R}^{2}\), and showed how constructive geometrical criteria for Lipschitz selections of such mappings are transformed into purely analytical descriptions of the restrictions of \(C^{2}\)-functions to finite subsets of the plane.

There is also a _Finiteness Principle for Lipschitz selections_ proven in the recent joint work with C. Fefferman [17]. In the two dimensional case, this principle states the following (see [29]): _Let \(F:\mathcal{M}\to\mathcal{K}(\mathbf{R}^{2})\) be a set-valued mapping. If for every \(\mathcal{M}^{\prime}\subset\mathcal{M}\) consisting of at most four points, the restriction \(F|_{\mathcal{M}^{\prime}}\) of \(F\) to \(\mathcal{M}^{\prime}\) has a Lipschitz selection \(f_{\mathcal{M}^{\prime}}\) with \(\|f_{\mathcal{M}^{\prime}}\|_{\mathrm{Lip}(\mathcal{M}^{\prime})}\leq 1\), then \(F\) has a Lipschitz selection with Lipschitz seminorm at most \(\gamma\). Here, \(\gamma>0\) is an absolute constant._ This statement is equivalent to the following inequality: \[|F|_{\mathfrak{M}}\leq\gamma\,\sup\{\,|\,F|_{\mathcal{M}^{\prime}}\,|_{\mathfrak{M}^{\prime}}:\mathfrak{M}^{\prime}=(\mathcal{M}^{\prime},\rho),\ \mathcal{M}^{\prime}\subset\mathcal{M},\ \#\mathcal{M}^{\prime}\leq N\}\quad\text{with}\ \ \ N=4. \tag{1.5}\] (Clearly, the converse inequality trivially holds with \(\gamma=1\).)
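For a finite space, inequality (1.5) also suggests a direct, if expensive, computation of its right hand side: run the selection LP over every subset of at most four points and take the supremum. A small sketch, reusing the hypothetical `lipschitz_selection_lp` from the previous code block:

```python
from itertools import combinations

def finiteness_bound(rho, normals, alphas):
    """Brute-force right hand side of (1.5) (without gamma): the sup of |F|_{M'}
    over four-point subsets M'. Smaller subsets are dominated by monotonicity."""
    m = len(rho)
    best = 0.0
    for idx in combinations(range(m), min(4, m)):
        sub_rho = [[rho[i][j] for j in idx] for i in idx]
        best = max(best, lipschitz_selection_lp(
            sub_rho, [normals[i] for i in idx], [alphas[i] for i in idx]))
    return best
```

By (1.5), \(|F|_{\mathfrak{M}}\) then lies between this value and \(\gamma\) times it.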
It is shown in [17] that the Finiteness Principle holds for set-valued mappings taking values in the family \(\mathcal{K}(\mathbf{R}^{n})\) of all compact convex subsets of \(\mathbf{R}^{n}\) with \(N=2^{n}\) and a constant \(\gamma=\gamma(n)\) depending only on \(n\).

In particular, the Finiteness Principle enables us to reduce Problem 1.1 to the same one but for _finite_ pseudometric spaces \(\mathfrak{M}=(\mathcal{M},\rho)\) consisting of at most _four_ elements. Nevertheless, as we will see below, even for a four-element pseudometric space, the solution to Problem 1.1 (especially with good lower and upper bounds for the quantity \(|F|_{\mathfrak{M}}\)) remains a rather difficult problem.

Our main results, Theorems 1.7 and 1.8, are corollaries of Theorems 1.4 and 1.6 which provide two different solutions to Problem 1.1 for the case of set-valued mappings taking values in the family \(\mathfrak{T}=\mathcal{H}\mathcal{P}(\mathbf{R}^{2})\) of all closed half-planes in \(\mathbf{R}^{2}\). Let us prepare the ingredients that are needed to formulate these results.

Let \(\mathbf{S}_{1}\) be the unit circle in \(\mathbf{R}^{2}\), and let \[\mathbf{n}:\mathcal{M}\to\mathbf{S}_{1}\quad\text{and}\quad\alpha:\mathcal{M}\to\mathbf{R}\] be two mappings defined on \(\mathcal{M}\). These mappings determine a set-valued mapping \(F:\mathcal{M}\to\mathcal{H}\mathcal{P}(\mathbf{R}^{2})\) defined by \[F(x)=\{a\in\mathbf{R}^{2}:\langle a,\mathbf{n}(x)\rangle+\alpha(x)\leq 0\},\quad x\in\mathcal{M}. \tag{1.6}\] Here, given \(a=(a_{1},a_{2})\), \(\mathbf{n}(x)=(n_{1}(x),n_{2}(x))\in\mathbf{R}^{2}\), by \[\langle a,\mathbf{n}(x)\rangle=a_{1}n_{1}(x)+a_{2}n_{2}(x)\] we denote the standard inner product in \(\mathbf{R}^{2}\). Thus, for each \(x\in\mathcal{M}\), the set \(F(x)\) is a half-plane in \(\mathbf{R}^{2}\) whose boundary is the straight line \[\ell_{F}(x)=\{a\in\mathbf{R}^{2}:\langle a,\mathbf{n}(x)\rangle+\alpha(x)=0\}.\] The unit vector \(\mathbf{n}(x)\) is directed outside of the half-plane \(F(x)\) and is orthogonal to the line \(\ell_{F}(x)\).

**Definition 1.3**: Let \(F:\mathcal{M}\to\mathcal{H}\mathcal{P}(\mathbf{R}^{2})\) be a set-valued mapping defined by (1.6). We say that the half-planes \(\{F(x):x\in\mathcal{M}\}\) are in _general position_ if there exist elements \(x_{1},...,x_{m}\in\mathcal{M}\) such that _the interior of the convex hull of the vectors_ \(\mathbf{n}(x_{1}),...,\mathbf{n}(x_{m})\) _contains_ \(0\)_._

Given \(x,y\in\mathcal{M}\), we let \(\Delta_{n}(x,y)\) denote the determinant \[\Delta_{n}(x,y)=\left|\begin{array}{cc}n_{1}(x)&n_{1}(y)\\ n_{2}(x)&n_{2}(y)\end{array}\right|=n_{1}(x)\,n_{2}(y)-n_{2}(x)\,n_{1}(y). \tag{1.7}\] Next, let \({\bf n}(x),{\bf n}(y)\in{\bf S}_{1}\) be two non-collinear vectors (we write \({\bf n}(x)\nparallel{\bf n}(y)\)). Let \[w(x,y:F)=(w_{1}(x,y:F),w_{2}(x,y:F))=\ell_{F}(x)\cap\ell_{F}(y)\] be the point of intersection of the straight lines \(\ell_{F}(x)\) and \(\ell_{F}(y)\).
Clearly, \[w_{1}(x,y:F)=\frac{\left|\begin{array}{cc}\alpha(y)&\alpha(x)\\ n_{2}(y)&n_{2}(x)\end{array}\right|}{\Delta_{n}(x,y)}\quad\mbox{ and }\quad w_{2}(x,y:F)=\frac{\left|\begin{array}{cc}\alpha(x)&\alpha(y)\\ n_{1}(x)&n_{1}(y)\end{array}\right|}{\Delta_{n}(x,y)}.\]

Finally, given \(x,x^{\prime},y,y^{\prime}\in{\cal M}\) such that \({\bf n}(x)\nparallel{\bf n}(x^{\prime})\) and \({\bf n}(y)\nparallel{\bf n}(y^{\prime})\), we introduce the following quantities: \[D_{1}[x,x^{\prime}:y,y^{\prime}]=\frac{\rho(x,x^{\prime})}{|\Delta_{n}(x,x^{\prime})|}\min\{|n_{2}(x)|,|n_{2}(x^{\prime})|\}+\frac{\rho(y,y^{\prime})}{|\Delta_{n}(y,y^{\prime})|}\min\{|n_{2}(y)|,|n_{2}(y^{\prime})|\}+\rho(x,y) \tag{1.8}\] and \[D_{2}[x,x^{\prime}:y,y^{\prime}]=\frac{\rho(x,x^{\prime})}{|\Delta_{n}(x,x^{\prime})|}\min\{|n_{1}(x)|,|n_{1}(x^{\prime})|\}+\frac{\rho(y,y^{\prime})}{|\Delta_{n}(y,y^{\prime})|}\min\{|n_{1}(y)|,|n_{1}(y^{\prime})|\}+\rho(x,y). \tag{1.9}\]

**Theorem 1.4**: _Let \(F:{\cal M}\to{\cal H}{\cal P}({\bf R}^{2})\) be a set-valued mapping defined by (1.6). Assume that either \({\cal M}\) is finite or the half-planes \(\{F(x):x\in{\cal M}\}\) are in general position. (See Definition 1.3.)_

_Then \(F\) has a Lipschitz selection if and only if there exists a constant \(\lambda\geq 0\) such that the following conditions hold:_

\((\bigstar 1)\) _\(\alpha(x)+\alpha(y)\leq\lambda\,\rho(x,y)\) for every \(x,y\in{\cal M}\) such that \({\bf n}(y)=-{\bf n}(x)\);_

\((\bigstar 2)\) _Let \(x,x^{\prime},y,y^{\prime}\in{\cal M}\) be arbitrary elements such that \({\bf n}(x)\nparallel{\bf n}(x^{\prime})\) and \({\bf n}(y)\nparallel{\bf n}(y^{\prime})\). Then the following conditions are satisfied:_

_(i) If_ \[n_{2}(x)\,n_{2}(x^{\prime})\leq 0,\ n_{1}(x)+n_{1}(x^{\prime})\leq 0\quad\mbox{ and }\quad n_{2}(y)\,n_{2}(y^{\prime})\leq 0,\ n_{1}(y)+n_{1}(y^{\prime})\geq 0, \tag{1.10}\] _then_ \[w_{1}(x,x^{\prime}:F)-w_{1}(y,y^{\prime}:F)\leq\lambda\,D_{1}[x,x^{\prime}:y,y^{\prime}];\]

_(ii) If_ \[n_{1}(x)\,n_{1}(x^{\prime})\leq 0,\ n_{2}(x)+n_{2}(x^{\prime})\leq 0\quad\mbox{ and }\quad n_{1}(y)\,n_{1}(y^{\prime})\leq 0,\ n_{2}(y)+n_{2}(y^{\prime})\geq 0, \tag{1.11}\] _then_ \[w_{2}(x,x^{\prime}:F)-w_{2}(y,y^{\prime}:F)\leq\lambda\,D_{2}[x,x^{\prime}:y,y^{\prime}].\]

_Furthermore,_ \[\frac{1}{\sqrt{2}}\,\inf\lambda\leq\,|F|_{\mathfrak{M}}\leq 5\inf\lambda.\] _See Fig. 1._

**Remark 1.5**: Let us explain the geometrical meaning of inequalities (1.10) and (1.11). We let \(\mathrm{Pr}_{i}\), \(i=1,2\), denote the operator of the orthogonal projection onto the axis \(Ox_{i}\). It can be shown that, if these inequalities hold, then \(\mathrm{Pr}_{i}[F(x)\cap F(x^{\prime})]\) and \(\mathrm{Pr}_{i}[F(y)\cap F(y^{\prime})]\) are two closed _unbounded_ intervals in \(Ox_{i}\) such that _the first of them is bounded from below_, while _the second is bounded from above_ (with respect to the standard ordering on the coordinate axes).
In other words, the inequalities in (1.10) are equivalent to the following equalities: \[\mathrm{Pr}_{1}[F(x)\cap F(x^{\prime})]=\{(t,0):t\geq w_{1}(x,x^{\prime}:F)\},\ \ \mathrm{Pr}_{1}[F(y)\cap F(y^{\prime})]=\{(t,0):t\leq w_{1}(y,y^{\prime}:F)\}.\] In turn, the inequalities in (1.11) mean the following: \[\mathrm{Pr}_{2}[F(x)\cap F(x^{\prime})]=\{(0,t):t\geq w_{2}(x,x^{\prime}:F)\},\ \ \mathrm{Pr}_{2}[F(y)\cap F(y^{\prime})]=\{(0,t):t\leq w_{2}(y,y^{\prime}:F)\}.\]

Thus, if (1.10) and (1.11) hold, condition \((\bigstar 2)\) of Theorem 1.4 can be reformulated as follows:

\((\bigstar 2^{\prime})\) Let \(x,x^{\prime},y,y^{\prime}\in\mathcal{M}\) be arbitrary elements such that \(\mathbf{n}(x)\nparallel\mathbf{n}(x^{\prime})\) and \(\mathbf{n}(y)\nparallel\mathbf{n}(y^{\prime})\). Then for every \(i=1,2\) the following inequality \[\mathrm{dist}(\mathrm{Pr}_{i}[F(x)\cap F(x^{\prime})],\mathrm{Pr}_{i}[F(y)\cap F(y^{\prime})])\leq\lambda\,D_{i}[x,x^{\prime}:y,y^{\prime}] \tag{1.12}\] holds. See Fig. 2.

Fig. 1: The Lipschitz selection criterion for half-planes.

In general, we can omit the requirement that inequalities (1.10) and (1.11) are satisfied. In other words, in the formulation of Theorem 1.4 we can replace condition \((\bigstar 2)\) with \((\bigstar 2^{\prime})\). \(\blacktriangleleft\)

We turn to the second criterion for the seminorm of a nearly optimal Lipschitz selection. This criterion can be regarded as a certain modification of the criterion (1.12). This modification is motivated by the following observation: _Theorem 1.4 includes the quantities \(D_{1}[\cdot,\cdot:\cdot,\cdot]\) and \(D_{2}[\cdot,\cdot:\cdot,\cdot]\) which depend on the Cartesian coordinates of the vectors \({\bf n}(x)\), \(x\in{\cal M}\)._ See (1.8) and (1.9).

Theorem 1.6 below provides another explicit criterion for Lipschitz selections of a set-valued mapping \(F:{\cal M}\to{\cal HP}({\bf R}^{2})\). This criterion is formulated in terms of geometric objects that depend only on \(F\) and do not depend on the choice of the coordinate system in \({\bf R}^{2}\). We refer to this criterion as a "coordinate-free" Lipschitz selection criterion. Let us introduce additional definitions and notation necessary for its formulation.

Given \(x,y\in{\cal M}\), we let \(\varphi_{F}(x,y)\in[0,\pi/2]\) denote the angle between the boundaries of \(F(x)\) and \(F(y)\), i.e., between the straight lines \(\ell_{F}(x)\) and \(\ell_{F}(y)\). Clearly, \(\sin\varphi_{F}(x,y)=|\Delta_{n}(x,y)|\), see (1.7). Given \({\cal M}^{\prime}\subset{\cal M}\), by \({\rm diam}_{\rho}({\cal M}^{\prime})\) we denote the diameter of \({\cal M}^{\prime}\) in the pseudometric space \(({\cal M},\rho)\). We set \(0/0=0\), \(a/0=+\infty\) for every \(a>0\), and \({\rm dist}(\emptyset,A)=0\) for \(A\subset{\bf R}^{2}\). Finally, we put \[\mathfrak{D}[x,x^{\prime};y,y^{\prime}]=\frac{\rho(x,x^{\prime})}{\sin\varphi_{F}(x,x^{\prime})}+\frac{\rho(y,y^{\prime})}{\sin\varphi_{F}(y,y^{\prime})}+{\rm diam}_{\rho}(\{x,x^{\prime},y,y^{\prime}\})\ \ \ {\rm provided}\ \ \ x,x^{\prime},y,y^{\prime}\in{\cal M}.\]

**Theorem 1.6**: _Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(F:{\cal M}\to{\cal HP}({\bf R}^{2})\) be a set-valued mapping.
Assume that either \({\cal M}\) is finite or the half-planes \(\{F(x):x\in{\cal M}\}\) are in general position._

_The mapping \(F\) has a Lipschitz selection \(f:{\cal M}\to{\bf R}^{2}\) if and only if there exists a constant \(\lambda\geq 0\) such that for every four elements \(x,x^{\prime},y,y^{\prime}\in{\cal M}\) the following inequality_ \[{\rm dist}(F(x)\cap F(x^{\prime}),F(y)\cap F(y^{\prime}))\leq\lambda\,\mathfrak{D}[x,x^{\prime};y,y^{\prime}] \tag{1.13}\] _holds. Furthermore, \(\frac{1}{\sqrt{2}}\inf\lambda\leq|F|_{\mathfrak{M}}\leq\gamma\inf\lambda\) where \(\gamma>0\) is an absolute constant. See Fig. 3._

Fig. 2: A reformulation of the Lipschitz selection criterion for half-planes.

Theorem 1.4 and Theorem 1.6 lead us to two different criteria for Lipschitz selections which provide two different solutions to Problem 1.1 for the family \(\mathfrak{T}=\mathcal{K}(\mathbf{R}^{2})\) of all convex compact subsets of \(\mathbf{R}^{2}\). Let us fix some notation that we need to formulate these results.

Given \(\mathbf{n}\in\mathbf{S}_{1}\) and \(\alpha\in\mathbf{R}\), we let \(H(\mathbf{n},\alpha)\) denote a closed half-plane defined by \[H(\mathbf{n},\alpha)=\{a\in\mathbf{R}^{2}:\langle\mathbf{n},a\rangle+\alpha\leq 0\}.\] Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a pseudometric space, and let \(G:\mathcal{M}\to\mathcal{K}(\mathbf{R}^{2})\) be a set-valued mapping. For each \(x\in\mathcal{M}\), let us fix a family of half-planes \(\mathbb{H}_{G}(x)\subset\mathcal{H}\mathcal{P}(\mathbf{R}^{2})\) such that \[G(x)=\cap\left\{H:H\in\mathbb{H}_{G}(x)\right\}. \tag{1.14}\] Of course, the family \(\mathbb{H}_{G}(x)\) can be defined in various ways: for instance, thanks to the separation theorem, one can set \(\mathbb{H}_{G}(x)=\{H\in\mathcal{H}\mathcal{P}(\mathbf{R}^{2}):H\supset G(x)\}\). A smaller family \(\mathbb{H}_{G}(x)\) satisfying (1.14) can be defined as follows: to each \(x\in\mathcal{M}\) and each \(\mathbf{n}\in\mathbf{S}_{1}\) we assign a half-plane \(V(x;\mathbf{n})\) defined by \[V(x;\mathbf{n})=\{a\in\mathbf{R}^{2}:\langle\mathbf{n},a\rangle\leq h(\mathbf{n}:G(x))\}.\] Here \(h(\mathbf{n}:A)=\sup\{\langle\mathbf{n},a\rangle:a\in A\}\) is _the support function_ of a convex set \(A\subset\mathbf{R}^{2}\). Then we set \[\mathbb{H}_{G}(x)=\{V(x;\mathbf{n}):\mathbf{n}\in\mathbf{S}_{1}\},\quad x\in\mathcal{M}.\] Thus, in this case, the family \(\mathbb{H}_{G}(x)\) is the family of all _supporting half-planes_ of the set \(G(x)\).

Fig. 3: The coordinate-free criterion for Lipschitz selections.

**Theorem 1.7**: _Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a pseudometric space, and let \(G:\mathcal{M}\to\mathcal{K}(\mathbf{R}^{2})\) be a set-valued mapping. This mapping has a Lipschitz selection if and only if there exists a constant \(\lambda\geq 0\) such that the following two conditions are satisfied:_

_(i) \(\mathrm{dist}(G(x),G(y))\leq\lambda\,\rho(x,y)\) for every \(x,y\in\mathcal{M}\);_

_(ii) Let \({\cal M}^{\prime}=\{x,x^{\prime},y,y^{\prime}\}\subset{\cal M}\) be an arbitrary four-element subset, and let \({\bf n}:{\cal M}^{\prime}\to{\bf S}_{1}\) and \(\alpha:{\cal M}^{\prime}\to{\bf R}\) be arbitrary mappings such that_ \[H({\bf n}(z),\alpha(z))\in{\mathbb{H}}_{G}(z)\ \ \mbox{for every}\ \ z\in{\cal M}^{\prime}.\] _Let \(F(z)=H({\bf n}(z),\alpha(z))\) on \({\cal M}^{\prime}\)._

_Then condition (\(\bigstar 2\)) of Theorem 1.4 holds for \(x,x^{\prime},y,y^{\prime}\) and \(F\).
(Recall that this condition is equivalent to condition (\(\bigstar 2^{\prime}\)), see (1.12).)_

_Furthermore,_ \[\frac{1}{\sqrt{2}}\ \inf\lambda\leq\ |G|_{\mathfrak{M}}\leq 5\inf\lambda.\] _See Fig. 4._

Let us formulate the second criterion for Lipschitz selections.

**Theorem 1.8**: _Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(G:{\cal M}\to{\cal K}({\bf R}^{2})\) be a set-valued mapping. The mapping \(G\) has a Lipschitz selection if and only if there exists a constant \(\lambda\geq 0\) such that for every four elements \(x,x^{\prime},y,y^{\prime}\in{\cal M}\) and every four half-planes_ \[F(x)\in{\mathbb{H}}(x),\ \ F(x^{\prime})\in{\mathbb{H}}(x^{\prime}),\ \ F(y)\in{\mathbb{H}}(y),\ \ F(y^{\prime})\in{\mathbb{H}}(y^{\prime}),\] _inequality (1.13) holds._

_Furthermore,_ \[\frac{1}{\sqrt{2}}\ \inf\lambda\leq\ |G|_{\mathfrak{M}}\leq\gamma\ \inf\lambda\] _where \(\gamma>0\) is an absolute constant._ _See Fig. 5._

Let us now briefly describe the structure of the present paper and the main ideas of the proofs of the results stated above.
Given a convex set \(S\subset\mathbf{R}^{2}\), we let \(\mathcal{H}[S]\) denote _the smallest rectangle (possibly unbounded) with sides parallel to the coordinate axes, containing \(S\)_. Thus, \(\mathcal{H}[S]=\operatorname{\mathrm{Pr}}_{1}[S]\times\operatorname{\mathrm{Pr}}_{2}[S]\). (Recall that \(\operatorname{\mathrm{Pr}}_{i}\) is the operator of the orthogonal projection onto the axis \(Ox_{i}\), \(i=1,2\).) We refer to \(\mathcal{H}[S]\) as _the rectangular hull_ of \(S\). (See Section 2.2 for more details.)

Let \(Q_{0}=[-1,1]\times[-1,1]\). Given a set-valued mapping \(F:\mathcal{M}\to\operatorname{\mathrm{Conv}}(\mathbf{R}^{2})\), a constant \(\lambda\geq 0\) and elements \(x,x^{\prime}\in\mathcal{M}\), we let \(\mathcal{R}_{F}[x,x^{\prime}:\lambda]\) denote the rectangle defined by \[\mathcal{R}_{F}[x,x^{\prime}:\lambda]=\mathcal{H}[F(x)\cap\{F(x^{\prime})+\lambda\,\rho(x,x^{\prime})Q_{0}\}]. \tag{1.15}\] See Fig. 6.

Fig. 5: The second Lipschitz selection criterion for the family \(\mathcal{K}(\mathbf{R}^{2})\).

In this paper we prove a number of Lipschitz selection criteria for set-valued mappings \(F:\mathcal{M}\rightarrow\mathrm{Conv}(\mathbf{R}^{2})\) which work under certain natural conditions on the pseudometric space \(\mathfrak{M}=(\mathcal{M},\rho)\) and \(F\). Here are these conditions.

**Condition 1.9**: _Either \(\mathcal{M}\) is finite or there exist a constant \(\alpha\geq 0\) and elements \(\bar{x}_{1},...,\bar{x}_{m}\in\mathcal{M}\) such that the set \(\cap\{F(\bar{x}_{k})+\alpha Q_{0}:k=1,...,m\}\) is non-empty and bounded._

The following result is one of the main ingredients of our approach.

**Theorem 1.10**: _Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a pseudometric space, and let \(F:\mathcal{M}\rightarrow\mathrm{Conv}(\mathbf{R}^{2})\) be a set-valued mapping. Suppose that \(\mathfrak{M}\) and \(F\) satisfy Condition 1.9._

_The mapping \(F\) has a Lipschitz selection if and only if there exists a constant \(\lambda\geq 0\) such that_ \[\mathcal{R}_{F}[x,x^{\prime}:\lambda]\cap\{\mathcal{R}_{F}[y,y^{\prime}:\lambda]+\lambda\rho(x,y)\,Q_{0}\}\neq\emptyset\quad\text{ for every }\ x,x^{\prime},y,y^{\prime}\in\mathcal{M}. \tag{1.16}\] _Furthermore, the following inequalities hold:_ \[\inf\,\lambda\leq\,|F|_{\mathfrak{M}}\leq 5\inf\,\lambda. \tag{1.17}\]

Clearly, (1.16) is equivalent to the existence of points \(u\in\mathcal{R}_{F}[x,x^{\prime}:\lambda]\) and \(v\in\mathcal{R}_{F}[y,y^{\prime}:\lambda]\) (depending on \(F\), \(\rho\) and \(x,x^{\prime},y,y^{\prime}\)) such that \(\|u-v\|\leq\lambda\rho(x,y)\). See Fig. 7.

We give a proof of Theorem 1.10 in Section 5. This proof relies on the criterion for Lipschitz selections given in Theorem 3.2. In turn, the proof of Theorem 3.2 is based on the further development and generalization of the ideas and methods of work [29] devoted to set-valued mappings taking values in the family \(\mathcal{K}(\mathbf{R}^{2})\), see (1.3). Theorem 3.2 is the most technically difficult part of the present paper. We prove this theorem in Sections 3 and 4.

**Remark 1.11**: (i) Let us note that, given \(\lambda\geq 0\) and \(x,x^{\prime},y,y^{\prime}\in\mathcal{M}\), we have \[\mathcal{R}_{F}[x,x^{\prime}:\lambda]\cap\{\mathcal{R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)\,Q_{0}\}\neq\emptyset\] if and only if, for every \(i=1,2\), there exist points \[A(i)=(a_{1}(i),a_{2}(i))\in F(x),\ A^{\prime}(i)\in F(x^{\prime}),\ B(i)=(b_{1}(i),b_{2}(i))\in F(y)\ \ \ \text{and}\ \ \ B^{\prime}(i)\in F(y^{\prime})\]
such that \[\|A(i)-A^{\prime}(i)\|\leq\lambda\,\rho(x,x^{\prime}),\ \ \ \|B(i)-B^{\prime}(i)\|\leq\lambda\,\rho(y,y^{\prime})\ \ \mbox{and}\ \ |a_{i}(i)-b_{i}(i)|\leq\lambda\,\rho(x,y).\] See Fig. 8.

Fig. 7: The Lipschitz selection criterion in \(\mathbf{R}^{2}\).

(ii) Suppose that for every \(x,x^{\prime}\in{\cal M}\), the rectangle \({\cal R}_{F}[x,x^{\prime}:\lambda]\) is a _closed set_. (In particular, this property holds provided \(F:{\cal M}\to{\cal H}{\cal P}({\bf R}^{2})\), or \({\cal M}\) and \(F\) satisfy Condition 1.9. However, in general, this property of \(F\) does not hold. See Remarks 2.1 and 3.4 below.) In this case, (1.16) is equivalent to the following two conditions: for every \(x,y\in{\cal M}\), \[\mbox{dist}(F(x),F(y))\leq\lambda\,\rho(x,y),\] and, for every \(x,x^{\prime},y,y^{\prime}\in{\cal M}\), \[\mbox{dist}\left(\,{\cal R}_{F}[x,x^{\prime}:\lambda],{\cal R}_{F}[y,y^{\prime}:\lambda]\,\right)\leq\lambda\,\rho(x,y).\]

Fortunately, we are able to solve the above problem and express the order of magnitude of the optimal \(\lambda\) from (1.16) in explicit geometrical terms provided \(F\) is a mapping from \(\mathcal{M}\) into the family \(\mathcal{HP}(\mathbf{R}^{2})\) of all closed half-planes. This leads us to conditions \((\bigstar 1)\) and \((\bigstar 2)\) of Theorem 1.4, providing the proof of this result. In turn, the proof of Theorem 1.6 relies on the finiteness principle for Lipschitz selections (1.5) and the Lipschitz selection criterion given in Theorem 1.4.

Finally, we prove Theorems 1.7 and 1.8 by applying Theorems 1.4 and 1.6 to a new pseudometric space \((\widetilde{\mathcal{M}},\widetilde{\rho})\) where \[\widetilde{\mathcal{M}}=\{(x,H):x\in\mathcal{M},H\in\mathbb{H}_{G}(x)\}\] and \[\widetilde{\rho}((x,H),(x^{\prime},H^{\prime}))=\rho(x,x^{\prime})\ \ \ \text{for}\ \ \ x,x^{\prime}\in\mathcal{M}\ \ \ \text{and}\ \ \ H\in\mathbb{H}_{G}(x),H^{\prime}\in\mathbb{H}_{G}(x^{\prime}).\] We refer the reader to paper [32] for the detailed proofs of these theorems.

In Section 5, we present three corollaries of Theorem 3.2. These are Theorems 5.2, 5.4 and 5.6. Each of these results provides a criterion for Lipschitz selections in \(\mathbf{R}^{2}\). In particular, Theorem 5.2 is the Finiteness Principle itself, but with the constant \(\gamma=3\) in inequality (1.5), which is the smallest value of this constant known so far. Theorem 5.4 is a variant of Theorem 1.10 in which the rectangles \(\mathcal{R}_{F}[\cdot,\cdot:\lambda]\) defined by (1.15) are replaced with the rectangles \(\mathcal{W}_{F}[\cdot,\cdot,\cdot:\lambda]\) defined by (3.1). As a result, we obtain inequality (1.17) with a smaller constant in the right hand side (\(3\) instead of \(5\)); on the other hand, the rectangles \(\mathcal{W}_{F}\) are more complicated objects, each depending on three elements of the set \(\mathcal{M}\), while each rectangle \(\mathcal{R}_{F}\) depends only on two elements of \(\mathcal{M}\). In turn, Theorem 5.6 enables us to reformulate the Lipschitz selection criterion of Theorem 1.10 in terms of intersections of certain rectangles in \(\mathbf{R}^{2}\). In particular, the combination of this criterion with the linear-time algorithms for linear programming developed by N. Megiddo [23] and M. E. Dyer [8] provides an efficient algorithm for computing the quantity \(|F|_{\mathfrak{M}}\) for set-valued mappings \(F:\mathcal{M}\to\mathcal{HP}(\mathbf{R}^{2})\) defined on a finite pseudometric space \(\mathfrak{M}=(\mathcal{M},\rho)\). See Section 6.
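Condition (1.16) itself is easy to test mechanically once the rectangles \(\mathcal{R}_{F}[x,x^{\prime}:\lambda]\) are known, because rectangles with sides parallel to the axes intersect if and only if both pairs of projected intervals intersect. A small Python sketch, for bounded closed rectangles represented as pairs of closed intervals; the function names and the assumption that the rectangles have been precomputed are mine:

```python
def inflate(rect, r):
    # Minkowski sum of an axis-parallel rectangle with r*Q0: each side grows by r.
    (a1, b1), (a2, b2) = rect               # rect = (x-interval, y-interval)
    return ((a1 - r, b1 + r), (a2 - r, b2 + r))

def intersects(rect1, rect2):
    # Axis-parallel rectangles intersect iff both pairs of projections do (cf. (2.8)).
    return all(max(lo1, lo2) <= min(hi1, hi2)
               for (lo1, hi1), (lo2, hi2) in zip(rect1, rect2))

def criterion_1_16(rects, rho, lam):
    """Check (1.16), given precomputed rectangles rects[x][xp] = R_F[x, xp : lam]
    (assumed bounded and closed here) and the distance matrix rho."""
    m = len(rho)
    return all(intersects(rects[x][xp], inflate(rects[y][yp], lam * rho[x][y]))
               for x in range(m) for xp in range(m)
               for y in range(m) for yp in range(m))
```

Combined with a bisection over \(\lambda\), such a test gives a crude way to approximate the optimal \(\lambda\) in (1.16); Section 6 of the paper replaces this by an efficient linear-programming-based computation.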
In Section 6 and Section 7 we present a solution to Problem 1.2. More specifically, in these sections we introduce and study two different constructive algorithms for Lipschitz selections which solve this problem for families \(\mathfrak{L}\) of convex closed sets satisfying rather mild geometrical conditions. We refer to these algorithms as the _"Projection Algorithm"_ and the _"Iterative Algorithm"_ respectively. Let us note that the Projection Algorithm is based on the approach to the Lipschitz selection problem developed in the proof of Theorem 3.2, while the Iterative Algorithm relies on the ideas of the proof of the "Stabilization Principle" introduced in our recent paper [33].

**Remark 1.12**: We would like to point out that in this paper we deal only with _theoretical aspects of the Projection and Iterative Algorithms_. We give a detailed description and justification of these algorithms, as well as some preliminary estimates of their computational efficiency. But we do not cover all the computational aspects of their work. This issue will be discussed in the next paper [34], where we will show how these algorithms can be implemented on an (idealized) computer with the standard von Neumann architecture. We will prove that these algorithms are efficient, i.e., they require a minimal use of computer resources. In [34] we will give efficient estimates of the "work" of these algorithms (i.e., the number of machine operations needed to carry them out), as well as estimates of the amount of computer memory required to process them. \(\blacktriangleleft\)

**Acknowledgements.** I am very grateful to Charles Fefferman for stimulating discussions and valuable advice.

**2. Notation and preliminaries.**

**2.1 Background notation.**

Let \(A\) and \(B\) be non-empty subsets of \({\bf R}^{2}\). We let \[A+B=\{a+b:a\in A,b\in B\}\] denote the Minkowski sum of these sets. Given \(\lambda\geq 0\), by \(\lambda A\) we denote the set \(\lambda A=\{\lambda a:a\in A\}\). We write \[{\rm dist}(A,B)=\inf\{\|a-b\|:a\in A,\ b\in B\}\] to denote the distance between \(A\) and \(B\). For \(x\in{\bf R}^{2}\) we also set \({\rm dist}(x,A)={\rm dist}(\{x\},A)\). We put \({\rm dist}(\emptyset,A)=0\) provided \(A\) is an arbitrary (possibly empty) subset of \({\bf R}^{2}\). We let \(A^{\bf d}\) denote the _closure_ of the set \(A\).

For a Banach space \(X\) with the unit ball \(B_{X}\), and two non-empty subsets \(A\) and \(B\) of \(X\), we let \({\rm d}_{\rm H}(A,B)\) denote the Hausdorff distance between \(A\) and \(B\) in \(X\): \[{\rm d}_{\rm H}(A,B)=\inf\{r>0:A+rB_{X}\supset B,\ B+rB_{X}\supset A\}.\] (Of course, \({\rm d}_{\rm H}(A,B)\) also depends on \(X\). However, we use \({\rm d}_{\rm H}\) only in those places in the paper where \(X\) is clear from the context. Therefore, we omit \(X\) in the Hausdorff distance notation.)

Let \({\bf R}_{+}=\{a\in{\bf R}:a\geq 0\}\), and let \({\bf R}_{+}^{2}=\{a=(a_{1},a_{2})\in{\bf R}^{2}:a_{1},a_{2}\geq 0\}\). Given \(a,b\in{\bf R}^{2}\), \(a\neq b\), by \([a,b]\) we denote the closed interval with endpoints \(a\) and \(b\): \[[a,b]=\{x\in{\bf R}^{2}:x=(1-t)\,a+t\,b,0\leq t\leq 1\}.\] Given a set \(A\subset{\bf R}\), we put \(\min A=\min\{x:x\in A\}\) and \(\max A=\max\{x:x\in A\}\) provided \(A\) is a closed subset of \({\bf R}\) bounded from below or above, respectively. We write \([x]_{+}\) for the positive part of the real \(x\), i.e., \([x]_{+}=\max\{x,0\}\).
We also use the natural convention that \[\frac{0}{0}=0,\ \ \frac{a}{0}=+\infty\ \ {\rm for}\ \ a>0,\ \ a-b=0\ \ \ {\rm if}\ \ a=b=\pm\infty,\ \ {\rm and}\ \ (\pm\infty)-(\mp\infty)=\pm\infty.\] If \(S\) is a finite set, by \(\#S\) we denote the number of elements of \(S\).

By \[Ox_{1}=\{x=(t,0):t\in{\bf R}\}\ \ \ {\rm and}\ \ \ Ox_{2}=\{x=(0,t):t\in{\bf R}\}\] we denote the coordinate axes in \({\bf R}^{2}\). We recall that by \({\rm Pr}_{i}\), \(i=1,2\), we denote the operator of the orthogonal projection onto the axis \(Ox_{i}\). Thus, given \(x=(x_{1},x_{2})\in{\bf R}^{2}\), we have \({\rm Pr}_{1}[x]=(x_{1},0)\) and \({\rm Pr}_{2}[x]=(0,x_{2})\).

Given sets \(A_{i}\subset Ox_{i}\), \(i=1,2\), we let \(A_{1}\times A_{2}\) denote a subset of \({\bf R}^{2}\) defined by \[A_{1}\times A_{2}=\{a=(a_{1},a_{2})\in{\bf R}^{2}:(a_{1},0)\in A_{1},(0,a_{2})\in A_{2}\}.\] In other words, \[A=A_{1}\times A_{2}\ \ \ {\rm if\ and\ only\ if}\ \ \ \ {\rm Pr}_{1}(A)=A_{1}\ \ \ {\rm and}\ \ \ {\rm Pr}_{2}(A)=A_{2}.\]

Given \(a\in{\bf R}^{2}\) and \(r>0\), we let \(Q(a,r)\) denote the square with center \(a\) and side length \(2r\): \[Q(a,r)=\{y\in{\bf R}^{2}:\|y-a\|\leq r\}.\] In particular, \(Q_{0}=[-1,1]^{2}=Q(0,1)\) is the unit ball of the Banach space \(\ell_{\infty}^{2}=({\bf R}^{2},\|\cdot\|)\).

Let \(S\) be a non-empty _convex_ closed subset of \({\bf R}^{2}\). By \({\bf Pr}(\cdot,S)\) we denote the operator of metric projection onto \(S\) in the \(\ell_{\infty}^{2}\)-norm. To each \(a\in{\bf R}^{2}\) this operator assigns the set of all points in \(S\) that are nearest to \(a\) in the uniform norm. Thus, \[{\bf Pr}(a,S)=S\,\cap\,Q(a,{\rm dist}(a,S)). \tag{2.6}\] Clearly, the set \({\bf Pr}(a,S)\) is either a singleton or a line segment in \({\bf R}^{2}\) parallel to one of the coordinate axes.

If \(S\subset{\bf R}^{2}\) is convex, bounded and centrally symmetric, by center(\(S\)) we denote the center of \(S\).

We let \(\ell_{2}^{2}=({\bf R}^{2},\|\cdot\|_{\ell_{2}^{2}})\) denote \({\bf R}^{2}\) equipped with the standard Euclidean norm \(\|a\|_{\ell_{2}^{2}}=(a_{1}^{2}+a_{2}^{2})^{\frac{1}{2}}\), \(a=(a_{1},a_{2})\). Let \[B_{0}=\{a=(a_{1},a_{2})\in{\bf R}^{2}:a_{1}^{2}+a_{2}^{2}\leq 1\}\quad\mbox{and}\quad{\bf S}_{1}=\{a=(a_{1},a_{2})\in{\bf R}^{2}:a_{1}^{2}+a_{2}^{2}=1\}\] be the closed unit disk and the unit circle in \({\bf R}^{2}\) respectively.

Given non-zero vectors \(u,v\in{\bf R}^{2}\) we write \(u\parallel v\) if \(u\) and \(v\) are collinear, and we write \(u\nparallel v\) whenever these vectors are non-collinear. We say that the vectors \(u,v\in{\bf R}^{2}\) are _co-directed_ if \(u,v\neq 0\), \(u\) and \(v\) are collinear and have the same direction, i.e., \(v=\alpha u\) for some \(\alpha>0\). By \(\theta(u,v)\in[0,2\pi)\) we denote \[\mbox{the angle of rotation from}\quad u/\|u\|_{\ell_{2}^{2}}\quad\mbox{to}\quad v/\|v\|_{\ell_{2}^{2}}\quad\mbox{in the counterclockwise direction.}\] (Thus, \(\theta(v,u)=2\pi-\theta(u,v)\).) We refer to \(\theta(u,v)\) as the angle between the vectors \(u\) and \(v\).

Let \(\ell_{1}\) and \(\ell_{2}\) be two non-parallel straight lines in \({\bf R}^{2}\); in this case, we write \(\ell_{1}\nparallel\ell_{2}\). Let \(V=\ell_{1}\cap\ell_{2}\). These two lines form two angles \(\varphi_{1},\varphi_{2}\in[0,\pi)\), \(\varphi_{1}+\varphi_{2}=\pi\), with the vertex at the point \(V\).
Let \[\varphi(\ell_{1},\ell_{2})=\min\{\varphi_{1},\varphi_{2}\};\quad\mbox{clearly,}\quad\varphi(\ell_{1},\ell_{2})\in[0,\pi/2].\] We refer to \(\varphi(\ell_{1},\ell_{2})\) as _"the angle between the straight lines \(\ell_{1}\) and \(\ell_{2}\)"_. If \(\ell_{1}\parallel\ell_{2}\) (i.e., \(\ell_{1}\) and \(\ell_{2}\) are parallel), we set \(\varphi(\ell_{1},\ell_{2})=0\).

**2.2 Rectangles and rectangular hulls.**

Let \({\cal I}(Ox_{i})\), \(i=1,2\), be the family of all non-empty convex subsets of the coordinate axis \(Ox_{i}\). In other words, \({\cal I}(Ox_{i})\) is the family of all non-empty intervals (bounded or unbounded) lying on the \(Ox_{i}\) axis. We set \[\Re({\bf R}^{2})=\{\Pi=I_{1}\times I_{2}:I_{1}\in{\cal I}(Ox_{1}),I_{2}\in{\cal I}(Ox_{2})\}. \tag{2.7}\] We refer to every member of the family \(\Re({\bf R}^{2})\) as a _"rectangle"_. Furthermore, throughout the paper, the word "rectangle" will mean an element of \(\Re({\bf R}^{2})\), i.e., a rectangle (possibly unbounded and not necessarily closed) with "sides" parallel to the coordinate axes. Clearly, thanks to definition (2.7), \[\Pi={\rm Pr}_{1}[\Pi]\times{\rm Pr}_{2}[\Pi]\quad\mbox{for every rectangle}\quad\Pi\in\Re({\bf R}^{2}). \tag{2.8}\]

Because \({\rm Pr}_{i}\), \(i=1,2\), is a linear operator, for every _convex_ set \(S\subset{\bf R}^{2}\) its orthogonal projection \({\rm Pr}_{i}[S]\) onto the axis \(Ox_{i}\) is a _convex_ subset of \(Ox_{i}\), i.e., \({\rm Pr}_{i}[S]\in{\cal I}(Ox_{i})\). Thus, thanks to this property, (2.3) and (2.4), we have the following: \[\mbox{a convex set}\ \ \Pi\in\Re({\bf R}^{2})\ \ \mbox{if and only if}\ \ \Pi={\rm Pr}_{1}[\Pi]\times{\rm Pr}_{2}[\Pi]. \tag{2.9}\]

Let us also note the following intersection property of rectangles: the intersection of any collection \(\mathcal{R}\) of rectangles, if non-empty, is a rectangle as well. Furthermore, in this case, \[\bigcap_{\Pi\in\mathcal{R}}\Pi=\left\{\bigcap_{\Pi\in\mathcal{R}}\mathrm{Pr}_{1}[\Pi]\right\}\times\left\{\bigcap_{\Pi\in\mathcal{R}}\mathrm{Pr}_{2}[\Pi]\right\}. \tag{2.10}\] We also note that any interval in \(\mathbf{R}^{2}\) lying on a line parallel to a coordinate axis is a "rectangle". In particular, every interval on the axis \(Ox_{1}\) or \(Ox_{2}\) belongs to the family \(\mathfrak{R}(\mathbf{R}^{2})\). Finally, given a bounded rectangle \(\Pi\in\mathfrak{R}(\mathbf{R}^{2})\) (which is not necessarily a centrally symmetric set), we set \(\mathrm{center}(\Pi)=\mathrm{center}(\Pi^{\bf d})\).

Given a set \(S\in\mathrm{Conv}(\mathbf{R}^{2})\), see (1.2), we let \(\mathcal{H}[S]\) denote the _"rectangular hull"_ of \(S\), i.e., the smallest (with respect to inclusion) rectangle containing \(S\). Thus, \[\mathcal{H}[S]=\cap\{\Pi:\Pi\in\mathfrak{R}(\mathbf{R}^{2}),\Pi\supset S\}. \tag{2.11}\] Combining this definition with (2.10), we conclude that \[\mathcal{H}[S]=\mathrm{Pr}_{1}[S]\times\mathrm{Pr}_{2}[S]. \tag{2.12}\] Thus, given \(S\in\mathrm{Conv}(\mathbf{R}^{2})\), its rectangular hull \(\mathcal{H}[S]\) is the only rectangle \(\Pi\) for which \[\mathrm{Pr}_{1}[\Pi]=\mathrm{Pr}_{1}[S]\quad\text{and}\quad\ \mathrm{Pr}_{2}[\Pi]=\mathrm{Pr}_{2}[S]. \tag{2.13}\] Let us also note the following elementary property of rectangles: for every closed convex set \(S\subset\mathbf{R}^{2}\) and every \(r\geq 0\) we have \[\mathcal{H}[S+rQ_{0}]=\mathcal{H}[S]+rQ_{0}. \tag{2.14}\]
**Remark 2.1**: If \(S\) is a convex _compact_ subset of \(\mathbf{R}^{2}\) then its orthogonal projections onto the coordinate axes are closed bounded intervals, so that, thanks to (2.12), \(\mathcal{H}[S]=\mathrm{Pr}_{1}[S]\times\mathrm{Pr}_{2}[S]\) is _compact_ as well.

Clearly, if the set \(S\in\mathrm{Conv}(\mathbf{R}^{2})\) is unbounded, then its rectangular hull \(\mathcal{H}[S]\) is also unbounded. But _we cannot guarantee that in this case \(\mathcal{H}[S]\) is closed_ (even though \(S\) itself is closed). More specifically, \(\mathcal{H}[S]\) is not closed if \(\partial S\), the boundary of \(S\), has _either a horizontal or a vertical asymptote_ (i.e., a straight line parallel to one of the coordinate axes which is at zero distance from \(S\) and has empty intersection with \(S\)). For instance, let \(S=\{x=(x_{1},x_{2})\in\mathbf{R}^{2}:x_{1},x_{2}>0,\ x_{1}\cdot x_{2}\geq 1\}\) be the epigraph of the function \(f(t)=1/t\), \(t>0\). Then \(\mathcal{H}[S]=\{x=(x_{1},x_{2})\in\mathbf{R}^{2}:x_{1},x_{2}>0\}\) is the open positive quadrant. \(\blacktriangleleft\)

In this section we present two important auxiliary results. The first of them is a variant of the classical Helly intersection theorem for rectangles.

**Lemma 2.2**: _Let \(\mathcal{K}\subset\mathfrak{R}(\mathbf{R}^{2})\) be a collection of rectangles in \(\mathbf{R}^{2}\). Suppose that either (i) \(\mathcal{K}\) is finite or (ii) every rectangle from \(\mathcal{K}\) is closed, and there exists a finite subfamily of \(\mathcal{K}\) with a non-empty bounded intersection._

_Under these conditions, the following is true: if the intersection of every two rectangles from \(\mathcal{K}\) is non-empty, then there exists a point in \(\mathbf{R}^{2}\) common to all of the family \(\mathcal{K}\)._

_Proof._ Representation (2.8) reduces the problem to the one dimensional case. In this case the statement of the lemma is a variant of Helly's theorem in \(\mathbf{R}\). See, e.g., [7]. \(\blacksquare\)

The second auxiliary result is a Helly-type theorem formulated in terms of the orthogonal projections onto the coordinate axes.

**Proposition 2.3**: _Let \(\mathfrak{C}\) be a family of convex closed subsets in \(\mathbf{R}^{2}\). Suppose that either (i) \(\mathfrak{C}\) is finite or (ii) there exists a finite subfamily \(\overline{\mathfrak{C}}\subset\mathfrak{C}\) such that the intersection \(\cap\{C:C\in\overline{\mathfrak{C}}\}\) is non-empty and bounded. If_ \[\mathrm{Pr}_{1}[C_{1}\cap C_{1}^{\prime}]\,\,\bigcap\,\,\mathrm{Pr}_{1}[C_{2}\cap C_{2}^{\prime}]\neq\emptyset \tag{2.15}\] _for every \(C_{1},C_{1}^{\prime},C_{2},C_{2}^{\prime}\in\mathfrak{C}\), then_ \[\cap\{C:C\in\mathfrak{C}\}\neq\emptyset. \tag{2.16}\] _Furthermore, in this case_ \[\mathcal{H}\left[\cap\{C:C\in\mathfrak{C}\}\right]=\cap\{\mathcal{H}[C\cap C^{\prime}]:C,C^{\prime}\in\mathfrak{C}\}\,. \tag{2.17}\]

_Proof._ Condition (2.15) tells us that for every \(C,C^{\prime}\in\mathfrak{C}\) the set \(C\cap C^{\prime}\) is non-empty. Because \(C\cap C^{\prime}\) is a convex subset of \(\mathbf{R}^{2}\), its projection onto \(Ox_{1}\), the set \(\mathrm{Pr}_{1}[C\cap C^{\prime}]\subset Ox_{1}\), is convex as well, i.e., this set is an interval in \(Ox_{1}\).

First, let us prove the proposition provided condition (i) of the proposition's hypothesis holds, i.e., \(\mathfrak{C}\) is a _finite_ family. Let \(\mathcal{W}=\{\mathrm{Pr}_{1}[C\cap C^{\prime}]:C,C^{\prime}\in\mathfrak{C}\}\).
Then \(\mathcal{W}\) is a finite family of intervals, and, thanks to (2.15), every two members of this family have a common point. Helly's theorem tells us that in this case there exists a point in \(Ox_{1}\) common to all of the family \(\mathcal{W}\). See Lemma 2.2. Thus, \[V=\bigcap_{C,C^{\prime}\in\mathfrak{C}}\mathrm{Pr}_{1}[C\cap C^{\prime}]\neq\emptyset\,. \tag{2.18}\] Fix a point \(v\in V\). Then, thanks to (2.18), \[v\in\mathrm{Pr}_{1}[C\cap C^{\prime}]\quad\text{for every}\quad C,C^{\prime}\in\mathfrak{C}. \tag{2.19}\] Let \[L=\{w\in\mathbf{R}^{2}:\mathrm{Pr}_{1}[w]=v\} \tag{2.20}\] be the straight line through \(v\) orthogonal to the axis \(Ox_{1}\). Given \(C\in\mathfrak{C}\), we set \(K(C)=C\cap L\). Thanks to (2.19), \(v\in\mathrm{Pr}_{1}[C\cap C]=\mathrm{Pr}_{1}[C]\) so that there exists \(u_{C}\in C\) such that \(\mathrm{Pr}_{1}[u_{C}]=v\). From this and (2.20), we have \(u_{C}\in C\cap L\), proving that \(K(C)\neq\emptyset\) for every \(C\in\mathfrak{C}\).

Clearly, each \(K(C)\) is a closed interval lying on the straight line \(L\). Let us show that there exists a point in \(L\) common to all these intervals. Property (2.19) tells us that for every \(C,C^{\prime}\in\mathfrak{C}\) there exists a point \(\tilde{u}\in C\cap C^{\prime}\) such that \(\mathrm{Pr}_{1}[\tilde{u}]=v\). Hence, thanks to (2.20), \(\tilde{u}\in L\) so that \[\tilde{u}\in L\cap C\cap C^{\prime}=(L\cap C)\cap(L\cap C^{\prime})=K(C)\cap K(C^{\prime}).\] This proves that any two members of the family \(\mathcal{K}=\{K(C):C\in\mathfrak{C}\}\) have a common point. Furthermore, \(\mathcal{K}\) is a _finite_ (because \(\mathfrak{C}\) is finite) family of intervals lying in \(L\). Helly's theorem tells us that in this case \(\cap\{K(C):C\in\mathfrak{C}\}\neq\emptyset\). Thus, \[\cap\{K(C):C\in\mathfrak{C}\}=\cap\{L\cap C:C\in\mathfrak{C}\}=L\cap(\cap\{C:C\in\mathfrak{C}\})\neq\emptyset,\] proving (2.16) for a finite family \(\mathfrak{C}\).

Let us see that (2.16) holds provided condition (ii) of the proposition's hypothesis holds. As we have proved above, \(\cap\{C:C\in\mathfrak{C}^{\prime}\}\neq\emptyset\) for every _finite_ subfamily \(\mathfrak{C}^{\prime}\) of the family \(\mathfrak{C}\). In particular, _every three members of \(\mathfrak{C}\) have a common point_. This property and condition (ii) of the hypothesis tell us that \(\mathfrak{C}\) satisfies the hypothesis of _the two dimensional Helly theorem_ [7]. Thanks to this theorem, (2.16) holds, completing the proof of this property.

Let us prove (2.17). Clearly, the left hand side of the equality (2.17) is contained in its right hand side. Let us prove the converse statement. Fix a point \[u=(u_{1},u_{2})\in\cap\{\mathcal{H}[C\cap C^{\prime}]:C,C^{\prime}\in\mathfrak{C}\} \tag{2.21}\] and prove that \(u\in\mathcal{H}\left[\cap\{C:C\in\mathfrak{C}\}\right]\). Thanks to (2.12), this required inclusion is equivalent to the following property: \[u_{1}\in\mathrm{Pr}_{1}[\mathcal{T}]\ \ \ \text{and}\ \ \ u_{2}\in\mathrm{Pr}_{2}[\mathcal{T}]\ \ \ \text{where}\ \ \ \mathcal{T}=\cap\{C:C\in\mathfrak{C}\}. \tag{2.22}\]

Let us prove that \(u_{1}\in\mathrm{Pr}_{1}[\mathcal{T}]\). We let \(L_{1}\) denote the straight line in \(\mathbf{R}^{2}\) through the point \(u=(u_{1},u_{2})\) orthogonal to the axis \(Ox_{1}\). Thus, \[L_{1}=\{w\in\mathbf{R}^{2}:\mathrm{Pr}_{1}[w]=u_{1}\}. \tag{2.23}\] Let us show that \(L_{1}\cap\mathcal{T}\neq\emptyset\).
Indeed, thanks to (2.21), \(u\in\mathcal{H}[C\cap C^{\prime}]\) provided \(C,C^{\prime}\in\mathfrak{C}\) so that, thanks to (2.12), \[u_{1}\in\mathrm{Pr}_{1}[C\cap C^{\prime}]\ \ \ \text{for every}\ \ \ C,C^{\prime}\in\mathfrak{C}.\] Combining this property with definition (2.23), we conclude that \(L_{1}\cap C\cap C^{\prime}\neq\emptyset\) for all \(C,C^{\prime}\in\mathfrak{C}\). Thus, every two members of the family \(\mathcal{K}_{1}=\{L_{1}\cap C:C\in\mathfrak{C}\}\) have a common point. If the family \(\mathfrak{C}\) is finite, i.e., condition (i) of the proposition's hypothesis holds, then, thanks to Helly's theorem, \[\text{there exists a point}\ \ \ \tilde{u}\in\cap\{L_{1}\cap C:C\in\mathfrak{C}\}=L_{1}\cap(\cap\{C:C\in\mathfrak{C}\})=L_{1}\cap\mathcal{T}. \tag{2.24}\] Thus, \(\tilde{u}\in L_{1}\) so that, thanks to (2.23), \(u_{1}=\mathrm{Pr}_{1}[\tilde{u}]\). Furthermore, \(\tilde{u}\in\mathcal{T}=\cap\{C:C\in\mathfrak{C}\}\) so that \(u_{1}\in\mathrm{Pr}_{1}[\mathcal{T}]\). This proves the first statement in (2.22) provided condition (i) holds. Let us prove this statement provided the family \(\mathfrak{C}\) satisfies condition (ii). As we have seen above, it suffices to show that in this case the statement (2.24) holds. We recall that in case (ii), there exists a _finite_ subfamily \(\widetilde{\mathfrak{C}}\subset\mathfrak{C}\) such that \(\cap\{C:C\in\widetilde{\mathfrak{C}}\}\) is non-empty and bounded. Thus, condition (i) holds for \(\widetilde{\mathfrak{C}}\) so that \(\cap\{L_{1}\cap C:C\in\widetilde{\mathfrak{C}}\}\neq\emptyset\). Furthermore, \[\cap\{L_{1}\cap C:C\in\widetilde{\mathfrak{C}}\}\subset\cap\{C:C\in\widetilde{\mathfrak{C}}\}.\] But \(\cap\{C:C\in\widetilde{\mathfrak{C}}\}\) is bounded so that the set \(\cap\{L_{1}\cap C:C\in\widetilde{\mathfrak{C}}\}\) is bounded as well. Thus, the family \[\widetilde{\mathcal{K}_{1}}=\{L_{1}\cap C:C\in\widetilde{\mathfrak{C}}\}\subset\mathcal{K}_{1}\] has a _non-empty and bounded_ intersection. This proves that the family \(\mathcal{K}_{1}\) of closed intervals lying on the straight line \(L_{1}\) satisfies all conditions of Helly's theorem for rectangles, see Lemma 2.2. Thanks to this lemma, \(\cap\{L_{1}\cap C:C\in\mathfrak{C}\}\neq\emptyset\) proving the statement (2.24) in the case under consideration. Thus, we have proved the first statement of (2.22), i.e., the property \(u_{1}\in\mathrm{Pr}_{1}[\mathcal{T}]\). In the same way we prove that \(u_{2}\in\mathrm{Pr}_{2}[\mathcal{T}]\) completing the proof of the proposition.
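It is instructive to see the one-dimensional mechanism behind these two results at work. The following Python sketch (a purely illustrative aside, not part of the formal argument; the function names and the sampling scheme are ours) generates random finite families of closed bounded intervals and confirms that pairwise intersection is equivalent to the existence of a common point — exactly the property applied coordinatewise through representation (2.8).

```python
import itertools
import random

def has_common_point(intervals):
    # A finite family of closed bounded intervals (lo, hi) has a common
    # point iff the largest left endpoint <= the smallest right endpoint.
    return max(lo for lo, _ in intervals) <= min(hi for _, hi in intervals)

def pairwise_intersect(intervals):
    return all(max(a[0], b[0]) <= min(a[1], b[1])
               for a, b in itertools.combinations(intervals, 2))

random.seed(1)
for _ in range(10_000):
    family = [tuple(sorted((random.uniform(-3, 3), random.uniform(-3, 3))))
              for _ in range(random.randint(2, 6))]
    # One-dimensional Helly property: the two conditions are equivalent.
    assert pairwise_intersect(family) == has_common_point(family)
print("1-D Helly property confirmed on 10000 random families")
```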
**2.3 Rectangles: intersections, neighborhoods and selections.** In this section we present several criteria and several constructive formulae for the optimal Lipschitz selections of set-valued mappings taking values in the family \(\mathfrak{R}(\mathbf{R}^{2})\) of all rectangles in \(\mathbf{R}^{2}\) with sides parallel to the coordinate axes. See (2.7). Let \(I_{0}=[-1,1]\). Given \(r\geq 0\), we set \(rI_{0}=[-r,r]\). We also recall that, given a bounded interval \(I\in\mathcal{I}(\mathbf{R})\), by center(\(I\)) we denote the center of \(I\). **Lemma 2.4**: _Let \(\mathcal{K}\subset\mathfrak{R}(\mathbf{R}^{2})\) be a family of rectangles in \(\mathbf{R}^{2}\) with non-empty intersection. Then for every \(r\geq 0\) the following equality_ \[\left(\bigcap_{K\in\mathcal{K}}K\right)+rQ_{0}=\bigcap_{K\in\mathcal{K}}\left\{\,K+rQ_{0}\,\right\} \tag{2.25}\] _holds._ _Proof._ Obviously, the right hand side of (2.25) contains its left hand side. Let us prove that \[\left(\bigcap_{K\in\mathcal{K}}K\right)+rQ_{0}\supset\bigcap_{K\in\mathcal{K}}\left\{\,K+rQ_{0}\,\right\}. \tag{2.26}\] This inclusion is based on the following simple claim: Let \(\mathcal{I}\) be a family of convex subsets of \(\mathbf{R}\) (intervals) with non-empty intersection, and let \(K=[a,b]\subset\mathbf{R}\) be a closed bounded interval. Suppose that \(K\cap I\neq\emptyset\) for every \(I\in\mathcal{I}\). Then there exists a point common to \(K\) and all of the members of the family \(\mathcal{I}\). Indeed, let \(c\in\cap\{I:I\in\mathcal{I}\}\). If \(c\in K\) then the claim holds. Suppose that \(c\notin K\), say \(c>b\). Let us prove that \(b\in I\) for every \(I\in\mathcal{I}\). In fact, because \(K\cap I\neq\emptyset\), there exists a point \(p_{I}\in K\cap I\). Then \(p_{I}\leq b\) so that \(b\in[p_{I},c]\subset I\) proving the claim. This claim implies the following one-dimensional variant of inclusion (2.26): Let \(\mathcal{I}\) be a family of intervals in \(\mathbf{R}\) with non-empty intersection. Then \[\left(\bigcap_{I\in\mathcal{I}}I\right)+rI_{0}\supset\bigcap_{I\in\mathcal{I}}\left\{\,I+rI_{0}\,\right\}\quad\text{where}\quad I_{0}=[-1,1]. \tag{2.27}\] Indeed, if \(u\in\cap\{I+rI_{0}:I\in\mathcal{I}\}\) then \([u-r,u+r]\cap I\neq\emptyset\) for every \(I\in\mathcal{I}\). Therefore, thanks to the above claim, \([u-r,u+r]\cap(\cap\{I:I\in\mathcal{I}\})\neq\emptyset\) proving that \(u\) belongs to the left hand side of (2.27). Now, let us prove (2.26) using (2.27) and properties (2.9) and (2.10) of rectangles. For every \(i=1,2\), we have \[\Pr_{i}\left[\left(\bigcap_{K\in\mathcal{K}}K\right)+rQ_{0}\right]=\left(\bigcap_{K\in\mathcal{K}}\Pr_{i}[K]\right)+r\Pr_{i}[Q_{0}]=U_{i}.\] Furthermore, \[\Pr_{i}\left[\bigcap_{K\in\mathcal{K}}\left\{\,K+rQ_{0}\,\right\}\right]=\bigcap_{K\in\mathcal{K}}\left\{\Pr_{i}[K]+r\Pr_{i}[Q_{0}]\right\}=V_{i}.\] Thanks to inclusion (2.27), \(U_{i}\supset V_{i}\), \(i=1,2\), proving that the orthogonal projections onto the coordinate axes of the left hand side of (2.26) contain the corresponding projections of its right hand side. Because the left and right hand sides of (2.26) are _rectangles_, inclusion (2.26) holds. The proof of the lemma is complete. **Lemma 2.5**: _Let \({\cal R}_{1},{\cal R}_{2}\subset\mathfrak{R}({\bf R}^{2})\) be two families of rectangles in \({\bf R}^{2}\). If each family has a non-empty intersection, then_ \[{\rm dist}\left(\bigcap_{\Pi\in{\cal R}_{1}}\Pi,\bigcap_{\Pi\in{\cal R}_{2}}\Pi\right)=\sup_{\Pi_{1}\in{\cal R}_{1},\Pi_{2}\in{\cal R}_{2}}{\rm dist}(\,\Pi_{1},\Pi_{2})\,.\] _Proof._ The definition of the uniform norm in \({\bf R}^{2}\) and representation (2.8) easily imply the following formula for the distance between rectangles \(\Pi_{1},\Pi_{2}\in\mathfrak{R}({\bf R}^{2})\): \[{\rm dist}\,(\Pi_{1},\Pi_{2})=\max\{{\rm dist}\,(\,{\rm Pr}_{1}[\Pi_{1}],{\rm Pr}_{1}[\Pi_{2}])\,,{\rm dist}\,(\,{\rm Pr}_{2}[\Pi_{1}],{\rm Pr}_{2}[\Pi_{2}])\}.\] It is also clear that for every family \({\cal R}\) of rectangles in \({\bf R}^{2}\) with non-empty intersection, we have \[{\rm Pr}_{i}\left[\bigcap_{\Pi\in{\cal R}}\Pi\right]=\bigcap_{\Pi\in{\cal R}}{\rm Pr}_{i}[\Pi],\quad i=1,2.\] These observations reduce the problem to the one-dimensional case, which we treat below. Let \({\cal I}_{1},{\cal I}_{2}\subset{\cal I}({\bf R})\) be two families of intervals in \({\bf R}\). We assume that each family has a non-empty intersection.
Our aim is to show that the following equality \[{\rm dist}\left(\bigcap_{I\in{\cal I}_{1}}I,\bigcap_{I\in{\cal I}_{2}}I\right)=\sup_{I_{1}\in{\cal I}_{1},I_{2}\in{\cal I}_{2}}{\rm dist}(I_{1},I_{2}) \tag{2.28}\] holds. Clearly, \({\rm dist}(I_{1},I_{2})={\rm dist}(I_{1}^{\rm cl},I_{2}^{\rm cl})\) for every \(I_{1}\in{\cal I}_{1}\), \(I_{2}\in{\cal I}_{2}\). (Recall that the sign "\({\bf cl}\)" denotes the closure of a set.) It can also be readily seen that for any family \({\cal A}\) of intervals with non-empty intersection, we have \((\cap\{I:I\in{\cal A}\})^{\rm cl}=\cap\{I^{\rm cl}:I\in{\cal A}\}\). These remarks show that without loss of generality we may assume that all intervals from the families \({\cal I}_{1}\) and \({\cal I}_{2}\) are _closed_. Clearly, the right hand side of (2.28) is majorized by its left hand side. Let us prove the converse statement. We know that \[H_{1}=\cap\{I:I\in{\cal I}_{1}\}\neq\emptyset\ \ \mbox{ and }\ \ H_{2}=\cap\{I:I\in{\cal I}_{2}\}\neq\emptyset. \tag{2.29}\] Our aim is to prove that \[{\rm dist}(H_{1},H_{2})\leq\sup\{\,{\rm dist}(I_{1},I_{2}):I_{1}\in{\cal I}_{1},I_{2}\in{\cal I}_{2}\,\}. \tag{2.30}\] Let \(r=\sup\{\,{\rm dist}(I_{1},I_{2}):I_{1}\in{\cal I}_{1},I_{2}\in{\cal I}_{2}\,\}\). Because all sets from \({\cal I}_{1}\) and \({\cal I}_{2}\) are closed, we have \[(I_{1}+rI_{0})\cap I_{2}\neq\emptyset\ \ \mbox{ for every }\ \ \ I_{1}\in{\cal I}_{1},I_{2}\in{\cal I}_{2}. \tag{2.31}\] We recall that \(H_{1}=\cap\{I:I\in{\cal I}_{1}\}\) is a non-empty set, so that, thanks to Lemma 2.4, \[H_{1}+rI_{0}=\cap\{I+rI_{0}:I\in{\cal I}_{1}\}. \tag{2.32}\] Let us put \({\cal K}=\{I+rI_{0}:I\in{\cal I}_{1}\}\cup\{I:I\in{\cal I}_{2}\}\) and prove that \(\cap\{I:I\in{\cal K}\}\neq\emptyset\). (If this property holds, then, thanks to (2.32), \((H_{1}+rI_{0})\cap H_{2}\neq\emptyset\) proving the required inequality (2.30).) Our proof will rely on Helly's theorem. First, we note that, thanks to (2.29) and (2.31), \(I_{1}\cap I_{2}\neq\emptyset\) for every \(I_{1},I_{2}\in{\cal K}\). We also note that, thanks to (2.29), the family \({\cal K}\) has a non-empty intersection provided every member of \({\cal K}\) is an unbounded interval of the form \(I=[a,+\infty)\), \(a\in{\bf R}\). The same is true provided each interval \(I\in{\cal K}\) is of the form \(I=(-\infty,a]\), \(a\in{\bf R}\). Therefore, we may assume that either \({\cal K}\) contains a _bounded_ interval or there exist intervals \(I_{1},I_{2}\in{\cal K}\) of the forms \(I_{1}=[a_{1},+\infty)\) and \(I_{2}=(-\infty,a_{2}]\) respectively. Clearly, in this case \(I_{1}\cap I_{2}\) is a _bounded_ set. Thus, in this case, all conditions of Lemma 2.2 are satisfied. Thanks to this lemma, there exists a point common to all members of \({\cal K}\), and the proof of Lemma 2.5 is complete.
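Formula (2.28) is easy to probe numerically. The following Python fragment (an illustrative aside, not part of the proof; the two sample families are ad hoc choices of ours) computes both sides of (2.28) for finite families of closed bounded intervals.

```python
def cap(family):
    # Intersection of closed bounded intervals sharing a common point.
    return (max(lo for lo, _ in family), min(hi for _, hi in family))

def dist(I, J):
    # dist(I, J) = inf{|a - b| : a in I, b in J} for closed intervals.
    return max(I[0] - J[1], J[0] - I[1], 0.0)

F1 = [(0.0, 10.0), (5.0, 15.0), (4.0, 12.0)]   # intersection is [5, 10]
F2 = [(20.0, 30.0), (25.0, 35.0)]              # intersection is [25, 30]
lhs = dist(cap(F1), cap(F2))                   # distance of intersections
rhs = max(dist(I, J) for I in F1 for J in F2)  # sup of pairwise distances
assert lhs == rhs == 15.0                      # equality (2.28)
```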
We will also need the following simple claim. **Claim 2.6**: _(i) Let \(A\) and \(B\) be two intervals in \({\bf R}\). Then_ \[{\rm d}_{\rm H}(A,B)=\max\{|\inf A-\inf B|,|\sup A-\sup B|\}. \tag{2.33}\] _(See also our convention (2.2) for the cases of \(\inf A\), \(\inf B=-\infty\) and \(\sup A\), \(\sup B=+\infty\).)_ _(ii) Let \({\cal A},{\cal B}\in\mathfrak{R}({\bf R}^{2})\) be two bounded rectangles in \({\bf R}^{2}\). Then_ \[\|\operatorname{center}({\cal A})-\operatorname{center}({\cal B})\|\leq{\rm d}_{\rm H}({\cal A},{\cal B}). \tag{2.34}\] _Proof._ (i) Clearly, \(\inf I=\inf I^{\rm cl}\) and \(\sup I=\sup I^{\rm cl}\) for any interval \(I\in{\cal I}({\bf R})\). Let us also note that \({\rm d}_{\rm H}(A,B)={\rm d}_{\rm H}(A^{\rm cl},B^{\rm cl})\). These properties show that without loss of generality we may assume that \(A\) and \(B\) are _closed_ intervals. Let \[r={\rm d}_{\rm H}(A,B)\ \ \mbox{and}\ \ \delta=\max\{|\inf A-\inf B|,|\sup A-\sup B|\}.\] Then, thanks to (2.1), \(A+rI_{0}\supset B\) proving that \(\sup A+r\geq\sup B\) and \(\inf A-r\leq\inf B\). By interchanging the roles of \(A\) and \(B\) we obtain also \(\sup B+r\geq\sup A\) and \(\inf B-r\leq\inf A\) proving that \(\delta\leq r\). Let us prove that \(r\leq\delta\). Suppose that both \(A\) and \(B\) are bounded, i.e., \(A=[\inf A,\sup A]\) and \(B=[\inf B,\sup B]\). Let \(\alpha\in[0,1]\), and let \[a_{\alpha}=\alpha\inf A+(1-\alpha)\sup A\ \ \mbox{and}\ \ \ b_{\alpha}=\alpha\inf B+(1-\alpha)\sup B.\] Then \(a_{\alpha}\in A\), \(b_{\alpha}\in B\), and \(|a_{\alpha}-b_{\alpha}|\leq\delta\) proving that \({\rm dist}(a,B)\leq\delta\) for every \(a\in A\) and \({\rm dist}(b,A)\leq\delta\) for every \(b\in B\). Hence, \(A+\delta I_{0}\supset B\) and \(B+\delta I_{0}\supset A\), so that, thanks to (2.1), \(r\leq\delta\). In a similar way we prove this inequality whenever one of the intervals is unbounded. We leave the details to the interested reader as an easy exercise. (ii) By orthogonal projecting onto the coordinate axes, we can reduce the problem to the one-dimensional case. In this case, given bounded intervals \(A,B\in{\cal I}({\bf R})\), we have \[\operatorname{center}(A)=(\inf A+\sup A)/2\ \ \mbox{and}\ \ \operatorname{center}(B)=(\inf B+\sup B)/2.\] This and inequality (2.33) imply the required inequality \(|\operatorname{center}(A)-\operatorname{center}(B)|\leq{\rm d}_{\rm H}(A,B)\) proving the claim. Let \(({\cal M},\rho)\) be a pseudometric space and let \({\cal T}:{\cal M}\to\mathfrak{R}({\bf R}^{2})\) be a set-valued mapping. Given \(\eta\geq 0\), we define a set-valued mapping on \({\cal M}\) by \[{\cal T}^{[1]}[x:\eta]=\bigcap_{z\in{\cal M}}\left[{\cal T}(z)+\eta\rho(x,z)\,Q_{0}\right],\ \ \ x\in{\cal M}. \tag{2.35}\]
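Definition (2.35) is completely constructive: by representation (2.8) it reduces to interval arithmetic in each coordinate. The next proposition shows that, whenever the sets \({\cal T}^{[1]}[x:\eta]\) are non-empty, the mapping \(x\mapsto{\cal T}^{[1]}[x:\eta]\) is \(\eta\)-Lipschitz with respect to the Hausdorff distance, and that its center provides a Lipschitz selection. The Python sketch below (an illustrative aside; the four-point space, the rectangles and the value of \(\eta\) are ad hoc data of ours) computes (2.35) on such toy data and verifies both properties numerically.

```python
import itertools

# Intervals as pairs (lo, hi); rectangles as pairs of intervals, cf. (2.8).
def inflate(iv, r):                      # the interval I + r*I0, I0 = [-1, 1]
    return (iv[0] - r, iv[1] + r)

def cap(ivals):                          # intersection; None if empty
    lo, hi = max(i[0] for i in ivals), min(i[1] for i in ivals)
    return (lo, hi) if lo <= hi else None

# A toy four-point pseudometric space and a rectangle-valued mapping T.
M = [0, 1, 2, 3]
rho = lambda x, z: abs(x - z)
T = {0: ((0, 2), (0, 2)), 1: ((1, 3), (2, 4)),
     2: ((3, 5), (1, 3)), 3: ((4, 6), (3, 5))}
eta = 2.0

def T1(x):                               # the rectangle (2.35), coordinatewise
    return tuple(cap([inflate(T[z][axis], eta * rho(x, z)) for z in M])
                 for axis in (0, 1))

for x in M:                              # non-emptiness holds for these data
    assert all(iv is not None for iv in T1(x))

# The center of T1 is an eta-Lipschitz selection of T (uniform norm).
tau = {x: tuple((iv[0] + iv[1]) / 2 for iv in T1(x)) for x in M}
for x in M:
    assert all(T[x][i][0] <= tau[x][i] <= T[x][i][1] for i in (0, 1))
for x, y in itertools.combinations(M, 2):
    assert max(abs(p - q) for p, q in zip(tau[x], tau[y])) <= eta * rho(x, y)
```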
**Proposition 2.7**: _(a) Suppose that either \({\cal M}\) is finite or each rectangle \({\cal T}(x)\), \(x\in{\cal M}\), is closed. If_ \[{\cal T}^{[1]}[x:\eta]\neq\emptyset\ \ \mbox{ for every }\ \ x\in{\cal M}, \tag{2.36}\] _then_ \[{\rm d}_{\rm H}\left({\cal T}^{[1]}[x:\eta],{\cal T}^{[1]}[y:\eta]\right)\leq\eta\rho(x,y)\ \ \mbox{ for all }\ \ x,y\in{\cal M}. \tag{2.37}\] _(b) Let us assume that either \({\cal M}\) is finite or all rectangles \({\cal T}(x)\), \(x\in{\cal M}\), are closed and at least one of them is bounded. If_ \[{\cal T}(x)\cap\{{\cal T}(y)+\eta\,\rho(x,y)Q_{0}\}\neq\emptyset\ \ \mbox{ for all }\ \ x,y\in{\cal M}, \tag{2.38}\] _then properties (2.36) and (2.37) hold._ _Furthermore, if the set \({\cal T}^{[1]}[x:\eta]\) is bounded for every \(x\in{\cal M}\), then the mapping \(\tau(x)={\rm center}\left({\cal T}^{[1]}[x:\eta]\right)\), \(x\in{\cal M}\), is a Lipschitz selection of \({\cal T}\) with \(\|\tau\|_{{\rm Lip}({\cal M})}\leq\eta\)._ _Proof._ (a) Thanks to (2.36), \({\cal T}^{[1]}[x:\eta]\neq\emptyset\) for every \(x\in{\cal M}\), so that Lemma 2.4 and definition (2.35) yield \[{\cal T}^{[1]}[x:\eta]+\eta\,\rho(x,y)Q_{0}=\left\{\bigcap_{z\in{\cal M}}\left[{\cal T}(z)+\eta\,\rho(x,z)Q_{0}\right]\right\}+\eta\,\rho(x,y)Q_{0}=\bigcap_{z\in{\cal M}}\left[{\cal T}(z)+\left(\eta\,\rho(x,z)+\eta\,\rho(x,y)\right)Q_{0}\right].\] Hence, thanks to the triangle inequality, we have \[{\cal T}^{[1]}[x:\eta]+\eta\,\rho(x,y)\,Q_{0}\supset\bigcap_{z\in{\cal M}}\,\left[{\cal T}(z)+\eta\,\rho(y,z)\,Q_{0}\right]={\cal T}^{[1]}[y:\eta].\] By interchanging the roles of \(x\) and \(y\) we obtain also \({\cal T}^{[1]}[y:\eta]+\eta\,\rho(x,y)\,Q_{0}\supset{\cal T}^{[1]}[x:\eta]\). These two inclusions imply the required inequality (2.37). (b) Let us fix \(x\in{\cal M}\) and prove that \[\{{\cal T}(z)+\eta\,\rho(x,z)\,Q_{0}\}\cap\{{\cal T}(z^{\prime})+\eta\,\rho(x,z^{\prime})\,Q_{0}\}\neq\emptyset\ \ \ \mbox{ for every }\ \ z,z^{\prime}\in{\cal M}. \tag{2.39}\] Thanks to (2.38), there exist points \(g(z)\in{\cal T}(z)\) and \(g(z^{\prime})\in{\cal T}(z^{\prime})\) such that \(\|g(z)-g(z^{\prime})\|\leq\eta\,\rho(z,z^{\prime})\). From this and the triangle inequality, we have \(\|g(z)-g(z^{\prime})\|\leq\eta\,\rho(z,x)+\eta\,\rho(x,z^{\prime})\). This implies the existence of a point \(w\in{\bf R}^{2}\) such that \(\|g(z)-w\|\leq\eta\,\rho(z,x)\) and \(\|g(z^{\prime})-w\|\leq\eta\,\rho(z^{\prime},x)\). But \(g(z)\in{\cal T}(z)\) and \(g(z^{\prime})\in{\cal T}(z^{\prime})\) so that \(w\) belongs to the left hand side of (2.39) proving this property. Let \({\cal K}=\{{\cal T}(z)+\eta\,\rho(x,z)\,Q_{0}:z\in{\cal M}\}\). If \({\cal M}\) is infinite, then, thanks to the hypothesis of part (b), each member of \({\cal K}\) is closed. Furthermore, there exists \(\tilde{x}\in{\cal M}\) such that the set \({\cal T}(\tilde{x})\) is bounded so that the set \(\widetilde{K}={\cal T}(\tilde{x})+\eta\,\rho(x,\tilde{x})\,Q_{0}\) is bounded as well and belongs to the family \({\cal K}\). Combining these properties of \({\cal K}\) with (2.39), we conclude that this family satisfies the hypothesis of Lemma 2.2. Thanks to this lemma, the set \({\cal T}^{[1]}[x:\eta]=\cap\{K:K\in{\cal K}\}\) is non-empty which proves property (2.36). In turn, inequality (2.37) follows from part (a) of the proposition. Finally, inequalities (2.34) and (2.37) imply the required inequality \(\|\tau\|_{{\rm Lip}({\cal M})}\leq\eta\) completing the proof of the proposition. Let us give several explicit formulae for Lipschitz selections of set-valued mappings in the one-dimensional case. Let \(F:{\cal M}\to{\cal I}({\bf R})\) be a set-valued mapping. We set \[a_{F}(x)=\inf F(x)\quad\mbox{and}\quad b_{F}(x)=\sup F(x).\] Thus, \(a_{F}\) and \(b_{F}\) are two functions on \({\cal M}\) such that \[a_{F}:{\cal M}\to{\bf R}\cup\{-\infty\},\ \ \ b_{F}:{\cal M}\to{\bf R}\cup\{+\infty\}\quad\mbox{and}\quad a_{F}(x)\leq b_{F}(x)\ \ \mbox{for all}\ \ \ x\in{\cal M}.\] Clearly, \[{\rm dist}(F(x),F(y))=\max\{[a_{F}(x)-b_{F}(y)]_{+},[a_{F}(y)-b_{F}(x)]_{+}\}.
\tag{2.40}\] (See our convention (2.2) for the case of \(a_{F}(x)=-\infty\), \(b_{F}(x)=+\infty\).) Given \(\eta\geq 0\), we introduce the following functions on \({\cal M}\): \[a_{F}^{[1]}[x:\eta]=\sup_{y\in{\cal M}}\left\{a_{F}(y)-\eta\,\rho(x,y)\right\},\ \ \ \ b_{F}^{[1]}[x:\eta]=\inf_{y\in{\cal M}}\left\{b_{F}(y)+\eta\,\rho(x,y)\right\} \tag{2.41}\] and \[c_{F}[x:\eta]=\left(a_{F}^{[1]}[x:\eta]+b_{F}^{[1]}[x:\eta]\right)/2. \tag{2.42}\] We also define a set-valued mapping on \({\cal M}\) by \[F^{[1]}[x:\eta]=\bigcap_{z\in{\cal M}}\left[F(z)+\eta\,\rho(x,z)\,I_{0}\right],\ \ \ x\in{\cal M}. \tag{2.43}\] Comparing this definition with definitions (2.41) and (2.42), we conclude that for every \(x\in{\cal M}\), \[a_{F}^{[1]}[x:\eta]=\inf F^{[1]}[x:\eta],\ \ \ \ b_{F}^{[1]}[x:\eta]=\sup F^{[1]}[x:\eta], \tag{2.44}\] and \[c_{F}[x:\eta]={\rm center}\left(F^{[1]}[x:\eta]\right) \tag{2.45}\] provided \(F^{[1]}[x:\eta]\) is _bounded_. **Remark 2.8**: We note that the function \[f^{+}=b_{F}^{[1]}[\cdot:\eta]\ \ \mbox{maps}\ \ \ {\cal M}\ \ \mbox{into}\ \ \ {\bf R} \tag{2.46}\] if and only if \(f^{+}\not\equiv+\infty\), i.e., \(f^{+}(x^{+})<\infty\) for some \(x^{+}\in{\cal M}\). Analogously, the mapping \[f^{-}=a_{F}^{[1]}[\cdot:\eta]\ \ \mbox{maps}\ \ \ {\cal M}\ \ \mbox{into}\ \ \ {\bf R} \tag{2.47}\] if and only if \(f^{-}\not\equiv-\infty\). Finally, the mapping \(c_{F}[\cdot:\eta]=(f^{+}+f^{-})/2\), see (2.42) and (2.45), is well defined if and only if both \(f^{+}\not\equiv+\infty\) and \(f^{-}\not\equiv-\infty\). \(\blacktriangleleft\) **Proposition 2.9**: _(i) (The Finiteness Principle for Lipschitz selections in \(\mathbf{R}\).) Let \(F:\mathcal{M}\to\mathcal{I}(\mathbf{R})\) be a set-valued mapping. Let us assume that either \(\mathcal{M}\) is finite or all intervals \(F(x)\), \(x\in\mathcal{M}\), are closed and at least one of them is bounded._ _Let \(\eta\geq 0\). Suppose that for every \(x,y\in\mathcal{M}\) the restriction \(F|_{\{x,y\}}\) of \(F\) to \(\{x,y\}\) has a Lipschitz selection \(f_{\{x,y\}}\) with \(\|f_{\{x,y\}}\|_{\mathrm{Lip}(\{x,y\},\mathbf{R})}\leq\eta\). Then \(F\) has a Lipschitz selection \(f:\mathcal{M}\to\mathbf{R}\) with Lipschitz seminorm \(\|f\|_{\mathrm{Lip}(\mathcal{M},\mathbf{R})}\leq\eta\)._ _Furthermore, one can set \(f=c_{F}[\cdot:\eta]\) provided there exist \(x^{+},x^{-}\in\mathcal{M}\) such that \(\inf F(x^{-})>-\infty\) and \(\sup F(x^{+})<\infty\). If all intervals \(F(x)\), \(x\in\mathcal{M}\), are closed one can set \(f=b_{F}^{[1]}[\cdot:\eta]\) if \(F(x^{+})\) is bounded from above for some \(x^{+}\in\mathcal{M}\), or \(f=a_{F}^{[1]}[\cdot:\eta]\) if \(F(x^{-})\) is bounded from below for some \(x^{-}\in\mathcal{M}\)._ _(ii) Suppose that all intervals \(F(x)\), \(x\in\mathcal{M}\), are closed and at least one of them is bounded. Let_ \[\lambda_{F}=\sup_{x,y\in\mathcal{M}}[\inf F(x)-\sup F(y)]_{+}/\rho(x,y)=\sup_{x,y\in\mathcal{M}}\mathrm{dist}(F(x),F(y))/\rho(x,y).\] _(When calculating \(\lambda_{F}\), we use convention (2.2). We also note that the second equality in this definition is due to (2.40).)_ _Then \(F\) has a Lipschitz selection if and only if \(\lambda_{F}<\infty\).
Moreover, in this case, \(\lambda_{F}=|F|_{\mathfrak{M}}\), see (1.1), and each of the mappings \(f=a_{F}^{[1]}[\cdot:\eta]\), \(f=b_{F}^{[1]}[\cdot:\eta]\) and \(f=c_{F}[\cdot:\eta]\), taken with \(\eta=\lambda_{F}\), provides an optimal Lipschitz selection of \(F\), i.e., \(\|f\|_{\mathrm{Lip}(\mathcal{M},\mathbf{R})}=|F|_{\mathfrak{M}}\)._ _Proof._ (i) Thanks to the hypothesis of part (i) of the proposition and parts (a) and (b) of Proposition 2.7, we have \[F^{[1]}[x:\eta]\neq\emptyset\ \ \ \text{for every}\ \ \ x\in\mathcal{M},\] and \[\mathrm{d}_{\mathrm{H}}\left(F^{[1]}[x:\eta],F^{[1]}[y:\eta]\right)\leq\eta\,\rho(x,y)\ \ \ \text{for all}\ \ \ x,y\in\mathcal{M}.\] From this, part (i) of Claim 2.6, and definitions (2.44) it follows that the inequality \[\max\left\{|a_{F}^{[1]}[x:\eta]-a_{F}^{[1]}[y:\eta]|,\,|b_{F}^{[1]}[x:\eta]-b_{F}^{[1]}[y:\eta]|\right\}\leq\eta\,\rho(x,y) \tag{2.48}\] holds for all \(x,y\in\mathcal{M}\). Clearly, if \(F\equiv\mathbf{R}\) then the constant mapping \(f\equiv\{0\}\) on \(\mathcal{M}\) is a Lipschitz selection of \(F\) (with \(\|f\|_{\mathrm{Lip}(\mathcal{M},\mathbf{R})}=0\)). Otherwise, either \(f^{+}=b_{F}^{[1]}[\cdot:\eta]\not\equiv+\infty\) or \(f^{-}=a_{F}^{[1]}[\cdot:\eta]\not\equiv-\infty\). Therefore, thanks to Remark 2.8, either \(f^{+}:\mathcal{M}\to\mathbf{R}\) or \(f^{-}:\mathcal{M}\to\mathbf{R}\). See (2.46) and (2.47). Let us note that if each set \(F(x)\) is _closed_, and at least one of the sets \(F(x)\) is bounded, then the set \(F^{[1]}[x:\eta]\) is closed and bounded as well. In this case, the points \[a_{F}^{[1]}[x:\eta],b_{F}^{[1]}[x:\eta]\in F^{[1]}[x:\eta]\subset F(x). \tag{2.49}\] See definition (2.44). Therefore, in this case we can set either \(f=f^{+}\) or \(f=f^{-}\). Then, thanks to (2.48), in both cases the mapping \(f:\mathcal{M}\to\mathbf{R}\) will be a Lipschitz selection of \(F\) with \(\|f\|_{\mathrm{Lip}(\mathcal{M},\mathbf{R})}\leq\eta\). Also from this it follows that the mapping \(f=c_{F}[\cdot:\eta]=(f^{+}+f^{-})/2\), see (2.45), has the same properties. Now, let \(\mathcal{M}\) be _finite_. In this case we cannot guarantee that the set \(F^{[1]}[x:\eta]\) is closed and property (2.49) holds. However, if \(f^{+}=b_{F}^{[1]}[\cdot:\eta]\not\equiv+\infty\) but \(f^{-}=a_{F}^{[1]}[\cdot:\eta]\equiv-\infty\), then each interval \(F^{[1]}[x:\eta]\) is _unbounded from below_. Because \(\mathcal{M}\) is finite, all these intervals have a common point, say \(A\). Then the constant mapping \(f\equiv\{A\}\) is a Lipschitz selection of \(F\) (with \(\|f\|_{\mathrm{Lip}(\mathcal{M},\mathbf{R})}=0\)). Analogously, if \(f^{-}\not\equiv-\infty\) but \(f^{+}\equiv+\infty\), there is a constant mapping which provides a Lipschitz selection of \(F\). Let us suppose that both \(f^{+}=b_{F}^{[1]}[\cdot:\eta]\not\equiv+\infty\) and \(f^{-}=a_{F}^{[1]}[\cdot:\eta]\not\equiv-\infty\). In this case, thanks to Remark 2.8, the mapping \(c_{F}[\cdot:\eta]=(f^{+}+f^{-})/2\), see (2.42) and (2.45), is well defined, i.e., each set \(F^{[1]}[x:\eta]\) is non-empty and bounded (but not necessarily closed!). Clearly, \[f(x)=c_{F}[x:\eta]\in F^{[1]}[x:\eta]\subset F(x)\quad\mbox{for every}\quad x\in{\cal M},\] proving that \(f\) is a selection of \(F\). Thanks to (2.48), its Lipschitz seminorm \(\|f\|_{\mbox{\scriptsize Lip}({\cal M},{\bf R})}\leq\eta\), and the proof of part (i) of the proposition is complete. (ii) This criterion for Lipschitz selections is immediate from part (i) of the proposition. We leave the details of the proof to the interested reader.
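To see the constructive part of Proposition 2.9 at work, here is a short Python sketch (an illustrative aside; the three-point space and the intervals are ad hoc data of ours). It computes \(\lambda_{F}\), builds the selection \(f=c_{F}[\cdot:\lambda_{F}]\) via formulae (2.41) and (2.42), and checks that \(f\) is a \(\lambda_{F}\)-Lipschitz selection of \(F\).

```python
import itertools

# A toy three-point pseudometric space and an interval-valued mapping F.
M = ["p", "q", "r"]
edge = {("p", "q"): 1.0, ("p", "r"): 2.0, ("q", "r"): 1.0}
rho = lambda x, y: 0.0 if x == y else edge[tuple(sorted((x, y)))]
F = {"p": (0.0, 1.0), "q": (2.0, 3.0), "r": (1.0, 5.0)}

# lambda_F of part (ii): sup of [inf F(x) - sup F(y)]_+ / rho(x, y).
lam = max(max(F[x][0] - F[y][1], 0.0) / rho(x, y)
          for x, y in itertools.permutations(M, 2))

def a1(x, eta):  # a_F^[1], formula (2.41)
    return max(F[y][0] - eta * rho(x, y) for y in M)

def b1(x, eta):  # b_F^[1], formula (2.41)
    return min(F[y][1] + eta * rho(x, y) for y in M)

eta = lam
f = {x: (a1(x, eta) + b1(x, eta)) / 2 for x in M}        # c_F, formula (2.42)
for x in M:
    assert F[x][0] <= f[x] <= F[x][1]                    # f is a selection
for x, y in itertools.combinations(M, 2):
    assert abs(f[x] - f[y]) <= eta * rho(x, y) + 1e-12   # eta-Lipschitz
print("lambda_F =", lam, "; selection:", f)
```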
Part (i) of Proposition 2.9 implies the following Finiteness Principle for rectangles in \({\bf R}^{2}\). **Proposition 2.10**: _Let \(\eta\geq 0\). Let \(({\cal M},\rho)\) be a pseudometric space, and let \({\cal T}:{\cal M}\to\mathfrak{R}({\bf R}^{2})\) be a set-valued mapping. Let us assume that either \({\cal M}\) is finite or all rectangles \({\cal T}(x)\), \(x\in{\cal M}\), are closed and at least one of them is bounded._ _Suppose that for every \(x,y\in{\cal M}\) the restriction \({\cal T}|_{\{x,y\}}\) of \({\cal T}\) to \(\{x,y\}\) has a Lipschitz selection \(\tau_{\{x,y\}}\) with \(\|\tau_{\{x,y\}}\|_{\mbox{\scriptsize Lip}(\{x,y\})}\leq\eta\)._ _Then \({\cal T}\) has a Lipschitz selection \(\tau:{\cal M}\to{\bf R}^{2}\) with Lipschitz seminorm \(\|\tau\|_{\mbox{\scriptsize Lip}({\cal M})}\leq\eta\)._ _Proof._ By orthogonal projecting onto the coordinate axes, we reduce the problem to the one-dimensional case, i.e., to the Finiteness Principle for Lipschitz selections in \({\bf R}\) proven in part (i) of Proposition 2.9. \(\blacksquare\) **3. The key theorem: Lipschitz selections and rectangular hulls.** Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(F:{\cal M}\to\mbox{Conv}({\bf R}^{2})\) be a set-valued mapping. Given \(\lambda\geq 0\) and \(x,x^{\prime},x^{\prime\prime}\in{\cal M}\), we let \({\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:\lambda]\) denote a (possibly empty) subset of \({\bf R}^{2}\) defined by \[{\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:\lambda]={\cal H}[\{F(x^{\prime})+\lambda\,\rho(x^{\prime},x)\,Q_{0}\}\cap\{F(x^{\prime\prime})+\lambda\,\rho(x^{\prime\prime},x)\,Q_{0}\}]. \tag{3.1}\] We recall that by \({\cal H}[\cdot]\) we denote the rectangular hull of a set (see (2.11)), and by \({\cal R}_{F}[\cdot,\cdot:\lambda]\) the rectangle defined by formula (1.15). Note that from (1.15) and (3.1), we have \[{\cal R}_{F}[x,x^{\prime}:\lambda]={\cal W}_{F}[x,x,x^{\prime}:\lambda]\quad\mbox{for every}\quad x,x^{\prime}\in{\cal M}. \tag{3.2}\] The proofs of the necessity parts of the Lipschitz selection criteria presented in this section rely on the following proposition. **Proposition 3.1**: _Let \(F:{\cal M}\to\mbox{Conv}({\bf R}^{2})\) be a set-valued mapping and let \(\lambda\geq 0\). Suppose that \(F\) has a Lipschitz selection \(f:{\cal M}\to{\bf R}^{2}\) with \(\|f\|_{\mbox{\scriptsize Lip}({\cal M})}\leq\lambda\). Then for every \(x,x^{\prime},x^{\prime\prime},y,y^{\prime},y^{\prime\prime}\in{\cal M}\), we have_ _(i) \({\cal R}_{F}[x,x^{\prime}:\lambda]\cap\{{\cal R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)\,Q_{0}\}\neq\emptyset\);_ _(ii) \({\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:\lambda]\cap\{{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:\lambda]+\lambda\,\rho(x,y)\,Q_{0}\}\neq\emptyset\)._ _Proof._ Clearly, part (i) of the proposition follows from part (ii) and (3.2). Let us prove part (ii).
Because \(f\) is a Lipschitz selection of \(F\), for every \(x,x^{\prime},x^{\prime\prime}\in{\cal M}\) we have \(f(x)\in F(x)\), \(f(x^{\prime})\in F(x^{\prime})\), \(f(x^{\prime\prime})\in F(x^{\prime\prime})\), \[\|f(x)-f(x^{\prime})\|\leq\lambda\,\rho(x,x^{\prime})\quad\mbox{and}\quad\|f(x)-f(x^{\prime\prime})\|\leq\lambda\,\rho(x,x^{\prime\prime}).\] Hence, \[f(x)\in\{F(x^{\prime})+\lambda\,\rho(x,x^{\prime})Q_{0}\}\cap\{F(x^{\prime\prime})+\lambda\,\rho(x,x^{\prime\prime})Q_{0}\}\subset{\cal H}[\{F(x^{\prime})+\lambda\,\rho(x,x^{\prime})Q_{0}\}\cap\{F(x^{\prime\prime})+\lambda\,\rho(x,x^{\prime\prime})Q_{0}\}]={\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:\lambda].\] In the same fashion we show that \(f(y)\in{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:\lambda]\). These properties and the inequality \(\|f(x)-f(y)\|\leq\lambda\,\rho(x,y)\) imply (ii) proving the proposition. The main result of the present section is the following theorem. **Theorem 3.2**: _Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(F:{\cal M}\to{\rm Conv}({\bf R}^{2})\) be a set-valued mapping satisfying Condition 1.9._ _Given non-negative constants \(\tilde{\lambda}\) and \(\lambda\), let us assume that for every \(x,x^{\prime},x^{\prime\prime},y,y^{\prime},y^{\prime\prime}\in{\cal M}\), we have_ \[{\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:\tilde{\lambda}]\cap\{{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:\tilde{\lambda}]+\lambda\,\rho(x,y)\,Q_{0}\}\neq\emptyset. \tag{3.3}\] _Then \(F\) has a Lipschitz selection \(f:{\cal M}\to{\bf R}^{2}\) with \(\|f\|_{{\rm Lip}({\cal M})}\leq 2\lambda+\tilde{\lambda}\)._ We refer to this result as _the key theorem_. We start the proof of the key theorem in this section and complete it at the end of the next section. _Proof of the key theorem._ Suppose that for every \(x,x^{\prime},x^{\prime\prime},y,y^{\prime},y^{\prime\prime}\in{\cal M}\) condition (3.3) holds. Let us construct a Lipschitz selection \(f:{\cal M}\to{\bf R}^{2}\) of \(F\) with Lipschitz seminorm \(\|f\|_{{\rm Lip}({\cal M})}\leq 2\lambda+\tilde{\lambda}\). We will do this in three steps. The First Step. We introduce a set-valued mapping on \({\cal M}\) defined by the formula \[F^{[1]}[x:\tilde{\lambda}]=\bigcap_{y\in{\cal M}}\left[F(y)+\tilde{\lambda}\,\rho(x,y)\,Q_{0}\right],\quad x\in{\cal M}. \tag{3.4}\] **Lemma 3.3**: _For each \(x\in{\cal M}\) the set \(F^{[1]}[x:\tilde{\lambda}]\) is a non-empty closed convex subset of \({\bf R}^{2}\). Moreover, for every \(x\in{\cal M}\) the following representation holds:_ \[{\cal H}[F^{[1]}[x:\tilde{\lambda}]]=\cap\{{\cal W}_{F}[x,y,y^{\prime}:\tilde{\lambda}]:y,y^{\prime}\in{\cal M}\}. \tag{3.5}\] _Furthermore, if \({\cal M}\) is infinite, then the set \(F^{[1]}[x:\tilde{\lambda}]\) is bounded for all \(x\in{\cal M}\)._ _Proof._ Let us prove that \[F^{[1]}[x:\tilde{\lambda}]\neq\emptyset\quad\mbox{for every}\quad x\in{\cal M}. \tag{3.6}\] Given \(x\in{\cal M}\), we set \[\mathfrak{C}_{x}=\{F(y)+\tilde{\lambda}\,\rho(x,y)\,Q_{0}:y\in{\cal M}\}.\] Then \(F^{[1]}[x:\tilde{\lambda}]=\cap\{C:C\in\mathfrak{C}_{x}\}\). See (3.4). Let us prove that for every \(y_{1},y_{1}^{\prime},y_{2},y_{2}^{\prime}\in{\cal M}\) the sets \[C_{i}=F(y_{i})+\tilde{\lambda}\,\rho(x,y_{i})Q_{0}\quad\mbox{and}\quad C_{i}^{\prime}=F(y_{i}^{\prime})+\tilde{\lambda}\,\rho(x,y_{i}^{\prime})Q_{0},\ \ i=1,2, \tag{3.7}\] satisfy property (2.15).
First, let us note that, thanks to (3.3), the set \[\mathcal{W}_{F}[x,y,z:\tilde{\lambda}]\neq\emptyset\quad\text{for all}\quad x,y,z\in\mathcal{M}.\] In particular, from this and definition (3.1), it follows that \[\{F(y)+\tilde{\lambda}\rho(x,y)\,Q_{0}\}\cap\{F(z)+\tilde{\lambda}\rho(x,z)\,Q_{0}\}\neq\emptyset\quad\text{for every}\quad y,z\in\mathcal{M}\] proving that _any two elements of \(\mathfrak{C}_{x}\) have a common point_. Property (3.3) tells us that \[\mathcal{W}_{F}[x,y_{1},y_{1}^{\prime}:\tilde{\lambda}]\cap\mathcal{W}_{F}[x,y_{2},y_{2}^{\prime}:\tilde{\lambda}]\neq\emptyset\quad\text{for every}\quad y_{1},y_{1}^{\prime},y_{2},y_{2}^{\prime}\in\mathcal{M}.\] Thanks to (3.1) and (3.7), \(\mathcal{W}_{F}[x,y_{1},y_{1}^{\prime}:\tilde{\lambda}]=\mathcal{H}[C_{1}\cap C_{1}^{\prime}]\) and \(\mathcal{W}_{F}[x,y_{2},y_{2}^{\prime}:\tilde{\lambda}]=\mathcal{H}[C_{2}\cap C_{2}^{\prime}]\) proving that \[\mathcal{H}[C_{1}\cap C_{1}^{\prime}]\,\bigcap\mathcal{H}[C_{2}\cap C_{2}^{\prime}]\neq\emptyset\quad\text{ for every}\quad C_{1},C_{1}^{\prime},C_{2},C_{2}^{\prime}\in\mathfrak{C}_{x}.\] Hence, \[\text{Pr}_{\mathbf{R}_{1}}[\mathcal{H}[C_{1}\cap C_{1}^{\prime}]]\,\,\bigcap\,\text{Pr}_{\mathbf{R}_{1}}[\mathcal{H}[C_{2}\cap C_{2}^{\prime}]]\neq\emptyset\] so that, thanks to (2.13), \(\text{Pr}_{\mathbf{R}_{1}}[C_{1}\cap C_{1}^{\prime}]\,\,\bigcap\,\text{Pr}_{\mathbf{R}_{1}}[C_{2}\cap C_{2}^{\prime}]\neq\emptyset\) proving (2.15). Now let us assume that the set \(\mathcal{M}\) is _finite_. In this case, condition (i) of the hypothesis of Proposition 2.3 is satisfied for the family \(\mathfrak{C}_{x}\); since property (2.15) holds as well, this proposition tells us that \(\cap\{C:C\in\mathfrak{C}_{x}\}\neq\emptyset\) proving (3.6) in the case under consideration. Now let \(\mathcal{M}\) be infinite. In this case, thanks to the hypothesis of Theorem 3.2 and Condition 1.9, there exist a constant \(\alpha\geq 0\) and a finite set \(\widehat{M}=\{\hat{x}_{1},...,\hat{x}_{m}\}\subset\mathcal{M}\) such that the set \[\cap\{F(y)+\alpha Q_{0}:y\in\widehat{M}\}\quad\text{is non-empty and bounded}.\] Let \(x\in\mathcal{M}\) and let \[\widehat{\mathfrak{C}}_{x}=\{F(y)+\tilde{\lambda}\rho(x,y)Q_{0}:y\in\widehat{M}\}. \tag{3.8}\] Let us prove that the set \[G_{x}=\cap\{C:C\in\widehat{\mathfrak{C}}_{x}\}\quad\text{is non-empty and bounded}. \tag{3.9}\] As we have proved above, \(G_{x}\neq\emptyset\) (because \(\widehat{M}\) is finite). Let us see that \(G_{x}\) is bounded. Suppose that the set \(G_{x}\) is unbounded. We recall two properties of subsets from \(\text{Conv}(\mathbf{R}^{2})\): \((\bigstar 1)\) Let \(K\in\text{Conv}(\mathbf{R}^{2})\). If \(K\) is unbounded then it contains a ray; \((\bigstar 2)\) Let \(K\in\text{Conv}(\mathbf{R}^{2})\), \(h\in\mathbf{S}_{1}\), and let \(L_{h}=\{th:t\geq 0\}\) be the ray emanating from the origin in the direction of \(h\). Let \(x,y\in K\). Then \(x+L_{h}\subset K\) if and only if \(y+L_{h}\subset K\). See, e.g., [18, Section 2.5, Lemmas 1, 2]. Fix a point \(\tilde{a}\in G_{x}\). The set \(G_{x}\) belongs to \(\text{Conv}(\mathbf{R}^{2})\) and is unbounded by assumption, so that, thanks to \((\bigstar 1)\), there exists a vector \(h\in\mathbf{S}_{1}\) such that the ray starting at \(\tilde{a}\) in the direction of \(h\) lies in \(G_{x}\). Thus, \(\tilde{a}+L_{h}\subset G_{x}\) where \(L_{h}=\{th:t\geq 0\}\).
Therefore, thanks to the definition of \(G_{x}\) (see (3.9)) and (3.8), for every \(y\in\widehat{M}\), \[\tilde{a}+L_{h}\subset F(y)+\tilde{\lambda}\rho(x,y)Q_{0}.\] Furthermore, combining this property with property (\(\bigstar\)2), we obtain the following: _for every \(b\in F(y)+\tilde{\lambda}\rho(x,y)Q_{0}\)_ we have \[b+L_{h}\subset F(y)+\tilde{\lambda}\rho(x,y)Q_{0}. \tag{3.10}\] In particular, this property holds for every \(b\in F(y)\). Let us prove that property (3.10) implies a stronger one: \[b+L_{h}\subset F(y)\quad\text{for every}\quad b\in F(y). \tag{3.11}\] Indeed, let \(p\in b+L_{h}\), i.e., there exists \(s\geq 0\) such that \(p=b+sh\). We know that \[b+th\in F(y)+\tilde{\lambda}\rho(x,y)Q_{0}\quad\text{for every}\quad t\geq s\] so that there exists a point \(b_{t}\in F(y)\) such that \(\|b_{t}-(b+th)\|\leq\tilde{\lambda}\rho(x,y)\). Furthermore, because \(F(y)\) is convex, the line segment \([b,b_{t}]\subset F(y)\) so that the point \[u_{t}=b+\frac{s}{t}(b_{t}-b)\in F(y).\] Hence, \[\|u_{t}-p\|=\|\frac{s}{t}(b_{t}-b)-sh\|=\|\frac{s}{t}(b_{t}-(b+th))\|\leq\frac{s}{t}(\tilde{\lambda}\rho(x,y))\] proving that \(u_{t}\to p\) as \(t\to\infty\). But \(u_{t}\in F(y)\) and the set \(F(y)\) is closed, so that \(p\in F(y)\) proving (3.11). In particular, from (3.11) it follows that \(b+L_{h}\subset F(y)+\alpha Q_{0}\) provided \(b\in F(y)\). Therefore, thanks to property (\(\bigstar\)2), for every \(y\in\widehat{M}\), \[b+L_{h}\subset F(y)+\alpha Q_{0}\quad\text{for all}\quad b\in F(y)+\alpha Q_{0}. \tag{3.12}\] We recall that there exists a point, say \(b_{0}\), common to all of the sets \(\{F(y)+\alpha Q_{0}:y\in\widehat{M}\}\). Therefore, thanks to (3.12), \[b_{0}+L_{h}\subset F(y)+\alpha Q_{0}\quad\text{for every}\quad y\in\widehat{M}\] proving that \(b_{0}+L_{h}\subset\widetilde{F}=\cap\{F(y)+\alpha Q_{0}:y\in\widehat{M}\}\). Thus, the set \(\widetilde{F}\) is unbounded, a contradiction. This contradiction proves that the set \(G_{x}\) defined by (3.9) is bounded. Thus, the family \(\mathfrak{C}_{x}\) satisfies the hypothesis of Proposition 2.3. This proposition tells us that for every \(x\in\mathcal{M}\) the set \(F^{[1]}[x:\tilde{\lambda}]=\cap\{C:C\in\mathfrak{C}_{x}\}\) is non-empty. Note that \(F^{[1]}[x:\tilde{\lambda}]\subset G_{x}\) so that \(F^{[1]}[x:\tilde{\lambda}]\) is _bounded_ for each \(x\in\mathcal{M}\) provided \(\mathcal{M}\) is infinite. Finally, formula (2.17) and definition (3.1) imply formula (3.5), and the proof of the lemma is complete. Given \(x\in\mathcal{M}\), we let \(\mathcal{T}_{F}(x)\) denote the rectangular hull of the set \(F^{[1]}[x:\tilde{\lambda}]\). Thus, \(\mathcal{T}_{F}:\mathcal{M}\to\mathfrak{R}(\mathbf{R}^{2})\) is a set-valued mapping defined by \[\mathcal{T}_{F}(x)=\mathcal{H}[F^{[1]}[x:\tilde{\lambda}]]=\mathcal{H}\left[\cap\left\{F(y)+\tilde{\lambda}\rho(x,y)\,Q_{0}:y\in\mathcal{M}\right\}\right],\quad\ x\in\mathcal{M}. \tag{3.13}\] Let us note that formula (3.5) provides the following representation of the mapping \(\mathcal{T}_{F}\): \[\mathcal{T}_{F}(x)=\cap\{\mathcal{W}_{F}[x,x^{\prime},x^{\prime\prime}:\tilde{\lambda}]:x^{\prime},x^{\prime\prime}\in\mathcal{M}\},\quad x\in\mathcal{M}. \tag{3.14}\] **Remark 3.4**: Let \(\mathfrak{T}=\{\mathcal{T}_{F}(x):x\in\mathcal{M}\}\). Let us note the following properties of this family: (i) Each member of the family \(\mathfrak{T}\) is a _non-empty rectangle_ in \(\mathbf{R}^{2}\).
Indeed, thanks to Lemma 3.3, \(F^{[1]}[x:\tilde{\lambda}]\neq\emptyset\) for every \(x\in{\cal M}\) so that, thanks to (3.13), \({\cal T}_{F}(x)={\cal H}[F^{[1]}[x:\tilde{\lambda}]]\neq\emptyset\) as well; (ii) Either the family \(\mathfrak{T}\) is _finite_ or every rectangle from \(\mathfrak{T}\) is a _compact set_. Indeed, if \({\cal M}\) is infinite, then, thanks to Lemma 3.3, the set \(F^{[1]}[x:\tilde{\lambda}]\) is compact. Therefore, its orthogonal projections onto the coordinate axes are compact, so that, thanks to (2.12) and (3.13), the rectangle \({\cal T}_{F}(x)\) is compact as well. The Second Step. At this step we prove the existence of a Lipschitz selection of the set-valued mapping \({\cal T}_{F}\). **Proposition 3.5**: _(i) The set-valued mapping \({\cal T}_{F}:{\cal M}\to\mathfrak{R}({\bf R}^{2})\) has a Lipschitz selection \(g:{\cal M}\to{\bf R}^{2}\) with Lipschitz seminorm \(\|g\|_{\rm Lip({\cal M})}\leq\lambda\);_ _(ii) We let \({\cal T}_{F}^{[1]}[\cdot:\lambda]\) denote a set-valued mapping on \({\cal M}\) defined by_ \[{\cal T}_{F}^{[1]}[x:\lambda]=\bigcap_{z\in{\cal M}}\left[{\cal T}_{F}(z)+\lambda\rho(x,z)\,Q_{0}\right],\ \ \ x\in{\cal M}. \tag{3.15}\] _Then_ \[{\cal T}_{F}^{[1]}[x:\lambda]\neq\emptyset\ \ \ \mbox{for every}\ \ \ x\in{\cal M}. \tag{3.16}\] _Furthermore, the following property_ \[{\rm d}_{\rm H}({\cal T}_{F}^{[1]}[x:\lambda],{\cal T}_{F}^{[1]}[y:\lambda])\leq\lambda\,\rho(x,y)\ \ \ \mbox{for every}\ \ \ \ x,y\in{\cal M} \tag{3.17}\] _holds. (Recall that \({\rm d}_{\rm H}\) denotes the Hausdorff distance between sets.)_ _(iii) If each rectangle \({\cal T}_{F}^{[1]}[x:\lambda]\), \(x\in{\cal M}\), is bounded, then the mapping_ \[g_{F}(x)={\rm center}({\cal T}_{F}^{[1]}[x:\lambda]),\ \ \ x\in{\cal M},\] _is a Lipschitz selection of \({\cal T}_{F}\) with \(\|g_{F}\|_{\rm Lip({\cal M})}\leq\lambda\)._ _Proof._ _(i)_ Remark 3.4 tells us that the mapping \({\cal T}={\cal T}_{F}\) satisfies the hypothesis of Proposition 2.10. Thanks to this proposition, the required Lipschitz selection \(g\) exists provided for every \(x,y\in{\cal M}\) the restriction \({\cal T}_{F}|_{\{x,y\}}\) of \({\cal T}_{F}\) to \(\{x,y\}\) has a Lipschitz selection \(g_{\{x,y\}}\) with Lipschitz seminorm \(\|g_{\{x,y\}}\|_{{\rm Lip}(\{x,y\})}\leq\lambda\). Clearly, this requirement is equivalent to the following property: \[{\cal T}_{F}(x)\cap\{{\cal T}_{F}(y)+\lambda\,\rho(x,y)Q_{0}\}\neq\emptyset\ \ \ \mbox{for every}\ \ \ x,y\in{\cal M}. \tag{3.18}\] Let \[\mathfrak{T}_{x}=\{{\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:\tilde{\lambda}]:x^{\prime},x^{\prime\prime}\in{\cal M}\}\ \ \ \mbox{and}\ \ \ \mathfrak{T}_{y}=\{{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:\tilde{\lambda}]:y^{\prime},y^{\prime\prime}\in{\cal M}\}.\] Thanks to (3.14), \[{\cal T}_{F}(x)=\cap\{W:W\in\mathfrak{T}_{x}\}\ \ \ \mbox{and}\ \ \ {\cal T}_{F}(y)=\cap\{W:W\in\mathfrak{T}_{y}\}. \tag{3.19}\] First, let us prove (3.18) provided \({\cal M}\) is _finite_. In this case, \(\mathfrak{T}_{x}\) and \(\mathfrak{T}_{y}\) are _finite_ families of rectangles. Furthermore, thanks to part (i) of Remark 3.4, each family has a non-empty intersection. Let \(r=\lambda\rho(x,y)\). Then, thanks to (3.19) and Lemma 2.4, \[\mathcal{T}_{F}(y)+\lambda\,\rho(x,y)Q_{0}=\cap\{W:W\in\mathfrak{T}_{y}\}+rQ_{0}=\cap\{W+rQ_{0}:W\in\mathfrak{T}_{y}\}\] so that \[\mathcal{T}_{F}(x)\cap\{\mathcal{T}_{F}(y)+\lambda\,\rho(x,y)Q_{0}\}=[\cap\{W:W\in\mathfrak{T}_{x}\}]\cap[\cap\{W+rQ_{0}:W\in\mathfrak{T}_{y}\}].
\tag{3.20}\] Let \(\widetilde{\mathfrak{T}}=\mathfrak{T}_{x}\cup\mathfrak{T}_{y}^{+}\) where \(\mathfrak{T}_{y}^{+}=\{W+rQ_{0}:W\in\mathfrak{T}_{y}\}\). Thanks to (3.20), property (3.18) holds provided the family of rectangles \(\widetilde{\mathfrak{T}}\) has a common point. Because \(\mathfrak{T}_{x}\) and \(\mathfrak{T}_{y}\) are _finite_ families, the family \(\widetilde{\mathfrak{T}}\) is finite as well. Therefore, thanks to Helly's intersection theorem for rectangles, see Lemma 2.2, there exists a point common to all of the family \(\widetilde{\mathfrak{T}}\) provided \(W^{\prime}\cap W^{\prime\prime}\neq\emptyset\) for every \(W^{\prime},W^{\prime\prime}\in\widetilde{\mathfrak{T}}\). Clearly, \(W^{\prime}\cap W^{\prime\prime}\neq\emptyset\) if \(W^{\prime},W^{\prime\prime}\in\mathfrak{T}_{x}\) or \(W^{\prime},W^{\prime\prime}\in\mathfrak{T}_{y}^{+}\) because both \(\mathfrak{T}_{x}\) and \(\mathfrak{T}_{y}^{+}\) have a non-empty intersection. Let \(W^{\prime}=\mathcal{W}_{F}[x,x^{\prime},x^{\prime\prime}:\tilde{\lambda}]\), \(x^{\prime},x^{\prime\prime}\in\mathcal{M}\), and let \(W^{\prime\prime}=\mathcal{W}_{F}[y,y^{\prime},y^{\prime\prime}:\tilde{\lambda}]+rQ_{0}\), \(y^{\prime},y^{\prime\prime}\in\mathcal{M}\), be two arbitrary members of \(\mathfrak{T}_{x}\) and \(\mathfrak{T}_{y}^{+}\) respectively. Then, thanks to assumption (3.3) of Theorem 3.2, \(W^{\prime}\cap W^{\prime\prime}\neq\emptyset\). Thus, the hypothesis of Lemma 2.2 holds for \(\widetilde{\mathfrak{T}}\) so that this family has a common point. This proves that (3.18) holds provided \(\mathcal{M}\) is finite. Now, let \(\mathcal{M}\) be an infinite set. Remark 3.4 tells us that in this case the rectangles \(\mathcal{T}_{F}(x)\) and \(\mathcal{T}_{F}(y)\) are _compact sets_. Therefore, property (3.18) is equivalent to the following inequality: \[\operatorname{dist}(\mathcal{T}_{F}(x),\mathcal{T}_{F}(y))\leq\lambda\,\rho(x,y). \tag{3.21}\] We recall that, thanks to (3.19), \(\mathfrak{T}_{x}\) and \(\mathfrak{T}_{y}\) are two families of rectangles with non-empty intersections (equal to \(\mathcal{T}_{F}(x)\) and \(\mathcal{T}_{F}(y)\) respectively). Lemma 2.5 tells us that in this case \[\operatorname{dist}(\mathcal{T}_{F}(x),\mathcal{T}_{F}(y))=\operatorname{dist}\left(\bigcap_{W\in\mathfrak{T}_{x}}W,\bigcap_{W\in\mathfrak{T}_{y}}W\right)=\sup\{\operatorname{dist}(W^{\prime},W^{\prime\prime}):W^{\prime}\in\mathfrak{T}_{x}\,,W^{\prime\prime}\in\mathfrak{T}_{y}\}. \tag{3.22}\] But, thanks to assumption (3.3), for every rectangle \(W^{\prime}=\mathcal{W}_{F}[x,x^{\prime},x^{\prime\prime}:\tilde{\lambda}]\in\mathfrak{T}_{x}\) and every rectangle \(W^{\prime\prime}=\mathcal{W}_{F}[y,y^{\prime},y^{\prime\prime}:\tilde{\lambda}]\in\mathfrak{T}_{y}\) we have \(\operatorname{dist}(W^{\prime},W^{\prime\prime})\leq\lambda\,\rho(x,y)\). This inequality and (3.22) imply (3.21) proving the required property (3.18) and part _(i)_ of the proposition. Let us prove parts _(ii)_ and _(iii)_. We note that, thanks to property (3.18), the mapping \(\mathcal{T}=\mathcal{T}_{F}\) satisfies the conditions of the hypothesis of part (b) of Proposition 2.7 with \(\eta=\lambda\). This proposition tells us that property (3.16) and inequality (3.17) hold proving part _(ii)_. Furthermore, part (b) of Proposition 2.7 proves part _(iii)_. The proof of Proposition 3.5 is complete.
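The first equality in (3.20) is Lemma 2.4 at work. As a quick sanity check, the following Python fragment (an illustrative aside; the interval data are ours) confirms the one-dimensional case of identity (2.25) and shows that the non-empty-intersection hypothesis of Lemma 2.4 cannot be dropped.

```python
def inflate(iv, r):   # the interval I + r*I0 with I0 = [-1, 1]
    return (iv[0] - r, iv[1] + r)

def cap(ivals):       # intersection of closed intervals; None if empty
    lo, hi = max(i[0] for i in ivals), min(i[1] for i in ivals)
    return (lo, hi) if lo <= hi else None

K = [(0.0, 4.0), (1.0, 6.0), (2.0, 9.0)]       # a family with a common point
r = 0.5
lhs = inflate(cap(K), r)                       # (cap of K) + r*I0
rhs = cap([inflate(k, r) for k in K])          # cap of the inflated family
assert lhs == rhs == (1.5, 4.5)                # identity (2.25) holds

# Without a common point the identity fails: the inflated family below
# has a non-empty intersection although the family itself does not.
Kp = [(0.0, 1.0), (2.0, 3.0)]
assert cap(Kp) is None
assert cap([inflate(k, 1.0) for k in Kp]) == (1.0, 2.0)
```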
The Third Step. At this step we construct a Lipschitz selection \(f\) of the set-valued mapping \(F\) with Lipschitz seminorm at most \(2\lambda+\tilde{\lambda}\). First, we recall that the set-valued mapping \(F^{[1]}[\cdot:\tilde{\lambda}]\) and its rectangular hull, the set-valued mapping \(\mathcal{T}_{F}=\mathcal{H}[F^{[1]}[\cdot:\tilde{\lambda}]]\), are defined by formulae (3.4) and (3.13) respectively. Part _(i)_ of Proposition 3.5 tells us that \(\mathcal{T}_{F}\) has a Lipschitz selection with Lipschitz seminorm at most \(\lambda\). As we have noted above, we cannot guarantee that the rectangle \(\mathcal{T}_{F}(x)\) is a _closed set_ for all \(x\in\mathcal{M}\). Sometimes this leads to a certain complication of our constructive algorithm for a nearly optimal Lipschitz selection of \(F\). To avoid these technical difficulties, we will work with the rectangles \(\mathcal{T}_{F}^{\rm cl}(x)=\mathcal{T}_{F}(x)^{\rm cl}\) (i.e., with _the closures_ of the rectangles \(\mathcal{T}_{F}(x)\)) rather than with the rectangles \(\mathcal{T}_{F}(x)\) themselves. Proposition 3.5 tells us that the set-valued mapping \(\mathcal{T}_{F}:\mathcal{M}\to\mathfrak{R}(\mathbf{R}^{2})\) has a Lipschitz selection with Lipschitz seminorm at most \(\lambda\). Clearly, \(\mathcal{T}_{F}(x)\subset\mathcal{T}_{F}^{\rm cl}(x)\) so that the set-valued mapping \(\mathcal{T}_{F}^{\rm cl}\) also has a Lipschitz selection with Lipschitz seminorm at most \(\lambda\). In other words, there exists a mapping \(g:\mathcal{M}\to\mathbf{R}^{2}\) such that \[g(x)\in\mathcal{T}_{F}^{\rm cl}(x)=\mathcal{H}[F^{[1]}[x:\tilde{\lambda}]]^{\rm cl}\quad\mbox{ for every }\quad x\in\mathcal{M}, \tag{3.23}\] and \[\|g(x)-g(y)\|\leq\lambda\,\rho(x,y)\quad\mbox{ for all }\quad x,y\in\mathcal{M}. \tag{3.24}\] **Proposition 3.6**: _Let \(g:\mathcal{M}\to\mathbf{R}^{2}\) be an arbitrary Lipschitz selection of the set-valued mapping \(\mathcal{T}_{F}^{\rm cl}:\mathcal{M}\to\mathfrak{R}(\mathbf{R}^{2})\) with Lipschitz seminorm at most \(\lambda\), i.e., a mapping satisfying conditions (3.23) and (3.24)._ _We define a mapping \(f:\mathcal{M}\to\mathbf{R}^{2}\) by letting_ \[f(x)={\bf Pr}\left(g(x),F^{[1]}[x:\tilde{\lambda}]\right),\quad\ x\in\mathcal{M}. \tag{3.25}\] _(Recall that \({\bf Pr}(\cdot,S)\) is the operator of metric projection onto a convex closed set \(S\subset\mathbf{R}^{2}\). See (2.6).)_ _Then the following properties hold:_ _(\(\bigstar\)1) The mapping \(f\) is well defined, i.e., \(f(x)\) is a singleton for every \(x\in\mathcal{M}\). In this case_ \[f(x)={\bf Pr}\left(g(x),F^{[1]}[x:\tilde{\lambda}]\right)\in F^{[1]}[x:\tilde{\lambda}]\subset F(x)\quad\mbox{for every }\quad x\in\mathcal{M},\] _so that \(f\) is a selection of \(F\) on \(\mathcal{M}\);_ _(\(\bigstar\)2) The mapping \(f:\mathcal{M}\to\mathbf{R}^{2}\) is Lipschitz with Lipschitz seminorm \(\|f\|_{\rm Lip({\cal M})}\leq 2\lambda+\tilde{\lambda}\)._ The proof of this proposition is based on a number of auxiliary results. The first of these is the following lemma. **Lemma 3.7**: _Let \(S\subset\mathbf{R}^{2}\) be a non-empty convex closed set. Then for every point \(a\in\mathcal{H}[S]^{\rm cl}\) the metric projection \({\bf Pr}(a,S)\) is a singleton. Furthermore, \({\bf Pr}(a,S)\) coincides with a vertex of the square \(Q(a,{\rm dist}(a,S))\)._ _Proof._ Our proof is a slight modification of the proof of this lemma for the special case \(a\in\mathcal{H}[S]\) given in [29, p. 301]. See also [32, p. 67]. Clearly, if \(a\in S\), there is nothing to prove. Suppose \(a\notin S\) so that \(r={\rm dist}(a,S)>0\). Because \(S\) is closed, \({\bf Pr}(a;S)\neq\emptyset\).
Furthermore, \({\bf Pr}(a;S)=S\cap Q=S\cap\partial Q\) where \(Q=Q(a,r)\). Because \({\bf Pr}(a;S)\) is _a non-empty convex set_ lying on the boundary of \(Q\), it belongs to a certain side of the square \(Q\). In other words, there exist two distinct vertices of \(Q\), say \(A\) and \(B\), such that \({\bf Pr}(a;S)\subset[A,B]\). Let us prove that \[\mbox{either }\quad{\bf Pr}(a;S)=\{A\}\quad\mbox{or}\quad{\bf Pr}(a;S)=\{B\}. \tag{3.26}\] Indeed, otherwise there exists a point \(p\in(A,B)\cap{\bf Pr}(a;S)\). Let \(\ell\) be the straight line passing through \(A\) and \(B\). Clearly, \(\ell\) is parallel to a coordinate axis. Let \(H_{1},H_{2}\) be the closed half-planes determined by \(\ell\). (Thus \(\ell=H_{1}\cap H_{2}\).) Clearly, \(Q\) is contained in one of these half-planes, say in \(H_{1}\). Then \(a\in H_{1}^{int}\) where \(H_{1}^{int}\) denotes the interior of \(H_{1}\) (because \({\rm dist}(a,\ell)=r>0\)). Let us prove that in this case \(S\subset H_{2}\), i.e., the straight line \(\ell\) separates (not strictly) the square \(Q\) and the set \(S\). Indeed, suppose that there exists a point \(b\in S\,\cap\,H_{1}^{int}\). Then also \((p,b]\subset H_{1}^{int}\) because \(p\in\partial H_{1}=\ell\). But \(p\in(A,B)\) so that \((p,b]\cap\,Q^{int}\neq\emptyset\). On the other hand, because \(S\) is convex and \(p\in\partial S\), the interval \((p,b]\subset S\) proving that \(S\,\cap\,Q^{int}\neq\emptyset\). But \(S\,\cap\,Q\subset\partial Q\), a contradiction. Thus, \(S\subset H_{2}\) and \(Q\subset H_{1}\). But \(a\in H_{1}^{int}\) so that \(a\notin H_{2}\). But the half-plane \(H_{2}\in\mathfrak{R}({\bf R}^{2})\), because its boundary, the straight line \(\ell\), is parallel to one of the coordinate axes. In other words, \(H_{2}\) is an (unbounded) closed rectangle. Therefore \({\cal H}[S]\subset H_{2}\), see definition (2.11). Because \(H_{2}\) is closed, we have \({\cal H}[S]^{\rm cl}\subset H_{2}\) so that, thanks to the lemma's hypothesis, \(a\in{\cal H}[S]^{\rm cl}\subset H_{2}\), a contradiction. This contradiction implies (3.26) completing the proof of the lemma. Clearly, this lemma implies the statement \((\bigstar 1)\) of Proposition 3.6. Let us prove the statement \((\bigstar 2)\) which is equivalent to the inequality \[\|f(x)-f(y)\|\leq(2\lambda+\tilde{\lambda})\,\rho(x,y),\ \ \ \ x,y\in{\cal M}. \tag{3.27}\] The proof of this inequality relies on a number of auxiliary results which we present in the next section. **4. Proof of the key theorem: the final step.** **Lemma 4.1**: _Let \(A,B\subset{\bf R}^{2}\) be non-empty convex closed sets such that \(A\subset B\), and let \(a\in{\cal H}[A]^{\rm cl}\)._ _Then \({\bf Pr}(a,A)\) and \({\bf Pr}(a,B)\) are singletons having the following properties:_ _(i) \({\bf Pr}(a,B)\in[{\bf Pr}(a,A),a]\);_ _(ii) the following equality_ \[\|\,{\bf Pr}(a,A)-{\bf Pr}(a,B)\|={\rm dist}(a,A)-{\rm dist}(a,B)\] _holds._ _Proof._ For the proof of the special case of the lemma for \(a\in{\cal H}[A]\) we refer the reader to [29, p. 301]. See also [32, p. 67]. First, we note that if \(a\in B\), the statement of the lemma is immediate from Lemma 3.7. Suppose that \(a\notin B\). In this case, Lemma 3.7 tells us that \({\bf Pr}(a;A)\) is one of the vertices of the square \(Q(a,r)\) with \(r={\rm dist}(a,A)>0\). Because \(A\subset B\), the point \(a\in{\cal H}[B]^{\rm cl}\) so that, thanks to Lemma 3.7, \({\bf Pr}(a;B)\) is a vertex of the square \(Q(a,\alpha)\) where \(\alpha={\rm dist}(a,B)>0\).
Using a suitable shift and dilation, without loss of generality, we can assume that \[a=(0,0),\ \ r={\rm dist}(a,A)=1,\ \ \ {\rm and}\ \ \ {\bf Pr}(a;A)=(1,1).\] Clearly, in this case \(0<\alpha\leq 1\). Furthermore, in these settings the statement of the lemma is equivalent to the property \[{\bf Pr}(a;B)=(\alpha,\alpha). \tag{4.1}\] Suppose that this property does not hold, i.e., \({\bf Pr}(a;B)\in\{(\alpha,-\alpha),(-\alpha,\alpha),(-\alpha,-\alpha)\}\). In order to get a contradiction, we construct a straight line \(\ell_{A}\) which passes through \((1,1)\) and separates (not strictly) the square \(Q(a,r)=[-1,1]^{2}\) and \(A\). This line determines two closed half-planes, \(S^{\,+}_{A}\) and \(S^{\,-}_{A}\), with the common boundary (i.e., the line \(\ell_{A}\)) such that \({\bf R}^{2}=S^{\,+}_{A}\cup S^{\,-}_{A}\). One of them, say \(S^{+}_{A}\), contains \(A\), while the other one, \(S^{-}_{A}\), contains the square \(Q(a,r)\). We know that \(S^{+}_{A}\) contains \((1,1)\) and does not contain interior points of the square \([-1,1]^{2}\), so that \(Q(a,r)\cap\ell_{A}=\{(1,1)\}\). Therefore, the half-plane \(S^{+}_{A}\) admits the following representation: \[S^{+}_{A}=\{x=(x_{1},x_{2})\in{\bf R}^{2}:(x_{1}-1)\,h_{1}+(x_{2}-1)\,h_{2}\geq 0\} \tag{4.2}\] where \(h_{1},h_{2}>0\) are certain numbers. Let us assume that \({\bf Pr}(a;B)=(-\alpha,\alpha)\) and show that this assumption leads to a contradiction. We let \(\ell_{B}\) denote a straight line which passes through the point \((-\alpha,\alpha)\) and separates the square \(Q(a,{\rm dist}(a,B))=[-\alpha,\alpha]^{2}\) and the set \(B\). Let \(S^{+}_{B}\) be that of the two half-planes determined by \(\ell_{B}\) which contains \(B\). Then the other half-plane, \(S^{-}_{B}\), contains \(Q(a,{\rm dist}(a,B))\), and \(S^{+}_{B}\cap S^{-}_{B}=\ell_{B}\). We know that \(S^{+}_{B}\) contains the point \((-\alpha,\alpha)\) on its boundary and does not contain interior points of the square \([-\alpha,\alpha]^{2}\). Therefore, this half-plane can be represented in the form \[S^{+}_{B}=\{(x_{1},x_{2})\in{\bf R}^{2}:-(x_{1}+\alpha)\,s_{1}+(x_{2}-\alpha)\,s_{2}\geq 0\} \tag{4.3}\] with certain \(s_{1},s_{2}>0\). Thus, \(A\subset S^{+}_{A}\) and \(A\subset B\subset S^{+}_{B}\), so that \(A\subset S^{+}_{A}\cap S^{+}_{B}\) proving that for every \(x=(x_{1},x_{2})\in A\) we have \[(x_{1}-1)\,h_{1}+(x_{2}-1)\,h_{2}\geq 0\ \ \ {\rm and}\ \ \ -(x_{1}+\alpha)\,s_{1}+(x_{2}-\alpha)\,s_{2}\geq 0. \tag{4.4}\] See (4.2) and (4.3). Note also that since \(S^{+}_{A}\cap S^{+}_{B}\supset A\neq\emptyset\), we have \(h_{2}+s_{2}>0\). Let us prove that inequalities (4.4) imply the following inclusion: \[A\subset{\cal H}_{\alpha}=\{x=(x_{1},x_{2})\in{\bf R}^{2}:x_{2}\geq\alpha\}. \tag{4.5}\] Indeed, it is easy to see that from (4.4) we have \[x_{2}-\alpha\geq\frac{s_{1}((1+\alpha)h_{1}+(1-\alpha)h_{2})}{s_{1}h_{2}+s_{2}h_{1}}\geq 0,\ \ \ \ \ x=(x_{1},x_{2})\in A,\] proving (4.5). Let us note that the half-plane \({\cal H}_{\alpha}\) is a rectangle, i.e., \({\cal H}_{\alpha}\in\mathfrak{R}({\bf R}^{2})\). Therefore, the rectangular hull \({\cal H}[A]\subset{\cal H}_{\alpha}\). Furthermore, because \({\cal H}_{\alpha}\) is closed, we have \({\cal H}[A]^{\rm cl}\subset{\cal H}_{\alpha}\) so that, thanks to the lemma's hypothesis, \(a=(0,0)\in{\cal H}_{\alpha}\). But \(\alpha>0\) so that \(a=(0,0)\notin{\cal H}_{\alpha}\), a contradiction. In a similar way we get a contradiction provided \({\bf Pr}(a;B)=(\alpha,-\alpha)\) or \({\bf Pr}(a;B)=(-\alpha,-\alpha)\), proving the required property (4.1) and the lemma.
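For half-planes, the objects of Lemma 3.7 and Lemma 4.1 admit explicit formulae: if \(H=\{u:\langle{\bf h},u\rangle\leq\alpha\}\) with \(h_{1},h_{2}\neq 0\) and \(a\notin H\), then \({\rm dist}(a,H)=(\langle{\bf h},a\rangle-\alpha)/(|h_{1}|+|h_{2}|)\) (the dual of the uniform norm is the \(\ell^{1}\) norm), and \({\bf Pr}(a,H)\) is the vertex \(a-{\rm dist}(a,H)\,({\rm sign}\,h_{1},{\rm sign}\,h_{2})\) of the square \(Q(a,{\rm dist}(a,H))\). The Python sketch below (an illustrative aside; the numerical data are ours) checks the distance formula against a brute-force search along the boundary line and verifies properties (i) and (ii) of Lemma 4.1 for two nested half-planes.

```python
import math

def dist_to_halfplane(a, h, alpha):
    """Uniform-norm distance from a to H = {u : <h,u> <= alpha}.
    The dual of the uniform norm is the l1 norm, whence the formula."""
    return max(0.0, (h[0]*a[0] + h[1]*a[1] - alpha) / (abs(h[0]) + abs(h[1])))

def project(a, h, alpha):
    """Metric projection of a onto H; for h with non-zero coordinates it
    is the vertex a - t*(sign h1, sign h2) of Q(a, t), cf. Lemma 3.7."""
    t = dist_to_halfplane(a, h, alpha)
    return (a[0] - t * math.copysign(1.0, h[0]),
            a[1] - t * math.copysign(1.0, h[1]))

a, h = (0.0, 0.0), (-3.0, -4.0)
alpha_A, alpha_B = -14.0, -7.0   # A = {<h,u> <= -14} is a subset of B
tA, tB = dist_to_halfplane(a, h, alpha_A), dist_to_halfplane(a, h, alpha_B)
pA, pB = project(a, h, alpha_A), project(a, h, alpha_B)

# Brute-force check of dist(a, A) along the boundary line <h,u> = alpha_A.
best = min(max(abs(a[0] - x), abs(a[1] - (alpha_A - h[0]*x) / h[1]))
           for x in (i / 100.0 for i in range(-1500, 1501)))
assert abs(best - tA) < 1e-2                      # here tA = 2, tB = 1
# Lemma 4.1: Pr(a, B) lies on the segment [Pr(a, A), a] and
# ||Pr(a, A) - Pr(a, B)|| = dist(a, A) - dist(a, B).
assert pA == (2.0, 2.0) and pB == (1.0, 1.0)
assert max(abs(pA[0] - pB[0]), abs(pA[1] - pB[1])) == tA - tB
```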
**Lemma 4.2**: _(i) Let \(u\in{\cal M}\), and let \(a\in{\cal H}[F^{[1]}[u:\tilde{\lambda}]]^{\rm cl}\). Then_ \[{\rm dist}(a,F^{[1]}[u:\tilde{\lambda}])=\sup_{z\in{\cal M}}{\rm dist}(a,F(z)+\tilde{\lambda}\,\rho(u,z)Q_{0})=\sup_{z\in{\cal M}}{[{\rm dist}(a,F(z))-\tilde{\lambda}\,\rho(u,z)]_{+}}\ ; \tag{4.6}\] _(ii) Let \(u,v\in{\cal M}\), and let \(a\in{\cal H}[F^{[1]}[u:\tilde{\lambda}]]^{\rm cl}\) and \(b\in{\cal H}[F^{[1]}[v:\tilde{\lambda}]]^{\rm cl}\). Then_ \[|\,{\rm dist}(a,F^{[1]}[u:\tilde{\lambda}])-{\rm dist}(b,F^{[1]}[v:\tilde{\lambda}])|\leq\|a-b\|+\tilde{\lambda}\,\rho(u,v). \tag{4.7}\] _Proof. (i)_ Let \(A=F^{[1]}[u:\tilde{\lambda}]\) and, given \(z\in{\cal M}\), let \(A_{z}=F(z)+\tilde{\lambda}\,\rho(u,z)Q_{0}\). Then, thanks to (3.4), \(A=\cap\{A_{z}:z\in{\cal M}\}\). Our goal is to prove that \[{\rm dist}(a,A)=\sup\{{\rm dist}(a,A_{z}):z\in{\cal M}\}\ \ {\rm provided}\ \ \ a\in{\cal H}[A]^{\rm cl}. \tag{4.8}\] Lemma 3.3 tells us that \(A\) is a non-empty convex and closed subset of \({\bf R}^{2}\). Because \(A\subset A_{z}\) for each \(z\in{\cal M}\), the left hand side of the above equality majorizes its right hand side. Let us prove the converse inequality. If \(a\in A\), there is nothing to prove. Let \(a\notin A\), and let \(\varepsilon\in(0,{\rm dist}(a,A))\) be arbitrary. We know that \(a\in{\cal H}[A]^{\rm cl}\) so that, thanks to Lemma 4.1, \({\bf Pr}(a,A)\) is a singleton. We let \(a_{\varepsilon}\) denote a point on the interval \(({\bf Pr}(a,A),a]\) such that \(\|a_{\varepsilon}-{\bf Pr}(a,A)\|<\varepsilon\). Because \(a_{\varepsilon}\notin A\) and \(A=\cap\{A_{z}:z\in{\cal M}\}\), there exists an element \(\bar{z}\in{\cal M}\) such that \(a_{\varepsilon}\notin A_{\bar{z}}\). Note that \(A\subset A_{\bar{z}}\). Lemma 4.1 tells us that in this case \({\bf Pr}(a,A_{\bar{z}})\) is a singleton such that \({\bf Pr}(a,A_{\bar{z}})\in[{\bf Pr}(a,A),a]\). Then \({\bf Pr}(a,A_{\bar{z}})\in[{\bf Pr}(a,A),a_{\varepsilon}]\); otherwise \(a_{\varepsilon}\in[{\bf Pr}(a,A),{\bf Pr}(a,A_{\bar{z}})]\subset A_{\bar{z}}\), a contradiction. This proves that \(\|{\bf Pr}(a,A)-{\bf Pr}(a,A_{\bar{z}})\|<\varepsilon\). Hence, \[{\rm dist}(a,A)=\|a-{\bf Pr}(a,A)\|\leq\|a-{\bf Pr}(a,A_{\bar{z}})\|+\|{\bf Pr}(a,A_{\bar{z}})-{\bf Pr}(a,A)\|\leq{\rm dist}(a,A_{\bar{z}})+\|a_{\varepsilon}-{\bf Pr}(a,A)\|\leq{\rm dist}(a,A_{\bar{z}})+\varepsilon.\] Since \(\varepsilon>0\) can be chosen as small as desired, this implies the required inequality (4.8) proving part _(i)_ of the lemma. _(ii)_ Let \(A=F^{[1]}[u:\tilde{\lambda}]\) and \(B=F^{[1]}[v:\tilde{\lambda}]\). Then, thanks to (4.6), \[|\,{\rm dist}(a,A)-{\rm dist}(a,B)|=|\sup_{z\in{\cal M}}\,[{\rm dist}(a,F(z))-\tilde{\lambda}\rho(u,z)]_{+}-\sup_{z\in{\cal M}}\,[{\rm dist}(a,F(z))-\tilde{\lambda}\rho(v,z)]_{+}|\leq\sup_{z\in{\cal M}}\,|\,[{\rm dist}(a,F(z))-\tilde{\lambda}\rho(u,z)]_{+}-[{\rm dist}(a,F(z))-\tilde{\lambda}\rho(v,z)]_{+}|\leq\tilde{\lambda}\,\sup_{z\in{\cal M}}\,|\rho(u,z)-\rho(v,z)|\] so that, thanks to the triangle inequality, \[|\,{\rm dist}(a,A)-{\rm dist}(a,B)|\leq\tilde{\lambda}\,\rho(u,v). \tag{4.9}\] Next, \[|\,{\rm dist}(a,A)-{\rm dist}(b,B)|\leq|\,{\rm dist}(a,A)-{\rm dist}(a,B)|+|\,{\rm dist}(a,B)-{\rm dist}(b,B)|.\] Because \({\rm dist}(\cdot,B)\) is a Lipschitz function, from this and (4.9), we have (4.7) completing the proof of the lemma. Let \(\delta\geq 0\), and let \[H_{1}\ \ {\rm and}\ \ H_{2}\ \ {\rm be\ two\ half-planes\ with}\ \ \ {\rm dist}(H_{1},H_{2})\leq\delta.
Let \(\delta\geq 0\), and let \[H_{1}\ \ {\rm and}\ \ H_{2}\ \ {\rm be\ two\ half-planes\ with}\ \ \ {\rm dist}(H_{1},H_{2})\leq\delta. \tag{4.10}\] Let \(\ell_{i}=\partial H_{i}\) be the boundary of the half-plane \(H_{i}\), \(i=1,2\). Let us represent the half-planes \(H_{i}\), \(i=1,2\), in the form \[H_{i}=\{u\in{\bf R}^{2}:\langle{\bf h}_{i},u\rangle\leq\alpha_{i}\}\ \ \ {\rm where}\ \ \ {\bf h}_{i}\in{\bf S}_{1}\ \ \ {\rm and}\ \ \ \alpha_{i}\in{\bf R}. \tag{4.11}\] Thus the vector \[{\bf h}_{i}\ \ {\rm is\ directed\ outside\ of}\ \ H_{i}\ \ {\rm and}\ \ \ {\bf h}_{i}\perp\ell_{i},\ \ \ i=1,2. \tag{4.12}\]

**Proposition 4.3**: _Let \(a_{1}\) and \(a_{2}\) be two points in \({\bf R}^{2}\) such that_ \[a_{1}\in{\cal H}[H_{1}\cap(H_{2}+\delta Q_{0})]\ \ {\rm and}\ \ \ a_{2}\in{\cal H}[H_{2}\cap(H_{1}+\delta Q_{0})]. \tag{4.13}\] _Suppose that_ \[{\bf Pr}(a_{1},H_{1})\in H_{2}+\delta Q_{0}\ \ \ \mbox{and}\ \ \ {\bf Pr}(a_{2},H_{2})\in H_{1}+\delta Q_{0}. \tag{4.14}\] _Then the following inequality_ \[\|\,{\bf Pr}(a_{1},H_{1})-{\bf Pr}(a_{2},H_{2})\|\leq 2\|a_{1}-a_{2}\|+\delta \tag{4.15}\] _holds._

_Proof._ We will need a number of auxiliary lemmas. Let us formulate the first of them. Let \[S_{1}=H_{1}\cap(H_{2}+\delta Q_{0})\ \ \ {\rm and}\ \ \ S_{2}=H_{2}\cap(H_{1}+\delta Q_{0}). \tag{4.16}\] We know that \({\rm dist}(H_{1},H_{2})\leq\delta\) so that \(S_{1}\neq\emptyset\) and \(S_{2}\neq\emptyset\). Furthermore, thanks to (4.13) and (4.16), \[a_{1}\in{\cal H}[S_{1}]\ \ \ {\rm and}\ \ \ a_{2}\in{\cal H}[S_{2}]. \tag{4.17}\]

**Lemma 4.4**: _Both \({\bf Pr}(a_{1},H_{1})\) and \({\bf Pr}(a_{2},H_{2})\) are singletons. Furthermore, \({\bf Pr}(a_{i},H_{i})={\bf Pr}(a_{i},S_{i})\) for every \(i=1,2\), and the following inequality_ \[|\,{\rm dist}(a_{1},H_{1})-{\rm dist}(a_{2},H_{2})|\leq\delta+\|a_{1}-a_{2}\|\] _holds._

_Proof._ Thanks to (4.17), the point \(a_{i}\in{\cal H}[S_{i}]\), so that \(a_{i}\in{\cal H}[H_{i}]\) because \(S_{i}\subset H_{i}\), \(i=1,2\); see (4.16). Therefore, thanks to Lemma 3.7, \({\bf Pr}(a_{i},H_{i})\) is a singleton for every \(i=1,2\). Furthermore, thanks to (4.14), \({\bf Pr}(a_{i},H_{i})\in S_{i}\). But \(S_{i}\subset H_{i}\) so that \({\bf Pr}(a_{i},H_{i})={\bf Pr}(a_{i},S_{i})\), \(i=1,2\). In particular, \({\rm dist}(a_{i},H_{i})={\rm dist}(a_{i},S_{i})\), \(i=1,2\).

Clearly, \[{\rm d}_{\rm H}(S_{1},S_{2})={\rm d}_{\rm H}(H_{1}\cap[H_{2}+\delta Q_{0}],H_{2}\cap[H_{1}+\delta Q_{0}])\leq\delta.\] See (2.1). Therefore, \[|\,{\rm dist}(a_{1},H_{1})-{\rm dist}(a_{1},H_{2})|=|\,{\rm dist}(a_{1},S_{1})-{\rm dist}(a_{1},S_{2})|\leq{\rm d}_{\rm H}(S_{1},S_{2})\leq\delta.\] Note also that the function \({\rm dist}(\cdot,H_{2})\) is Lipschitz. Hence, \[|\,{\rm dist}(a_{1},H_{1})-{\rm dist}(a_{2},H_{2})|\leq|\,{\rm dist}(a_{1},H_{1})-{\rm dist}(a_{1},H_{2})|+|\,{\rm dist}(a_{1},H_{2})-{\rm dist}(a_{2},H_{2})|\leq\delta+\|a_{1}-a_{2}\|\] proving the lemma.

**Lemma 4.5**: _Inequality (4.15) holds provided either \(a_{1}\in H_{1}\) or \(a_{2}\in H_{2}\)._

_Proof._ For example, suppose that \(a_{2}\in H_{2}\). Then \({\bf Pr}(a_{2},H_{2})=a_{2}\) and \({\rm dist}(a_{2},H_{2})=0\). Therefore, thanks to Lemma 4.4, \({\rm dist}(a_{1},H_{1})\leq\delta+\|a_{1}-a_{2}\|\). Hence, \[\|\,{\bf Pr}(a_{1},H_{1})-{\bf Pr}(a_{2},H_{2})\|=\|\,{\bf Pr}(a_{1},H_{1})-a_{2}\|\leq\|\,{\bf Pr}(a_{1},H_{1})-a_{1}\|+\|a_{1}-a_{2}\|={\rm dist}(a_{1},H_{1})+\|a_{1}-a_{2}\|\leq\delta+2\|a_{1}-a_{2}\|\] proving the lemma.

Everywhere below, in the proof of inequality (4.15), we will assume that \[a_{1}\notin H_{1}\ \ \ \mbox{and}\ \ \ a_{2}\notin H_{2}.
\tag{4.18}\]

Recall that \(\ell_{i}\) is the boundary of the half-plane \(H_{i}\), \(i=1,2\). Let us see that the assumption \(a_{i}\notin H_{i}\), \(i=1,2\), implies the following property: \[\ell_{i}\not\parallel Ox_{j}\ \ \ \mbox{for every}\ \ i,j=1,2. \tag{4.19}\] Indeed, suppose that this statement is not true, say for \(i=1\), i.e., either \(\ell_{1}\parallel Ox_{1}\) or \(\ell_{1}\parallel Ox_{2}\). Then \({\cal H}[H_{1}]=H_{1}\). But \[a_{1}\in{\cal H}[H_{1}\cap(H_{2}+\delta Q_{0})]\subset{\cal H}[H_{1}]\] so that \(a_{1}\in H_{1}\), which contradicts our assumption that \(a_{1}\notin H_{1}\).

**Remark 4.6**: Our next result, Lemma 4.7, deals with points \(a_{i}\) and half-planes \(H_{i}\), \(i=1,2\), such that the vectors \[{\bf Pr}(a_{1},H_{1})-a_{1}\ \ \ \mbox{and}\ \ \ {\bf Pr}(a_{2},H_{2})-a_{2}\ \ \ \mbox{are co-directed}. \tag{4.20}\] Recall that this property means that \[{\bf Pr}(a_{2},H_{2})-a_{2}=\beta\,({\bf Pr}(a_{1},H_{1})-a_{1})\ \ \ \mbox{for some}\ \ \ \beta>0.\] We also recall the representation of the half-planes \(H_{1}\) and \(H_{2}\) in the form (4.11), i.e., the formulae \[H_{i}=\{u\in{\bf R}^{2}:\langle{\bf h}_{i},u\rangle\leq\alpha_{i}\},\ \ \ i=1,2,\] where each \({\bf h}_{i}\in{\bf S}_{1}\) is a unit vector and \(\alpha_{i}\in{\bf R}\). Because \({\bf h}_{i}\perp\ell_{i}\,(=\partial H_{i})\), from (4.19) we have \[{\bf h}_{i}\not\parallel Ox_{j}\ \ \ \mbox{for every}\ \ i,j=1,2.\] In particular, each \({\bf h}_{i}\), \(i=1,2\), has non-zero coordinates.

Finally, let us note the following useful property of metric projections in the space \(\ell_{\infty}^{2}=({\bf R}^{2},\|\cdot\|)\). Let \(\alpha\in{\bf R}\) and let \({\bf h}=(h_{1},h_{2})\in{\bf S}_{1}\), \(h_{1},h_{2}\neq 0\). Let \[H=\{u\in{\bf R}^{2}:\langle{\bf h},u\rangle\leq\alpha\},\ \ \ \mbox{and let}\ \ \ a\notin H.\] Clearly, in this case \({\bf Pr}(a,H)\) is a singleton, and \({\bf Pr}(a,H)\neq a\). Then the vector \[a-{\bf Pr}(a,H)\ \ \mbox{and the vector}\ \ (\mbox{sign}\,h_{1},\mbox{sign}\,h_{2})\ \ \mbox{are co-directed}. \tag{4.21}\]

**Lemma 4.7**: _In the settings of Proposition 4.3, suppose that the vectors \({\bf Pr}(a_{1},H_{1})-a_{1}\) and \({\bf Pr}(a_{2},H_{2})-a_{2}\) are co-directed. Then inequality (4.15) holds._

**Lemma 4.8**: _(i) Inequality (4.15) holds provided_ \[{\rm dist}(a_{1},H_{1})+{\rm dist}(a_{1},H_{2})\leq\delta. \tag{4.23}\] _(ii) Inequality (4.15) holds if \(a_{1}\in H_{2}\)._

_Proof._ _(i)_ First, let us prove that \[\|{\bf Pr}(a_{1},H_{2})-{\bf Pr}(a_{2},H_{2})\|\leq 2\|a_{1}-a_{2}\|. \tag{4.24}\] Indeed, thanks to (4.19), \(\ell_{2}\not\parallel Ox_{1}\) and \(\ell_{2}\not\parallel Ox_{2}\) so that \({\cal H}[H_{2}]={\bf R}^{2}\). Hence, \(a_{1},a_{2}\in{\cal H}[H_{2}]\). We know that \(a_{2}\not\in H_{2}\). If \(a_{1}\in H_{2}\), then all conditions of Lemma 4.5 are satisfied provided \(H_{1}=H_{2}\) and \(\delta=0\). This lemma tells us that in this case inequality (4.24) holds.

Now, suppose that \(a_{1}\not\in H_{2}\). Then, thanks to (4.21), the vectors \({\bf Pr}(a_{1},H_{2})-a_{1}\) and \({\bf Pr}(a_{2},H_{2})-a_{2}\) are _co-directed_. Therefore, all conditions of Lemma 4.7 are satisfied for the same case, i.e., for \(H_{1}=H_{2}\) and \(\delta=0\). This lemma tells us that, in these settings, inequality (4.24) holds. Thus, we have proved (4.24) for every \(a_{1}\) and \(a_{2}\) satisfying (4.13) and (4.18).
From (4.24) and the triangle inequality, we have \[\|{\bf Pr}(a_{1},H_{1})-{\bf Pr}(a_{2},H_{2})\|\leq\|({\bf Pr}(a_{1},H_{1})-a_{1})-({\bf Pr}(a_{1},H_{2})-a_{1})\|+\|{\bf Pr}(a_{1},H_{2})-{\bf Pr}(a_{2},H_{2})\|\leq\|{\bf Pr}(a_{1},H_{1})-a_{1}\|+\|{\bf Pr}(a_{1},H_{2})-a_{1}\|+2\|a_{1}-a_{2}\|={\rm dist}(a_{1},H_{1})+{\rm dist}(a_{1},H_{2})+2\|a_{1}-a_{2}\|.\] Combining this inequality with (4.23), we obtain inequality (4.15), proving part _(i)_ of the lemma.

_(ii)_ Let us prove that if \(a_{1}\in H_{2}\), then \[\|{\bf Pr}(a_{1},H_{1})-{\bf Pr}(a_{1},H_{2})\|\leq\delta.\] Indeed, this inequality is immediate from Lemma 4.5 applied to the case \(a_{1}=a_{2}\). Now, from this and (4.24), we have \[\|{\bf Pr}(a_{1},H_{1})-{\bf Pr}(a_{2},H_{2})\|\leq\|{\bf Pr}(a_{1},H_{1})-{\bf Pr}(a_{1},H_{2})\|+\|{\bf Pr}(a_{1},H_{2})-{\bf Pr}(a_{2},H_{2})\|\leq\delta+2\|a_{1}-a_{2}\|\] proving (4.15) and the lemma.

**Lemma 4.9**: _Inequality (4.15) holds provided \(\ell_{1}\parallel\ell_{2}\)._

_Proof._ Because \(\ell_{1}\parallel\ell_{2}\) and \(\ell_{i}\not\parallel Ox_{j}\), \(i,j=1,2\) (see (4.19)), we have \[{\cal H}[H_{1}\cap(H_{2}+\delta Q_{0})]={\cal H}[H_{2}\cap(H_{1}+\delta Q_{0})]={\bf R}^{2}.\] Let us note that, since \(\ell_{1}\parallel\ell_{2}\) and \({\bf h}_{i}\perp\ell_{i}\) (see (4.12)), the vectors \({\bf h}_{1}\) and \({\bf h}_{2}\) are collinear unit vectors. Therefore, either \({\bf h}_{1}={\bf h}_{2}\) or \({\bf h}_{1}=-{\bf h}_{2}\).

If \({\bf h}_{1}={\bf h}_{2}\), then \(H_{2}\) is a shift of \(H_{1}\), i.e., \(H_{2}=H_{1}+p\) for some \(p\in{\bf R}^{2}\). Thanks to (4.21), in this case the vectors \({\bf Pr}(a_{1},H_{1})-a_{1}\) and \({\bf Pr}(a_{2},H_{2})-a_{2}\) are _co-directed_. See Fig. 9-1. Therefore, thanks to Lemma 4.7, inequality (4.15) holds.

Let us prove (4.15) for \({\bf h}_{2}=-{\bf h}_{1}\). Part _(ii)_ of Lemma 4.8 tells us that (4.15) holds provided \(a_{1}\in H_{2}\). Now, let us suppose that \(a_{1}\not\in H_{2}\) and prove that (4.15) holds as well. In this case \(a_{1}\not\in H_{1}\cup H_{2}\) (because \(a_{1}\not\in H_{1}\), see (4.18)) so that \(H_{1}\cap H_{2}=\emptyset\), as shown in Fig. 9-2.

Let us prove that in this case inequality (4.23) holds. Let \(T\) be the closure of the set \({\bf R}^{2}\setminus(H_{1}\cup H_{2})\). Clearly, \(T\) is the strip between the half-planes \(H_{1}\) and \(H_{2}\), and \(\partial T=\ell_{1}\cup\ell_{2}\). Recall that \(\ell_{1}\parallel\ell_{2}\) and \(\operatorname{dist}(H_{1},H_{2})\leq\delta\) so that \[\operatorname{dist}(x,H_{2})\leq\delta\ \ \text{for}\ \ x\in\ell_{1}\ \ \text{and}\ \ \operatorname{dist}(x,H_{1})\leq\delta\ \ \text{for}\ \ x\in\ell_{2}.\] We define a function \(f\) on \(T\) by letting \(f(x)=\operatorname{dist}(x,H_{1})+\operatorname{dist}(x,H_{2})\). Clearly, \(f\) is a convex continuous function on \(T\). Therefore, \(\sup_{T}f=\sup_{\partial T}f\). But \[f(x)=\operatorname{dist}(x,\ell_{2})\leq\delta\ \ \text{on}\ \ \ell_{1}\ \ \text{and}\ \ \ f(x)=\operatorname{dist}(x,\ell_{1})\leq\delta\ \ \text{on}\ \ \ell_{2}\] so that \(\sup_{\partial T}f\leq\delta\). Hence, \(\sup_{T}f\leq\delta\), proving (4.23). Therefore, thanks to part _(i)_ of Lemma 4.8, inequality (4.15) holds, and the proof of Lemma 4.9 is complete.
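The projection property (4.21), used repeatedly above, comes with a closed form: for \(H=\{u:\langle{\bf h},u\rangle\leq\alpha\}\) with \(h_{1},h_{2}\neq 0\) and \(a\notin H\), one has \({\rm dist}(a,H)=(\langle{\bf h},a\rangle-\alpha)/\|{\bf h}\|_{1}\) and \({\bf Pr}(a,H)=a-{\rm dist}(a,H)\,({\rm sign}\,h_{1},{\rm sign}\,h_{2})\). The following pure-Python sketch is ours, not the text's (the helper pr_halfplane is our own name); it checks the formula against a brute-force grid search over points of \(H\).

```python
import random

def sign(t):
    return 1.0 if t > 0 else -1.0

def pr_halfplane(a, h, alpha):
    """l_infty-distance and (unique) nearest point of H = {u : <h,u> <= alpha}
    for a point a outside H; requires h[0] != 0 and h[1] != 0."""
    t = (h[0]*a[0] + h[1]*a[1] - alpha) / (abs(h[0]) + abs(h[1]))
    return t, (a[0] - t*sign(h[0]), a[1] - t*sign(h[1]))

random.seed(1)
for _ in range(5):
    h = (random.choice([-1, 1]) * random.uniform(0.2, 1.0),
         random.choice([-1, 1]) * random.uniform(0.2, 1.0))
    alpha = random.uniform(-1.0, 1.0)
    a = (random.uniform(1.0, 3.0), random.uniform(1.0, 3.0))
    if h[0]*a[0] + h[1]*a[1] <= alpha:
        continue                                   # a lies in H; nothing to test
    t, p = pr_halfplane(a, h, alpha)
    assert abs(h[0]*p[0] + h[1]*p[1] - alpha) < 1e-9   # p is on the boundary line
    # brute force over a grid: no point of H is closer to a than t
    for i in range(-120, 121):
        for j in range(-120, 121):
            u = (a[0] + 0.05*i, a[1] + 0.05*j)
            if h[0]*u[0] + h[1]*u[1] <= alpha:
                assert max(abs(a[0]-u[0]), abs(a[1]-u[1])) >= t - 1e-9
print("formula for Pr(a, H) and property (4.21) confirmed")
```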
Thanks to Lemma 4.8, part _(ii)_, and Lemma 4.9, it remains to prove that inequality (4.15) holds provided \(\ell_{1}\not\parallel\ell_{2}\), \[a_{1}\ \ \text{and}\ \ a_{2}\ \ \text{satisfy (4.13)},\ \ \ a_{1}\notin(H_{1}\cup H_{2})\ \ \text{and}\ \ \ a_{2}\notin H_{2}. \tag{4.25}\]

Clearly, without loss of generality, we may assume that \(\ell_{1}\cap\ell_{2}=\{0\}\). Then \[H_{i}=\{u\in{\bf R}^{2}:\langle{\bf h}_{i},u\rangle\leq 0\},\ \ \ i=1,2,\] where \({\bf h}_{i}\in{\bf S}_{1}\), \(i=1,2\), are _non-collinear_ vectors. In these settings, \[\ell_{i}=\{u\in{\bf R}^{2}:\langle{\bf h}_{i},u\rangle=0\},\ \ i=1,2. \tag{4.26}\] Let \[{\bf h}_{i}=(\cos\varphi_{i},\sin\varphi_{i})\ \ \text{where the angle}\ \ \ \varphi_{i}\in[0,2\pi),\ \ \ i=1,2. \tag{4.27}\]

Fig. 9: Metric projections onto the half-planes \(H_{1}\) and \(H_{2}\) with the parallel boundaries.

Because the uniform norm on the plane is invariant under reflections with respect to the coordinate axes and with respect to the bisectors of the coordinate angles, we can also assume that the angles \(\varphi_{1}\) and \(\varphi_{2}\) satisfy the following conditions: \[\varphi_{1}\in(\pi/2,\pi)\ \ \ \mbox{and}\ \ \ \varphi_{2}\in(\varphi_{1},\varphi_{1}+\pi).\] We know that \({\bf h}_{i}\perp\ell_{i}\), \(i=1,2\), so that \({\bf h}_{1}\not\parallel{\bf h}_{2}\) (because \(\ell_{1}\not\parallel\ell_{2}\)). Let us also recall that \({\bf h}_{i}\) is directed outside of \(H_{i}\), \(i=1,2\).

We also note that, in the case under consideration, \(H_{1}\cap H_{2}\) is a convex cone with the vertex at \(0\). Moreover, the sets \[S_{1}=H_{1}\cap(H_{2}+\delta Q_{0})\ \ \ \mbox{and}\ \ \ S_{2}=H_{2}\cap(H_{1}+\delta Q_{0})\] are convex cones in \({\bf R}^{2}\). Let \[X_{1}=(s_{1},s_{2})\ \ \ \mbox{and}\ \ \ X_{2}=(t_{1},t_{2}) \tag{4.28}\] be the vertices of the cones \(S_{1}\) and \(S_{2}\) respectively. Thus, \(X_{1}\) is the point of intersection of the line \(\ell_{1}=\partial H_{1}\) and the line \(\bar{\ell}_{2}=\partial(H_{2}+\delta Q_{0})\). In turn, \[X_{2}=\ell_{2}\cap\bar{\ell}_{1}\ \ \ \mbox{where}\ \ \ \ell_{2}=\partial H_{2}\ \ \ \mbox{and}\ \ \ \bar{\ell}_{1}=\partial(H_{1}+\delta Q_{0}).\] Moreover, thanks to (4.16), we have the following representations of the cones \(S_{1}\) and \(S_{2}\): \[S_{1}=H_{1}\cap H_{2}+X_{1},\ \ \ S_{2}=H_{1}\cap H_{2}+X_{2}. \tag{4.29}\]

Let us give explicit formulae for the points \(X_{1}\) and \(X_{2}\). First, we note that \[H_{i}+\delta Q_{0}=\{u\in{\bf R}^{2}:\langle{\bf h}_{i},u\rangle\leq\delta\,\|{\bf h}_{i}\|_{1}\},\ \ \ i=1,2.\] Here, given \(u=(u_{1},u_{2})\in{\bf R}^{2}\), we let \[\|u\|_{1}=|u_{1}|+|u_{2}|\] denote the \(\ell_{1}^{2}\)-norm in \({\bf R}^{2}\). Hence, \[\bar{\ell}_{i}=\{u\in{\bf R}^{2}:\langle{\bf h}_{i},u\rangle=\delta\,\|{\bf h}_{i}\|_{1}\},\ \ \ i=1,2. \tag{4.30}\] Let \[A=\left(\begin{array}{cc}\cos\varphi_{1}&\sin\varphi_{1}\\ \cos\varphi_{2}&\sin\varphi_{2}\end{array}\right)\ \ \ \mbox{and let}\ \ \ \Delta=\det A=\sin(\varphi_{2}-\varphi_{1}). \tag{4.31}\] See (4.27). (Clearly, \(\Delta\neq 0\) because \({\bf h}_{1}\not\parallel{\bf h}_{2}\).)
We know that \[X_{1}=(s_{1},s_{2})=\ell_{1}\cap\bar{\ell}_{2}\] so that, thanks to (4.26) and (4.30), the vector \((s_{1},s_{2})\) is the solution of the system of linear equations \[A\left(\begin{array}{c}s_{1}\\ s_{2}\end{array}\right)=\left(\begin{array}{c}0\\ \delta\,\|{\bf h}_{2}\|_{1}\end{array}\right).\] Therefore, \[s_{1}=\frac{1}{\Delta}\ \left|\begin{array}{cc}0&\sin\varphi_{1}\\ \delta\,\|{\bf h}_{2}\|_{1}&\sin\varphi_{2}\end{array}\right|=-\frac{\delta}{\Delta}\,\|{\bf h}_{2}\|_{1}\ \sin\varphi_{1}\ \ \ \mbox{and}\ \ \ s_{2}=\frac{\delta}{\Delta}\,\|{\bf h}_{2}\|_{1}\ \cos\varphi_{1}. \tag{4.32}\] Thus, \[X_{1}=\frac{\delta}{\Delta}\,\|{\bf h}_{2}\|_{1}\ (-\sin\varphi_{1},\cos\varphi_{1}). \tag{4.33}\] In the same way we prove that \[X_{2}=\frac{\delta}{\Delta}\,\|{\bf h}_{1}\|_{1}\ (\sin\varphi_{2},-\cos\varphi_{2}). \tag{4.34}\]

**Lemma 4.10**: _Inequality (4.15) holds provided \(\varphi_{1}\in(\pi/2,\pi)\) and \(\varphi_{2}\in(\varphi_{1},\pi)\)._

_Proof._ We recall that in the case under consideration \[{\bf h}_{i}=(\cos\varphi_{i},\sin\varphi_{i})\in{\bf S}_{1}\ \ \ {\rm where}\ \ \varphi_{i}\in(\pi/2,\pi),\ \ \ i=1,2.\] Hence, \(({\rm sign}(\cos\varphi_{i}),{\rm sign}(\sin\varphi_{i}))=(-1,1)\). This equality and property (4.21) imply the following: for every \(i=1,2\), the vector \((1,-1)\) and the vector \({\bf Pr}(a_{i},H_{i})-a_{i}\) are co-directed. See Fig. 10. This proves that \[{\bf Pr}(a_{1},H_{1})-a_{1}\ \ \ \mbox{and}\ \ \ {\bf Pr}(a_{2},H_{2})-a_{2}\ \ \ \mbox{are co-directed vectors}.\] Therefore, thanks to Lemma 4.7, (4.15) holds, and the proof of the lemma is complete.

Fig. 10: The half-planes \(H_{1}\) and \(H_{2}\) with the non-parallel boundaries: the first case.

**Lemma 4.11**: _Inequality (4.15) holds provided \(\varphi_{1}\in(\pi/2,\pi)\) and \(\varphi_{2}\in(\pi,\varphi_{1}+\pi)\)._

_Proof._ First, let us prove inequality (4.15) in the case \(\varphi_{1}\in(\pi/2,\pi)\) and \(\varphi_{2}\in\left(\pi,\frac{3}{2}\pi\right)\). See Fig. 11. In this case, the following inequalities hold: \[\cos\varphi_{1}<0,\ \ \sin\varphi_{1}>0,\ \ \cos\varphi_{2}<0\ \ \ \text{and}\ \ \ \sin\varphi_{2}<0. \tag{4.35}\] Therefore, thanks to (4.21), the vector \(\mathbf{Pr}(a_{1},H_{1})-a_{1}\) is co-directed with the vector \((1,-1)\), and the vector \(\mathbf{Pr}(a_{2},H_{2})-a_{2}\) is co-directed with \((1,1)\).

Moreover, in this case the convex cone \(H_{1}\cap H_{2}\) (with the vertex \(0\)) contains the positive semi-axis \(Ox_{1}^{+}\,(=\{(t,0):t\geq 0\})\). This implies the following properties of the rectangular hulls of the sets \(S_{1}=H_{1}\cap(H_{2}+\delta Q_{0})\) and \(S_{2}=H_{2}\cap(H_{1}+\delta Q_{0})\): \[\mathcal{H}[S_{1}]=\{(u_{1},u_{2})\in\mathbf{R}^{2}:u_{1}\geq s_{1}\}\ \ \text{and}\ \ \ \mathcal{H}[S_{2}]=\{(u_{1},u_{2})\in\mathbf{R}^{2}:u_{1}\geq t_{1}\}.\] We recall that \(X_{1}=(s_{1},s_{2})\) and \(X_{2}=(t_{1},t_{2})\). See (4.28) and Fig. 11.

Let \(\alpha_{i}\in(0,\pi/2)\), \(i=1,2\), be the angle between the straight line \(\ell_{i}=\partial H_{i}\) and the axis \(Ox_{1}\). In Fig. 11 we consider the case \(\alpha_{2}\leq\alpha_{1}\). We note that there is no necessity to consider the case \(\alpha_{2}>\alpha_{1}\) separately, because it can be reduced to the case \(\alpha_{2}\leq\alpha_{1}\) with the help of suitable reflections with respect to the coordinate axes and the bisectors of the coordinate angles.

Let us prove that if \(\alpha_{2}\leq\alpha_{1}\), then \[t_{1}\leq s_{1}\leq 0.
\tag{4.36}\]

First, let us note that, thanks to (4.31), \(\Delta=\sin(\varphi_{2}-\varphi_{1})>0\) (because \(\varphi_{2}-\varphi_{1}\in(0,\pi)\)). Also note that, thanks to (4.35), \(\sin\varphi_{1}>0\). Hence, thanks to formula (4.33), \(s_{1}\leq 0\).

Let us see that \[s_{1}-t_{1}=\frac{\delta}{\Delta}\sin(\varphi_{1}+\varphi_{2})=\frac{\delta}{\Delta}\sin(\alpha_{1}-\alpha_{2}). \tag{4.37}\] Indeed, thanks to (4.35), \[\|{\bf h}_{1}\|_{1}=-\cos\varphi_{1}+\sin\varphi_{1}\ \ \ \mbox{and}\ \ \ \|{\bf h}_{2}\|_{1}=-\cos\varphi_{2}-\sin\varphi_{2},\] so that, thanks to formulae (4.33) and (4.34), \[s_{1}-t_{1}=(\delta/\Delta)\{-\|{\bf h}_{2}\|_{1}\sin\varphi_{1}-\|{\bf h}_{1}\|_{1}\sin\varphi_{2}\}=(\delta/\Delta)\{-(-\cos\varphi_{2}-\sin\varphi_{2})\sin\varphi_{1}-(-\cos\varphi_{1}+\sin\varphi_{1})\sin\varphi_{2}\}=(\delta/\Delta)\{\cos\varphi_{2}\sin\varphi_{1}+\cos\varphi_{1}\sin\varphi_{2}\}=(\delta/\Delta)\sin(\varphi_{1}+\varphi_{2}).\] We note that \(\alpha_{1}=\varphi_{1}-\pi/2\) and \(\alpha_{2}=\frac{3}{2}\pi-\varphi_{2}\). Hence, we have \[\varphi_{1}+\varphi_{2}=\alpha_{1}-\alpha_{2}+2\pi,\] proving (4.37). It remains to note that \(\sin(\alpha_{1}-\alpha_{2})\geq 0\) (because \(0<\alpha_{2}\leq\alpha_{1}<\pi/2\)) and \(\Delta>0\) so that, thanks to (4.37), \(t_{1}\leq s_{1}\), proving (4.36). In particular, this inequality implies the inclusion \({\cal H}[S_{1}]\subset{\cal H}[S_{2}]\), as shown in Fig. 11.

Furthermore, thanks to (4.25), \[a_{1}\notin H_{1}\cup H_{2}\ \ \ \mbox{and}\ \ \ a_{1}\in{\cal H}[S_{1}]={\cal H}[H_{1}\cap(H_{2}+\delta Q_{0})]. \tag{4.38}\]

Let \(Y\) be the point of intersection of the line \(\ell_{2}\) (the boundary of \(H_{2}\)) and the line \(\hat{\ell}\) passing through the point \(X_{1}=(s_{1},s_{2})\) and parallel to the axis \(Ox_{2}\). (Thus, \(Y=(s_{1},y_{2})\) for some \(y_{2}\in{\bf R}\).) Recall that the point \(X_{2}=(t_{1},t_{2})\) lies on the line \(\ell_{2}\), and, thanks to inequality (4.36), \(t_{1}\leq s_{1}\leq 0\). In particular, these observations show that \(Y\in[X_{2},O]\) where \(O=0\) is the origin.

Conditions (4.38) show that the point \(a_{1}\) belongs to the triangle \(\widetilde{T}=\Delta(X_{1},Y,O)\) with the vertices at the points \(X_{1}\), \(Y\) and \(O\). Because \(Y\in[X_{2},O]\), the triangle \(\widetilde{T}\) is a subset of the triangle \(T=\Delta(X_{1},X_{2},O)\) with vertices at \(X_{1}\), \(X_{2}\) and \(O=0\). Hence, \(a_{1}\in T\).

Let \[f(x)=\mbox{dist}(x,H_{1})+\mbox{dist}(x,H_{2}),\ \ \ x\in T.\] Let us prove that \(f(x)\leq\delta\) on \(T\). Indeed, \(f\) is a convex continuous function on \(T\) so that its maximum is attained on the set of vertices of the triangle \(T\), i.e., at the points \(O\), \(X_{1}\) and \(X_{2}\). But \(f(O)=0\), \(f(X_{1})=\delta\) (because \(X_{1}\in\bar{\ell}_{2}=\partial(H_{2}+\delta\,Q_{0})\)) and \(f(X_{2})=\delta\) (because \(X_{2}\in\bar{\ell}_{1}=\partial(H_{1}+\delta\,Q_{0})\)). This proves the required inequality \(f(x)\leq\delta\), \(x\in T\). In particular, \[f(a_{1})=\mbox{dist}(a_{1},H_{1})+\mbox{dist}(a_{1},H_{2})\leq\delta. \tag{4.39}\] Thus, condition (4.23) of Lemma 4.8 holds. Thanks to this lemma, inequality (4.15) holds, proving Lemma 4.11 for the angles \(\varphi_{1}\in(\pi/2,\pi)\) and \(\varphi_{2}\in\left(\pi,\frac{3}{2}\pi\right)\).

Let us prove Lemma 4.11 for \[\varphi_{1}\in(\pi/2,\pi)\ \ \mbox{and}\ \ \varphi_{2}\in\left(\frac{3}{2}\,\pi,\varphi_{1}+\pi\right). \tag{4.40}\] See Fig. 12.
In this case, \[\cos\varphi_{1}<0,\ \ \sin\varphi_{1}>0,\ \ \cos\varphi_{2}>0\ \ \ \mbox{and}\ \ \ \sin\varphi_{2}<0. \tag{4.41}\] Therefore, thanks to (4.21), the vector \({\bf Pr}(a_{1},H_{1})-a_{1}\) is co-directed with \((1,-1)\), and the vector \({\bf Pr}(a_{2},H_{2})-a_{2}\) is co-directed with \((-1,1)\). Moreover, \[{\cal H}[S_{1}]=\{(u_{1},u_{2})\in{\bf R}^{2}:u_{1}\geq s_{1},\,u_{2}\geq s_{2}\}\ \ \mbox{and}\ \ \ {\cal H}[S_{2}]=\{(u_{1},u_{2})\in{\bf R}^{2}:u_{1}\geq t_{1},\,u_{2}\geq t_{2}\}.\]

Let us prove that \[X_{1}-X_{2}=\delta(1,-1) \tag{4.42}\] where \(X_{1}=(s_{1},s_{2})\) and \(X_{2}=(t_{1},t_{2})\) are the points defined by (4.28). For explicit formulae for \(s_{i},t_{i}\), \(i=1,2\), see (4.33) and (4.34). Thanks to (4.41), we have \[\|{\bf h}_{1}\|_{1}=|\cos\varphi_{1}|+|\sin\varphi_{1}|=-\cos\varphi_{1}+\sin\varphi_{1},\ \ \ \|{\bf h}_{2}\|_{1}=|\cos\varphi_{2}|+|\sin\varphi_{2}|=\cos\varphi_{2}-\sin\varphi_{2}.\] Therefore, thanks to (4.33) and (4.34), \[X_{1}=\frac{\delta}{\Delta}\left(\cos\varphi_{2}-\sin\varphi_{2}\right)(-\sin\varphi_{1},\cos\varphi_{1})\ \ \ \mbox{and}\ \ \ X_{2}=\frac{\delta}{\Delta}\left(-\cos\varphi_{1}+\sin\varphi_{1}\right)(\sin\varphi_{2},-\cos\varphi_{2}).\] Hence, \[X_{1}-X_{2}=\frac{\delta}{\Delta}\left((\cos\varphi_{2}-\sin\varphi_{2})(-\sin\varphi_{1})-(-\cos\varphi_{1}+\sin\varphi_{1})\sin\varphi_{2},\ (\cos\varphi_{2}-\sin\varphi_{2})\cos\varphi_{1}-(-\cos\varphi_{1}+\sin\varphi_{1})(-\cos\varphi_{2})\right)=\frac{\delta}{\Delta}\left(-\cos\varphi_{2}\sin\varphi_{1}+\cos\varphi_{1}\sin\varphi_{2},\,-\sin\varphi_{2}\cos\varphi_{1}+\sin\varphi_{1}\cos\varphi_{2}\right)=\frac{\delta}{\Delta}\sin(\varphi_{2}-\varphi_{1})(1,-1).\] Thanks to (4.31), \(\Delta=\sin(\varphi_{2}-\varphi_{1})\), and the proof of (4.42) is complete.

Thanks to this equality, \(t_{1}=s_{1}-\delta\) and \(t_{2}=s_{2}+\delta\). Furthermore, (4.42) and (4.29) imply the following: \[S_{2}=S_{1}+\delta(-1,1)\ \ \ \text{and}\ \ \ \mathcal{H}[S_{2}]=\mathcal{H}[S_{1}]+\delta(-1,1).\]

Let us prove that \[t_{1}\leq s_{1}\leq 0\ \ \ \text{and}\ \ \ \ s_{2}\leq t_{2}\leq 0. \tag{4.43}\] In fact, we know that \[\varphi_{1}\in(\pi/2,\pi)\ \ \ \text{and}\ \ \ \varphi_{2}\in\left(\frac{3}{2}\,\pi,\varphi_{1}+\pi\right).\] Hence, \(0<\varphi_{2}-\varphi_{1}<\pi\), proving that \(\Delta=\sin(\varphi_{2}-\varphi_{1})>0\). We also know that \(\sin\varphi_{1}>0\) so that, thanks to (4.32), \(s_{1}\leq 0\). In addition, \(t_{1}=s_{1}-\delta\leq s_{1}\), proving the first inequality in (4.43). Next, thanks to (4.34), \[t_{2}=-(\delta/\Delta)\,\|{\bf h}_{1}\|_{1}\,\cos\varphi_{2}.\] But \(\Delta>0\) and \(\cos\varphi_{2}>0\) so that \(t_{2}\leq 0\). Moreover, \(s_{2}=t_{2}-\delta\), so that \(s_{2}\leq t_{2}\), and the proof of (4.43) is complete.

We recall that the point \(a_{1}\) satisfies conditions (4.38). Therefore, \[a_{1}\in T=\Delta(X_{1},X_{2},O).\] See Fig. 12. From this and (4.39) it follows that condition (4.23) of Lemma 4.8 is satisfied. This lemma tells us that inequality (4.15) holds, proving Lemma 4.11 for the case (4.40). The proof of Lemma 4.11 is complete.

Finally, the results of Lemmas 4.4–4.11 imply the required inequality (4.15), completing the proof of Proposition 4.3.

We are in a position to complete the proof of Proposition 3.6.

_Proof of inequality (3.27)._ Let us fix elements \(x,y\in\mathcal{M}\).
We set \[A_{1}=F^{[1]}[x:\tilde{\lambda}],\ \ \ A_{2}=F^{[1]}[y:\tilde{\lambda}]\] and \(a_{1}=g(x)\), \(a_{2}=g(y)\). Recall that \(g:\mathcal{M}\to{\bf R}^{2}\) is the mapping satisfying (3.23) and (3.24). We also recall that the mapping \(F^{[1]}[\cdot:\tilde{\lambda}]\) is defined by (3.4). Thus, \[A_{i}=\cap\{A_{i}^{[u]}:u\in\mathcal{M}\},\ \ i=1,2, \tag{4.44}\] where, given \(u\in\mathcal{M}\), we set \[A_{1}^{[u]}=F(u)+\tilde{\lambda}\,\rho(u,x)Q_{0}\ \ \ \text{and}\ \ \ A_{2}^{[u]}=F(u)+\tilde{\lambda}\,\rho(u,y)Q_{0}. \tag{4.45}\] Lemma 3.3 tells us that each \(A_{i}\), \(i=1,2\), is a non-empty closed convex subset of \({\bf R}^{2}\). Thanks to inequality (3.24), we have \[\|a_{1}-a_{2}\|\leq\lambda\,\rho(x,y), \tag{4.46}\] and, thanks to (3.23), \[a_{i}\in\mathcal{H}[A_{i}]^{\bf cl},\ \ \ i=1,2. \tag{4.47}\] Furthermore, formula (3.25) tells us that \[f(x)=\mathbf{Pr}(a_{1},A_{1})\quad\text{and}\quad f(y)=\mathbf{Pr}(a_{2},A_{2}).\] (We also recall that, thanks to Lemma 3.7, the metric projection \(\mathbf{Pr}(a_{i},A_{i})\) is well defined, i.e., \(\mathbf{Pr}(a_{i},A_{i})\) is a singleton.)

In these settings, the required inequality (3.27) reads as follows: \[\|\,\mathbf{Pr}(a_{1},A_{1})-\mathbf{Pr}(a_{2},A_{2})\|\leq(2\lambda+\tilde{\lambda})\,\rho(x,y). \tag{4.48}\] Let us note that this inequality is immediate from (4.46) provided \(a_{i}\in A_{i}\), \(i=1,2\) (because in this case \(\mathbf{Pr}(a_{i},A_{i})=a_{i}\)).

Suppose that either \(a_{1}\notin A_{1}\) or \(a_{2}\notin A_{2}\). Without loss of generality, we may assume that \(a_{1}\notin A_{1}\). Fix \(\varepsilon>0\) and prove that there exists a half-plane \(H_{1}\in\mathcal{HP}(\mathbf{R}^{2})\) such that \[H_{1}\supset A_{1},\quad H_{1}+\tilde{\lambda}\,\rho(x,y)Q_{0}\supset A_{2}, \tag{4.49}\] and \[\|\,\mathbf{Pr}(a_{1},A_{1})-\mathbf{Pr}(a_{1},H_{1})\|<\varepsilon. \tag{4.50}\]

We construct the half-plane \(H_{1}\) as follows. Because \(a_{1}\notin A_{1}\), we have \(\mathbf{Pr}(a_{1},A_{1})\neq a_{1}\) so that \((\mathbf{Pr}(a_{1},A_{1}),a_{1}]\) is a non-empty semi-open interval in \(\mathbf{R}^{2}\). Let us pick a point \[a^{(\varepsilon)}\in(\mathbf{Pr}(a_{1},A_{1}),a_{1}]\] such that \[\|a^{(\varepsilon)}-\mathbf{Pr}(a_{1},A_{1})\|<\varepsilon. \tag{4.51}\] Because \(\mathbf{Pr}(a_{1},A_{1})\) is the nearest to \(a_{1}\) point on \(A_{1}\), we have \[(\mathbf{Pr}(a_{1},A_{1}),a_{1}]\cap A_{1}=\emptyset.\] Therefore, \[a^{(\varepsilon)}\notin A_{1}=\cap\{A_{1}^{[u]}:u\in\mathcal{M}\}.\] See (4.44). This implies the existence of an element \(u\in\mathcal{M}\) such that \(a^{(\varepsilon)}\notin A_{1}^{[u]}\). We let \(B\) denote the set \(A_{1}^{[u]}\). Thus, \[a^{(\varepsilon)}\notin B=A_{1}^{[u]}=F(u)+\tilde{\lambda}\,\rho(u,x)Q_{0}. \tag{4.52}\] See (4.45). Thanks to (4.47) and (4.44), \[a_{1}\in\mathcal{H}[A_{1}]^{\mathbf{cl}}\quad\text{and}\quad A_{1}\subset B. \tag{4.53}\] Therefore, thanks to Lemma 4.1, the metric projections \(\mathbf{Pr}(a_{1},A_{1})\) and \(\mathbf{Pr}(a_{1},B)\) are singletons such that \[\mathbf{Pr}(a_{1},B)\in[\mathbf{Pr}(a_{1},A_{1}),a_{1}].\] See Fig. 13. We note that \(\mathbf{Pr}(a_{1},B)\in[\mathbf{Pr}(a_{1},A_{1}),a^{(\varepsilon)}]\); indeed, otherwise \(a^{(\varepsilon)}\in[\mathbf{Pr}(a_{1},A_{1}),\mathbf{Pr}(a_{1},B)]\subset B\), a contradiction. See (4.52). Hence, thanks to (4.51), \[\|\,\mathbf{Pr}(a_{1},A_{1})-\mathbf{Pr}(a_{1},B)\|\leq\|a^{(\varepsilon)}-\mathbf{Pr}(a_{1},A_{1})\|<\varepsilon. \tag{4.54}\] Let \(\widetilde{Q}=Q(a_{1},r)\) where \(r=\mathrm{dist}(a_{1},B)\).
Thus, \(\widetilde{Q}\cap B=\{\mathbf{Pr}(a_{1},B)\}\). Therefore, thanks to the separation theorem, there exists a half-plane \(H_{1}\in\mathcal{HP}(\mathbf{R}^{2})\) which contains \(B\) and separates (not strictly) \(\widetilde{Q}\) and \(B\). Thus, \(B\subset H_{1}\) and \(\widetilde{Q}\cap H_{1}=\{\mathbf{Pr}(a_{1},B)\}\), as shown in Fig. 13. In particular, these properties imply the equality \(\mathbf{Pr}(a_{1},H_{1})=\mathbf{Pr}(a_{1},B)\).

Let us see that inclusions (4.49) and inequality (4.50) hold for the half-plane \(H_{1}\). In fact, (4.50) is immediate from (4.54) and the last equality. Let us prove (4.49). We know that \(A_{1}\subset B\), see (4.53), so that \(A_{1}\subset B\subset H_{1}\). We also recall that \(B=F(u)+\tilde{\lambda}\,\rho(u,x)Q_{0}\) (see (4.52)). Therefore, \[H_{1}+\tilde{\lambda}\,\rho(x,y)Q_{0}\supset B+\tilde{\lambda}\,\rho(x,y)Q_{0}=F(u)+\tilde{\lambda}\,\rho(u,x)Q_{0}+\tilde{\lambda}\,\rho(x,y)Q_{0}=F(u)+\tilde{\lambda}\,(\rho(u,x)+\rho(x,y))\,Q_{0}.\] Therefore, thanks to the triangle inequality, (4.44) and (4.45), we have \[H_{1}+\tilde{\lambda}\,\rho(x,y)Q_{0}\supset F(u)+\tilde{\lambda}\,\rho(u,y)\,Q_{0}=A_{2}^{[u]}\supset A_{2}\] proving (4.49).

Next, let us construct a half-plane \(H_{2}\in\mathcal{HP}(\mathbf{R}^{2})\) having the following properties: \[H_{2}\supset A_{2},\quad H_{2}+\tilde{\lambda}\,\rho(x,y)Q_{0}\supset A_{1}, \tag{4.55}\] and \[\|\,\mathbf{Pr}(a_{2},A_{2})-\mathbf{Pr}(a_{2},H_{2})\|<\varepsilon. \tag{4.56}\]

If \(a_{2}\notin A_{2}\), we define \(H_{2}\) in the same way as we have defined \(H_{1}\) for \(a_{1}\). In this case, properties (4.55), (4.56) are a complete analog of properties (4.49) and (4.50) obtained for the point \(a_{1}\).

Fig. 13: Metric projections of \(a_{1}\) onto \(A_{1}\) and \(B\).

If \(a_{2}\in A_{2}\), we set \[H_{2}=H_{1}+\tilde{\lambda}\,\rho(x,y)Q_{0}.\] Clearly, \(H_{2}\) is a half-plane. Let us see that inclusions (4.55) and inequality (4.56) hold for this choice of \(H_{2}\). Indeed, thanks to the second inclusion in (4.49), we have \(H_{2}\supset A_{2}\). In turn, thanks to the first inclusion, \[H_{2}+\tilde{\lambda}\,\rho(x,y)Q_{0}=(H_{1}+\tilde{\lambda}\,\rho(x,y)Q_{0})+\tilde{\lambda}\,\rho(x,y)Q_{0}\supset H_{1}\supset A_{1},\] proving (4.55). Finally, inequality (4.56) is trivial because \(\mathbf{Pr}(a_{2},A_{2})=\mathbf{Pr}(a_{2},H_{2})\,(=a_{2})\). (Recall that \(a_{2}\in A_{2}\subset H_{2}\).)

Now, we set \[\delta=\tilde{\lambda}\,\rho(x,y)+\varepsilon. \tag{4.57}\] Let us prove that the points \(a_{1},a_{2}\) and the half-planes \(H_{1}\) and \(H_{2}\) satisfy conditions (4.10), (4.13) and (4.14).

Thanks to (4.49), \(H_{1}\supset A_{1}\), and, thanks to (4.55), \(H_{2}+\tilde{\lambda}\,\rho(x,y)Q_{0}\supset A_{1}\). Hence, \[H_{1}\cap(H_{2}+\tilde{\lambda}\,\rho(x,y)Q_{0})\supset A_{1}.\] Note that \(\tilde{\lambda}\,\rho(x,y)<\delta\); see (4.57). Therefore, \[H_{1}\cap(H_{2}+\delta\,Q_{0})\supset A_{1}. \tag{4.58}\] We also know that \(A_{1}\neq\emptyset\) so that \(H_{1}\cap(H_{2}+\delta\,Q_{0})\neq\emptyset\) as well. This proves that \(\text{dist}(H_{1},H_{2})\leq\delta\) so that condition (4.10) is satisfied.

Let us prove that the points \(a_{1}\) and \(a_{2}\) satisfy condition (4.13). Indeed, inclusion (4.58) tells us that \[\mathcal{H}[A_{1}]\subset\mathcal{H}[H_{1}\cap(H_{2}+\delta\,Q_{0})].\] Clearly, the set \(\mathcal{H}[H_{1}\cap(H_{2}+\delta\,Q_{0})]\) is _closed_ (as the rectangular hull of the intersection of two half-planes).
Hence, \[\mathcal{H}[A_{1}]^{\mathbf{cl}}\subset\mathcal{H}[H_{1}\cap(H_{2}+\delta\,Q_{0})].\] But, thanks to (4.47), \(a_{1}\in\mathcal{H}[A_{1}]^{\mathbf{cl}}\), proving that \(a_{1}\in\mathcal{H}[H_{1}\cap(H_{2}+\delta\,Q_{0})]\). In the same fashion we show that \(a_{2}\in\mathcal{H}[H_{2}\cap(H_{1}+\delta\,Q_{0})]\), proving that condition (4.13) holds.

Let us show that condition (4.14) is satisfied as well. Thanks to (4.55), \[\mathbf{Pr}(a_{1},A_{1})\in A_{1}\subset H_{2}+\tilde{\lambda}\,\rho(x,y)\,Q_{0},\] and, thanks to (4.50), \[\mathbf{Pr}(a_{1},H_{1})\in\mathbf{Pr}(a_{1},A_{1})+\varepsilon\,Q_{0}\subset A_{1}+\varepsilon\,Q_{0}.\] Therefore, thanks to (4.57), \[\mathbf{Pr}(a_{1},H_{1})\in A_{1}+\varepsilon\,Q_{0}\subset(H_{2}+\tilde{\lambda}\,\rho(x,y)\,Q_{0})+\varepsilon\,Q_{0}=H_{2}+\delta\,Q_{0}.\] In the same way we show that \(\mathbf{Pr}(a_{2},H_{2})\in H_{1}+\delta\,Q_{0}\), completing the proof of (4.14).

Therefore, thanks to Proposition 4.3, inequality (4.15) holds. This inequality together with (4.46) and (4.57) implies the following: \[\|\,\mathbf{Pr}(a_{1},H_{1})-\mathbf{Pr}(a_{2},H_{2})\|\leq 2\|a_{1}-a_{2}\|+\delta\leq 2\lambda\,\rho(x,y)+\tilde{\lambda}\,\rho(x,y)+\varepsilon.\] From this, (4.50) and (4.56), we have \[\|\,{\bf Pr}(a_{1},A_{1})-{\bf Pr}(a_{2},A_{2})\|\leq\|\,{\bf Pr}(a_{1},A_{1})-{\bf Pr}(a_{1},H_{1})\|+\|\,{\bf Pr}(a_{1},H_{1})-{\bf Pr}(a_{2},H_{2})\|+\|\,{\bf Pr}(a_{2},H_{2})-{\bf Pr}(a_{2},A_{2})\|\leq(2\lambda+\tilde{\lambda})\,\rho(x,y)+3\varepsilon.\] Since \(\varepsilon>0\) is arbitrary, this implies (4.48), proving the required inequality (3.27) and completing the proof of Proposition 3.6.

Finally, combining part (i) of Proposition 3.5 with Proposition 3.6, we obtain the statement of Theorem 3.2.

**5. Lipschitz selection criteria in the two dimensional case.**

**5.1 Constructive criteria for Lipschitz selections: proofs.**

We begin with the proof of Theorem 1.10. This proof is based on the following result.

**Proposition 5.1**: _Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(F:{\cal M}\to{\rm Conv}({\bf R}^{2})\) be a set-valued mapping. Given constants \(\tilde{\lambda}\) and \(\lambda\), \(0\leq\lambda\leq\tilde{\lambda}\), let us assume that_ \[{\cal R}_{F}[x,x^{\prime}:\tilde{\lambda}]\cap\left\{{\cal R}_{F}[y,y^{\prime}:\tilde{\lambda}]+\lambda\,\rho(x,y)\,Q_{0}\right\}\neq\emptyset\ \ \mbox{for every}\ \ x,x^{\prime},y,y^{\prime}\in{\cal M}. \tag{5.1}\] _Then for every \(x,x^{\prime},x^{\prime\prime},y,y^{\prime},y^{\prime\prime}\in{\cal M}\), the following property_ \[{\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:3\tilde{\lambda}]\cap\left\{{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:3\tilde{\lambda}]+\lambda\,\rho(x,y)Q_{0}\right\}\neq\emptyset\] _holds._

_Proof._ Clearly, property (5.1) guarantees that the rectangle \({\cal R}_{F}[x,y:\tilde{\lambda}]\) is non-empty for every \(x,y\in{\cal M}\).

Let \(y,y^{\prime},y^{\prime\prime}\in{\cal M}\). Clearly, \({\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:\tilde{\lambda}]={\cal W}_{F}[y,y^{\prime\prime},y^{\prime}:\tilde{\lambda}]\), see (3.1), so that, without loss of generality, we may assume that \(\rho(y,y^{\prime})\leq\rho(y,y^{\prime\prime})\).
Thanks to the triangle inequality, we have \[\rho(y^{\prime},y^{\prime\prime})\leq\rho(y,y^{\prime})+\rho(y,y^{\prime\prime})\leq 2\rho(y,y^{\prime\prime})\ \ \mbox{so that}\ \ \ \rho(y,y^{\prime})+\rho(y^{\prime},y^{\prime\prime})\leq 3\rho(y,y^{\prime\prime}).\] Hence, \[{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:3\tilde{\lambda}]={\cal H}[\{F(y^{\prime})+3\tilde{\lambda}\,\rho(y,y^{\prime})\,Q_{0}\}\cap\{F(y^{\prime\prime})+3\tilde{\lambda}\,\rho(y,y^{\prime\prime})\,Q_{0}\}]\supset{\cal H}[\{F(y^{\prime})+\tilde{\lambda}\,\rho(y,y^{\prime})\,Q_{0}\}\cap\{(F(y^{\prime\prime})+\tilde{\lambda}\,\rho(y^{\prime},y^{\prime\prime})Q_{0})+\tilde{\lambda}\,\rho(y,y^{\prime})Q_{0}\}].\]

Clearly, for every \(A,B\subset{\bf R}^{2}\), \(A\cap B\neq\emptyset\), and every \(r\geq 0\), the following inclusion \[A\cap B+rQ_{0}\subset(A+rQ_{0})\cap(B+rQ_{0})\] holds. From this, property (2.14) and the inequality \(\lambda\leq\tilde{\lambda}\), we have \[{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:3\tilde{\lambda}]\supset{\cal H}[F(y^{\prime})\cap\{F(y^{\prime\prime})+\tilde{\lambda}\,\rho(y^{\prime},y^{\prime\prime})Q_{0}\}+\tilde{\lambda}\,\rho(y,y^{\prime})Q_{0}]={\cal H}[F(y^{\prime})\cap\{F(y^{\prime\prime})+\tilde{\lambda}\,\rho(y^{\prime},y^{\prime\prime})Q_{0}\}]+\tilde{\lambda}\,\rho(y,y^{\prime})Q_{0}\supset{\cal H}[F(y^{\prime})\cap\{F(y^{\prime\prime})+\tilde{\lambda}\,\rho(y^{\prime},y^{\prime\prime})Q_{0}\}]+\lambda\,\rho(y,y^{\prime})Q_{0}.\] This and definition (1.15) imply the following inclusion: \[{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:3\tilde{\lambda}]\supset{\cal R}_{F}[y^{\prime},y^{\prime\prime}:\tilde{\lambda}]+\lambda\,\rho(y,y^{\prime})Q_{0}. \tag{5.2}\]

Now, let us consider elements \(x,x^{\prime},x^{\prime\prime},y,y^{\prime},y^{\prime\prime}\in{\cal M}\). We may assume that \(\rho(x,x^{\prime})\leq\rho(x,x^{\prime\prime})\) and \(\rho(y,y^{\prime})\leq\rho(y,y^{\prime\prime})\). We know that in this case (5.2) holds. In the same way we prove that \[{\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:3\tilde{\lambda}]\supset{\cal R}_{F}[x^{\prime},x^{\prime\prime}:\tilde{\lambda}]+\lambda\,\rho(x,x^{\prime})Q_{0}.\] From this inclusion, (5.2) and the triangle inequality, we have \[{\cal A}={\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:3\tilde{\lambda}]\cap\{{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:3\tilde{\lambda}]+\lambda\,\rho(x,y)Q_{0}\}\supset\{{\cal R}_{F}[x^{\prime},x^{\prime\prime}:\tilde{\lambda}]+\lambda\,\rho(x,x^{\prime})Q_{0}\}\cap\{{\cal R}_{F}[y^{\prime},y^{\prime\prime}:\tilde{\lambda}]+\lambda\,\rho(y,y^{\prime})Q_{0}+\lambda\,\rho(x,y)Q_{0}\}\supset{\cal B}=\{{\cal R}_{F}[x^{\prime},x^{\prime\prime}:\tilde{\lambda}]+\lambda\,\rho(x,x^{\prime})Q_{0}\}\cap\{{\cal R}_{F}[y^{\prime},y^{\prime\prime}:\tilde{\lambda}]+\lambda\,\rho(x,y^{\prime})Q_{0}\}.\] Thanks to (5.1), \[{\cal R}_{F}[x^{\prime},x^{\prime\prime}:\tilde{\lambda}]\cap\{{\cal R}_{F}[y^{\prime},y^{\prime\prime}:\tilde{\lambda}]+\lambda\,\rho(x^{\prime},y^{\prime})\,Q_{0}\}\neq\emptyset\] so that there exist points \(p_{1}\in{\cal R}_{F}[x^{\prime},x^{\prime\prime}:\tilde{\lambda}]\) and \(p_{2}\in{\cal R}_{F}[y^{\prime},y^{\prime\prime}:\tilde{\lambda}]\) such that \(\|p_{1}-p_{2}\|\leq\lambda\,\rho(x^{\prime},y^{\prime})\).
Therefore, thanks to the triangle inequality, \[\|p_{1}-p_{2}\|\leq\lambda\,\rho(x^{\prime},y^{\prime})\leq\lambda\,\rho(x,x^{\prime})+\lambda\,\rho(x,y^{\prime}).\] This inequality implies the existence of a point \(w\in[p_{1},p_{2}]\) such that \(\|p_{1}-w\|\leq\lambda\,\rho(x,x^{\prime})\) and \(\|p_{2}-w\|\leq\lambda\,\rho(x,y^{\prime})\). Therefore, \(w\in{\cal B}\subset{\cal A}\), and the proof of the proposition is complete.

_Proof of Theorem 1.10._ The necessity part of Theorem 1.10 and the inequality \(\inf\lambda\leq|F|_{\mathfrak{M}}\) are immediate from part (i) of Proposition 3.1.

Let us prove the sufficiency part. Thanks to Proposition 5.1, condition (1.16) of Theorem 1.10 implies condition (3.3) of Theorem 3.2 with \(\tilde{\lambda}=3\lambda\). Theorem 3.2 tells us that there exists a Lipschitz selection \(f\) of \(F\) with \[\|f\|_{\rm Lip({\cal M})}\leq 2\lambda+\tilde{\lambda}=2\lambda+3\lambda=5\lambda.\] This proves the sufficiency and the inequality \(|F|_{\mathfrak{M}}\leq 5\inf\lambda\). The proof of Theorem 1.10 is complete.

**Theorem 5.2**: _Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(F:{\cal M}\to{\rm Conv}({\bf R}^{2})\) be a set-valued mapping. Suppose that \(\mathfrak{M}\) and \(F\) satisfy Condition 1.9._

_The mapping \(F\) has a Lipschitz selection if and only if there exists a constant \(\lambda\geq 0\) such that for every subset \({\cal M}^{\prime}\subset{\cal M}\) consisting of at most \(N=4\) points, the restriction \(F|_{{\cal M}^{\prime}}\) of \(F\) to \({\cal M}^{\prime}\) has a Lipschitz selection \(f_{{\cal M}^{\prime}}\) with Lipschitz seminorm \(\|f_{{\cal M}^{\prime}}\|_{\rm Lip({\cal M}^{\prime})}\leq\lambda\)._

_Furthermore, \(\inf\lambda\leq|F|_{\mathfrak{M}}\leq\gamma\,\inf\lambda\) with \(\gamma=3\)._

_Proof._ The necessity part of the theorem and the inequality \(\inf\lambda\leq|F|_{\mathfrak{M}}\) are obvious. Let us prove the sufficiency.

Suppose that there exists a constant \(\lambda\geq 0\) such that for every subset \({\cal M}^{\prime}\subset{\cal M}\) with \(\#{\cal M}^{\prime}\leq 4\), the restriction \(F|_{{\cal M}^{\prime}}\) of \(F\) to \({\cal M}^{\prime}\) has a Lipschitz selection \(f_{{\cal M}^{\prime}}\) with Lipschitz seminorm \(\|f_{{\cal M}^{\prime}}\|_{\rm Lip({\cal M}^{\prime})}\leq\lambda\). Our goal is to prove the existence of a selection \(f\) of \(F\) with \(\|f\|_{\rm Lip({\cal M})}\leq 3\lambda\).

Let us show that condition (3.3) of Theorem 3.2 is satisfied with \(\tilde{\lambda}=\lambda\). Let \(x,x^{\prime},x^{\prime\prime},y,y^{\prime},y^{\prime\prime}\in{\cal M}\), and let \({\cal M}^{\prime}=\{x^{\prime},x^{\prime\prime},y^{\prime},y^{\prime\prime}\}\). We know that the restriction \(F|_{{\cal M}^{\prime}}\) of \(F\) to \({\cal M}^{\prime}\) has a Lipschitz selection \(f_{{\cal M}^{\prime}}\) with \(\|f_{{\cal M}^{\prime}}\|_{\rm Lip({\cal M}^{\prime})}\leq\lambda\). Thus, \[f_{{\cal M}^{\prime}}(x^{\prime})\in F(x^{\prime}),\ f_{{\cal M}^{\prime}}(x^{\prime\prime})\in F(x^{\prime\prime}),\ f_{{\cal M}^{\prime}}(y^{\prime})\in F(y^{\prime})\ \ \ {\rm and}\ \ \ f_{{\cal M}^{\prime}}(y^{\prime\prime})\in F(y^{\prime\prime}).\] It is well known that any Lipschitz mapping from a subset of a pseudometric space \(({\cal M},\rho)\) into \(\ell^{2}_{\infty}\) (i.e., \({\bf R}^{2}\) equipped with the uniform norm) can be extended to all of the space \(({\cal M},\rho)\) preserving the Lipschitz constant.
Let \(\tilde{f}:{\cal M}\to{\bf R}^{2}\) be such an extension of the mapping \(f_{{\cal M}^{\prime}}\) from \({\cal M}^{\prime}\) to \({\cal M}\). Thus, \(\tilde{f}|_{{\cal M}^{\prime}}=f_{{\cal M}^{\prime}}\) and \(\|\tilde{f}\|_{\mathrm{Lip}({\cal M})}=\|f_{{\cal M}^{\prime}}\|_{\mathrm{Lip}({\cal M}^{\prime})}\leq\lambda\), so that the following inequalities hold: \[\|f_{{\cal M}^{\prime}}(x^{\prime})-\tilde{f}(x)\|=\|\tilde{f}(x^{\prime})-\tilde{f}(x)\|\leq\lambda\,\rho(x,x^{\prime}),\ \ \ \ \ \|f_{{\cal M}^{\prime}}(x^{\prime\prime})-\tilde{f}(x)\|=\|\tilde{f}(x^{\prime\prime})-\tilde{f}(x)\|\leq\lambda\,\rho(x,x^{\prime\prime}),\] and \[\|f_{{\cal M}^{\prime}}(y^{\prime})-\tilde{f}(y)\|=\|\tilde{f}(y^{\prime})-\tilde{f}(y)\|\leq\lambda\,\rho(y,y^{\prime}),\ \ \ \ \ \|f_{{\cal M}^{\prime}}(y^{\prime\prime})-\tilde{f}(y)\|=\|\tilde{f}(y^{\prime\prime})-\tilde{f}(y)\|\leq\lambda\,\rho(y,y^{\prime\prime}).\] Hence, \[\tilde{f}(x)\in\{F(x^{\prime})+\lambda\,\rho(x^{\prime},x)\,Q_{0}\}\cap\{F(x^{\prime\prime})+\lambda\,\rho(x^{\prime\prime},x)\,Q_{0}\}\] and \[\tilde{f}(y)\in\{F(y^{\prime})+\lambda\,\rho(y^{\prime},y)\,Q_{0}\}\cap\{F(y^{\prime\prime})+\lambda\,\rho(y^{\prime\prime},y)\,Q_{0}\}\] so that \(\tilde{f}(x)\in{\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:\lambda]\) and \(\tilde{f}(y)\in{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:\lambda]\), see (3.1). Furthermore, because \(\|\tilde{f}\|_{\mathrm{Lip}({\cal M})}\leq\lambda\), we have \(\|\tilde{f}(x)-\tilde{f}(y)\|\leq\lambda\,\rho(x,y)\). Hence, \[\tilde{f}(x)\in{\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:\tilde{\lambda}]\cap\{{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:\tilde{\lambda}]+\lambda\,\rho(x,y)\,Q_{0}\},\] proving that condition (3.3) of Theorem 3.2 holds. Thanks to this theorem, \(F\) has a Lipschitz selection \(f:{\cal M}\to{\bf R}^{2}\) with \(\|f\|_{\mathrm{Lip}({\cal M})}\leq 2\lambda+\tilde{\lambda}=3\lambda\), and the proof of Theorem 5.2 is complete.

**Remark 5.3**: We have proved Theorem 5.2 for the Banach space \(\ell^{2}_{\infty}=({\bf R}^{2},\|\cdot\|)\), i.e., \({\bf R}^{2}\) equipped with the uniform norm \(\|\cdot\|\). Let us consider a slightly more general version of Theorem 5.2 for a Banach space \(X=({\bf R}^{2},\|\cdot\|_{X})\) supplied with a certain Banach norm \(\|\cdot\|_{X}\). Let \(B_{X}\) be the unit ball of \(X\). According to a result of E. Asplund [2], \(B_{X}\) contains a parallelogram \(P\) centered at \((0,0)\) which, expanded by the factor \(\frac{3}{2}\), covers \(B_{X}\). Since the Banach space \(X_{P}\) with the unit ball \(P\) is linearly isometric to \(\ell^{2}_{\infty}\), we conclude that an analog of Theorem 5.2 holds for an _arbitrary_ Banach space \(X\) with the constant \(\frac{3}{2}\cdot 3\lambda=4.5\lambda\) (instead of \(3\lambda\) as for \(\ell^{2}_{\infty}\)).
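The extension fact used in the proof of Theorem 5.2 admits a one-line construction: since the norm on \(\ell_{\infty}^{2}\) is the maximum of the coordinates, it suffices to extend each coordinate separately by the classical McShane formula \(\tilde{f}_{i}(x)=\min_{m\in{\cal M}^{\prime}}\{f_{i}(m)+\lambda\,\rho(x,m)\}\). A minimal runnable sketch of ours (mcshane_extend and the toy data below are hypothetical names, not from the text):

```python
def mcshane_extend(M0, f0, rho, lam):
    """Extend a lam-Lipschitz map f0 : M0 -> l^2_infty to the whole space with
    the same Lipschitz constant, coordinate by coordinate (McShane's formula)."""
    def f_ext(x):
        return tuple(min(f0[m][i] + lam * rho(x, m) for m in M0)
                     for i in range(2))
    return f_ext

if __name__ == "__main__":
    M0 = ["p", "q"]
    f0 = {"p": (0.0, 0.0), "q": (1.0, 2.0)}
    table = {("p", "p"): 0.0, ("p", "q"): 2.0, ("q", "p"): 2.0, ("q", "q"): 0.0,
             ("r", "p"): 1.0, ("r", "q"): 1.0}
    rho = lambda x, y: table[(x, y)]
    f = mcshane_extend(M0, f0, rho, lam=1.0)
    print(f("p"), f("q"), f("r"))   # agrees with f0 on M0; f("r") = (1.0, 1.0)
```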
Combining Theorem 3.2 with part (ii) of Proposition 3.1, we obtain the following criterion for Lipschitz selections.

**Theorem 5.4**: _Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(F:{\cal M}\to\mathrm{Conv}({\bf R}^{2})\) be a set-valued mapping satisfying Condition 1.9._

_The mapping \(F\) has a Lipschitz selection if and only if there exists a constant \(\lambda\geq 0\) such that for every \(x,x^{\prime},x^{\prime\prime},y,y^{\prime},y^{\prime\prime}\in{\cal M}\), we have_ \[{\cal W}_{F}[x,x^{\prime},x^{\prime\prime}:\lambda]\cap\{{\cal W}_{F}[y,y^{\prime},y^{\prime\prime}:\lambda]+\lambda\,\rho(x,y)\,Q_{0}\}\neq\emptyset. \tag{5.3}\] _Furthermore, the following inequality holds:_ \[\inf\lambda\leq\,|F|_{\mathfrak{M}}\leq 3\inf\lambda. \tag{5.4}\]

Comparing this result with Theorem 1.10, we note that Theorem 5.4 provides a better upper bound for \(|F|_{\mathfrak{M}}\) (\(3\inf\lambda\) in (5.4) rather than \(5\inf\lambda\) as in inequality (1.17)). The price of such a refinement is as follows: property (5.3) exploits the rectangles \({\cal W}_{F}[\cdot,\cdot,\cdot:\lambda]\), each of which depends on _three_ arbitrary elements of \({\cal M}\). In turn, condition (1.16) of Theorem 1.10 is formulated in terms of the rectangles \({\cal R}_{F}[\cdot,\cdot:\lambda]\), each depending only on _two_ elements of \({\cal M}\). In other words, in the criterion of Theorem 5.4, we use more information about geometric properties of the set-valued mapping \(F\), and, as a result, obtain better estimates for the Lipschitz seminorm of its Lipschitz selection.

**5.2 Criteria for Lipschitz selections in terms of intersections of sets.**

In this section we prove two constructive Lipschitz selection criteria formulated in terms of intersections of certain families of rectangles in \({\bf R}^{2}\). We begin with a Lipschitz selection criterion of such a kind for mappings taking values in the family \(\Re({\bf R}^{2})\) of all rectangles in \({\bf R}^{2}\) with sides parallel to the coordinate axes.

**Proposition 5.5**: _Let \({\cal T}:{\cal M}\to\Re({\bf R}^{2})\) be a set-valued mapping. Let us assume that either \({\cal M}\) is finite or all rectangles \({\cal T}(x)\), \(x\in{\cal M}\), are closed and at least one of them is bounded._

_Then \({\cal T}\) has a Lipschitz selection if and only if there exists a constant \(\lambda\geq 0\) such that the set_ \[{\cal T}^{[1]}[x:\lambda]=\bigcap_{z\in{\cal M}}\left[{\cal T}(z)+\lambda\,\rho(x,z)\,Q_{0}\right] \tag{5.5}\] _is non-empty for every \(x\in{\cal M}\). Furthermore,_ \[|{\cal T}|_{\mathfrak{M}}=\inf\{\lambda:{\cal T}^{[1]}[x:\lambda]\neq\emptyset\ \ \mbox{for all}\ \ x\in{\cal M}\}.\]

_Proof. (Necessity.)_ Suppose that \({\cal T}\) has a Lipschitz selection \(\tau:{\cal M}\to{\bf R}^{2}\) with \(\|\tau\|_{\mathrm{Lip}({\cal M})}\leq\lambda\). Then, given \(x\in{\cal M}\), we have \(\|\tau(x)-\tau(z)\|\leq\lambda\,\rho(x,z)\) for every \(z\in{\cal M}\). But \(\tau(x)\in{\cal T}(x)\) and \(\tau(z)\in{\cal T}(z)\) (because \(\tau\) is a selection of \({\cal T}\)) so that \(\tau(x)\in{\cal T}(z)+\lambda\,\rho(x,z)Q_{0}\). Therefore, thanks to (5.5), \(\tau(x)\in{\cal T}^{[1]}[x:\lambda]\), proving the necessity.

_(Sufficiency.)_ Suppose that \({\cal T}^{[1]}[x:\lambda]\neq\emptyset\) for every \(x\in{\cal M}\). Then, thanks to (5.5), \({\cal T}(x)\cap[{\cal T}(z)+\lambda\,\rho(x,z)\,Q_{0}]\neq\emptyset\) for every \(x,z\in{\cal M}\), proving the existence of points \(\tilde{\tau}(x)\in{\cal T}(x)\), \(\tilde{\tau}(z)\in{\cal T}(z)\) such that \(\|\tilde{\tau}(x)-\tilde{\tau}(z)\|\leq\lambda\,\rho(x,z)\). This property and the assumptions of the proposition's hypothesis enable us to apply Proposition 2.10 to the set-valued mapping \({\cal T}\). This proposition tells us that \({\cal T}\) has a Lipschitz selection \(\tau:{\cal M}\to{\bf R}^{2}\) with \(\|\tau\|_{\mathrm{Lip}({\cal M})}\leq\lambda\), completing the proof of the proposition.

The following theorem is the main result of this section.
**Theorem 5.6**: _Suppose that a pseudometric space \(\mathfrak{M}=({\cal M},\rho)\) and a set-valued mapping \(F:{\cal M}\to\mathrm{Conv}({\bf R}^{2})\) satisfy Condition 1.9._

_Then \(F\) has a Lipschitz selection if and only if there exists a constant \(\lambda\geq 0\) such that_ \[\bigcap_{y,y^{\prime}\in{\cal M}}\{{\cal R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)Q_{0}\}\neq\emptyset\ \ \ \ \mbox{for every}\ \ x\in{\cal M}. \tag{5.6}\] _Moreover, in this case, \(\inf\lambda\leq\,|F|_{\mathfrak{M}}\leq 5\inf\lambda\)._

_Proof. (Necessity.)_ Let \(f:{\cal M}\to{\bf R}^{2}\) be a Lipschitz selection of \(F\), and let \(\lambda=\|f\|_{\mathrm{Lip}({\cal M})}\). Then for every \(x,y,y^{\prime}\in{\cal M}\) the following is true: \(f(x)\in F(x)\), \(f(y)\in F(y)\), \(f(y^{\prime})\in F(y^{\prime})\), \[\|f(x)-f(y)\|\leq\lambda\,\rho(x,y)\ \ \mbox{and}\ \ \|f(y)-f(y^{\prime})\|\leq\lambda\,\rho(y,y^{\prime}).\] Hence, \(f(y)\in F(y)\cap[F(y^{\prime})+\lambda\,\rho(y,y^{\prime})Q_{0}]\) and \(f(x)\in f(y)+\lambda\,\rho(x,y)Q_{0}\), proving that \[f(x)\in(F(y)\cap[F(y^{\prime})+\lambda\,\rho(y,y^{\prime})Q_{0}])+\lambda\,\rho(x,y)Q_{0}\ \ \ \mbox{for all}\ \ \ y,y^{\prime}\in{\cal M}.\] Clearly, \[(F(y)\cap\{F(y^{\prime})+\lambda\,\rho(y^{\prime},y)Q_{0}\})+\lambda\,\rho(x,y)Q_{0}\subset\mathcal{H}[F(y)\cap\{F(y^{\prime})+\lambda\,\rho(y^{\prime},y)Q_{0}\}]+\lambda\,\rho(x,y)Q_{0}=\mathcal{R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)Q_{0}\] so that \(f(x)\in\mathcal{R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)Q_{0}\) for every \(y,y^{\prime}\in\mathcal{M}\). This property implies (5.6) and the inequality \(\inf\lambda\leq\,|F|_{\mathfrak{M}}\), completing the proof of the necessity.

_(Sufficiency.)_ We assume that property (5.6) holds. Thanks to this property, \[\mathcal{R}_{F}[x,x^{\prime}:\lambda]\cap\{\mathcal{R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)\,Q_{0}\}\neq\emptyset\] for every \(x,x^{\prime},y,y^{\prime}\in\mathcal{M}\), proving that condition (1.16) of Theorem 1.10 holds. This theorem tells us that \(F\) has a Lipschitz selection with Lipschitz seminorm at most \(5\lambda\). This proves the inequality \(|F|_{\mathfrak{M}}\leq 5\inf\lambda\) and completes the proof of Theorem 5.6.

**Remark 5.7**: We note that, thanks to representation (2.12), property (5.6) is equivalent to the following one: for every \(x\in\mathcal{M}\) and every \(i=1,2\), the set \[\bigcap_{y,y^{\prime}\in\mathcal{M}}\mathrm{Pr}_{i}[\{F(y)\cap(F(y^{\prime})+\lambda\,\rho(y,y^{\prime})Q_{0})\}+\lambda\,\rho(x,y)Q_{0}]\] is non-empty. We recall that \(\mathrm{Pr}_{i}\) denotes the operator of the orthogonal projection onto the axis \(Ox_{i}\).

**6. Projection Algorithm for nearly optimal Lipschitz selections.**

In this section we present a number of nearly optimal algorithms for Lipschitz selections based on the geometrical construction suggested in the proof of Theorem 3.2. We refer to these algorithms as the _"Projection Algorithms"_.

**6.1 The \(\vec{\lambda}\)-Projection Algorithm.**

Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a pseudometric space, and let \(F:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\) be a set-valued mapping. Given a constant \(\lambda\geq 0\), we define a set-valued mapping \(F^{[1]}[\cdot:\lambda]\) on \(\mathcal{M}\) by letting \[F^{[1]}[x:\lambda]=\bigcap_{y\in\mathcal{M}}\,\left[F(y)+\lambda\,\rho(x,y)\,Q_{0}\right],\quad x\in\mathcal{M}. \tag{6.1}\] We refer to the mapping \(F^{[1]}[\cdot:\lambda]\) as _the \(\lambda\)-balanced refinement of \(F\)_.
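To illustrate definition (6.1), here is a toy example of our own (it does not occur in the text): take \({\cal M}=\{x,y\}\) with \(\rho(x,y)=1\), \(F(x)=[0,1]^{2}\), \(F(y)=[2,3]\times[0,1]\), and \(\lambda=1\). Then \[F^{[1]}[x:1]=F(x)\cap(F(y)+Q_{0})=[0,1]^{2}\cap([1,4]\times[-1,2])=\{1\}\times[0,1],\] and, symmetrically, \(F^{[1]}[y:1]=\{2\}\times[0,1]\). The \(1\)-balanced refinement thus collapses each rectangle to a vertical segment, and every pair \(f(x)=(1,t)\), \(f(y)=(2,t)\), \(t\in[0,1]\), is a selection of \(F\) with \(\|f(x)-f(y)\|=1=\lambda\,\rho(x,y)\).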
(This important object was introduced in [17] and [33]. We have already met various variants of it in (2.35), (2.43), (3.4), (3.15) and (5.5).)

**The Projection Algorithm 6.1**: Given a vector \(\vec{\lambda}=(\lambda_{1},\lambda_{2})\) with non-negative coordinates \(\lambda_{1},\lambda_{2}\geq 0\), a pseudometric space \(\mathfrak{M}=(\mathcal{M},\rho)\), and a set-valued mapping \(F:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\), the Projection Algorithm either produces a selection \(f_{\vec{\lambda},F}\) of \(F\) (the outcome **"Success"**) or stops (the outcome **"No go"**). This procedure includes the following five main steps.

**STEP 1.** At this step we construct the \(\lambda_{1}\)-balanced refinement of \(F\), i.e., the set-valued mapping \[F^{[1]}[x:\lambda_{1}]=\bigcap_{y\in\mathcal{M}}\,\left[F(y)+\lambda_{1}\,\rho(x,y)\,Q_{0}\right]. \tag{6.2}\] _If \(F^{[1]}[x:\lambda_{1}]=\emptyset\) for some \(x\in{\cal M}\), the algorithm produces the outcome **"No go"** and stops._

**STEP 2.** Let us assume that the above condition does not hold, i.e., _for every element \(x\in{\cal M}\) the \(\lambda_{1}\)-balanced refinement \(F^{[1]}[x:\lambda_{1}]\) is non-empty_. In this case, for each \(x\in{\cal M}\), we construct _the rectangular hull_ of \(F^{[1]}[x:\lambda_{1}]\), i.e., the set \[{\cal T}_{F,\lambda_{1}}(x)={\cal H}[F^{[1]}[x:\lambda_{1}]]. \tag{6.3}\] Thus, \({\cal T}_{F,\lambda_{1}}\) is a set-valued mapping which maps \({\cal M}\) into the family \(\Re({\bf R}^{2})\) of all rectangles in \({\bf R}^{2}\). See Fig. 14.

Fig. 14: The second step of the Projection Algorithm.

**STEP 3.** For every \(x\in{\cal M}\), we construct the \(\lambda_{2}\)-balanced refinement of the mapping \({\cal T}_{F,\lambda_{1}}\), i.e., the rectangle \({\cal T}_{F,\lambda_{1}}^{[1]}[x:\lambda_{2}]\) defined by \[{\cal T}_{F,\lambda_{1}}^{[1]}[x:\lambda_{2}]=\bigcap_{y\in{\cal M}}\left[{\cal T}_{F,\lambda_{1}}(y)+\lambda_{2}\,\rho(x,y)\,Q_{0}\right]. \tag{6.4}\] See Fig. 15. If \[{\cal T}_{F,\lambda_{1}}^{[1]}[x:\lambda_{2}]=\emptyset\quad\mbox{for some}\quad x\in{\cal M}, \tag{6.5}\] the algorithm produces the outcome **"No go"** and stops.

Fig. 15: The third step of the Projection Algorithm.

**STEP 4.** At this step, we assume that _for each \(x\in\mathcal{M}\) the rectangle \(\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\neq\emptyset\)_. We let \(\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]^{\mathrm{cl}}\) denote _the closure_ of the rectangle \(\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\). Let \(O=(0,0)\) be the origin. We define a mapping \(g_{F}:\mathcal{M}\to\mathbf{R}^{2}\) by letting \[g_{F}(x)=\mathrm{center}\left(\mathbf{Pr}\left(O,\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]^{\mathrm{cl}}\right)\right),\ \ \ \ \ x\in\mathcal{M}. \tag{6.6}\] See Fig. 16. We recall that \(\mathbf{Pr}(\cdot,S)\) denotes the operator of metric projection onto a closed convex subset \(S\subset\mathbf{R}^{2}\), see (2.6), and \(\mathrm{center}\,(\cdot)\) denotes the center of a centrally symmetric bounded set in \(\mathbf{R}^{2}\).

**STEP 5.** We define the mapping \(f_{\vec{\lambda},F}:\mathcal{M}\to\mathbf{R}^{2}\) by letting \[f_{\vec{\lambda},F}(x)=\mathbf{Pr}(g_{F}(x),F^{[1]}[x:\lambda_{1}]),\ \ \ \ \ x\in\mathcal{M}. \tag{6.7}\] See Fig. 17. At this stage, the algorithm produces the outcome **"Success"** and stops.

Fig. 17: The final step of the Projection Algorithm.

To specify the dependence on the parameters \(\lambda_{1}\) and \(\lambda_{2}\), we refer to the above algorithm as the \(\vec{\lambda}\)-Projection Algorithm (\(\vec{\lambda}\)-PA for short).
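For a finite pseudometric space and a mapping \(F\) taking values in \(\Re({\bf R}^{2})\), the five steps can be implemented directly: the rectangular hull in STEP 2 is then the identity, and all balanced refinements reduce to coordinatewise interval arithmetic. The following runnable Python sketch is ours, not the paper's; all function names are our own, the coordinatewise clamp realizes one \(\ell_{\infty}\) metric projection onto a rectangle, and center_of_projection implements \(\mathrm{center}({\bf Pr}(O,\cdot))\) from (6.6).

```python
# A sketch (ours, not the paper's) of the Projection Algorithm 6.1 for a finite
# pseudometric space and a rectangle-valued F.  A rectangle is a pair of closed
# intervals ((a1, b1), (a2, b2)); None stands for the empty set.

def inflate(rect, r):
    """Minkowski sum rect + r*Q_0, where Q_0 = [-1, 1]^2."""
    return tuple((a - r, b + r) for (a, b) in rect)

def intersect(r1, r2):
    """Intersection of two rectangles; None if it is empty."""
    if r1 is None or r2 is None:
        return None
    out = tuple((max(a, c), min(b, d)) for (a, b), (c, d) in zip(r1, r2))
    return None if any(lo > hi for (lo, hi) in out) else out

def balanced_refinement(points, rho, F, lam):
    """F^{[1]}[x : lam] of (6.1): intersection over y of F(y)+lam*rho(x,y)*Q_0."""
    out = {}
    for x in points:
        R = F[x]
        for y in points:
            R = intersect(R, inflate(F[y], lam * rho[x][y]))
        out[x] = R
    return out

def center_of_projection(rect):
    """center(Pr(O, rect)) of STEP 4: the set of l_infty-nearest points of a
    rectangle to O = (0,0) is rect intersected with Q(O, d), d = dist(O, rect);
    we return the center of this (possibly degenerate) rectangle."""
    d = max(max(a, 0.0, -b) for (a, b) in rect)   # dist(0, [a,b]) = max(a,0,-b)
    return tuple((max(a, -d) + min(b, d)) / 2.0 for (a, b) in rect)

def clamp_projection(p, rect):
    """A nearest point of rect to p in the uniform norm (coordinatewise clamp)."""
    return tuple(min(max(t, a), b) for t, (a, b) in zip(p, rect))

def projection_algorithm(points, rho, F, lam1, lam2):
    F1 = balanced_refinement(points, rho, F, lam1)          # STEP 1
    if any(F1[x] is None for x in points):
        return None                                         # outcome "No go"
    T = F1                      # STEP 2: a rectangle is its own rectangular hull
    T1 = balanced_refinement(points, rho, T, lam2)          # STEP 3
    if any(T1[x] is None for x in points):
        return None                                         # outcome "No go"
    g = {x: center_of_projection(T1[x]) for x in points}    # STEP 4
    # STEP 5; here g(x) already lies in F1[x], since T1[x] is contained in
    # T(x) = F1[x], so the final projection is the identity in this case.
    return {x: clamp_projection(g[x], F1[x]) for x in points}

if __name__ == "__main__":
    # the two-point toy example used after (6.1)
    pts = ["x", "y"]
    rho = {"x": {"x": 0.0, "y": 1.0}, "y": {"x": 1.0, "y": 0.0}}
    F = {"x": ((0.0, 1.0), (0.0, 1.0)), "y": ((2.0, 3.0), (0.0, 1.0))}
    print(projection_algorithm(pts, rho, F, 1.0, 1.0))
```

Running the demo reproduces the selection \(f(x)=(1,0.5)\), \(f(y)=(2,0.5)\) for the toy data introduced after (6.1), comfortably within the bound \(\lambda_{1}+2\lambda_{2}\) of Theorem 6.3 below.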
**Remark 6.2**: We note that the \((\lambda_{1},\lambda_{2})\)-Projection Algorithm produces the outcome **"Success"** if and only if the following conditions are satisfied: \[F^{[1]}[x:\lambda_{1}]\neq\emptyset\ \ \ \mbox{and}\ \ \ {\cal T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\neq\emptyset\ \ \ \mbox{for every}\ \ \ x\in{\cal M}. \tag{6.8}\] Indeed, this algorithm does not stop at **STEP 1**, which is equivalent to the first condition in (6.8). Also, the algorithm does not stop at **STEP 3**, which, thanks to (6.5), is equivalent to the second condition in (6.8). \(\blacktriangleleft\)

The next theorem describes the main properties of the \(\vec{\lambda}\)-PA.

**Theorem 6.3**: _Let \(\lambda_{1},\lambda_{2}\geq 0\), and let \(\vec{\lambda}=(\lambda_{1},\lambda_{2})\). Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(F:{\cal M}\to\mbox{\rm Conv}({\bf R}^{2})\) be a set-valued mapping. Suppose that \(\mathfrak{M}\) and \(F\) satisfy Condition 1.9._

_If the \(\vec{\lambda}\)-Projection Algorithm produces the outcome **"No go"** (see **STEP 1** and **STEP 3**), then we can guarantee that there does not exist a Lipschitz selection of \(F\) with Lipschitz seminorm at most \(\min\{\lambda_{1},\lambda_{2}\}\)._

_Otherwise, the \(\vec{\lambda}\)-Projection Algorithm produces the outcome **"Success"** (see **STEP 5**) and returns the mapping \(f_{\vec{\lambda},F}:{\cal M}\to{\bf R}^{2}\) defined by formula (6.7). This mapping has the following properties:_

_(\(\bigstar\)A) The mapping \(f_{\vec{\lambda},F}\) is well defined. This means the following: (a) for every \(x\in{\cal M}\), the set \(F^{[1]}[x:\lambda_{1}]\neq\emptyset\) and the rectangle \({\cal T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\neq\emptyset\) (see (6.4)), (b) the mapping \(g_{F}\) (see (6.6)) is well defined, and (c) the metric projection defined by the right hand side of (6.7) is a singleton._

_(\(\bigstar\)B) \(f_{\vec{\lambda},F}\) is a Lipschitz selection of \(F\) with the Lipschitz seminorm_ \[\|f_{\vec{\lambda},F}\|_{\mbox{\scriptsize Lip}({\cal M})}\leq\lambda_{1}+2\lambda_{2}.\]

_Proof._ Let us see that whenever the \(\vec{\lambda}\)-PA produces the outcome **"No go"**, there does not exist a Lipschitz selection of \(F\) with Lipschitz seminorm at most \(\lambda^{\min}=\min\{\lambda_{1},\lambda_{2}\}\). Indeed, suppose that \(F\) has a Lipschitz selection \(f:{\cal M}\to{\bf R}^{2}\) with \(\|f\|_{\mbox{\scriptsize Lip}({\cal M})}\leq\lambda^{\min}\). Then \(f(x)\in F(x)\) for every \(x\in{\cal M}\) and \[\|f(x)-f(y)\|\leq\lambda^{\min}\,\rho(x,y)\ \ \ \mbox{for all}\ \ \ x,y\in{\cal M}.\] Hence, \[f(x)\in F(y)+\lambda^{\min}\,\rho(x,y)Q_{0}\ \ \ \mbox{for every}\ \ \ y\in{\cal M}.\] Therefore, thanks to definition (6.1), we have \[f(x)\in F^{[1]}[x:\lambda^{\min}]\subset F^{[1]}[x:\lambda_{1}]\ \ \ \mbox{for every}\ \ \ x\in{\cal M}.\] Furthermore, from this and (6.3), we have \(f(x)\in{\cal T}_{F,\lambda_{1}}(x)\), \(x\in{\cal M}\), proving that \(f\) is a Lipschitz selection of the set-valued mapping \({\cal T}_{F,\lambda_{1}}\) with \(\|f\|_{\mbox{\scriptsize Lip}({\cal M})}\leq\lambda^{\min}\). Repeating the above argument for \({\cal T}_{F,\lambda_{1}}\) rather than for \(F\), we conclude that \[f(x)\in{\cal T}^{[1]}_{F,\lambda_{1}}[x:\lambda^{\min}]\subset{\cal T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\ \ \ \mbox{for every}\ \ \ x\in{\cal M}.\] See (6.4).
Thus, if there exists \(x\in{\cal M}\) such that
\[F^{[1]}[x:\lambda_{1}]=\emptyset\ \ \ \ \mbox{(the outcome **"No go"** at **STEP 1**)}\]
or
\[{\cal T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]=\emptyset\ \ \ \ \mbox{(the outcome **"No go"** at **STEP 3**)},\]
we can guarantee that there does not exist a Lipschitz selection of \(F\) with Lipschitz seminorm \(\leq\lambda^{\min}\).

Next, let us assume that the \(\vec{\lambda}\)-PA produces the outcome **"Success"** (see **STEP 5**). Then, thanks to Remark 6.2, the set-valued mapping \(F\) satisfies condition (6.8).

Let us prove property \((\bigstar A)\). Clearly, part (a) is immediate from (6.8). To prove part (b), we note that the metric projection of the origin \(O\) onto the rectangle \({\cal T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]^{\rm cl}\) is either a singleton or a closed line segment. Therefore, \(g_{F}\) is well defined on \({\cal M}\) (because the set \({\bf Pr}(O,{\cal T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]^{\rm cl})\) is a non-empty _bounded_ centrally symmetric set). See formula (6.6).

Let us prove part (c). Thanks to (6.3) and (6.4),
\[g_{F}(x)\in{\cal T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]^{\rm cl}\subset{\cal T}^{\rm cl}_{F,\lambda_{1}}(x)={\cal H}[F^{[1]}[x:\lambda_{1}]]^{\rm cl}. \tag{6.9}\]
(To prove the inclusion "\(\subset\)" in (6.9) put \(y=x\) in (6.4).) Therefore, thanks to Lemma 3.7 and definition (6.6), \(f_{\vec{\lambda},F}(x)\), the metric projection of \(g_{F}(x)\) onto \(F^{[1]}[x:\lambda_{1}]\), see (6.7), is a singleton. The proof of property \((\bigstar A)\) is complete.

Let us prove property \((\bigstar B)\) of the theorem. We will follow the scheme of the proof of the sufficiency part of the key Theorem 3.2 for the special case
\[\tilde{\lambda}=\lambda_{1}\quad\mbox{and}\quad\lambda=\lambda_{2}. \tag{6.10}\]
Thanks to (6.8), \({\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]\neq\emptyset\) for every \(x\in{\cal M}\). Part (a) of Proposition 2.7 tells us that in this case the set-valued mapping \({\cal T}^{[1]}_{F,\tilde{\lambda}}[\cdot:\lambda]\) is Lipschitz with respect to the Hausdorff distance, i.e.,
\[{\rm d}_{\rm H}\left({\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda],{\cal T}^{[1]}_{F,\tilde{\lambda}}[y:\lambda]\right)\leq\lambda\,\rho(x,y)\ \ \mbox{for all}\ \ x,y\in{\cal M}. \tag{6.11}\]
It is clear that for every two rectangles \(S_{1},S_{2}\in\Re({\bf R}^{2})\) we have \({\rm d}_{\rm H}(S_{1},S_{2})={\rm d}_{\rm H}(S_{1}^{\rm cl},S_{2}^{\rm cl})\), so that, thanks to (6.11),
\[{\rm d}_{\rm H}\left({\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl},{\cal T}^{[1]}_{F,\tilde{\lambda}}[y:\lambda]^{\rm cl}\right)\leq\lambda\,\rho(x,y)\ \ \mbox{for every}\ \ x,y\in{\cal M}. \tag{6.12}\]
Let \(\delta(x)={\rm dist}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda])\). Clearly, \(\delta(x)={\rm dist}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl})\). Then,
\[|\delta(x)-\delta(y)|=|\,{\rm dist}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda])-{\rm dist}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[y:\lambda])|\leq{\rm d}_{\rm H}\left({\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda],{\cal T}^{[1]}_{F,\tilde{\lambda}}[y:\lambda]\right)\]
proving that
\[|\delta(x)-\delta(y)|\leq\lambda\,\rho(x,y)\ \ \mbox{for every}\ \ x,y\in{\cal M}. \tag{6.13}\]
We note that
\[{\bf Pr}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl})=Q(O,\delta(x))\cap{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl}.\]
From this and Lemma 2.4, we have
\[{\bf Pr}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl})+\lambda\,\rho(x,y)Q_{0} = \{Q(O,\delta(x))+\lambda\,\rho(x,y)Q_{0}\}\cap\{{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl}+\lambda\,\rho(x,y)Q_{0}\} = Q(O,\delta(x)+\lambda\,\rho(x,y))\cap\{{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl}+\lambda\,\rho(x,y)Q_{0}\}.\]
Note that, thanks to (6.13), \(\delta(y)\leq\delta(x)+\lambda\,\rho(x,y)\), and, thanks to (6.12),
\[{\cal T}^{[1]}_{F,\tilde{\lambda}}[y:\lambda]^{\rm cl}\subset\ {\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl}+\lambda\,\rho(x,y)Q_{0}.\]
Hence,
\[{\bf Pr}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl})+\lambda\,\rho(x,y)Q_{0}\supset Q(O,\delta(y))\cap{\cal T}^{[1]}_{F,\tilde{\lambda}}[y:\lambda]^{\rm cl}={\bf Pr}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[y:\lambda]^{\rm cl}).\]
By interchanging the roles of \(x\) and \(y\), we obtain also
\[{\bf Pr}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[y:\lambda]^{\rm cl})+\lambda\,\rho(x,y)Q_{0}\supset{\bf Pr}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl}).\]
These two inclusions imply the inequality
\[{\rm d}_{\rm H}\left({\bf Pr}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl}),{\bf Pr}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[y:\lambda]^{\rm cl})\right)\leq\lambda\,\rho(x,y),\quad x,y\in{\cal M}. \tag{6.14}\]

As we have noted above, the set \({\bf Pr}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl})\) is either a singleton or a closed line segment. This line segment lies on one of the sides of the square \(Q(O,\delta(x))\), proving that the metric projection \({\bf Pr}(O,{\cal T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]^{\rm cl})\) is a _bounded rectangle_, i.e., an element of the family \(\Re({\bf R}^{2})\). Therefore, thanks to part (ii) of Claim 2.6 and (6.14), the mapping \(g_{F}\) defined by formula (6.6) is Lipschitz with Lipschitz seminorm at most \(\lambda\).

Let us also note that, thanks to (6.9) and (6.10), for every \(x\in{\cal M}\), we have
\[g_{F}(x)\in{\cal T}^{\rm cl}_{F,\tilde{\lambda}}(x)={\cal H}[F^{[1]}[x:\tilde{\lambda}]]^{\rm cl}.\]
These properties of \(g_{F}\) show that the mapping \(g=g_{F}\) satisfies conditions (3.23) and (3.24). Furthermore, comparing (3.25) with (6.7), we conclude that \(f=f_{\vec{\lambda},F}\) where \(f\) is the mapping defined by formula (3.25). Proposition 3.6 tells us that this mapping is a selection of \(F\) with Lipschitz seminorm
\[\|f\|_{\mathrm{Lip}({\cal M})}=\|f_{\vec{\lambda},F}\|_{\mathrm{Lip}({\cal M})}\leq\tilde{\lambda}+2\lambda=\lambda_{1}+2\lambda_{2}.\]
The proof of Theorem 6.3 is complete.

**Remark 6.4**: Theorem 6.3 holds for various versions of the \(\vec{\lambda}\)-Projection Algorithm related to the specific choice of the mapping \(g_{F}\) at **STEP 4** of the algorithm. The proof of Theorem 6.3 shows that \(g_{F}\) should be a Lipschitz selection of the set-valued mapping
\[{\cal T}^{\rm cl}_{F,\lambda_{1}}(x)={\cal H}[F^{[1]}[x:\lambda_{1}]]^{\rm cl},\ \ x\in{\cal M},\quad\text{with}\quad\|g_{F}\|_{\mathrm{Lip}({\cal M})}\leq\lambda_{2},\]
i.e., \(g_{F}\) has to satisfy condition (3.23) and inequality (3.24) with constants \(\tilde{\lambda}=\lambda_{1}\) and \(\lambda=\lambda_{2}\). Let us note some of these versions.
_(i)_ We can define \(g_{F}(x)\) by formula (6.6), replacing the origin \(O\) by an _arbitrary_ point in \(\mathbf{R}^{2}\);

_(ii)_ Suppose that
\[\text{for every}\ \ x\in\mathcal{M}\ \ \text{the set}\ \ F^{[1]}[x:\lambda_{1}]\ \ \text{is bounded}. \tag{6.15}\]
Then the rectangle \(\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\) (which is contained in \(\mathcal{T}_{F,\lambda_{1}}(x)\), the rectangular hull of \(F^{[1]}[x:\lambda_{1}]\)) is also bounded for all \(x\in\mathcal{M}\). In this case, we can define \(g_{F}\) by the formula
\[g_{F}(x)=\text{center}\left(\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\right),\quad\ x\in\mathcal{M}. \tag{6.16}\]
See Fig. 18. Then, thanks to part (iii) of Proposition 3.5, \(g_{F}\) is Lipschitz with \(\|g_{F}\|_{\mathrm{Lip}(\mathcal{M})}\leq\lambda_{2}\). Thus, for this choice of \(g_{F}\) both property (3.23) and inequality (3.24) hold.

We note that, thanks to Lemma 3.3, property (6.15) holds provided the set \(\mathcal{M}\) is _infinite_, so that in this case we can define \(g_{F}\) by formula (6.16). Of course, if \(\mathcal{M}\) is finite, we cannot guarantee that property (6.15) holds. However, in this case, the following property of the rectangles \(\{\mathcal{T}_{F,\lambda_{1}}^{[1]}[x:\lambda_{2}]:x\in\mathcal{M}\}\) may be useful: if one of the rectangles of this family is a _bounded set_, then _all rectangles_ from this family are bounded as well, i.e., (6.15) holds. This property is immediate from the fact that at **STEP 4** the mapping \(\mathcal{T}_{F,\lambda_{1}}^{[1]}[\cdot:\lambda_{2}]\) is Lipschitz with respect to the Hausdorff distance. See inequality (6.11). (Recall that \(\lambda=\lambda_{2}\).)

_(iii)_ Suppose that there exists \(r>0\) such that, for every \(x\in\mathcal{M}\), the intersection of the square \(Q(O,r)\) and the rectangle \(\mathcal{T}_{F,\lambda_{1}}^{[1]}[x:\lambda_{2}]\) is non-empty. (For instance, such \(r\) exists provided the set \(\mathcal{M}\) is finite.) In this case, we can define \(g_{F}\) by the formula
\[g_{F}(x)=\mathrm{center}\left(\mathcal{T}_{F,\lambda_{1}}^{[1]}[x:\lambda_{2}]^{\rm cl}\cap Q(O,r)\right),\ \ \ \ x\in\mathcal{M}.\]
Clearly,
\[g_{F}(x)\in\mathcal{T}_{F,\lambda_{1}}^{[1]}[x:\lambda_{2}]^{\rm cl}\subset\mathcal{T}_{F,\lambda_{1}}^{\rm cl}(x)\ \ \mbox{on}\ \ \mathcal{M}\]
so that property (3.23) holds. The proof of (3.24) in this case follows the same scheme as the proof of this inequality for \(g_{F}\) defined by formula (6.6). \(\blacktriangleleft\)

**6.2 Projection Algorithms and a solution to the second main problem.**

We are in a position to present a _solution to Problem 1.2_, the second main problem of the paper. Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a pseudometric space, and let \(F:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\) be a set-valued mapping.

**Definition 6.5**: Let
\[\mathcal{L}_{\mathcal{R}}(F)=\{\lambda\geq 0:\mathcal{R}_{F}[x,x^{\prime}:\lambda]\cap\{\mathcal{R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)\,Q_{0}\}\neq\emptyset\ \mbox{for all}\ x,x^{\prime},y,y^{\prime}\in\mathcal{M}\}. \tag{6.17}\]
We set
\[\Lambda_{\mathcal{R}}(F)=\inf\mathcal{L}_{\mathcal{R}}(F) \tag{6.18}\]
provided \(\mathcal{L}_{\mathcal{R}}(F)\neq\emptyset\), and \(\Lambda_{\mathcal{R}}(F)=+\infty\) if \(\mathcal{L}_{\mathcal{R}}(F)=\emptyset\). \(\blacktriangleleft\)

Thus, \(\Lambda_{\cal R}(F)\) is the infimum of constants \(\lambda\geq 0\) such that condition (1.16) of Theorem 1.10 is satisfied.
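As a computational aside (ours, not part of the original exposition), membership in \(\mathcal{L}_{\mathcal{R}}(F)\) can be tested directly from (6.17) when \(\mathcal{M}\) is finite and every \(F(x)\) is a bounded closed rectangle. The sketch below reuses `inflate` and `intersect_all` from the sketch after Algorithm 6.1, and it _assumes_ that the rectangle \(\mathcal{R}_{F}[x,x^{\prime}:\lambda]\) of (1.15), whose definition lies outside this excerpt, may be taken to be the rectangular hull of \(F(x)\cap\{F(x^{\prime})+\lambda\,\rho(x,x^{\prime})Q_{0}\}\); for rectangle-valued \(F\) the hull is again the identity. Since all the sets in (6.17) grow with \(\lambda\), the set \(\mathcal{L}_{\mathcal{R}}(F)\) is upward closed, so \(\Lambda_{\mathcal{R}}(F)\) can be approximated by bisection.

```python
def rect_RF(F, rho, x, xp, lam):
    """Assumed form of R_F[x, x':lam]: the rectangular hull of
    F(x) cap {F(x') + lam*rho(x,x')*Q0}; the hull is the identity for rectangles."""
    return intersect_all([F[x], inflate(F[xp], lam * rho(x, xp))])

def in_L_R(F, rho, lam):
    """Test the defining condition (6.17) of L_R(F) over all ordered quadruples."""
    pts = list(F)
    for x in pts:
        for xp in pts:
            R1 = rect_RF(F, rho, x, xp, lam)
            if R1 is None:
                return False
            for y in pts:
                for yp in pts:
                    R2 = rect_RF(F, rho, y, yp, lam)
                    if R2 is None:
                        return False
                    if intersect_all([R1, inflate(R2, lam * rho(x, y))]) is None:
                        return False
    return True

def Lambda_R(F, rho, hi=1.0, tol=1e-8):
    """Approximate Lambda_R(F) = inf L_R(F) by bisection (finite M only)."""
    while not in_L_R(F, rho, hi):          # first grow an upper bound
        hi *= 2.0
        if hi > 1e12:
            return float("inf")            # no finite lambda found
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if in_L_R(F, rho, mid) else (mid, hi)
    return hi
```

Each test of a single value of \(\lambda\) costs \(O(N^{4})\) operations, matching the count of quadruples in Remark 6.16 below; the faster \(O(N^{3})\) route of Remark 6.17 is specific to half-planes.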
Hence, thanks to the necessity part of Theorem 1.10 and (1.17), we have
\[\Lambda_{\cal R}(F)\leq|F|_{\mathfrak{M}}. \tag{6.19}\]
Let us prove the following useful property of the constant \(\Lambda_{\cal R}(F)\).

**Lemma 6.6**: _Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space. Suppose that either \(F:{\cal M}\to{\cal K}({\bf R}^{2})\) or \(F:{\cal M}\to{\cal H}{\cal P}({\bf R}^{2})\), see (1.3) and (1.4). If \(\Lambda_{\cal R}(F)<\infty\), then the infimum in the right hand side of definition (6.18) is attained. In other words, in these settings, condition (1.16) of Theorem 1.10 is satisfied with \(\lambda=\Lambda_{\cal R}(F)\)._

_Proof._ Let \(T=(x,x^{\prime},y,y^{\prime})\) be an (ordered) quadruple of elements of \({\cal M}\), and let
\[{\cal L}_{{\cal R},T}(F)=\{\lambda\geq 0:{\cal R}_{F}[x,x^{\prime}:\lambda]\cap\{{\cal R}_{F}[y,y^{\prime}:\lambda]+\lambda\rho(x,y)\,Q_{0}\}\neq\emptyset\}.\]
Let
\[\Lambda_{{\cal R},T}(F)=\inf{\cal L}_{{\cal R},T}(F) \tag{6.20}\]
provided \({\cal L}_{{\cal R},T}(F)\neq\emptyset\), and \(\Lambda_{{\cal R},T}(F)=+\infty\) if \({\cal L}_{{\cal R},T}(F)=\emptyset\). Clearly, thanks to Definition 6.5 and (6.20),
\[\Lambda_{\cal R}(F)=\sup\Lambda_{{\cal R},T}(F) \tag{6.21}\]
where the supremum is taken over all (ordered) quadruples \(T=(x,x^{\prime},y,y^{\prime})\) of elements of \({\cal M}\).

Let us prove that the infimum in the right hand side of (6.20) is attained, i.e., for each (ordered) quadruple \(T=(x,x^{\prime},y,y^{\prime})\) of elements of \({\cal M}\), we have
\[{\cal R}_{F}[x,x^{\prime}:\lambda]\cap\{{\cal R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)\,Q_{0}\}\neq\emptyset\quad\mbox{for every}\quad\lambda\geq\Lambda_{{\cal R},T}(F). \tag{6.22}\]
Let us note that if this property is true, then the proof of the lemma is immediate. Indeed, thanks to (6.21), \(\Lambda_{{\cal R},T}(F)\leq\Lambda_{\cal R}(F)\) for every \(T\), so that, thanks to (6.22), for every \(x,x^{\prime},y,y^{\prime}\in{\cal M}\), we have
\[{\cal R}_{F}[x,x^{\prime}:\lambda]\cap\{{\cal R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)\,Q_{0}\}\neq\emptyset\quad\mbox{with}\quad\lambda=\Lambda_{\cal R}(F)\]
proving the lemma.

We turn to the proof of property (6.22). Thanks to part (i) of Remark 1.11, the quantity
\[\Lambda_{{\cal R},T}(F)=\inf\lambda \tag{6.23}\]
where the infimum is taken over all constants \(\lambda\geq 0\) such that for every \(i=1,2\), there exist points
\[A(i)=(a_{1}(i),a_{2}(i))\in F(x),\ \ A^{\prime}(i)\in F(x^{\prime}),\ \ B(i)=(b_{1}(i),b_{2}(i))\in F(y),\ \ B^{\prime}(i)\in F(y^{\prime}) \tag{6.24}\]
satisfying the following inequalities:
\[\|A(i)-A^{\prime}(i)\|\leq\lambda\rho(x,x^{\prime}),\ \ \ \|B(i)-B^{\prime}(i)\|\leq\lambda\rho(y,y^{\prime})\ \ \mbox{and}\ \ |a_{i}(i)-b_{i}(i)|\leq\lambda\rho(x,y). \tag{6.25}\]
Furthermore, property (6.22) holds provided the infimum in the right hand side of (6.23) is attained. Let us prove this property.

We know that \(\Lambda_{\cal R}(F)\) is finite, i.e., \(\Lambda_{\cal R}(F)\leq\lambda_{0}\) for some \(\lambda_{0}\geq 0\). Therefore, thanks to (6.21), \(\Lambda_{{\cal R},T}(F)\leq\lambda_{0}\) as well. Thus, without loss of generality, we can assume that in (6.23) we have \(0\leq\lambda\leq\lambda_{0}\). Furthermore, this observation and Definition 6.5 imply the existence of a constant \(\lambda\) and points \(A(i)\), \(A^{\prime}(i)\), \(B(i)\), \(B^{\prime}(i)\) satisfying this inequality and constraints (6.24) and (6.25).
Our aim is to show that under all these conditions, the infimum in (6.23) is attained. To this end, let us consider a tuple
\[{\cal T}=(\lambda,A(1),A^{\prime}(1),B(1),B^{\prime}(1),A(2),A^{\prime}(2),B(2),B^{\prime}(2))\]
where \(A(i)=(a_{1}(i),a_{2}(i))\), \(A^{\prime}(i)=(a^{\prime}_{1}(i),a^{\prime}_{2}(i))\), \(B(i)=(b_{1}(i),b_{2}(i))\), \(B^{\prime}(i)=(b^{\prime}_{1}(i),b^{\prime}_{2}(i))\), \(i=1,2\), are points in \({\bf R}^{2}\). Because \({\cal T}\) depends on 17 parameters, we will identify this tuple with a point in \({\bf R}^{17}\). Then the constraints (6.24), (6.25) and the inequality \(0\leq\lambda\leq\lambda_{0}\) determine a certain _non-empty convex closed subset_ \(E\subset{\bf R}^{17}\). We have to prove that the minimum of the function \(G(T)=\lambda\), \(T\in E\), is attained on the constraint set \(E\).

First, let us show this provided \(F:{\cal M}\to{\cal K}({\bf R}^{2})\). In this case \(F(x)\), \(F(x^{\prime})\), \(F(y)\) and \(F(y^{\prime})\) are _compact subsets_ of \({\bf R}^{2}\) so that the set \(E\) is a _compact subset of_ \({\bf R}^{17}\). Therefore, the continuous function \(G\) takes its minimum value on the set \(E\), proving (6.22) in the case under consideration.

Let us prove (6.22) provided \(F:{\cal M}\to{\cal HP}({\bf R}^{2})\). In this case the sets \(F(x)\), \(F(x^{\prime})\), \(F(y)\) and \(F(y^{\prime})\) are half-planes. We also recall that \(\|\cdot\|\) is the _uniform norm_ in \({\bf R}^{2}\). These observations tell us that the set \(E\) is determined by _a finite number of linear constraints_, i.e., \(E\) is a _non-empty convex polytope_ (not necessarily bounded) in \({\bf R}^{17}\). We also note that the objective function \(G(T)=\lambda\) is a _linear functional bounded from below_ on \(E\) (because \(\lambda\geq 0\)).

Thus, we have a linear programming problem on a non-empty convex polytope \(E\) in \({\bf R}^{17}\) (i.e., with at least one feasible solution, in the terminology of linear programming theory) whose objective function is bounded from below on \(E\), the set of all feasible solutions. In this case, there exists an optimal solution to this problem, see, e.g., [22], Theorem 4.2.3.

Thus, in both cases, the infimum in the right hand side of (6.23) is attained, and the proof of the lemma is complete.

Applying to \(F\) the \(\vec{\lambda}\)-Projection Algorithm with an appropriate parameter \(\vec{\lambda}\), we obtain the following solution to Problem 1.2 for the class \(\mathfrak{T}\) of set-valued mappings satisfying Condition 1.9.

**Theorem 6.7**: _Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(F:{\cal M}\to{\rm Conv}({\bf R}^{2})\) be a set-valued mapping. Suppose that \(\mathfrak{M}\) and \(F\) satisfy Condition 1.9. Then the following statements hold:_

_(i) There exists a Lipschitz selection of \(F\) if and only if \(\Lambda_{\cal R}(F)<\infty\)._

_(ii) Suppose that \(0<\Lambda_{\cal R}(F)<\infty\)._
_Then for every constant \(\gamma>1\), the \(\vec{\lambda}\)-Projection Algorithm with_
\[\vec{\lambda}=(\,3\gamma\Lambda_{\cal R}(F),\,\gamma\Lambda_{\cal R}(F)\,)\]
_produces the outcome **"Success"** and returns the Lipschitz selection \(f_{\vec{\lambda},F}\) of \(F\) with Lipschitz seminorm_
\[\|f_{\vec{\lambda},F}\|_{{\rm Lip}({\cal M})}\leq 5\gamma\,|F|_{\mathfrak{M}}.\]

_(iii) If \(\Lambda_{\cal R}(F)=0\), then for every \(\varepsilon>0\), the \(\vec{\lambda}\)-Projection Algorithm with \(\vec{\lambda}=(3\varepsilon,\varepsilon)\) produces the outcome **"Success"** and returns the Lipschitz selection \(f_{\vec{\lambda},F}\) of \(F\) with \(\|f_{\vec{\lambda},F}\|_{{\rm Lip}({\cal M})}\leq 5\varepsilon\)._

_(iv) Suppose that either \(F:{\cal M}\to{\cal K}({\bf R}^{2})\) or \(F:{\cal M}\to{\cal HP}({\bf R}^{2})\). If \(\Lambda_{\cal R}(F)\) is finite then for every \(\gamma\geq 1\), the \(\vec{\lambda}\)-PA with \(\vec{\lambda}=(\,3\gamma\Lambda_{\cal R}(F),\,\gamma\Lambda_{\cal R}(F)\,)\) produces the outcome **"Success"** and returns the Lipschitz selection \(f_{\vec{\lambda},F}\) of \(F\) with Lipschitz seminorm \(\|f_{\vec{\lambda},F}\|_{{\rm Lip}({\cal M})}\leq 5\gamma\,|F|_{\mathfrak{M}}\)._

_Proof._ Part (i) of the theorem is immediate from Theorem 1.10. Let us prove parts (ii) and (iii). We set \(\hat{\lambda}=\gamma\Lambda_{\mathcal{R}}(F)\) if \(\Lambda_{\mathcal{R}}(F)\in(0,\infty)\), and \(\hat{\lambda}=\varepsilon\) if \(\Lambda_{\mathcal{R}}(F)=0\). Because \(\gamma>1\) and \(\varepsilon>0\), we have \(\Lambda_{\mathcal{R}}(F)<\hat{\lambda}<\infty\). Therefore, thanks to Definition 6.5, condition (1.16) of Theorem 1.10 is satisfied with \(\lambda=\hat{\lambda}\). Thus
\[\mathcal{R}_{F}[x,x^{\prime}:\hat{\lambda}]\cap\{\mathcal{R}_{F}[y,y^{\prime}:\hat{\lambda}]+\hat{\lambda}\,\rho(x,y)\,Q_{0}\}\neq\emptyset\quad\text{for every}\ \ x,x^{\prime},y,y^{\prime}\in\mathcal{M} \tag{6.26}\]
proving that \(F\) satisfies the hypothesis of Proposition 5.1 with \(\bar{\lambda}=\lambda=\hat{\lambda}\). Thanks to this proposition, for every \(x,x^{\prime},x^{\prime\prime},y,y^{\prime},y^{\prime\prime}\in\mathcal{M}\), we have:
\[\mathcal{W}_{F}[x,x^{\prime},x^{\prime\prime}:3\hat{\lambda}]\cap\left\{\mathcal{W}_{F}[y,y^{\prime},y^{\prime\prime}:3\hat{\lambda}]+\hat{\lambda}\,\rho(x,y)Q_{0}\right\}\neq\emptyset.\]
This property tells us that \(F\) satisfies the hypothesis of Theorem 3.2 with
\[\tilde{\lambda}=3\hat{\lambda}\ \ \text{ and }\ \ \lambda=\hat{\lambda}. \tag{6.27}\]
Now, let us apply to \(F\) the \(\vec{\lambda}=(\lambda_{1},\lambda_{2})\)-Projection Algorithm with \(\lambda_{1}=3\hat{\lambda}\) and \(\lambda_{2}=\hat{\lambda}\), and prove that this algorithm produces the outcome **"Success"**. In fact, we know that \(F\) satisfies the hypothesis of Theorem 3.2. Lemma 3.3 tells us that in this case the set \(F^{[1]}[x:\tilde{\lambda}]\neq\emptyset\) for every \(x\in\mathcal{M}\). Furthermore, thanks to (3.16), the rectangle \(\mathcal{T}^{[1]}_{F,\tilde{\lambda}}[x:\lambda]\neq\emptyset\) on \(\mathcal{M}\). Thus, condition (6.8) holds, proving that the \(\vec{\lambda}\)-PA produces the outcome **"Success"**. See Remark 6.2.
Parts \((\bigstar A)\) and \((\bigstar B)\) of Theorem 6.3 tell us that in this case, the \(\vec{\lambda}\)-Projection Algorithm returns the mapping \(f_{\vec{\lambda},F}:\mathcal{M}\to\mathbf{R}^{2}\) which is a Lipschitz selection of \(F\) with the Lipschitz seminorm
\[\|f_{\vec{\lambda},F}\|_{\mathrm{Lip}(\mathcal{M})}\leq\lambda_{1}+2\lambda_{2}=3\hat{\lambda}+2\hat{\lambda}=5\hat{\lambda}. \tag{6.28}\]
If \(\Lambda_{\mathcal{R}}(F)=0\), then \(\hat{\lambda}=\varepsilon\) so that \(\|f_{\vec{\lambda},F}\|_{\mathrm{Lip}(\mathcal{M})}\leq 5\varepsilon\), proving part (iii) of the theorem. If \(\Lambda_{\mathcal{R}}(F)>0\), then \(\|f_{\vec{\lambda},F}\|_{\mathrm{Lip}(\mathcal{M})}\leq 5\gamma\Lambda_{\mathcal{R}}(F)\). From this inequality and (6.19), we have \(\|f_{\vec{\lambda},F}\|_{\mathrm{Lip}(\mathcal{M})}\leq 5\gamma\,|F|_{\mathfrak{M}}\), and the proof of part (ii) of the theorem is complete.

Let us prove part (iv) of the theorem. Thanks to Lemma 6.6, for arbitrary \(\gamma\geq 1\), condition (6.26) holds with \(\hat{\lambda}=\gamma\Lambda_{\mathcal{R}}(F)\). In other words, the set-valued mapping \(F\) satisfies the hypothesis of Proposition 5.1 with \(\bar{\lambda}=\lambda=\hat{\lambda}\). This enables us to repeat the proof of part (ii) of the theorem given above (for this choice of the constant \(\hat{\lambda}\)). This proof leads us to the statement of part (iv), completing the proof of the theorem.

Theorem 6.7 leads us to a solution to Problem 1.2 for pseudometric spaces \(\mathfrak{M}\) and set-valued mappings \(F\) satisfying Condition 1.9. For simplicity, let us demonstrate this solution for the case of a _finite set_ \(\mathcal{M}\) and a set-valued mapping \(F:\mathcal{M}\to\mathfrak{T}\) where \(\mathfrak{T}\) is either the family \(\mathcal{K}(\mathbf{R}^{2})\) or the family \(\mathcal{HP}(\mathbf{R}^{2})\). (See (1.3) and (1.4).)

**Algorithm 6.8** _(A constructive algorithm for a nearly optimal Lipschitz selection.)_ Given a finite pseudometric space \(\mathfrak{M}=(\mathcal{M},\rho)\), and a set-valued mapping \(F:\mathcal{M}\to\mathfrak{T}\), the algorithm produces a nearly optimal Lipschitz selection \(f\) of \(F\) (the outcome **"Success"**) or stops (the outcome **"No go"**). This procedure includes the following two main steps.

**STEP 1.** At this step we compute the constant \(\Lambda_{\mathcal{R}}(F)\). If it turns out that \(\Lambda_{\mathcal{R}}(F)=+\infty\), then a _Lipschitz selection of \(F\) does not exist_. In this case, the algorithm produces the outcome **"No go"** and stops. If we determine that \(\Lambda_{\mathcal{R}}(F)<+\infty\), we calculate this constant up to some constant \(\gamma\geq 1\). Thus, at this step the algorithm returns the number \(\gamma\,\Lambda_{\mathcal{R}}(F)\).

**STEP 2.** At this step we apply to \(F\) the \(\vec{\lambda}=(\lambda_{1},\lambda_{2})\)-Projection Algorithm with
\[\lambda_{1}=3\gamma\,\Lambda_{\mathcal{R}}(F)\quad\text{and}\quad\lambda_{2}=\gamma\,\Lambda_{\mathcal{R}}(F). \tag{6.29}\]
The Projection Algorithm produces the outcome **"Success"** and returns the mapping \(f_{\vec{\lambda},F}\) which is a Lipschitz selection of \(F\) with Lipschitz seminorm
\[\|f_{\vec{\lambda},F}\|_{\mathrm{Lip}(\mathcal{M})}\leq 5\gamma\,\Lambda_{\mathcal{R}}(F). \tag{6.30}\]
At this stage, Algorithm 6.8 produces the outcome **"Success"** and stops.

Note that Algorithm 6.8 is immediate from part (iv) of Theorem 6.7.
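Combining the two sketches above yields a toy end-to-end version of Algorithm 6.8 for rectangle-valued \(F\) (again an added illustration, under the assumptions stated there): the bisection stands in for computing \(\Lambda_{\mathcal{R}}(F)\) up to a factor \(\gamma>1\) at **STEP 1**, and **STEP 2** invokes the Projection Algorithm with the parameters (6.29).

```python
def algorithm_6_8(F, rho, gamma=1.01, tol=1e-8):
    """Nearly optimal Lipschitz selection for rectangle-valued F, cf. Algorithm 6.8."""
    lam = Lambda_R(F, rho, tol=tol)        # STEP 1: Lambda_R(F), up to the factor gamma
    if lam == float("inf"):
        return None                        # outcome "No go": no Lipschitz selection
    lam *= gamma
    return projection_algorithm(F, rho, 3.0 * lam, lam)     # STEP 2, cf. (6.29)

# A toy run: three points on the line with unit-square targets drifting rightwards.
F = {0: (0.0, 1.0, 0.0, 1.0), 1: (2.0, 3.0, 0.0, 1.0), 2: (4.0, 5.0, 0.0, 1.0)}
rho = lambda x, y: float(abs(x - y))
f = algorithm_6_8(F, rho)   # a selection with seminorm at most 5*gamma*Lambda_R(F)
```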
**Remark 6.9**: (i) Clearly, if \(\mathfrak{M}=(\mathcal{M},\rho)\) is a _metric space_ and \(\mathcal{M}\) is _finite_ then _any selection_ \(f\) of the set-valued mapping \(F\) from Algorithm 6.8 is Lipschitz, i.e., \(\|f\|_{\mathrm{Lip}(\mathcal{M})}<\infty\). Hence, \(|F|_{\mathfrak{M}}<\infty\), so that, thanks to inequality (6.19), \(\Lambda_{\mathcal{R}}(F)<\infty\) as well. Thus, if \(\rho\) is a metric and \(\mathcal{M}\) is finite, Algorithm 6.8 always produces the outcome **"Success"**.

However, if \(\rho\) is a pseudometric, i.e., \(\rho(x,y)=0\) may hold with certain \(x\neq y\), in general, the quantity \(\Lambda_{\mathcal{R}}(F)\) may take the value \(+\infty\) (even for a finite set \(\mathcal{M}\)). For instance, if there exist elements \(x,x^{\prime}\in\mathcal{M}\) such that \(\rho(x,x^{\prime})=0\) and \(F(x)\cap F(x^{\prime})=\emptyset\), then, thanks to (1.15), \(\mathcal{R}_{F}[x,x^{\prime}:\lambda]=\emptyset\) for every \(\lambda\geq 0\). Therefore, inequality (1.16) does not hold for every \(y,y^{\prime}\in\mathcal{M}\) and any \(\lambda\geq 0\), proving that \(\Lambda_{\mathcal{R}}(F)=+\infty\). See Definition 6.5. It is also clear that in this case \(F\) does not have a Lipschitz selection (with respect to the pseudometric \(\rho\)).

(ii) In general, the _precise computation_ of the quantity \(\Lambda_{\mathcal{R}}(F)\) may be a very difficult technical problem. For this reason, we introduce in Algorithm 6.8 the parameter \(\gamma\geq 1\) which enables us to calculate \(\Lambda_{\mathcal{R}}(F)\) up to this parameter. Inequality (6.30) tells us that in this case the algorithm constructs a selection of \(F\) whose Lipschitz seminorm is bounded by a constant linearly depending on \(\gamma\).

**6.3 The constant \(\Lambda_{\mathcal{R}}(F)\) and other related constants.**

In this section we give several remarks related to efficient algorithms for computing the constant \(\Lambda_{\mathcal{R}}(F)\). See Definition 6.5. Theorem 5.6 motivates us to introduce the following constant.

**Definition 6.10**: Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a pseudometric space, and let \(F:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\) be a set-valued mapping. Let
\[\mathcal{L}_{\mathcal{R}}^{(int)}(F)=\{\lambda\geq 0:\bigcap_{y,y^{\prime}\in\mathcal{M}}\{\mathcal{R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)Q_{0}\}\neq\emptyset\,\text{ for all }x\in\mathcal{M}\}. \tag{6.31}\]
Let
\[\Lambda_{\mathcal{R}}^{(int)}(F)=\inf\,\mathcal{L}_{\mathcal{R}}^{(int)}(F) \tag{6.32}\]
provided \(\mathcal{L}_{\mathcal{R}}^{(int)}(F)\neq\emptyset\), and let \(\Lambda_{\mathcal{R}}^{(int)}(F)=+\infty\) if \(\mathcal{L}_{\mathcal{R}}^{(int)}(F)=\emptyset\). Thus, \(\Lambda_{\mathcal{R}}^{(int)}(F)\) is the infimum of constants \(\lambda\geq 0\) such that condition (5.6) of Theorem 5.6 holds.

**Lemma 6.11**: _Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(F:{\cal M}\to{\rm Conv}({\bf R}^{2})\) be a set-valued mapping. Suppose that either \({\cal M}\) is finite or at least one of the sets \(F(x)\), \(x\in{\cal M}\), is compact._

_Then \({\cal L}^{(int)}_{\cal R}(F)={\cal L}_{\cal R}(F)\) and \(\Lambda^{(int)}_{\cal R}(F)=\Lambda_{\cal R}(F)\)._

_Proof._ Clearly, thanks to (6.18) and (6.32), it suffices to prove the equality \({\cal L}^{(int)}_{\cal R}(F)={\cal L}_{\cal R}(F)\). If \(\lambda\in{\cal L}^{(int)}_{\cal R}(F)\), then, thanks to (6.31), for every \(x\in{\cal M}\), we have
\[\bigcap_{y,y^{\prime}\in{\cal M}}\{{\cal R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)Q_{0}\}\neq\emptyset. \tag{6.33}\]
Clearly, thanks to this property, for every \(x,x^{\prime},y,y^{\prime}\in{\cal M}\), we have
\[{\cal R}_{F}[x,x^{\prime}:\lambda]\cap\{{\cal R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)\,Q_{0}\}\neq\emptyset \tag{6.34}\]
proving that \(\lambda\in{\cal L}_{\cal R}(F)\), see (6.17).

Conversely, suppose that \(\lambda\in{\cal L}_{\cal R}(F)\), i.e., (6.34) holds for all \(x,x^{\prime},y,y^{\prime}\in{\cal M}\). Let us prove that in this case, for every \(x\in{\cal M}\), property (6.33) holds as well. We set
\[{\cal V}=\{{\cal R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)Q_{0}:y,y^{\prime}\in{\cal M}\}.\]
Let us prove that any two members of the family \({\cal V}\) have a common point. Indeed, thanks to (6.34), given \(y,y^{\prime},z,z^{\prime}\in{\cal M}\), there exist points \(a\in{\cal R}_{F}[y,y^{\prime}:\lambda]\) and \(b\in{\cal R}_{F}[z,z^{\prime}:\lambda]\) such that \(\|a-b\|\leq\lambda\,\rho(y,z)\). Therefore, thanks to the triangle inequality, \(\|a-b\|\leq\lambda\,(\rho(y,x)+\rho(x,z))\) so that there exists a point \(w\in[a,b]\) such that \(\|a-w\|\leq\lambda\,\rho(y,x)\) and \(\|b-w\|\leq\lambda\,\rho(z,x)\). Hence,
\[w\in\{{\cal R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)Q_{0}\}\cap\{{\cal R}_{F}[z,z^{\prime}:\lambda]+\lambda\,\rho(x,z)Q_{0}\}\]
proving the required property of the family \({\cal V}\).

From this property and the lemma's hypothesis, it follows that the family \({\cal V}\) satisfies the hypothesis of Lemma 2.2 (i.e., Helly's theorem for rectangles). Thanks to this lemma, the family \({\cal V}\) has non-empty intersection, proving the required property (6.33). The proof of the lemma is complete.

Below we will see how the equality \(\Lambda^{(int)}_{\cal R}(F)=\Lambda_{\cal R}(F)\), i.e., the representation of \(\Lambda_{\cal R}(F)\) in the form \(\Lambda_{\cal R}(F)=\inf\,{\cal L}^{(int)}_{\cal R}(F)\), see (6.32), will help us calculate this constant in an efficient way. But now we introduce one more useful constant directly related to the Finiteness Principle in \({\bf R}^{2}\), see Theorem 5.2.

**Definition 6.12**: Let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(F:{\cal M}\to{\rm Conv}({\bf R}^{2})\) be a set-valued mapping. We let \({\cal L}^{({\cal FP})}(F)\) denote the family of all constants \(\lambda\geq 0\) such that for every subset \({\cal M}^{\prime}\subset{\cal M}\) consisting of at most _four_ points, the restriction \(F|_{{\cal M}^{\prime}}\) of \(F\) to \({\cal M}^{\prime}\) has a Lipschitz selection \(f_{{\cal M}^{\prime}}\) with Lipschitz seminorm \(\|f_{{\cal M}^{\prime}}\|_{{\rm Lip}({\cal M}^{\prime})}\leq\lambda\). Let
\[\Lambda^{({\cal FP})}(F)=\inf\,{\cal L}^{({\cal FP})}(F) \tag{6.35}\]
provided \({\cal L}^{({\cal FP})}(F)\neq\emptyset\), and let \(\Lambda^{({\cal FP})}(F)=+\infty\) if \({\cal L}^{({\cal FP})}(F)=\emptyset\). Thus, \(\Lambda^{({\cal FP})}(F)\) is the infimum of constants \(\lambda\geq 0\) such that the hypothesis of Theorem 5.2 holds. Clearly,
\[\Lambda^{({\cal FP})}(F)=\sup\,\{|F|_{\mathfrak{M}^{\prime}}:{\cal M}^{\prime}\subset{\cal M},\ \#{\cal M}^{\prime}\leq 4\}, \tag{6.36}\]
where \(\mathfrak{M}^{\prime}=({\cal M}^{\prime},\rho)\). Furthermore, thanks to Definitions 6.5 and 6.12 and formula (6.36), the inequality
\[\Lambda_{\mathcal{R}}(F)\leq\Lambda^{(\mathcal{FP})}(F)\leq|F|_{\mathfrak{M}} \tag{6.37}\]
holds provided \(F:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\) is an arbitrary set-valued mapping.
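For half-plane-valued \(F\) on a finite space, the supremum (6.36) is directly computable: each seminorm \(|F|_{\mathfrak{M}^{\prime}}\) over a subset of at most four points is the value of a small linear program, in the spirit of Remark 6.16 below. The following sketch is our illustration, not part of the paper; it assumes `scipy` is available, and the interface `halfplane(x)`, returning a pair \((a,b)\) with \(F(x)=\{u:a\cdot u\leq b\}\), is a hypothetical convention.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def subset_seminorm(halfplanes, dist):
    """|F|_{M'} for a k-point subset (k <= 4) with half-plane targets
    F(x_i) = {u in R^2 : a_i . u <= b_i}.  LP variables: (f_1, ..., f_k, lam);
    minimize lam subject to a_i . f_i <= b_i and |f_i - f_j|_inf <= lam*rho_ij."""
    k = len(halfplanes)
    n = 2 * k + 1                                   # 2k coordinates plus lam
    A, b = [], []
    for i, (a, bi) in enumerate(halfplanes):        # membership constraints
        row = np.zeros(n); row[2 * i:2 * i + 2] = a
        A.append(row); b.append(bi)
    for i, j in itertools.combinations(range(k), 2):   # Lipschitz constraints
        for c in (0, 1):                               # each coordinate
            for s in (1.0, -1.0):                      # +-(f_i - f_j)_c <= lam*rho_ij
                row = np.zeros(n)
                row[2 * i + c], row[2 * j + c], row[-1] = s, -s, -dist[i][j]
                A.append(row); b.append(0.0)
    cost = np.zeros(n); cost[-1] = 1.0              # objective: lam
    res = linprog(cost, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] * (n - 1) + [(0.0, None)])
    return res.fun if res.success else float("inf")

def Lambda_FP(points, halfplane, rho):
    """The supremum (6.36) over all subsets of at most four points: O(N^4) LPs."""
    best = 0.0
    for k in range(1, 5):
        for sub in itertools.combinations(points, k):
            hp = [halfplane(x) for x in sub]
            d = [[rho(x, y) for y in sub] for x in sub]
            best = max(best, subset_seminorm(hp, d))
    return best
```

By (6.38) below, feeding the resulting value into the \((\gamma\Lambda^{({\cal FP})},\gamma\Lambda^{({\cal FP})})\)-Projection Algorithm reproduces the bound of Theorem 6.14.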
Moreover, if \(\mathfrak{M}\) and \(F\) satisfy Condition 1.9, then, thanks to Theorem 1.10 and Theorem 5.2, we have
\[|F|_{\mathfrak{M}}\leq\min\,\left\{5\Lambda_{\mathcal{R}}(F),3\Lambda^{(\mathcal{FP})}(F)\right\}. \tag{6.38}\]

**Lemma 6.13**: _Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a pseudometric space, and let \(F:\mathcal{M}\to\mathfrak{T}\) where \(\mathfrak{T}\) is either \(\mathcal{K}(\mathbf{R}^{2})\) or \(\mathcal{HP}(\mathbf{R}^{2})\). If \(\Lambda_{\mathcal{R}}(F)<\infty\), then \(\Lambda^{(\mathcal{FP})}(F)\in\mathcal{L}^{(\mathcal{FP})}(F)\). Cf. (6.35)._

_Proof._ We follow the scheme of the proof of Lemma 6.6, and use equality (6.36). We leave the details to the interested reader.

**Theorem 6.14**: _Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a pseudometric space, and let \(F:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\) be a set-valued mapping. Suppose that either \(F:\mathcal{M}\to\mathcal{K}(\mathbf{R}^{2})\) or \(F:\mathcal{M}\to\mathcal{HP}(\mathbf{R}^{2})\)._

_If \(\Lambda^{(\mathcal{FP})}(F)<\infty\), then for every \(\gamma\geq 1\), the \(\vec{\lambda}\)-Projection Algorithm with_
\[\vec{\lambda}=(\,\gamma\Lambda^{(\mathcal{FP})}(F),\,\gamma\Lambda^{(\mathcal{FP})}(F)\,)\]
_produces the outcome **"Success"** and returns the Lipschitz selection \(f_{\vec{\lambda},F}\) of \(F\) with Lipschitz seminorm \(\|f_{\vec{\lambda},F}\|_{\mathrm{Lip}(\mathcal{M})}\leq 3\gamma\,|F|_{\mathfrak{M}}\)._

_Proof._ We follow the approach suggested in the proof of part (iv) of Theorem 6.7. Thanks to Lemma 6.13, \(\gamma\Lambda^{(\mathcal{FP})}(F)\in\mathcal{L}^{(\mathcal{FP})}(F)\) for every \(\gamma\geq 1\). In other words, for every subset \(\mathcal{M}^{\prime}\subset\mathcal{M}\) consisting of at most four points, the restriction \(F|_{\mathcal{M}^{\prime}}\) of \(F\) to \(\mathcal{M}^{\prime}\) has a Lipschitz selection \(f_{\mathcal{M}^{\prime}}\) with Lipschitz seminorm \(\|f_{\mathcal{M}^{\prime}}\|_{\mathrm{Lip}(\mathcal{M}^{\prime})}\leq\lambda\) where \(\lambda=\gamma\Lambda^{(\mathcal{FP})}(F)\).

As we have shown in the sufficiency part of the proof of Theorem 5.2, in this case condition (3.3) of Theorem 3.2 is satisfied with \(\tilde{\lambda}=\lambda\). Combining this condition with the hypothesis of Theorem 6.14, we conclude that \(\mathfrak{M}\) and \(F\) satisfy the hypothesis of Theorem 3.2 with \(\tilde{\lambda}=\lambda=\gamma\Lambda^{(\mathcal{FP})}(F)\). We then repeat the proof of part (ii) of Theorem 6.7 from definition (6.27) up to and including inequality (6.28), setting \(\tilde{\lambda}=\lambda\) and \(\lambda_{1}=\lambda_{2}=\lambda\) in this proof. As a result, we show that in this case the \(\vec{\lambda}\)-Projection Algorithm with \(\vec{\lambda}=(\lambda,\lambda)\) produces the outcome **"Success"** and returns the mapping \(f_{\vec{\lambda},F}:\mathcal{M}\to\mathbf{R}^{2}\) which is a Lipschitz selection of \(F\) with the Lipschitz seminorm
\[\|f_{\vec{\lambda},F}\|_{\mathrm{Lip}(\mathcal{M})}\leq\lambda_{1}+2\lambda_{2}=3\lambda=3\gamma\Lambda^{(\mathcal{FP})}(F).\]
But \(\Lambda^{(\mathcal{FP})}(F)\leq|F|_{\mathfrak{M}}\), see (6.37), and the proof of the theorem is complete.

**Algorithm 6.15**: Theorem 6.14 leads us to a new algorithm for a nearly optimal Lipschitz selection of a set-valued mapping. We obtain this algorithm by a slight modification of Algorithm 6.8. More specifically, at **STEP 1** of this algorithm we replace the parameter \(\Lambda_{\mathcal{R}}(F)\) with the parameter \(\Lambda^{(\mathcal{FP})}(F)\).
At **STEP 2** of Algorithm 6.8 we set
\[\lambda_{1}=\gamma\,\Lambda^{(\mathcal{FP})}(F)\quad\text{and}\quad\lambda_{2}=\gamma\,\Lambda^{(\mathcal{FP})}(F). \tag{6.39}\]
Then, thanks to Theorem 6.14, inequality (6.30) transforms into the following one:
\[\|f_{\vec{\lambda},F}\|_{\mathrm{Lip}(\mathcal{M})}\leq 3\gamma\,\Lambda^{(\mathcal{FP})}(F).\]
At this stage, Algorithm 6.15 produces the outcome **"Success"** and stops.

Let us make two remarks related to some preliminary estimates of the computational efficiency of Algorithm 6.8 and Algorithm 6.15.

**Remark 6.16**: We note that at the second step of Algorithms 6.8 and 6.15 the \(\vec{\lambda}\)-Projection Algorithm is applied with some parameters \(\vec{\lambda}=(\lambda_{1},\lambda_{2})\), see (6.29) and (6.39). In the forthcoming paper [34] we will show that if \(\mathcal{M}\) is an \(N\)-element pseudometric space and \(F\) satisfies certain rather mild geometric conditions, the running time of the \(\vec{\lambda}\)-Projection Algorithm is \(O(N^{3})\). In particular, this is true provided each \(F(x)\) is a half-plane, i.e., \(F:\mathcal{M}\to\mathcal{HP}(\mathbf{R}^{2})\).

The most difficult part of Algorithms 6.8 and 6.15 is **STEP 1**, i.e., calculating (up to some constant \(\gamma\geq 1\)) the values of the constants \(\Lambda_{\mathcal{R}}(F)\) and \(\Lambda^{(\mathcal{FP})}(F)\) respectively. Although the definitions of these constants are given in explicit geometric terms, their calculation may require a huge number of computer operations. (Even if every set \(F(x)\), \(x\in\mathcal{M}\), is a disc in \(\mathbf{R}^{2}\), it is not clear how to calculate \(\Lambda_{\mathcal{R}}(F)\) and \(\Lambda^{(\mathcal{FP})}(F)\), even up to an absolute constant.)

Let us consider the simplest non-trivial case of _a set-valued mapping from \(\mathcal{M}\) (with \(\#\mathcal{M}=N\)) to the family \(\mathcal{HP}(\mathbf{R}^{2})\) of all half-planes in \(\mathbf{R}^{2}\)_. It is not difficult to compute \(\Lambda_{\mathcal{R}}(F)\) with \(O(N^{4})\) running time. Indeed, to calculate the constant \(\Lambda_{\mathcal{R}}(F)\) we have to consider all possible (ordered) quadruples \(T=(x,x^{\prime},y,y^{\prime})\) of elements of \(\mathcal{M}\) and solve the corresponding linear programming problem determined by (6.23), (6.24) and (6.25). For each \(T\), solving this problem will take \(O(1)\) running time. Since the number of such quadruples \(T\) is \(O(N^{4})\), the total running time is also \(O(N^{4})\). The same estimate \(O(N^{4})\) is obtained for the running time of computing the constant \(\Lambda^{(\mathcal{FP})}(F)\).

Thus, for both Algorithm 6.8 and Algorithm 6.15 this approach provides the running time
\[O(N^{4})\ \text{(at {\bf STEP 1})}+O(N^{3})\ \text{(at {\bf STEP 2})}=O(N^{4}).\]

**Remark 6.17**: For the same case of a set-valued mapping \(F:\mathcal{M}\to\mathcal{HP}(\mathbf{R}^{2})\) defined on a finite set \(\mathcal{M}\) with \(\#\mathcal{M}=N\), there exists another algorithm for calculating the constant \(\Lambda_{\mathcal{R}}(F)\) with \(O(N^{3})\) running time. Let us briefly explain the main idea of this algorithm. Definition 6.10 and Lemma 6.11 tell us that, in the case under consideration, \(\Lambda_{\mathcal{R}}^{(int)}(F)=\Lambda_{\mathcal{R}}(F)\).
Thus, \(\Lambda_{\mathcal{R}}^{(int)}(F)=\inf\lambda\) where \(\lambda\) runs over all non-negative numbers such that for every \(x\in\mathcal{M}\) there exists a point \(u\in\mathbf{R}^{2}\) such that
\[u\in\mathcal{R}_{F}[y,y^{\prime}:\lambda]+\lambda\,\rho(x,y)Q_{0} \tag{6.40}\]
for all \(y,y^{\prime}\in\mathcal{M}\).

Fix \(x\in\mathcal{M}\) and consider the family \(\mathcal{V}_{x}\) of all points \(v=(u,\lambda)\in\mathbf{R}^{3}\) with \(\lambda\geq 0\) such that property (6.40) holds. The reader can easily see that, for each fixed pair \(y,y^{\prime}\in\mathcal{M}\), the set of points \(v=(u,\lambda)\) satisfying (6.40) is _the intersection of at most four half-spaces in \(\mathbf{R}^{3}\)_. Each of these half-spaces depends only on parameters determining the half-planes \(F(y)\) and \(F(y^{\prime})\). Hence, the total number of linear constraints determining the set \(\mathcal{V}_{x}\subset\mathbf{R}^{3}\) is bounded by \(O(N^{2})\).

Let \(\Lambda_{\mathcal{R},x}(F)=\inf\{\lambda:(u,\lambda)\in\mathcal{V}_{x}\}\). Then,
\[\Lambda_{\mathcal{R}}(F)=\max_{x\in\mathcal{M}}\Lambda_{\mathcal{R},x}(F). \tag{6.41}\]
Let us see that there exists an algorithm which, for every \(x\in\mathcal{M}\), computes the quantity \(\Lambda_{\mathcal{R},x}(F)\) with the running time \(O(N^{2})\). Indeed, we know that the problem
\[\text{\it Find}\quad\inf\lambda\quad\text{\it under the condition}\quad(u,\lambda)\in\mathcal{V}_{x}, \tag{6.42}\]
is a _linear program in three variables with \(O(N^{2})\) linear constraints_. The following theorem is a classical result on low-dimensional linear programming due to N. Megiddo [23] and M. E. Dyer [8].

**Theorem 6.18**: _A linear program in three variables and \(m\) constraints can be solved in \(O(m)\) time._

This theorem implies the following useful corollary.

**Corollary 6.19**: _(i) There exists an algorithm which, given a convex polygon \(G\subset{\bf R}^{2}\) determined by \(N\) linear constraints, constructs its rectangular hull \({\cal H}[G]\) in \(O(N)\) running time._

_(ii) There exists an algorithm which, given a point \(A\in{\bf R}^{2}\) and a convex polygon \(G\subset{\bf R}^{2}\) determined by \(N\) linear constraints, calculates the distance from \(A\) to \(G\) using at most \(O(N)\) computer operations._

_Proof._ These results are well known in the theory of geometric algorithms. Each of them can be readily reduced to a certain linear program in three variables and \(CN\) constraints where \(C\) is an absolute constant. Applying Theorem 6.18 to the corresponding linear program, we obtain the statements of the corollary. We leave the details to the interested reader.

Thus, for every \(x\in{\cal M}\) the problem (6.42) can be solved in \(O(N^{2})\) time so that, thanks to (6.41), the constant \(\Lambda_{\cal R}(F)\) can be calculated in \(N\cdot O(N^{2})=O(N^{3})\) time. Combining this algorithm for \(\Lambda_{\cal R}(F)\) with Algorithm 6.8, we obtain a constructive algorithm which, given a set-valued mapping \(F:{\cal M}\to{\cal H}{\cal P}({\bf R}^{2})\), assigns a Lipschitz selection \(f\) of \(F\) with \(\|f\|_{\mathrm{Lip}({\cal M})}\leq 5\,|F|_{\mathfrak{M}}\). The running time of this algorithm is
\[O(N^{3})\ \ \mbox{\bf(STEP 1)}+O(N^{3})\ \ \mbox{\bf(STEP 2)}=O(N^{3}).\]

**6.4 Lipschitz selections of polygon-set valued mappings.**

In this section, we provide some remarks related to a recent paper by C. Fefferman and B. Pegueroles [15]. Let \(D\) and \(L\) be positive integers and let \({\cal P}_{L}({\bf R}^{D})\) be the family of all compact convex polytopes in \({\bf R}^{D}\) defined by at most \(L\) linear constraints. In [15] C.
Fefferman and B. Pegueroles solved a selection problem for set-valued mappings \(F\) from a set \(E\subset{\bf R}^{n}\) into \({\cal P}_{L}({\bf R}^{D})\) _with a slight enlarging of the targets \(F(x)\)_, \(x\in E\). (See also [11, Chapter 7.2].) Let us recall a particular case of this result related to Lipschitz selections.

Let \(E\) be a finite subset of \({\bf R}^{n}\) with \(\#E=N\), and let \(F:E\to{\cal P}_{L}({\bf R}^{D})\) be a set-valued mapping. Given \(\tau>0\) and \(x\in E\), we let \((1+\tau)\diamond F(x)\) denote the convex set obtained by dilating \(F(x)\) about its center of mass by a factor \((1+\tau)\).

**Theorem 6.20**: _Let \(M,\tau>0\) be given. The algorithm described in [15] produces one of the following two outcomes, using at most \(C_{1}\,N\log N\) computer operations and \(C_{1}\,N\) units of computer memory._

**Outcome 1 ("No Go"):** _We guarantee that there does not exist \(f\in\mathrm{Lip}(E,{\bf R}^{D})\) with Lipschitz seminorm \(\leq M\) such that \(f(x)\in F(x)\) for all \(x\in E\)._

**Outcome 2 ("Success"):** _The algorithm produces a mapping \(f:E\to{\bf R}^{D}\) with Lipschitz seminorm at most \(C_{2}\,M\) satisfying \(f(x)\in(1+\tau)\diamond F(x)\) for all \(x\in E\)._

_Here, \(C_{1}>0\) depends only on \(\tau\), \(L\), \(n\) and \(D\), and \(C_{2}>0\) depends only on \(n\) and \(D\)._

This theorem motivates us to consider Problem 1.2 for the family \(\mathfrak{T}={\cal P}_{L}({\bf R}^{2})\) of all non-empty _convex polygons in \({\bf R}^{2}\)_, each defined by at most \(L\) linear constraints. Let us note that this problem is a particular case of the same problem for the family \(\mathfrak{T}={\cal H}{\cal P}({\bf R}^{2})\) of all half-planes in \({\bf R}^{2}\).

Indeed, let \(\mathfrak{M}=({\cal M},\rho)\) be a pseudometric space, and let \(F:{\cal M}\to{\cal P}_{L}({\bf R}^{2})\) be a set-valued mapping. We know that each polygon \(F(x)\), \(x\in{\cal M}\), can be represented as an intersection of at most \(L\) half-planes. We denote this family of half-planes by \({\cal H}_{F}(x)\); thus
\[F(x)=\cap\{H:H\in{\cal H}_{F}(x)\}\ \ \mbox{for every}\ \ \ x\in{\cal M}. \tag{6.43}\]
We introduce a new pseudometric space \(\widetilde{\mathfrak{M}}=(\widetilde{\mathcal{M}},\tilde{\rho})\) whose elements are all couples \(u=(x,H)\) where \(x\in\mathcal{M}\) and \(H\in\mathcal{H}_{F}(x)\). The pseudometric \(\tilde{\rho}\) on \(\widetilde{\mathcal{M}}\) is defined by
\[\tilde{\rho}(u,u^{\prime})=\rho(x,x^{\prime})\ \ \ \mbox{for every}\ \ \ u=(x,H),\ u^{\prime}=(x^{\prime},H^{\prime})\in\widetilde{\mathcal{M}}.\]
Finally, we define a new set-valued mapping \(\widetilde{F}:\widetilde{\mathcal{M}}\to\mathcal{H}\mathcal{P}(\mathbf{R}^{2})\) by letting
\[\widetilde{F}((x,H))=H\ \ \ \mbox{provided}\ \ \ x\in\mathcal{M}\ \ \mbox{and}\ \ H\in\mathcal{H}_{F}(x). \tag{6.44}\]
In particular, \(\tilde{\rho}((x,H),(x,H^{\prime}))=0\) for every \(H,H^{\prime}\in\mathcal{H}_{F}(x)\). This observation implies the following simple claim.
**Claim 6.21**: _(i) If a mapping \(f:\mathcal{M}\to\mathbf{R}^{2}\) is a Lipschitz selection of \(F\) then the mapping \(\tilde{f}:\widetilde{\mathcal{M}}\to\mathbf{R}^{2}\) defined by_
\[\tilde{f}((x,H))=f(x),\ \ \ (x,H)\in\widetilde{\mathcal{M}}, \tag{6.45}\]
_is a Lipschitz (with respect to \(\tilde{\rho}\)) selection of \(\widetilde{F}\), and the following equality_
\[\|\tilde{f}\|_{\mathrm{Lip}(\widetilde{\mathcal{M}},\tilde{\rho})}=\|f\|_{\mathrm{Lip}(\mathcal{M},\rho)} \tag{6.46}\]
_holds._

_(ii) Conversely, if a mapping \(\tilde{f}:\widetilde{\mathcal{M}}\to\mathbf{R}^{2}\) is a Lipschitz (with respect to \(\tilde{\rho}\)) selection of \(\widetilde{F}\), then there exists a unique mapping \(f:\mathcal{M}\to\mathbf{R}^{2}\) satisfying (6.45). Furthermore, \(f\) is a Lipschitz selection of \(F\), and equality (6.46) holds._

_Proof._ Part (i) of the claim is obvious. Let us prove part (ii). Because \(\tilde{f}\) is Lipschitz (with respect to \(\tilde{\rho}\)), for every \(x\in\mathcal{M}\) and every \(H,H^{\prime}\in\mathcal{H}_{F}(x)\), we have
\[\|\tilde{f}((x,H))-\tilde{f}((x,H^{\prime}))\|\leq\|\tilde{f}\|_{\mathrm{Lip}(\widetilde{\mathcal{M}},\tilde{\rho})}\,\tilde{\rho}((x,H),(x,H^{\prime}))=\|\tilde{f}\|_{\mathrm{Lip}(\widetilde{\mathcal{M}},\tilde{\rho})}\,\rho(x,x)=0. \tag{6.47}\]
This enables us to define a mapping \(f:\mathcal{M}\to\mathbf{R}^{2}\) by letting
\[f(x)=\tilde{f}((x,H))\ \ \mbox{where}\ \ H\in\mathcal{H}_{F}(x)\ \ \mbox{is arbitrary}. \tag{6.48}\]
Thanks to (6.47), \(f\) is well defined (i.e., \(\tilde{f}((x,H))\) depends only on \(x\) and is independent of \(H\) provided \(H\in\mathcal{H}_{F}(x)\)). Furthermore, \(f\) is a selection of \(F\) on \(\mathcal{M}\). Indeed, because \(\tilde{f}\) is a Lipschitz selection of \(\widetilde{F}\), the point \(\tilde{f}((x,H))\in\widetilde{F}((x,H))\) for every \(H\in\mathcal{H}_{F}(x)\). Hence, thanks to (6.48), \(f(x)\in\widetilde{F}((x,H))\) for all \(H\in\mathcal{H}_{F}(x)\) proving that
\[f(x)\in\cap\{H:H\in\mathcal{H}_{F}(x)\}=F(x).\ \ \ \mbox{See (6.43).}\]
Finally, thanks to (6.48) and part (i) of the claim, equality (6.46) holds.

Applying the Projection Algorithm 6.1 to the pseudometric space \(\widetilde{\mathfrak{M}}=(\widetilde{\mathcal{M}},\tilde{\rho})\) and the set-valued mapping \(\widetilde{F}:\widetilde{\mathcal{M}}\to\mathcal{H}\mathcal{P}(\mathbf{R}^{2})\), we obtain the following theorem.

**Theorem 6.22**: _Let a positive integer \(L\) and a constant \(M>0\) be given. Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a finite pseudometric space with \(\#\mathcal{M}=N\), and let \(F:\mathcal{M}\to\mathcal{P}_{L}(\mathbf{R}^{2})\) be a set-valued mapping._

_The \((M,M)\)-Projection Algorithm produces one of the following two outcomes, using \(O(L^{3}N^{3})\) computer operations._

**Outcome 1 ("No Go"):** _We guarantee that there does not exist \(f\in\mathrm{Lip}(\mathcal{M})\) with Lipschitz seminorm \(\|f\|_{\mathrm{Lip}(\mathcal{M})}\leq M\) such that \(f(x)\in F(x)\) for all \(x\in\mathcal{M}\)._

**Outcome 2 ("Success"):** _The algorithm produces a mapping \(f:\mathcal{M}\to\mathbf{R}^{2}\) with Lipschitz seminorm \(\|f\|_{\mathrm{Lip}(\mathcal{M})}\leq 3M\) satisfying \(f(x)\in F(x)\) for all \(x\in\mathcal{M}\)._

_Proof._ We set \(\vec{\lambda}=(\lambda_{1},\lambda_{2})\) where \(\lambda_{1}=\lambda_{2}=M\). Then we apply Theorem 6.3 with this \(\vec{\lambda}\) to \(\widetilde{\mathfrak{M}}=(\widetilde{\mathcal{M}},\tilde{\rho})\) and \(\widetilde{F}:\widetilde{\mathcal{M}}\to\mathcal{HP}(\mathbf{R}^{2})\).
This theorem tells us that the \(\vec{\lambda}\)-Projection Algorithm produces one of the following two outcomes:

**Outcome 1 "No go"**. In this case we guarantee that there does not exist a Lipschitz selection of \(\widetilde{F}\) with Lipschitz seminorm at most \(\min\{\lambda_{1},\lambda_{2}\}=M\);

**Outcome 2 "Success"**. In this case, the \(\vec{\lambda}\)-PA returns a mapping \(\tilde{f}:\widetilde{\mathcal{M}}\to\mathbf{R}^{2}\) which is a Lipschitz selection of \(\widetilde{F}\) with
\[\|\tilde{f}\|_{\mathrm{Lip}(\widetilde{\mathcal{M}},\tilde{\rho})}\leq\lambda_{1}+2\lambda_{2}=3M.\]

Let us see that Outcome 1 and Outcome 2 provide the statements given in the formulation of Theorem 6.22. Indeed, suppose that in the case of Outcome 1 there exists a Lipschitz selection \(f\) of \(F\) with \(\|f\|_{\mathrm{Lip}(\mathcal{M})}\leq M\). Then the mapping \(\tilde{f}:\widetilde{\mathcal{M}}\to\mathbf{R}^{2}\) defined by formula (6.45) is a Lipschitz selection of \(\widetilde{F}\) on \(\widetilde{\mathcal{M}}\) with
\[\|\tilde{f}\|_{\mathrm{Lip}(\widetilde{\mathcal{M}},\tilde{\rho})}=\|f\|_{\mathrm{Lip}(\mathcal{M},\rho)}\leq M,\]
a contradiction.

We turn to Outcome 2. We know that the mapping \(\tilde{f}:\widetilde{\mathcal{M}}\to\mathbf{R}^{2}\) is Lipschitz with respect to the pseudometric \(\tilde{\rho}\). Therefore, thanks to part (ii) of Claim 6.21, there exists a unique mapping \(f:\mathcal{M}\to\mathbf{R}^{2}\) satisfying (6.45) such that \(f\) is a Lipschitz selection of \(F\). Furthermore, in this case equality (6.46) holds, proving that
\[\|f\|_{\mathrm{Lip}(\mathcal{M})}=\|\tilde{f}\|_{\mathrm{Lip}(\widetilde{\mathcal{M}},\tilde{\rho})}\leq 3M.\]
Thus, both the statement of Outcome 1 and the statement of Outcome 2 hold.

Finally, let us estimate the running time of the \(\vec{\lambda}\)-Projection Algorithm with \(\vec{\lambda}=(M,M)\) which produces the above outcomes. As we have noted at the beginning of Remark 6.16, the running time of the \(\vec{\lambda}\)-Projection Algorithm is \(O(N^{3})\) provided \(F:\mathcal{M}\to\mathcal{HP}(\mathbf{R}^{2})\) and \(\#\mathcal{M}=N\). (As we have noted in this remark, this result will be proved in [34].) We also know that the pseudometric space \(\widetilde{\mathfrak{M}}=(\widetilde{\mathcal{M}},\tilde{\rho})\) contains at most \(L\cdot(\#\mathcal{M})\) elements. Therefore, the \((M,M)\)-Projection Algorithm produces Outcome 1 or Outcome 2 using at most \(O(L^{3}N^{3})\) computer operations. The proof of the theorem is complete.

Comparing the results of Theorem 6.20 and Theorem 6.22, we ask the following question: Let \(\mathcal{M}=E\) where \(E\) is a finite subset of \(\mathbf{R}^{n}\) with \(\#E=N\), and let \(\rho\) be the Euclidean metric in \(\mathbf{R}^{n}\). _Does there exist an algorithm which produces Outcome 1 and Outcome 2 in Theorem 6.22 and uses \(O(N\log N)\) computer operations as in Theorem 6.20?_ We will address this problem in [34].

The last result of this section is the following theorem.

**Theorem 6.23**: _Let \(L\) be a positive integer. Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a finite metric space with \(\#\mathcal{M}=N\)._

_There exists an algorithm with \(O(L^{3}N^{3})\) running time, which, given \(F:\mathcal{M}\to\mathcal{P}_{L}(\mathbf{R}^{2})\), produces its Lipschitz selection with Lipschitz seminorm at most \(5|F|_{\mathfrak{M}}\)._

_Proof._ Let \(\widetilde{F}:\widetilde{\mathcal{M}}\to\mathcal{HP}(\mathbf{R}^{2})\) be the set-valued mapping defined by (6.44).
Let \(f:\mathcal{M}\to\mathbf{R}^{2}\) be a selection of \(F\). Because \(\mathfrak{M}=(\mathcal{M},\rho)\) is a finite _metric_ space, \(f\) is Lipschitz. Part (i) of Claim 6.21 tells us that the mapping \(\tilde{f}:\widetilde{\mathcal{M}}\to\mathbf{R}^{2}\) defined by formula (6.45) is a Lipschitz (with respect to \(\tilde{\rho}\)) selection of \(\widetilde{F}\). Hence, \(|\widetilde{F}|_{\widetilde{\mathfrak{M}}}<\infty\) so that, thanks to (6.19), \(\Lambda_{\mathcal{R}}(\widetilde{F})<\infty\).

Part (iv) of Theorem 6.7 tells us that in this case the \(\vec{\lambda}\)-Projection Algorithm with
\[\vec{\lambda}=(\,3\Lambda_{\mathcal{R}}(\widetilde{F}),\Lambda_{\mathcal{R}}(\widetilde{F})\,)\]
produces the outcome **"Success"** and returns the Lipschitz selection \(\tilde{f}=f_{\vec{\lambda},\widetilde{F}}\) of \(\widetilde{F}\) with Lipschitz seminorm \(\|\tilde{f}\|_{\mathrm{Lip}(\widetilde{\mathcal{M}},\tilde{\rho})}\leq 5|\widetilde{F}|_{\widetilde{\mathfrak{M}}}\). In Remark 6.17 we have shown the main ideas of an algorithm which produces the mapping \(\tilde{f}\) in \(O((\#\widetilde{\mathcal{M}})^{3})=O(L^{3}N^{3})\) running time. (A detailed description and justification of this algorithm, based on the results of the works [23] and [8], will be given in [34].)

We note that, thanks to part (ii) of Claim 6.21, there exists a (unique) mapping \(f:\mathcal{M}\to\mathbf{R}^{2}\) satisfying (6.45). This mapping is a Lipschitz selection of \(F\) with \(\|f\|_{\mathrm{Lip}(\mathcal{M})}=\|\tilde{f}\|_{\mathrm{Lip}(\widetilde{\mathcal{M}},\tilde{\rho})}\). Hence, \(\|f\|_{\mathrm{Lip}(\mathcal{M})}\leq 5|\widetilde{F}|_{\widetilde{\mathfrak{M}}}\). But, thanks to part (i) of Claim 6.21, \(|\widetilde{F}|_{\widetilde{\mathfrak{M}}}\leq|F|_{\mathfrak{M}}\), proving the required inequality \(\|f\|_{\mathrm{Lip}(\mathcal{M})}\leq 5|F|_{\mathfrak{M}}\). It is also clear that, thanks to formula (6.45), we can construct the mapping \(f\) using the same number of computer operations as when constructing the mapping \(\tilde{f}\), i.e., at most \(O(L^{3}N^{3})\). The proof of the theorem is complete.

## 7 Lipschitz selections and iterations of balanced refinements.

### 7.1 The Stabilization Principle for balanced refinements of set-valued mappings.

In Section 6, we studied a number of efficient algorithms which provide a solution to Problem 1.2. These algorithms are based on the Projection Algorithm introduced in Section 6.1. In the next section, we present another approach to Problem 1.2 based on the so-called _Iterative Algorithm for Lipschitz selections_. This algorithm relies on the results of a recent paper of the author [33]. More specifically, the Iterative Algorithm is a new constructive and nearly optimal algorithm for Lipschitz selections based on an interesting property of successive balanced refinements of set-valued mappings. We refer to this property as _the Stabilization Principle for balanced refinements_.

Let us formulate the Stabilization Principle for the special case of the space \(X=\ell_{\infty}^{2}\). Given constants \(\lambda_{1},\lambda_{2},\lambda_{3}\geq 0\), and a set-valued mapping \(F:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\), we introduce the following mappings:
\[F^{[1]}[x:\lambda_{1}]=\bigcap_{z\in\mathcal{M}}\{F(z)+\lambda_{1}\,\rho(x,z)\,Q_{0}\},\quad x\in\mathcal{M}, \tag{7.1}\]
\[F^{[2]}[x:\lambda_{1},\lambda_{2}]=\bigcap_{z\in\mathcal{M}}\{F^{[1]}[z:\lambda_{1}]+\lambda_{2}\,\rho(x,z)\,Q_{0}\},\quad x\in\mathcal{M}, \tag{7.2}\]
and
\[F^{[3]}[x:\lambda_{1},\lambda_{2},\lambda_{3}]=\bigcap_{z\in\mathcal{M}}\{F^{[2]}[z:\lambda_{1},\lambda_{2}]+\lambda_{3}\,\rho(x,z)\,Q_{0}\},\quad x\in\mathcal{M}. \tag{7.3}\]
Thus, the mapping \(F^{[1]}[\cdot:\lambda_{1}]\) is the \(\lambda_{1}\)-balanced refinement of \(F\). See (6.1). We refer to the mapping \(F^{[2]}[\cdot:\lambda_{1},\lambda_{2}]:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\cup\{\emptyset\}\) as _the second order \((\lambda_{1},\lambda_{2})\)-balanced refinement of \(F\)_. Accordingly, we refer to the mapping \(F^{[3]}[\cdot:\lambda_{1},\lambda_{2},\lambda_{3}]:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\cup\{\emptyset\}\) as _the third order \((\lambda_{1},\lambda_{2},\lambda_{3})\)-balanced refinement of \(F\)_. Clearly,
\[F^{[3]}[x:\lambda_{1},\lambda_{2},\lambda_{3}]\subset F^{[2]}[x:\lambda_{1},\lambda_{2}]\subset F^{[1]}[x:\lambda_{1}] \tag{7.4}\]
for every \(\lambda_{1},\lambda_{2},\lambda_{3}\geq 0\) and every \(x\in\mathcal{M}\).

**Theorem 7.1**: _(The Stabilization Principle for \(\ell^{2}_{\infty}\)) Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a pseudometric space, and let \(\lambda\geq 0\). Let \(F:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\) be a set-valued mapping such that for every subset \(\mathcal{M}^{\prime}\subset\mathcal{M}\) with \(\#\mathcal{M}^{\prime}\leq 4\), the restriction \(F|_{\mathcal{M}^{\prime}}\) of \(F\) to \(\mathcal{M}^{\prime}\) has a Lipschitz selection with Lipschitz seminorm at most \(\lambda\). Suppose that either \(\mathcal{M}\) is finite or \(F:\mathcal{M}\to\mathcal{K}(\mathbf{R}^{2})\)._

_Then_
\[F^{[2]}[x:\lambda,3\lambda]\neq\emptyset\quad\text{for every}\quad x\in\mathcal{M}. \tag{7.5}\]
_Furthermore,_
\[F^{[3]}[x:\lambda,3\lambda,15\lambda]=F^{[2]}[x:\lambda,3\lambda]\quad\text{for every}\quad x\in\mathcal{M}. \tag{7.6}\]

In particular, Theorem 7.1 implies the following property: the sequence of successive refinements defined by
\[F^{[k+1]}[x:\lambda_{1},...,\lambda_{k+1}]=\bigcap_{z\in\mathcal{M}}\{F^{[k]}[z:\lambda_{1},...,\lambda_{k}]+\lambda_{k+1}\rho(x,z)B_{X}\},\quad x\in\mathcal{M}, \tag{7.7}\]
stabilizes at the third step of this procedure provided \(F\) satisfies the hypothesis of this theorem and \(\lambda_{1}=\lambda\), \(\lambda_{2}=3\lambda\) and
\[\lambda_{k}=\lambda_{3}=15\lambda\quad\text{for all}\quad k\geq 3. \tag{7.8}\]
In other words, \(F^{[k]}=F^{[2]}\) on \(\mathcal{M}\) for every \(k=3,4,...\). Indeed, if \(F^{[k]}=F^{[2]}\) for some \(k\geq 3\), then, thanks to (7.7), (7.8) and (7.6), for every \(x\in\mathcal{M}\), we have
\[F^{[k+1]}[x:\lambda_{1},...,\lambda_{k+1}]=\bigcap_{z\in\mathcal{M}}\{F^{[2]}[z:\lambda_{1},\lambda_{2}]+\lambda_{3}\rho(x,z)B_{X}\}=F^{[3]}[x:\lambda_{1},\lambda_{2},\lambda_{3}]=F^{[2]}[x:\lambda_{1},\lambda_{2}].\]
Also, let us note that, thanks to (7.1), (7.2), (7.3) and (7.4), equality (7.6) is equivalent to the following inequality:
\[\mathrm{d}_{\mathrm{H}}(F^{[2]}[x:\lambda,3\lambda],F^{[2]}[y:\lambda,3\lambda])\leq 15\lambda\,\rho(x,y)\quad\text{for every}\quad x,y\in\mathcal{M}. \tag{7.9}\]
(Recall that the sign \(\mathrm{d}_{\mathrm{H}}\) denotes the Hausdorff distance between sets. See (2.1).)

**Remark 7.2**: Theorem 7.1 is a slight generalization of the main result of the work [33], Theorem 1.9, for the case of the space \(X=\ell^{2}_{\infty}\). More specifically, in [33] the Stabilization Principle is proved only for set-valued mappings from \(\mathcal{M}\) into \(\mathcal{K}(\mathbf{R}^{2})\).
However, obvious changes to this proof show that the principle also holds for an arbitrary _finite_ pseudometric space \(\mathfrak{M}=(\mathcal{M},\rho)\) and an arbitrary set-valued mapping \(F:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\). These changes are mainly related to the classical Helly intersection theorem in \(\mathbf{R}^{2}\). It is known that this theorem is true both for arbitrary _families of convex compacts_ (this version was used in [33]) and for arbitrary _finite collections of convex sets_ in \(\mathbf{R}^{2}\). This enables us to add to the hypotheses of Lemma 3.4 in [33] the case of a _finite_ collection of closed convex sets. Using this version of the lemma in the proof given in [33], we obtain the required generalization of the Stabilization Principle to the case of a finite set \(\mathcal{M}\) and a mapping \(F\) from \(\mathcal{M}\) into \(\mathrm{Conv}(\mathbf{R}^{2})\).

### 7.2 The Iterative Algorithm for set-valued mappings.

Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a _finite_ pseudometric space. In this section we describe the Iterative Algorithm for Lipschitz selections. This geometrical algorithm relies on Theorem 7.1. An important parameter of the Iterative Algorithm is the constant \(\Lambda^{(\mathcal{FP})}(F)\) introduced in Section 6.3; see Definition 6.12. Let \(\mathfrak{T}\) be either the family \(\mathcal{K}(\mathbf{R}^{2})\) or the family \(\mathcal{HP}(\mathbf{R}^{2})\). (See (1.3) and (1.4).)

**Algorithm 7.3**: _(The Iterative Algorithm for nearly optimal Lipschitz selections.)_ Given a finite pseudometric space \(\mathfrak{M}=(\mathcal{M},\rho)\) and a set-valued mapping \(F:\mathcal{M}\to\mathfrak{T}\), the Iterative Algorithm produces a nearly optimal Lipschitz selection of \(F\) (the outcome **"Success"**) or stops (the outcome **"No go"**). This procedure includes the following three steps.

**STEP 1.** At this step we compute the constant \(\Lambda^{(\mathcal{FP})}(F)\). If it turns out that \[\Lambda^{(\mathcal{FP})}(F)=+\infty,\] the algorithm produces the outcome **"No go"** and stops. In this case, we guarantee that \(F\) does not have a Lipschitz selection.

**STEP 2.** At this and the next steps, we assume that \(\Lambda^{(\mathcal{FP})}(F)<\infty\). We fix a constant \[\lambda\in[\Lambda^{(\mathcal{FP})}(F),+\infty)\] and, for all \(x\in\mathcal{M}\), construct the \(\lambda\)-balanced refinement of \(F\), \[F^{[1]}[x:\lambda]=\bigcap_{z\in\mathcal{M}}\{F(z)+\lambda\,\rho(x,z)Q_{0}\}. \tag{7.10}\] Then, for every \(x\in\mathcal{M}\), we construct the second order \((\lambda,3\lambda)\)-balanced refinement of \(F\), i.e., the set \[F^{[2]}[x:\lambda,3\lambda]=\bigcap_{z\in\mathcal{M}}\{F^{[1]}[z:\lambda]+3\lambda\rho(x,z)Q_{0}\}. \tag{7.11}\] We define a set \(\mathscr{F}[x:\lambda]\) by letting \[\mathscr{F}[x:\lambda]=F^{[2]}[x:\lambda,3\lambda]\cap Q(0,2r_{x}) \tag{7.12}\] where \[r_{x}=\operatorname{dist}\left(0,F^{[2]}[x:\lambda,3\lambda]\right). \tag{7.13}\]

**STEP 3.** Finally, we define a mapping \(f^{[\lambda;F]}:\mathcal{M}\to\mathbf{R}^{2}\) by the formula \[f^{[\lambda;F]}(x)=\operatorname{center}(\,\Pi_{\lambda,F}(x)),\ \ \ \ \ x\in\mathcal{M}, \tag{7.14}\] where \[\Pi_{\lambda,F}(x)=\mathcal{H}[\mathscr{F}[x:\lambda]]. \tag{7.15}\] Thus, \(\Pi_{\lambda,F}(x)\) is the rectangular hull of the set \(\mathscr{F}[x:\lambda]\). See (2.11). We also recall that \(\operatorname{center}(S)\) denotes the center of a centrally symmetric set \(S\subset\mathbf{R}^{2}\). See Fig. 19.
At this stage, Algorithm 7.3 produces the outcome **"Success"** and stops. We refer to Algorithm 7.3 as the \((\lambda;F)\)-Iterative Algorithm.

**Remark 7.4**: Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a _finite metric space_ and let \(F:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\) be a set-valued mapping. Clearly, in this case _any selection_ \(f\) of \(F\) is Lipschitz. In particular, in this case the constant \(\Lambda^{(\mathcal{FP})}(F)<+\infty\). Thus, if \(\rho\) is a metric on a finite set \(\mathcal{M}\) and \(\lambda\in[\Lambda^{(\mathcal{FP})}(F),+\infty)\), the \((\lambda;F)\)-Iterative Algorithm 7.3 always produces the outcome **"Success"**. Cf. Remark 6.9, part (i).

**Theorem 7.5**: _Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a finite pseudometric space. Let \(\mathfrak{T}\) be either the family \(\mathcal{K}(\mathbf{R}^{2})\) or the family \(\mathcal{HP}(\mathbf{R}^{2})\), and let \(F:\mathcal{M}\to\mathfrak{T}\) be a set-valued mapping._

_If Algorithm 7.3 produces the outcome **"No go"** (i.e., if \(\Lambda^{(\mathcal{FP})}(F)=+\infty\), see_ **STEP 1**_), the set-valued mapping \(F\) does not have a Lipschitz selection._

_Otherwise, given a constant \(\lambda\in[\Lambda^{(\mathcal{FP})}(F),+\infty)\), the \((\lambda;F)\)-Iterative Algorithm produces the outcome **"Success"** and returns the mapping \(f^{[\lambda;F]}:\mathcal{M}\to\mathbf{R}^{2}\) defined by formula (7.14). This mapping has the following properties:_

_(\(\bigstar\mathcal{A}\)) The mapping \(f^{[\lambda;F]}\) is well defined. This means the following: (i) For every \(x\in\mathcal{M}\), the sets \(F^{[1]}[x:\lambda]\) and \(F^{[2]}[x:\lambda,3\lambda]\) are non-empty; (ii) The rectangle \(\Pi_{\lambda,F}(x)=\mathcal{H}[\mathscr{F}[x:\lambda]]\) (see (7.12) and (7.13)) is a non-empty bounded subset of \(\mathbf{R}^{2}\)._

_(\(\bigstar\mathcal{B}\)) \(f^{[\lambda;F]}\) is a Lipschitz selection of \(F\) with Lipschitz seminorm \(\|f^{[\lambda;F]}\|_{\mathrm{Lip}(\mathcal{M})}\leq\gamma\lambda\), where \(\gamma=420\)._

_Proof._ Suppose that \(\Lambda^{(\mathcal{FP})}(F)=+\infty\) and at the same time there exists a Lipschitz selection \(f\) of \(F\). Then, for every subset \(\mathcal{M}^{\prime}\subset\mathcal{M}\) with \(\#\mathcal{M}^{\prime}\leq 4\), the mapping \(f|_{\mathcal{M}^{\prime}}\) is a Lipschitz selection of the restriction \(F|_{\mathcal{M}^{\prime}}\) of \(F\) to \(\mathcal{M}^{\prime}\) with \(\|f|_{\mathcal{M}^{\prime}}\|_{\mathrm{Lip}(\mathcal{M}^{\prime})}\leq\lambda\) where \(\lambda=\|f\|_{\mathrm{Lip}(\mathcal{M})}\). Therefore, thanks to Definition 6.12, \(\lambda\in\mathcal{L}^{(\mathcal{FP})}(F)\) so that, thanks to (6.35), \(\Lambda^{(\mathcal{FP})}(F)<\infty\), a contradiction. This proves that if Algorithm 7.3 produces the outcome **"No go"**, the set-valued mapping \(F\) does not have a Lipschitz selection.

Fig. 19: The \((\lambda;F)\)-Iterative Algorithm.

Now, let us assume that \(\Lambda^{(\mathcal{FP})}(F)<\infty\) and \(\lambda\in[\Lambda^{(\mathcal{FP})}(F),+\infty)\). Let us prove that in this case the mapping \(f^{[\lambda;F]}:\mathcal{M}\to\mathbf{R}^{2}\) defined by (7.14) has properties \((\bigstar\mathcal{A})\) and \((\bigstar\mathcal{B})\).

We first prove part (i) of statement \((\bigstar\mathcal{A})\). Thanks to Lemma 6.13, the constant \(\Lambda^{(\mathcal{FP})}(F)\in\mathcal{L}^{(\mathcal{FP})}(F)\); see Definition 6.12.
Therefore, \(\lambda\in\mathcal{L}^{(\mathcal{FP})}(F)\) as well, so that, thanks to Definition 6.12, for every subset \(\mathcal{M}^{\prime}\subset\mathcal{M}\) with \(\#\mathcal{M}^{\prime}\leq 4\), the restriction \(F|_{\mathcal{M}^{\prime}}\) of \(F\) to \(\mathcal{M}^{\prime}\) has a Lipschitz selection \(f_{\mathcal{M}^{\prime}}\) with \(\|f_{\mathcal{M}^{\prime}}\|_{\mathrm{Lip}(\mathcal{M}^{\prime})}\leq\lambda\). We also recall that \(\mathcal{M}\) is finite. Therefore, the pseudometric space \(\mathfrak{M}=(\mathcal{M},\rho)\) and the constant \(\lambda\) satisfy the hypothesis of Theorem 7.1. This theorem tells us that property (7.5) holds, i.e., \[F^{[2]}[x:\lambda,3\lambda]\neq\emptyset\quad\text{for every}\quad x\in\mathcal{M}. \tag{7.16}\] But, thanks to (7.4), \(F^{[2]}[x:\lambda,3\lambda]\subset F^{[1]}[x:\lambda]\) so that \(F^{[1]}[x:\lambda]\neq\emptyset\) as well. Thus, part (i) of \((\bigstar\mathcal{A})\) holds.

Let us note that part (ii) of \((\bigstar\mathcal{A})\) is immediate from part (i). Indeed, thanks to (7.16), the set \(\mathscr{F}[x:\lambda]\) defined by (7.12) and (7.13) is non-empty. Furthermore, \(\mathscr{F}[x:\lambda]\) is _bounded_. Therefore, its rectangular hull, the rectangle \(\Pi_{\lambda,F}(x)\), is also non-empty and bounded. In particular, the center of \(\Pi_{\lambda,F}(x)\) is well defined, so that the mapping \(f^{[\lambda;F]}\) is well defined on \(\mathcal{M}\). See (7.14).

Let us prove property \((\bigstar\mathcal{B})\). Let \[\mathcal{G}(x)=F^{[2]}[x:\lambda,3\lambda],\quad x\in\mathcal{M}. \tag{7.17}\] Then, thanks to (7.9), \[\mathrm{d}_{\mathrm{H}}(\mathcal{G}(x),\mathcal{G}(y))\leq 15\lambda\rho(x,y)\quad\text{for every}\quad x,y\in\mathcal{M}. \tag{7.18}\] In these settings, definitions (7.12) and (7.13) read as follows: \[\mathscr{F}[x:\lambda]=\mathcal{G}(x)\cap Q(0,2r_{x})\quad\text{where}\ \ r_{x}=\operatorname{dist}\left(0,\mathcal{G}(x)\right). \tag{7.19}\]

Let us estimate the Hausdorff distance between the sets \(\mathscr{F}[x:\lambda]\) and \(\mathscr{F}[y:\lambda]\). We will do this with the help of the following result: _Let \(C_{1},C_{2}\subset\mathbf{R}^{2}\) be convex sets. Given \(i=1,2\), let \(a_{i}\in\mathbf{R}^{2}\), \(r_{i}\geq 0\), and let \(Q(a_{i},r_{i})\) be a square in \(\mathbf{R}^{2}\), see (2.5). Suppose that \(C_{i}\cap Q(a_{i},r_{i})\neq\emptyset\), \(i=1,2\). Then_ \[\mathrm{d}_{\mathrm{H}}(C_{1}\cap Q(a_{1},2r_{1}),C_{2}\cap Q(a_{2},2r_{2}))\leq 14\left(\mathrm{d}_{\mathrm{H}}(C_{1},C_{2})+\|a_{1}-a_{2}\|+|r_{1}-r_{2}|\right). \tag{7.20}\] See [26, Theorem 4], and also [30, Lemma 3.9].

We note that, thanks to (7.19), \[\mathcal{G}(x)\cap Q(0,r_{x})\neq\emptyset\quad\text{and}\quad\mathcal{G}(y)\cap Q(0,r_{y})\neq\emptyset.\] Therefore, thanks to (7.20), \[\mathrm{d}_{\mathrm{H}}(\mathscr{F}[x:\lambda],\mathscr{F}[y:\lambda])=\mathrm{d}_{\mathrm{H}}(\mathcal{G}(x)\cap Q(0,2r_{x}),\mathcal{G}(y)\cap Q(0,2r_{y}))\leq 14\left(\mathrm{d}_{\mathrm{H}}(\mathcal{G}(x),\mathcal{G}(y))+|r_{x}-r_{y}|\right).\] Furthermore, from (7.18), we have \[|r_{x}-r_{y}|=|\operatorname{dist}(0,\mathcal{G}(x))-\operatorname{dist}(0,\mathcal{G}(y))|\leq\mathrm{d}_{\mathrm{H}}(\mathcal{G}(x),\mathcal{G}(y))\leq 15\lambda\rho(x,y).\] This inequality and (7.18) imply the following: \[\mathrm{d}_{\mathrm{H}}(\mathscr{F}[x:\lambda],\mathscr{F}[y:\lambda])\leq 14\left(15\lambda\rho(x,y)+15\lambda\rho(x,y)\right)=\gamma\lambda\rho(x,y) \tag{7.21}\] with \(\gamma=420\).
We are in a position to prove that \(f^{[\lambda;F]}\) is a Lipschitz selection of \(F\) with Lipschitz seminorm at most \(\gamma\lambda\). This property easily follows from (7.21) and the following simple claim proven in [16, Remark 7.1]: _For every compact convex set \(S\subset\mathbf{R}^{2}\), the center of its rectangular hull belongs to \(S\), i.e., \(\mathrm{center}(\mathcal{H}[S])\in S\). Furthermore, for all compact convex sets \(S_{1},S_{2}\subset\mathbf{R}^{2}\), we have_ \[\|\operatorname{center}(\mathcal{H}[S_{1}])-\operatorname{center}(\mathcal{H}[S_{2}])\|\leq\mathrm{d}_{\mathrm{H}}(S_{1},S_{2}). \tag{7.22}\] See also inequality (2.34).

Now, thanks to this claim and definition (7.14), for every \(x\in\mathcal{M}\), we have \[f^{[\lambda;F]}(x)=\operatorname{center}(\Pi_{\lambda,F}(x))=\operatorname{center}(\mathcal{H}[\mathscr{F}[x:\lambda]])\in\mathscr{F}[x:\lambda]\subset F^{[2]}[x:\lambda,3\lambda]\subset F(x)\] proving that \(f^{[\lambda;F]}\) is a _selection_ of \(F\). Finally, combining (7.14) with inequality (7.22) (where we set \(S_{1}=\mathscr{F}[x:\lambda]\) and \(S_{2}=\mathscr{F}[y:\lambda]\)) and inequality (7.21), we conclude that \(\|f^{[\lambda;F]}\|_{\mathrm{Lip}(\mathcal{M})}\leq\gamma\lambda\) with \(\gamma=420\). The proof of the theorem is complete.

**Remark 7.6**: Formula (7.14) can be substantially simplified provided \[\text{for some }\bar{x}\in\mathcal{M}\text{ the set }\ \ \mathcal{G}(\bar{x})=F^{[2]}[\bar{x}:\lambda,3\lambda]\ \ \text{ is bounded.} \tag{7.23}\] See (7.17). In this case, thanks to (7.18), _every_ set \(\mathcal{G}(x)=F^{[2]}[x:\lambda,3\lambda]\), \(x\in\mathcal{M}\), is bounded. This enables us to modify **STEP 3** of Algorithm 7.3 as follows:

**STEP 3\({}^{\prime}\).** We define the mapping \(f^{[\lambda;F]}:\mathcal{M}\to\mathbf{R}^{2}\) by the formula \[f^{[\lambda;F]}(x)=\operatorname{center}(\mathcal{V}_{\lambda,F}(x)),\ \ \ \ \ x\in\mathcal{M},\] where \(\mathcal{V}_{\lambda,F}(x)=\mathcal{H}[\mathcal{G}(x)]=\mathcal{H}[F^{[2]}[x:\lambda,3\lambda]]\). See Fig. 20.

Fig. 20: The \((\lambda;F)\)-Iterative Algorithm for bounded sets \(F^{[2]}[x:\lambda,3\lambda]\).

Because \(\mathcal{G}(x)\) is bounded for every \(x\in\mathcal{M}\), the set \(\mathcal{V}_{\lambda,F}(x)=\mathcal{H}[\mathcal{G}(x)]\) is a _bounded_ rectangle, so that the mapping \(f^{[\lambda;F]}\) is well defined on \(\mathcal{M}\). Following the scheme of the proof of Theorem 7.5, we show that \(f^{[\lambda;F]}\) is a selection of \(F\). Furthermore, thanks to (7.18) and (7.22), we have \[\|f^{[\lambda;F]}(x)-f^{[\lambda;F]}(y)\|=\|\operatorname{center}(\mathcal{H}[\mathcal{G}(x)])-\operatorname{center}(\mathcal{H}[\mathcal{G}(y)])\|\leq\operatorname{d}_{\mathrm{H}}(\mathcal{G}(x),\mathcal{G}(y))\leq 15\lambda\rho(x,y)\] proving that \[\|f^{[\lambda;F]}\|_{\operatorname{Lip}(\mathcal{M})}\leq 15\lambda. \tag{7.24}\] We note that condition (7.23) holds for every set-valued mapping \(F\) from \(\mathcal{M}\) into the family \(\mathcal{K}(\mathbf{R}^{2})\). Therefore, in this important case, we can apply this simplified version of Algorithm 7.3, which produces a Lipschitz selection of \(F\) with Lipschitz seminorm satisfying inequality (7.24). \(\blacktriangleleft\)

**Remark 7.7**: We recall Definitions 6.5 and 6.12 of the constants \(\Lambda_{\mathcal{R}}(F)\) and \(\Lambda^{(\mathcal{FP})}(F)\) respectively. As we have noted in Remark 6.16, in general, given a set-valued mapping \(F:\mathcal{M}\to\operatorname{Conv}(\mathbf{R}^{2})\), the precise calculation of the constants \(\Lambda_{\mathcal{R}}(F)\) and \(\Lambda^{(\mathcal{FP})}(F)\) is a rather difficult technical problem.
Nevertheless, comparing these constants, we note that the constant \(\Lambda_{\mathcal{R}}(F)\) is defined in a more constructive way than \(\Lambda^{(\mathcal{FP})}(F)\). Moreover, Remark 6.17 tells us that there exists an algorithm which, given \(F:\mathcal{M}\to\mathcal{HP}(\mathbf{R}^{2})\), calculates \(\Lambda_{\mathcal{R}}(F)\) using at most \(O(N^{3})\) computer operations. (Here \(N=\#\mathcal{M}\).) Thus, in applications, it would be preferable to express Algorithm 7.3 in terms of the constant \(\Lambda_{\mathcal{R}}(F)\) rather than \(\Lambda^{(\mathcal{FP})}(F)\). An appropriate choice of the constant \(\lambda\) at **STEP 2** of this algorithm enables us to do this. Indeed, suppose we have calculated \(\Lambda_{\mathcal{R}}(F)\) to within a factor \(\eta\geq 1\). From inequalities (6.37) and (6.38), we have \(\Lambda^{(\mathcal{FP})}(F)\leq 5\Lambda_{\mathcal{R}}(F)\). From this inequality it follows that the parameter \[\lambda=5\eta\Lambda_{\mathcal{R}}(F) \tag{7.25}\] belongs to the interval \([\Lambda^{(\mathcal{FP})}(F),+\infty)\). Therefore, thanks to Theorem 7.5, if \(\Lambda_{\mathcal{R}}(F)<\infty\), the \((\lambda;F)\)-Iterative Algorithm 7.3 produces the outcome **"Success"** and returns a Lipschitz selection of \(F\) with Lipschitz seminorm at most \(\gamma\lambda=5\gamma\eta\Lambda_{\mathcal{R}}(F)\). (Recall that \(\gamma=420\).) \(\blacktriangleleft\)

Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a finite pseudometric space, \(N=\#\mathcal{M}\), and let \(F:\mathcal{M}\to\mathcal{HP}(\mathbf{R}^{2})\) be a set-valued mapping. Let us see how, with the help of the \((\lambda;F)\)-Iterative Algorithm 7.3 with \(\lambda\) defined by (7.25), one can produce a nearly optimal Lipschitz selection of \(F\) using at most \(O(N^{3})\) computer operations.

First of all, Remark 6.17 tells us that there exists an algorithm for calculating the constant \(\Lambda_{\mathcal{R}}(F)\) with \(O(N^{3})\) running time. Therefore, we can calculate the constant \(\lambda\) from (7.25) using at most \(O(N^{3})\) computer operations. Let us estimate the running time of the \((\lambda;F)\)-Iterative Algorithm 7.3.

**Proposition 7.8**: _Let \(F:\mathcal{M}\to\mathcal{HP}(\mathbf{R}^{2})\) be a set-valued mapping defined on a finite pseudometric space \(\mathfrak{M}=(\mathcal{M},\rho)\). Suppose that the constant \(\Lambda^{(\mathcal{FP})}(F)<\infty\)._

_If \(\lambda\in[\Lambda^{(\mathcal{FP})}(F),+\infty)\), then the \((\lambda;F)\)-Iterative Algorithm 7.3 produces the outcome **"Success"** and returns the mapping \(f^{[\lambda;F]}:\mathcal{M}\to\mathbf{R}^{2}\) (see (7.14)) with \(\|f^{[\lambda;F]}\|_{\operatorname{Lip}(\mathcal{M})}\leq\gamma\lambda\) using at most \(O(N^{2})\) computer operations. (Here \(\gamma\) is an absolute constant.)_

The proof of this proposition relies on a formula for the set \(F^{[2]}[x:\lambda_{1},\lambda_{2}]\), see (7.2), which we present in Lemma 7.9 below. Before stating this lemma, let us recall the definitions of some objects from the Projection Algorithm 6.1. These are the set \(F^{[1]}[x:\lambda_{1}]\), i.e., the \(\lambda_{1}\)-balanced refinement of \(F\) defined by (6.2); its rectangular hull, the set \(\mathcal{T}_{F,\lambda_{1}}(x)=\mathcal{H}[F^{[1]}[x:\lambda_{1}]]\), see (6.3); and, finally, the \(\lambda_{2}\)-balanced refinement of \(\mathcal{T}_{F,\lambda_{1}}\), i.e., the rectangle \(\mathcal{T}_{F,\lambda_{1}}^{[1]}[x:\lambda_{2}]\) defined by formula (6.4).

**Lemma 7.9**: _Let \(\mathfrak{M}=(\mathcal{M},\rho)\) be a finite pseudometric space and let \(F:\mathcal{M}\to\mathrm{Conv}(\mathbf{R}^{2})\)._
_Given constants \(\lambda_{1},\lambda_{2}\geq 0\), \(\lambda_{1}\leq\lambda_{2}\), let us assume that, for each \(x\in\mathcal{M}\), the sets \(F^{[1]}[x:\lambda_{1}]\) and \(F^{[2]}[x:\lambda_{1},\lambda_{2}]\) are non-empty. Then_ \[F^{[2]}[x:\lambda_{1},\lambda_{2}]=F^{[1]}[x:\lambda_{1}]\cap\mathcal{T}_{F,\lambda_{1}}^{[1]}[x:\lambda_{2}]\quad\text{for every}\quad x\in\mathcal{M}. \tag{7.26}\]

_Proof._ We note that \(F^{[2]}[x:\lambda_{1},\lambda_{2}]\subset F^{[1]}[x:\lambda_{1}]\), see (7.4). Furthermore, \[\mathcal{T}_{F,\lambda_{1}}(z)=\mathcal{H}[F^{[1]}[z:\lambda_{1}]]\supset F^{[1]}[z:\lambda_{1}],\ \ z\in\mathcal{M},\] so that \[\mathcal{T}_{F,\lambda_{1}}^{[1]}[x:\lambda_{2}]=\bigcap_{z\in\mathcal{M}}\{\mathcal{T}_{F,\lambda_{1}}(z)+\lambda_{2}\rho(x,z)Q_{0}\}\supset\bigcap_{z\in\mathcal{M}}\{F^{[1]}[z:\lambda_{1}]+\lambda_{2}\rho(x,z)Q_{0}\}.\] From this and definition (7.11), we have \[\mathcal{T}_{F,\lambda_{1}}^{[1]}[x:\lambda_{2}]\supset F^{[2]}[x:\lambda_{1},\lambda_{2}] \tag{7.27}\] proving that the right hand side of (7.26) contains its left hand side.

Let us prove the converse statement. We will need two auxiliary results. Here is the first of them: _Let \(C_{1},C_{2}\in\mathrm{Conv}(\mathbf{R}^{2})\), and let \(r\geq 0\). Suppose that \(C_{1}\cap C_{2}\neq\emptyset\). Then_ \[C_{1}\cap C_{2}+rQ_{0}=(C_{1}+rQ_{0})\cap(C_{2}+rQ_{0})\cap\mathcal{H}[C_{1}\cap C_{2}+rQ_{0}]. \tag{7.28}\] The second auxiliary result states the following: _Let \(\mathcal{K}\subset\mathrm{Conv}(\mathbf{R}^{2})\) be a finite family of closed convex sets in \(\mathbf{R}^{2}\) with non-empty intersection, and let \(r\geq 0\). Then_ \[\left(\bigcap_{K\in\mathcal{K}}K\right)+rQ_{0}=\bigcap_{K,K^{\prime}\in\mathcal{K}}\left\{(K\cap K^{\prime})+rQ_{0}\right\}. \tag{7.29}\] These results were proved in [33] under the conditions \(C_{1},C_{2}\in\mathcal{K}(\mathbf{R}^{2})\) and \(\mathcal{K}\subset\mathcal{K}(\mathbf{R}^{2})\) respectively. (See [33, Lemma 3.4, Lemma 5.4].) Obvious changes in the proofs show that these properties hold for all \(C_{1},C_{2}\in\mathrm{Conv}(\mathbf{R}^{2})\) and any finite family \(\mathcal{K}\subset\mathrm{Conv}(\mathbf{R}^{2})\).

Property (7.29) tells us that, for every \(z\in\mathcal{M}\), we have \[F^{[1]}[z:\lambda_{1}]+\lambda_{2}\rho(x,z)Q_{0} = \left(\bigcap_{u\in\mathcal{M}}\{F(u)+\lambda_{1}\rho(z,u)Q_{0}\}\right)+\lambda_{2}\rho(x,z)Q_{0}\] \[= \bigcap_{u,u^{\prime}\in\mathcal{M}}\left[\{F(u)+\lambda_{1}\rho(z,u)Q_{0}\}\cap\{F(u^{\prime})+\lambda_{1}\rho(z,u^{\prime})Q_{0}\}+\lambda_{2}\rho(x,z)Q_{0}\right]\] \[= \bigcap_{u,u^{\prime}\in\mathcal{M}}A(x:z,u,u^{\prime})\] where \[A(x:z,u,u^{\prime})=\{F(u)+\lambda_{1}\rho(z,u)Q_{0}\}\cap\{F(u^{\prime})+\lambda_{1}\rho(z,u^{\prime})Q_{0}\}+\lambda_{2}\rho(x,z)Q_{0}.\] Thanks to property (7.28), for every \(z,u,u^{\prime}\in\mathcal{M}\), we have \[A(x:z,u,u^{\prime})=\{F(u)+(\lambda_{1}\rho(z,u)+\lambda_{2}\rho(x,z))Q_{0}\}\cap\{F(u^{\prime})+(\lambda_{1}\rho(z,u^{\prime})+\lambda_{2}\rho(x,z))Q_{0}\}\cap\mathcal{H}[A(x:z,u,u^{\prime})].\] Recall that \(0\leq\lambda_{1}\leq\lambda_{2}\).
From this, the triangle inequality and (7.10), we have \[S_{1}=F(u)+(\lambda_{1}\rho(z,u)+\lambda_{2}\rho(x,z))Q_{0}\supset F(u)+\lambda_{1}(\rho(z,u)+\rho(x,z))Q_{0}\supset F(u)+\lambda_{1}\rho(x,u)Q_{0}\supset F^{[1]}[x:\lambda_{1}].\] In the same way we prove that \[S_{2}=F(u^{\prime})+(\lambda_{1}\rho(z,u^{\prime})+\lambda_{2}\rho(x,z))Q_{0}\supset F^{[1]}[x:\lambda_{1}].\] Let us show that the set \(\mathcal{H}[A(x:z,u,u^{\prime})]\) contains \(\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\). Indeed, thanks to (7.10), \[G=\{F(u)+\lambda_{1}\rho(z,u)Q_{0}\}\cap\{F(u^{\prime})+\lambda_{1}\rho(z,u^{\prime})Q_{0}\}\supset F^{[1]}[z:\lambda_{1}],\] so that \(\mathcal{H}[G]\supset\mathcal{H}[F^{[1]}[z:\lambda_{1}]]=\mathcal{T}_{F,\lambda_{1}}(z)\). From this and (2.14), we have \[S_{3}=\mathcal{H}[A(x:z,u,u^{\prime})]=\mathcal{H}[G+\lambda_{2}\rho(x,z)Q_{0}]=\mathcal{H}[G]+\lambda_{2}\rho(x,z)Q_{0}\supset\mathcal{T}_{F,\lambda_{1}}(z)+\lambda_{2}\rho(x,z)Q_{0}\supset\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\] proving the required inclusion \(S_{3}\supset\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\).

These properties of the sets \(S_{1},S_{2}\) and \(S_{3}\) imply the following: \[A(x:z,u,u^{\prime})=S_{1}\cap S_{2}\cap S_{3}\supset F^{[1]}[x:\lambda_{1}]\cap\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\ \ \text{ for every }\ \ z,u,u^{\prime}\in\mathcal{M}.\] Thus, for every \(z\in\mathcal{M}\), we have \[F^{[1]}[z:\lambda_{1}]+\lambda_{2}\rho(x,z)Q_{0}=\bigcap_{u,u^{\prime}\in\mathcal{M}}A(x:z,u,u^{\prime})\supset F^{[1]}[x:\lambda_{1}]\cap\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}].\] Thanks to this property and definition (7.2), we get \[F^{[2]}[x:\lambda_{1},\lambda_{2}]=\bigcap_{z\in\mathcal{M}}\{F^{[1]}[z:\lambda_{1}]+\lambda_{2}\,\rho(x,z)\,Q_{0}\}\supset F^{[1]}[x:\lambda_{1}]\cap\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\] proving that the left hand side of (7.26) contains its right hand side. The proof of the lemma is complete.

**Remark 7.10**: Let us note the following interesting representation of the mapping \(F^{[2]}[\cdot:\lambda_{1},\lambda_{2}]\): in the settings of Lemma 7.9, for every \(x\in\mathcal{M}\), \[F^{[2]}[x:\lambda_{1},\lambda_{2}]=F^{[1]}[x:\lambda_{1}]\cap\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]=F^{[1]}[x:\lambda_{1}]\cap\mathcal{H}[F^{[2]}[x:\lambda_{1},\lambda_{2}]]. \tag{7.30}\] This is immediate from Lemma 7.9 and inclusion (7.27). Indeed, because \(\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\) is a _rectangle_, from (7.27) we have \(\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\supset\mathcal{H}[F^{[2]}[x:\lambda_{1},\lambda_{2}]]\). This and (7.26) imply the second equality in (7.30). Note that, in general, \[\mathcal{T}^{[1]}_{F,\lambda_{1}}[x:\lambda_{2}]\supsetneq\mathcal{H}[F^{[2]}[x:\lambda_{1},\lambda_{2}]]\ \ \text{on}\ \ \mathcal{M}.\] However, thanks to (7.30), the intersection of each of these rectangles with \(F^{[1]}[x:\lambda_{1}]\) produces the same set, namely \(F^{[2]}[x:\lambda_{1},\lambda_{2}]\).

_Proof of Proposition 7.8._ We know that \(\Lambda^{(\mathcal{FP})}(F)<\infty\) and \(\lambda\in[\Lambda^{(\mathcal{FP})}(F),+\infty)\). Therefore, thanks to Theorem 7.5, the \((\lambda;F)\)-Iterative Algorithm 7.3 produces the outcome **"Success"** and returns the mapping \(f^{[\lambda;F]}:\mathcal{M}\to\mathbf{R}^{2}\) defined by formula (7.14). Its Lipschitz seminorm is bounded by \(\gamma\lambda\), where \(\gamma\) is an absolute constant.
Thanks to part (\(\bigstar\mathcal{A}\)) of Theorem 7.5, in this case the sets \(F^{[1]}[x:\lambda]\) and \(F^{[2]}[x:\lambda,3\lambda]\) are non-empty for every \(x\in\mathcal{M}\). Furthermore, Lemma 7.9 tells us that in this case, for every \(x\in\mathcal{M}\), the equality \[F^{[2]}[x:\lambda,3\lambda]=F^{[1]}[x:\lambda]\cap\mathcal{T}^{[1]}_{F,\lambda}[x:3\lambda] \tag{7.31}\] holds. Recall that, given \(x\in\mathcal{M}\), \[\mathcal{T}^{[1]}_{F,\lambda}[x:3\lambda]=\bigcap_{z\in\mathcal{M}}\{\mathcal{T}_{F,\lambda}(z)+3\lambda\,\rho(x,z)\,Q_{0}\} \tag{7.32}\] and \(\mathcal{T}_{F,\lambda}(x)=\mathcal{H}[F^{[1]}[x:\lambda]]\).

Let us see how the mapping \(f^{[\lambda;F]}\) can be constructed in \(O(N^{2})\) running time.

At _the first step_ of this procedure, for every \(x\in\mathcal{M}\), we construct the rectangle \(\mathcal{T}_{F,\lambda}(x)\). Because \(F:\mathcal{M}\to\mathcal{HP}(\mathbf{R}^{2})\) and \(\#\mathcal{M}=N\), the convex polygon \(F^{[1]}[x:\lambda]\) is determined by \(O(N)\) linear constraints. See (7.10). Part (i) of Corollary 6.19 tells us that there exists an algorithm which, for every \(x\in\mathcal{M}\), constructs the set \(\mathcal{T}_{F,\lambda}(x)\), the rectangular hull of \(F^{[1]}[x:\lambda]\), in \(O(N)\) running time. Thus, at this step we are able to construct all \(N\) rectangles \(\mathcal{T}_{F,\lambda}(x)\) in \(O(N^{2})\) running time.

At _the second step_ of the procedure, following (7.32), we construct each rectangle \(\mathcal{T}^{[1]}_{F,\lambda}[x:3\lambda]\), \(x\in\mathcal{M}\), in \(O(N)\) running time. Thus, constructing all \(N\) rectangles \(\mathcal{T}^{[1]}_{F,\lambda}[x:3\lambda]\) requires at most \(O(N^{2})\) computer operations.

We turn to _the third step_ of the procedure. Thanks to representation (7.31), each set \(F^{[2]}[x:\lambda,3\lambda]\) is a convex polygon determined by at most \(N+4\) linear constraints. Part (ii) of Corollary 6.19 tells us that there exists an algorithm which, for every \(x\in\mathcal{M}\), calculates the distance from the origin to \(F^{[2]}[x:\lambda,3\lambda]\) (i.e., the quantity \(r_{x}\), see (7.13)) using at most \(O(N)\) computer operations. Thus, calculating all \(N\) numbers \(r_{x}\), \(x\in\mathcal{M}\), can be carried out in \(O(N^{2})\) running time.

At _the fourth step_ of the procedure, we treat the sets \(\mathscr{F}[x:\lambda]=F^{[2]}[x:\lambda,3\lambda]\cap Q(0,2r_{x})\), \(x\in\mathcal{M}\), see (7.12). Because each \(F^{[2]}[x:\lambda,3\lambda]\) is determined by at most \(N+4\) linear constraints, the set \(\mathscr{F}[x:\lambda]\) is a convex polygon determined by at most \(N+8\) linear constraints. Therefore, thanks to part (i) of Corollary 6.19, there exists an algorithm which, for every \(x\in\mathcal{M}\), constructs the set \(\Pi_{\lambda,F}(x)\), the rectangular hull of \(\mathscr{F}[x:\lambda]\) (see (7.15)), in \(O(N)\) running time. Thus, at this step we construct all \(N\) rectangles \(\Pi_{\lambda,F}(x)\) using at most \(O(N^{2})\) computer operations.

Finally, at _the fifth step_ of the procedure, for every \(x\in\mathcal{M}\), we construct the point \(f^{[\lambda;F]}(x)\) as _the center of the rectangle \(\Pi_{\lambda,F}(x)\)_. See formula (7.14). Because the number of all such rectangles is \(N\), this step takes \(4N\) computer operations.

Thus, we construct the mapping \(f^{[\lambda;F]}:\mathcal{M}\to\mathbf{R}^{2}\) in five steps, and each of these steps requires at most \(O(N^{2})\) computer operations.
This shows that \(f^{[\lambda;F]}\) can be constructed in \(O(N^{2})\) running time, completing the proof of Proposition 7.8.
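To make the preceding construction concrete, here is a minimal Python sketch of **STEPs 2** and **3** of Algorithm 7.3 under the simplifying assumption (ours, not made in the text) that every set \(F(x)\) is an axis-aligned rectangle: then all balanced refinements remain rectangles, the rectangular hull \(\mathcal{H}[\cdot]\) is trivial, and \(\operatorname{center}(\Pi_{\lambda,F}(x))\) is just the center of a rectangle. All helper names are ours; each refinement costs \(O(N^{2})\) elementary set operations, in line with Proposition 7.8.

```python
import math

# Axis-aligned rectangles (x0, x1, y0, y1) stand in for the sets F(x).
FULL = (-math.inf, math.inf, -math.inf, math.inf)

def inflate(R, r):
    """Minkowski sum R + r*Q0, where Q0 = [-1, 1]^2 is the unit ball of l_inf^2."""
    return (R[0] - r, R[1] + r, R[2] - r, R[3] + r)

def intersect(R, S):
    T = (max(R[0], S[0]), min(R[1], S[1]), max(R[2], S[2]), min(R[3], S[3]))
    return None if (T[0] > T[1] or T[2] > T[3]) else T

def refinement(F, rho, lam):
    """One lambda-balanced refinement of F, cf. (7.10) and (7.11)."""
    G = {}
    for x in F:
        R = FULL
        for z in F:
            R = intersect(R, inflate(F[z], lam * rho(x, z)))
            if R is None:
                return None                    # empty refinement: lambda too small
        G[x] = R
    return G

def dist0(R):
    """l_inf distance from the origin to the rectangle R, as in (7.13)."""
    return max(R[0], -R[1], R[2], -R[3], 0.0)

def iterative_selection(F, rho, lam):
    """STEPs 2 and 3 of Algorithm 7.3 for rectangle-valued F."""
    F1 = refinement(F, rho, lam)               # F^[1][. : lam]
    F2 = refinement(F1, rho, 3 * lam) if F1 else None
    if F2 is None:
        return None
    f = {}
    for x, R in F2.items():
        rx = dist0(R)
        S = intersect(R, (-2 * rx, 2 * rx, -2 * rx, 2 * rx))  # cut by Q(0, 2 r_x)
        f[x] = ((S[0] + S[1]) / 2, (S[2] + S[3]) / 2)         # center of Pi(x)
    return f

# Toy example: three points on the line with rho(x, z) = |x - z|.
F = {0: (0, 1, 0, 1), 1: (2, 3, 0, 1), 2: (4, 5, 4, 5)}
print(iterative_selection(F, rho=lambda x, z: abs(x - z), lam=3.0))
```

For half-plane-valued \(F\), the refinements are general convex polygons, and the hull and distance computations of Corollary 6.19 are needed instead of the trivial rectangle operations above.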
2305.12163
Scattering of swell by currents
The refraction of surface gravity waves by currents leads to spatial modulations in the wave field and, in particular, in the significant wave height. We examine this phenomenon in the case of waves scattered by a localised current feature, assuming (i) the smallness of the ratio between current velocity and wave group speed, and (ii) a swell-like, highly directional wave spectrum. We apply matched asymptotics to the equation governing the conservation of wave action in the four-dimensional position-wavenumber space. The resulting explicit formulas show that the modulations in wave action and significant wave height past the localised current are controlled by the vorticity of the current integrated along the primary direction of the swell. We assess the asymptotic predictions against numerical simulations using WAVEWATCH III for a Gaussian vortex. We also consider vortex dipoles to demonstrate the possibility of 'vortex cloaking' whereby certain currents have (asymptotically) no impact on the significant wave height. We discuss the role of the ratio of the two small parameters characterising assumptions (i) and (ii) above and show that caustics are only significant for unrealistically large values of this ratio, corresponding to unrealistically narrow directional spectra.
Han Wang, Ana B. Villas Bôas, William R. Young, Jacques Vanneste
2023-05-20T10:42:37Z
http://arxiv.org/abs/2305.12163v2
# Scattering of swell by currents ###### Abstract The refraction of surface gravity waves by currents leads to spatial modulations in the wave field and, in particular, in the significant wave height. We examine this phenomenon in the case of waves scattered by a localised current feature, assuming (i) the smallness of the ratio between current velocity and wave group speed, and (ii) a swell-like, highly directional wave spectrum. We apply matched asymptotics to the equation governing the conservation of wave action in the four-dimensional position-wavenumber space. The resulting explicit formulas show that the modulations in wave action and significant wave height past the localised current are controlled by the vorticity of the current integrated along the primary direction of the swell. We assess the asymptotic predictions against numerical simulations using WAVEWATCH III for a Gaussian vortex. We also consider vortex dipoles to demonstrate the possibility of 'vortex cloaking' whereby certain currents have (asymptotically) no impact on the significant wave height. We discuss the role of the ratio of the two small parameters characterising assumptions (i) and (ii) above and show that caustics are only significant for unrealistically large values of this ratio, corresponding to unrealistically narrow directional spectra.

## 1 Introduction

Surface gravity waves (SGWs) play a key role in the exchanges of energy, momentum and gases between the ocean and the atmosphere (Villas Boas & Pizzo, 2021). SGWs are forced by the wind and modulated by ocean currents through transport and refraction. Over the past few decades, several studies have explored the effects of ocean currents on SGWs. Early theoretical work focusses on the formation of freak waves and identifies refraction as a possible mechanism for the generation of large amplitude waves (White & Fornberg, 1998; Heller _et al._, 2008; Dysthe _et al._, 2008). Recent studies examine how meso- and submesoscale ocean variability, such as fronts, filaments and vortices, induces a corresponding variability in wave amplitudes (Ardhuin _et al._, 2017; Romero _et al._, 2017, 2020; Villas Boas _et al._, 2020; Vrecica _et al._, 2022). These studies often characterise the wave amplitudes using the significant wave height \(H_{s}\), defined as four times the standard deviation of the surface displacement. They find that wave-current interactions at horizontal scales ranging from 10 to 200 km drive spatial gradients of \(H_{s}\) at similar scales. This indicates that air-sea fluxes might vary on these relatively small scales.

One common approach to studying wave-current interactions is the use of ray tracing, often in its simplest form in which the kinematics of SGWs is tracked by solving the ray equations and ray density is used as a proxy for wave amplitude (e.g., Kenyon, 1971; Mapp _et al._, 1985; Quilfen & Chapron, 2019). While this simple form of ray tracing is a valuable tool for understanding wave refraction, it does not provide an accurate quantification of changes in wave amplitude, in particular changes in \(H_{s}\). This quantification requires solving the conservation equation for the density of wave action in the four-dimensional position-wavenumber space. This is challenging especially for the wave spectra of realistic sea states, distributed in both wavenumber and direction, instead of the pure plane waves that are often considered (see Heller _et al._, 2008, however).
It is possible to solve the action equation numerically, albeit at great computational cost, either by discretising the phase space or by sampling its full four-dimensionality with a large ensemble of rays. This paper proposes a complementary approach. It develops an asymptotic solution of the wave action equation, leading to explicit formulas for the changes in action and \(H_{s}\) induced by localised currents. Motivated by their ubiquity in the ocean, we focus on swell, that is, SGWs characterised by a spectrum that is narrow banded in both frequency (equivalently, wavenumber) and direction. We exploit the smallness of two parameters reflecting the narrowness of the spectrum and the weakness of the current relative to the wave speed. The formulas we obtain show that the changes in action and \(H_{s}\) depend on the currents through a 'deflection function' \(\Delta\) given by the integral of the vorticity along the primary direction of wave propagation. We apply these formulas to simple flows - vortices and dipoles - and compare their predictions with the results of full integrations of the action conservation equation by a numerical wave model.

We formulate the problem, relate action and \(H_{s}\), and introduce a model spectrum for swell in §2. We detail our scaling assumptions and carry out the (matched) asymptotics treatment of the wave action equation in §3. We compare asymptotic and numerical results for vortices and dipoles in §4. For vortices, we consider four different parameter combinations representative of ocean swell. We consider dipoles with axis along and perpendicular to the direction of the swell to demonstrate the possibility of a vanishing deflection function \(\Delta\), leading to asymptotically negligible changes in \(H_{s}\), a phenomenon we refer to as 'vortex cloaking'. In §5 we explore two limiting regimes of scattering: a linear regime corresponding to weak currents and/or swell with relatively large angular spread, and a caustic regime corresponding to strong currents and/or small angular spread. The caustic regime, in which the changes in \(H_{s}\) are large and concentrated along caustic curves, arises only for parameter values that are outside the range of typical ocean values. We conclude with a summary of our findings and discuss prospects for future work on the spatial variability of \(H_{s}\) in §6.

## 2 Formulation

We study the scattering problem sketched in figure 1. Deep-water SGWs, with small initial directional spreading and a well-defined peak frequency (swell), impinge on a spatially compact coherent flow, such as an axisymmetric vortex or a dipole.

### Action conservation equation

In figure 1 we illustrate the scattering problem by tracing rays through an axisymmetric vortex. We go beyond ray tracing, however, by using asymptotic methods to obtain approximate analytic solutions of the conservation equation \[\partial_{t}\mathcal{A}+\boldsymbol{\nabla}_{\boldsymbol{k}}\omega\boldsymbol{\cdot}\boldsymbol{\nabla}_{\boldsymbol{x}}\mathcal{A}-\boldsymbol{\nabla}_{\boldsymbol{x}}\omega\boldsymbol{\cdot}\boldsymbol{\nabla}_{\boldsymbol{k}}\mathcal{A}=0 \tag{2.1}\] for the wave action density \(\mathcal{A}(\boldsymbol{x},\boldsymbol{k},t)\) in the four-dimensional position-wavenumber space. The conservation equation relies on the WKB assumption of spatial scale separation between waves and currents.
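Before going beyond ray tracing, it is helpful to see how a picture such as figure 1 can be generated. The sketch below (ours, not the WW3 configuration used later) integrates the ray equations \(\dot{\boldsymbol{x}}=\boldsymbol{\nabla}_{\boldsymbol{k}}\omega\), \(\dot{\boldsymbol{k}}=-\boldsymbol{\nabla}_{\boldsymbol{x}}\omega\), i.e. the characteristics of (2.1), with the deep-water frequency spelled out in (2.2) below; the vortex parameters anticipate the \(U_{m}\approx 0.8\) m s\({}^{-1}\) Gaussian vortex of §4.1, and the integrator and step sizes are crude illustrative choices.

```python
import numpy as np

g = 9.81                                   # gravity (m s^-2)
kappa, rv = 5.5e5, 50e3                    # circulation and radius: U_m ~ 0.8 m/s

def current(x, y):
    """Azimuthal velocity of a Gaussian vortex, cf. (4.2)."""
    r2 = x**2 + y**2 + 1e-12
    f = kappa / (2 * np.pi) * (1 - np.exp(-r2 / (2 * rv**2))) / r2
    return np.array([-f * y, f * x])

def rhs(s, h=1.0):
    """Ray equations for omega = sqrt(g k) + k . U(x);
    gradients of U are taken by centred finite differences."""
    x, y, kx, ky = s
    k = np.hypot(kx, ky)
    cg = 0.5 * np.sqrt(g / k)              # deep-water group speed
    U = current(x, y)
    dUdx = (current(x + h, y) - current(x - h, y)) / (2 * h)
    dUdy = (current(x, y + h) - current(x, y - h)) / (2 * h)
    return np.array([cg * kx / k + U[0], cg * ky / k + U[1],
                     -(kx * dUdx[0] + ky * dUdx[1]),
                     -(kx * dUdy[0] + ky * dUdy[1])])

k_star = 2 * np.pi / 166.0                 # 166 m wavelength, group speed ~ 8 m/s
dt, nsteps = 60.0, 1500                    # enough to cross from -300 km to +300 km
for y0 in np.linspace(-1.5 * rv, 1.5 * rv, 7):
    s = np.array([-300e3, y0, k_star, 0.0])
    for _ in range(nsteps):
        s = s + dt * rhs(s)                # forward Euler, for brevity
    print(f"y0 = {y0 / 1e3:6.1f} km -> outgoing angle "
          f"{np.degrees(np.arctan2(s[3], s[2])):7.2f} deg")
```

Rays launched at different \(y_{0}\) emerge with different directions; quantifying this deflection, and its effect on wave amplitudes, is the task of the asymptotics of §3.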
In (2.1) \(\omega(\boldsymbol{x},\boldsymbol{k})\) is the absolute frequency of deep-water SGWs \[\omega(\boldsymbol{x},\boldsymbol{k})=\sigma(k)+\boldsymbol{k}\boldsymbol{\cdot}\boldsymbol{U}(\boldsymbol{x}). \tag{2.2}\] In (2.2) the intrinsic frequency is \(\sigma(k)=\sqrt{gk}\), with \(k=|\boldsymbol{k}|\). The current velocity is taken to be horizontal and independent of time and depth, \[\boldsymbol{U}(\boldsymbol{x})=U(x,y)\hat{\boldsymbol{x}}+V(x,y)\hat{\boldsymbol{y}}. \tag{2.3}\]

Figure 1: The scattering problem: a localised flow, here shown as an axisymmetric vortex with radius \(r_{v}\), scatters waves incident from the left (\(x\to-\infty\)) with action spectrum \(\mathcal{A}_{\star}(K,\Theta)\). Rays bend significantly only in the scattering region in which there is non-zero vorticity, i.e. where \(x=O(r_{v})\). In this illustration \(r_{v}\) is equivalent to \(\ell_{s}\). (a) The case \(\delta\neq 0\): directional spreading in the incident spectrum \(\mathcal{A}_{\star}\) is indicated schematically by two rays emanating from each source point. (b) The case \(\delta=0\) (or much less than \(\varepsilon\)): the incident spectrum \(\mathcal{A}_{\star}\) is a plane wave with little or no directional spreading.

### Action spectrum and significant wave height

Denoting the sea-surface vertical displacement by \(\zeta(\mathbf{x},t)\), with root mean square \(\zeta_{rms}\), we introduce a spectrum \(\mathcal{F}(\mathbf{k},\mathbf{x},t)\) such that \[\zeta^{2}_{rms}(\mathbf{x},t)=\int\!\mathcal{F}(\mathbf{k},\mathbf{x},t)\,\mathrm{d}\mathbf{k}. \tag{2.4}\] Later we use a polar coordinate system \((k,\theta)\) in \(\mathbf{k}\)-space so that in (2.4) \(\mathrm{d}\mathbf{k}=k\,\mathrm{d}k\mathrm{d}\theta\). By equipartition, the energy spectrum is \(g\mathcal{F}\) and the action spectrum, \(\mathcal{A}(\mathbf{x},\mathbf{k},t)\) in (2.1), is \(\mathcal{A}=g\mathcal{F}/\sigma\). The significant wave height, \(4\zeta_{rms}\) (Komen _et al._, 1996), is therefore \[H_{s}(\mathbf{x},t)=\left(\frac{16}{g}\int\!\mathcal{A}(\mathbf{k},\mathbf{x},t)\sigma(k)\mathrm{d}\mathbf{k}\right)^{1/2}. \tag{2.5}\]

The incident swell is characterised by a spatially uniform spectrum \(\mathcal{F}_{\star}(\mathbf{k})\) with constant significant wave height \(H_{s\star}\). The subscript \(\star\) denotes quantities associated with the incident waves. Swell is characterised by a narrow spectrum in both wavenumber \(k\) (equivalently, frequency \(\sigma\)) and direction \(\theta\). The dominant wavenumber of the incident swell is \(k_{\star}\) with frequency \(\sigma_{\star}=\sqrt{gk_{\star}}\), and the dominant direction is taken without loss of generality as \(\theta=0\). Thus, as illustrated in figure 1, the waves arrive from \(x=-\infty\) and impinge on an isolated flow feature, centred at \((x,y)=(0,0)\). As an example of incident spectrum we use a separable construction described in Appendix A. In the narrow-band limit corresponding to swell, this spectrum simplifies to the Gaussian \[\mathcal{F}_{\star}(k,\theta)\approx\zeta^{2}_{rms\star}\underbrace{\frac{\mathrm{e}^{-(k-k_{\star})^{2}/2\delta_{k}^{2}}}{k_{\star}\sqrt{2\pi\delta_{k}^{2}}}}_{F_{\star}(k)}\times\underbrace{\frac{\mathrm{e}^{-\theta^{2}/2\delta_{\theta}^{2}}}{\sqrt{2\pi\delta_{\theta}^{2}}}}_{D_{\star}(\theta)}. \tag{2.6}\] The two parameters \(\delta_{k}\) and \(\delta_{\theta}\) capture the wavenumber and directional spreading (see Appendix A). The narrow-band limit assumes that \(\delta_{k}/k_{\star}\ll 1\) and \(\delta_{\theta}\ll 1\).
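As a quick numerical check of (2.4)-(2.6), the sketch below (ours, with arbitrary illustrative spreading parameters) builds the narrow-band spectrum, forms the action spectrum \(\mathcal{A}=g\mathcal{F}/\sigma\), and confirms that (2.5) returns \(H_{s}=4\zeta_{rms}\).

```python
import numpy as np

g = 9.81
k_star = 2 * np.pi / 166.0          # dominant wavenumber (166 m wavelength)
dk, dth = 0.1 * k_star, 0.2         # illustrative spreading parameters
zeta_rms = 0.25                     # target H_s = 4 * zeta_rms = 1 m

k = np.linspace(k_star - 5 * dk, k_star + 5 * dk, 600)
th = np.linspace(-5 * dth, 5 * dth, 600)
K, TH = np.meshgrid(k, th, indexing="ij")

# Narrow-band incident spectrum (2.6)
F = (zeta_rms**2
     * np.exp(-(K - k_star)**2 / (2 * dk**2)) / (k_star * np.sqrt(2 * np.pi * dk**2))
     * np.exp(-TH**2 / (2 * dth**2)) / np.sqrt(2 * np.pi * dth**2))

sigma = np.sqrt(g * K)              # intrinsic frequency
A = g * F / sigma                   # action spectrum

# (2.5), with dk = k dk dtheta in polar coordinates (simple Riemann sum)
cell = (k[1] - k[0]) * (th[1] - th[0])
Hs = np.sqrt(16 / g * np.sum(A * sigma * K) * cell)
print(f"H_s = {Hs:.4f} m (expected {4 * zeta_rms} m)")
```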
## 3 The scattering problem

We consider an incident spectrum such as (2.6). To make its localisation in \(k\) and \(\theta\) explicit we introduce the \(O(1)\) independent variables \[K=\frac{k-k_{\star}}{\delta}\quad\text{and}\quad\Theta=\frac{\theta}{\delta}, \tag{3.1}\] where \(\delta\ll 1\) is a small dimensionless parameter. The incident action spectrum has the form \[\mathcal{A}(x,y,k,\theta)=\mathcal{A}_{\star}(K,\Theta)\quad\text{as}\ \ x\to-\infty, \tag{3.2}\] where the function \(\mathcal{A}_{\star}(K,\Theta)\) is localised where both \(K\) and \(\Theta\) are \(O(1)\). The example spectrum (2.6) is of this form provided that \(\delta_{k}/k_{\star}\) and \(\delta_{\theta}\) are both \(O(\delta)\). This assumption of similarly small spectral widths in \(k\) and \(\theta\) enforces the relevant distinguished limit for the scattering problem.

We assume that the currents are weak (e.g. Peregrine, 1976; Villas Boas & Young, 2020). This means that the typical speed \(U\) of the currents is much less than the intrinsic group velocity \(c_{\star}\) of the incident swell: \[\varepsilon\stackrel{{\mathrm{def}}}{{=}}U/c_{\star}\ll 1. \tag{3.3}\] Accordingly we rewrite the frequency (2.2) as \[\omega(\boldsymbol{x},\boldsymbol{k})=\sigma(k)+\varepsilon\boldsymbol{k}\boldsymbol{\cdot}\boldsymbol{U}(\boldsymbol{x}). \tag{3.5}\] We indulge in a slight abuse of notation here: we develop the approximation in dimensional variables, hence the dimensionless parameters \(\varepsilon\) and \(\delta\) in expressions such as (3.1) and (3.5) should be interpreted as bookkeeping parameters to be set to one at the end. We examine the distinguished limit \[\delta,\,\varepsilon\to 0\quad\text{with}\quad\gamma\stackrel{{\mathrm{def}}}{{=}}\varepsilon/\delta=O(1) \tag{3.6}\] and use matched asymptotics to solve the action conservation equation (2.1).

### The scattering region: \(x=O(\ell_{s})\)

The spatially compact flow has a typical horizontal length scale which we denote by \(\ell_{s}\). We refer to the region where \(x=O(\ell_{s})\) as the 'scattering region'. The solution in this region has the form \[\mathcal{A}(K,\Theta,x,y) \tag{3.7}\] and must limit to \(\mathcal{A}_{\star}(K,\Theta)\) in (3.2) as \(x\to-\infty\). With \(\mathcal{A}\) in (3.7) the transport term in (2.1) is approximated as \[\boldsymbol{\nabla}_{\boldsymbol{k}}\omega\boldsymbol{\cdot}\boldsymbol{\nabla}_{\boldsymbol{x}}\mathcal{A}=c_{\star}\left(\cos(\delta\Theta)\mathcal{A}_{x}+\sin(\delta\Theta)\mathcal{A}_{y}\right)+\varepsilon\boldsymbol{U}\boldsymbol{\cdot}\boldsymbol{\nabla}_{\boldsymbol{x}}\mathcal{A}=c_{\star}\mathcal{A}_{x}+O(\delta,\varepsilon). \tag{3.8}\] In particular, transport by the current, \(\varepsilon\boldsymbol{U}\boldsymbol{\cdot}\boldsymbol{\nabla}_{\boldsymbol{x}}\mathcal{A}\), is negligible compared with transport by the intrinsic group velocity \(c_{\star}\).
With the approximations \[\boldsymbol{\nabla}_{\boldsymbol{k}}\mathcal{A}=\delta^{-1}\left(\partial_{K}\mathcal{A}\,\hat{\boldsymbol{x}}+k_{\star}^{-1}\partial_{\Theta}\mathcal{A}\,\hat{\boldsymbol{y}}\right)+O(1), \tag{3.9}\] \[\boldsymbol{\nabla}_{\boldsymbol{x}}\omega=\varepsilon k_{\star}(U_{x}\hat{\boldsymbol{x}}+U_{y}\hat{\boldsymbol{y}})+O(\varepsilon\delta), \tag{3.10}\] the refraction term in (2.1) simplifies to \[\boldsymbol{\nabla}_{\boldsymbol{x}}\omega\boldsymbol{\cdot}\boldsymbol{\nabla}_{\boldsymbol{k}}\mathcal{A}=\gamma\left(k_{\star}U_{x}\partial_{K}\mathcal{A}+U_{y}\partial_{\Theta}\mathcal{A}\right)+O(\varepsilon). \tag{3.11}\] Thus in the scattering region the leading-order approximation to (2.1) is \[c_{\star}\partial_{x}\mathcal{A}-\gamma\left(k_{\star}U_{x}\partial_{K}\mathcal{A}+U_{y}\partial_{\Theta}\mathcal{A}\right)=0. \tag{3.12}\] By inspection, the solution to (3.12) that matches the incident action spectrum (3.2) as \(x\to-\infty\) is \[\mathcal{A}(x,y,K,\Theta)=\mathcal{A}_{\star}\left(K+\frac{\gamma k_{\star}}{c_{\star}}U(x,y)\,,\;\Theta+\frac{\gamma}{c_{\star}}V(x,y)-\frac{\gamma}{c_{\star}}\int_{-\infty}^{x}\!Z(x^{\prime},y)\,\mathrm{d}x^{\prime}\right), \tag{3.14}\] where \(Z=V_{x}-U_{y}\) is the vorticity of the current. For reference, we rewrite this expression in terms of the original independent variables, setting the bookkeeping parameters \(\varepsilon\), \(\delta\), and hence \(\gamma\) to \(1\) to obtain \[\mathcal{A}(x,y,k,\theta)=\mathcal{A}_{\star}\left(k+\frac{k_{\star}}{c_{\star}}U(x,y)\,,\theta+\frac{1}{c_{\star}}V(x,y)-\frac{1}{c_{\star}}\int_{-\infty}^{x}\!\!Z(x^{\prime},y)\,\mathrm{d}x^{\prime}\right). \tag{3.15}\]

### The intermediate region: \(O(\ell_{s})\ll x\ll O(\ell_{s}/\delta)\)

The outer limit of the inner solution (3.14) follows from taking \(x\to\infty\): \[\mathcal{A}(x,y,K,\Theta)\to\mathcal{A}_{\star}\left(K,\,\Theta-\gamma\Delta(y)\right), \tag{3.16}\] where we have introduced the dimensionless 'deflection' \[\Delta(y)\stackrel{{\mathrm{def}}}{{=}}\frac{1}{c_{\star}}\int_{-\infty}^{\infty}\!\!Z(x^{\prime},y)\,\mathrm{d}x^{\prime}. \tag{3.17}\] According to (3.16) the effect of the flow on the dependence of \(\mathcal{A}\) on \(K\) is reversible: after passage through the scattering region this dependence reverts to the incident form.

To physically interpret (3.16) and the deflection \(\Delta(y)\), recall that if \(\varepsilon\) is small then \[\text{ray curvature}\approx\frac{\text{vorticity}}{\text{group velocity}}\,, \tag{3.18}\] \[\approx\frac{Z(x,y)}{c_{\star}}. \tag{3.19}\] The approximation in (3.18) requires only \(\varepsilon\ll 1\) (e.g. Kenyon, 1971; Landau & Lifshitz, 2013; Dysthe, 2001; Gallet & Young, 2014). Passing from (3.18) to (3.19) requires the further approximation that \(k\) is close to \(k_{\star}\) so that the group velocity in the denominator can be approximated by \(c_{\star}\).
On the left of (3.18) ray curvature is \(\mathrm{d}\theta/\mathrm{d}\ell\), where \(\ell\) is arc-length along a ray. But within the compact scattering region we approximate \(\ell\) with \(x\). Thus the deflection \(\Delta(y)\) in (3.17) is the integrated ray curvature, accumulated as rays pass through the scattering region in which \(x=O(\ell_{s})\) and the vorticity \(Z(x,y)\) is non-zero.

From (3.17) and (3.18) we conclude that the scattering region is best characterised as the region with \(O(1)\) vorticity, e.g. the vortex core in figure 1 (hence \(\ell_{s}=r_{v}\) with \(r_{v}\) a typical vortex radius). The region with palpably non-zero velocity is much larger. In figure 1 the rays are straight where \(x=O(r_{v}/\varepsilon)\), despite the slow (\(\propto r^{-1}\)) decay of the azimuthal vortex velocity.

### The far field: \(x=O(\ell_{s}/\delta)\)

Far from the scattering region, where \(x\gg\ell_{s}\), we introduce the slow coordinate \(X\stackrel{{\mathrm{def}}}{{=}}\delta x\). In the far field, the currents and hence the refraction term \(\boldsymbol{\nabla}_{\boldsymbol{x}}\omega\boldsymbol{\cdot}\boldsymbol{\nabla}_{\boldsymbol{k}}\mathcal{A}\) in (2.1) are negligible. The steady action conservation equation collapses to \[\boldsymbol{\nabla}_{\boldsymbol{k}}\sigma\boldsymbol{\cdot}\boldsymbol{\nabla}_{\boldsymbol{x}}\mathcal{A}=c_{\star}\left(\delta\cos(\delta\Theta)\mathcal{A}_{X}+\sin(\delta\Theta)\mathcal{A}_{y}\right)=0, \tag{3.20}\] i.e. propagation along straight rays. Retaining only the leading-order term gives \[\partial_{X}\mathcal{A}+\Theta\partial_{y}\mathcal{A}=0. \tag{3.21}\] By inspection the solution of (3.21) that matches the intermediate solution (3.16) is \[\mathcal{A}(X,y,K,\Theta)=\mathcal{A}_{\star}\left(K,\Theta-\gamma\Delta\left(y-X\Theta\right)\right). \tag{3.22}\] This formula, which converts the incident spectrum into the far-field spectrum, is a key result of the paper. In terms of the original independent variables and with the bookkeeping parameters set to \(1\), it takes the convenient form \[\mathcal{A}(x,y,k,\theta)=\mathcal{A}_{\star}\left(k,\theta-\Delta\left(y-x\theta\right)\right). \tag{3.23}\]

### Significant wave height

Significant wave height \(H_{s}\) is the most commonly reported statistic of wave amplitudes, being routinely observed by satellite altimeters and wave buoys. We obtain an approximation for \(H_{s}\) by performing the \(k\) and \(\theta\) integrals in (2.5) using the approximations (3.15) and (3.23) for \(\mathcal{A}(\mathbf{x},\mathbf{k})\).

The scattering region is simple. We can approximate \(\sigma\) and \(\mathrm{d}\mathbf{k}\) in (2.5) by \(\sigma_{\star}=\sigma(k_{\star})\) and \(k_{\star}\,\mathrm{d}k\mathrm{d}\theta\) to find \[H_{s}(\mathbf{x},t)\sim\left(\frac{16\sigma_{\star}k_{\star}}{g}\iint\mathcal{A}(\mathbf{k},\mathbf{x},t)\mathrm{d}k\mathrm{d}\theta\right)^{1/2} \tag{3.24}\] \[\sim H_{s\star}. \tag{3.25}\] The second equality holds because, according to (3.15), \(\mathcal{A}(\mathbf{x},\mathbf{k})\) is obtained from \(\mathcal{A}_{\star}(\mathbf{k})\) by an \(\mathbf{x}\)-dependent shift of \(k\) and \(\theta\) that does not affect the integral. Thus \(H_{s}\) in the scattering region is unchanged from the incident value \(H_{s\star}\).
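This conclusion rests on the shift structure of (3.15), i.e. on (3.14) solving the reduced equation (3.12) exactly; this is easy to confirm symbolically. In the sketch below (ours, using sympy), the function \(I(x,y)\) stands for the \(x\)-antiderivative of the vorticity \(Z\).

```python
import sympy as sp

x, y, K, Th = sp.symbols('x y K Theta')
gam, c, ks = sp.symbols('gamma c_star k_star', positive=True)
U = sp.Function('U')(x, y)
V = sp.Function('V')(x, y)
I = sp.Function('I')(x, y)             # x-antiderivative of the vorticity Z
Z = sp.diff(V, x) - sp.diff(U, y)      # Z = V_x - U_y

# Candidate inner solution (3.14)
A = sp.Function('A_star')(K + gam * ks / c * U, Th + gam / c * V - gam / c * I)

# Left-hand side of (3.12): c A_x - gamma (k_star U_x A_K + U_y A_Theta)
lhs = c * sp.diff(A, x) - gam * (ks * sp.diff(U, x) * sp.diff(A, K)
                                 + sp.diff(U, y) * sp.diff(A, Th))
print(sp.simplify(lhs.subs(sp.diff(I, x), Z)))   # prints 0
```

The printed \(0\) confirms that the flow enters only through shifts of the two arguments of \(\mathcal{A}_{\star}\), which is what makes the integrals in (3.24) invariant.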
The conclusion that \(H_{s}\) is unchanged in the scattering region also follows directly from steady-state wave action conservation under the assumptions \(\varepsilon\), \(\delta\ll 1\): multiplying (3.12) by \(\sigma_{\star}k_{\star}\) and integrating over \(k\) and \(\theta\) we find \[c_{\star}\partial_{x}\underbrace{\left(\sigma_{\star}k_{\star}\iint\mathcal{A}(\mathbf{x},\mathbf{k})\,\mathrm{d}k\mathrm{d}\theta\right)}_{\approx gH_{s}^{2}(\mathbf{x})/16}=0. \tag{3.26}\] Hence \(H_{s}(\mathbf{x})=H_{s\star}\) throughout the scattering region.

In the far field, \(H_{s}\) is obtained by substituting (3.23) into (2.5). The result is \[H_{s}(\mathbf{x})=4\sqrt{\frac{k_{\star}\sigma_{\star}}{g}\int\!\!\mathrm{d}\theta\int\!\!\mathrm{d}k\,\mathcal{A}_{\star}(k,\theta-\Delta(y-x\theta))}. \tag{3.27}\] The \(k\)-integral can be evaluated in terms of the incident directional spectrum which, in the general case of a non-separable spectrum, is defined as \[D_{\star}(\theta)\overset{\mathrm{def}}{=}\frac{1}{\zeta_{rms\star}^{2}}\int\!\mathcal{F}_{\star}(\mathbf{k})\,k\,\mathrm{d}k. \tag{3.28}\] We summarise the results above with: \[H_{s}(\mathbf{x})=H_{s\star}\begin{cases}1&\text{in the scattering region,}\\ \sqrt{\int D_{\star}\left(\theta-\Delta(y-x\theta)\right)\mathrm{d}\theta}&\text{in the far field.}\end{cases} \tag{3.29}\]

## 4 Applications to simple flows

### Gaussian vortex

As an application, we consider scattering by an axisymmetric Gaussian vortex with circulation \(\kappa\), vorticity \[Z(x,y)=\frac{\kappa\,\mathrm{e}^{-r^{2}/2r_{v}^{2}}}{2\pi r_{v}^{2}}, \tag{4.1}\] and velocity \[(U(x,y),V(x,y))=\frac{\kappa}{2\pi}\frac{1-\mathrm{e}^{-r^{2}/2r_{v}^{2}}}{r^{2}}\,(-y,x)\,, \tag{4.2}\] where \(r^{2}=x^{2}+y^{2}\). The vortex radius \(r_{v}\) can be taken as the scattering length scale \(\ell_{s}\). The maximum azimuthal velocity is \(U_{m}=0.072\,\kappa/r_{v}\), attained at radius \(1.585\,r_{v}\). The deflection (3.17) resulting from this Gaussian vortex is \[\Delta(y)=\frac{\kappa\,\mathrm{e}^{-y^{2}/2r_{v}^{2}}}{\sqrt{2\pi}\,r_{v}c_{\star}}. \tag{4.3}\]

The asymptotic solution in the scattering region is obtained from (3.15) as \[\mathcal{A}(x,y,k,\theta)=\mathcal{A}_{\star}\Big{(}k+k_{\star}c_{\star}^{-1}U(x,y),\\ \theta+c_{\star}^{-1}V(x,y)-\tfrac{1}{2}\left(\mathrm{erf}\big{(}x/\sqrt{2}r_{v}\big{)}+1\right)\Delta(y)\Big{)}, \tag{4.4}\] where \(\mathrm{erf}\) is the error function. Eq. (4.4) can be combined with the far-field approximation (3.23) into a single, uniformly valid approximation, \[\mathcal{A}(x,y,k,\theta)=\mathcal{A}_{\star}\Big{(}k+k_{\star}c_{\star}^{-1}U(x,y),\\ \theta+c_{\star}^{-1}V(x,y)-\tfrac{1}{2}\left(\mathrm{erf}\big{(}x/\sqrt{2}r_{v}\big{)}+1\right)\Delta(y-x\theta)\Big{)}. \tag{4.5}\] The significant wave height is approximated by (3.29), which can be written as the uniform expression \[H_{s}(x,y)=H_{s\star}\sqrt{\int D_{\star}\left(\theta-\Delta(y-x^{+}\theta)\right)\mathrm{d}\theta}, \tag{4.6}\] where \(x^{+}\) is equal to \(x\) for \(x>0\) and to \(0\) for \(x<0\), and (4.3) is used for \(\Delta\).

We now compare the matched asymptotic (MA hereafter) predictions (4.5)-(4.6) with numerical solutions of the wave action equation (2.1) obtained with the Wave Height, Water Depth, and Current Hindcasting third generation wave model (WAVEWATCH III, hereafter WW3). The incident spectrum used for WW3 is described in Appendix A. The directional function for this spectrum is the Longuet-Higgins _et al._ (1963) model \[D_{\star}(\theta)\propto\cos^{2s}\frac{\theta}{2}. \tag{4.7}\]
The parameter \(s>0\) controls the directional spreading: for \(s\gg 1\), (4.7) reduces to the Gaussian in (2.6) with directional spreading \(\delta_{\theta}=\sqrt{2/s}\). The configuration of WW3 and the spectrum parameters are detailed in Appendix B. The most important parameter is the peak frequency of the incident spectrum, fixed for all simulations at \(\sigma_{\star}=0.61\) rad s\({}^{-1}\). This corresponds to a period of 10.3 s, a wavelength of 166 m and a group speed \(c_{\star}=8\) m s\({}^{-1}\). Because the problem is linear in the action density, the values of \(\zeta_{rms\star}\), or equivalently \(H_{s\star}\), are less important. For definiteness we set \(H_{s\star}=1\) m.

Figure 2 compares the wavenumber-integrated wave action \(\int\mathcal{A}(x,y,k,\theta)\,\mathrm{d}k\) obtained from (4.5) and WW3 for a Gaussian vortex with maximum velocity \(U_{m}=0.8\) m s\({}^{-1}\) and directional spreading parameter \(s=40\). The comparison shows good agreement, especially in the far-field region (\(x\geqslant 3r_{v}\)). The most noticeable difference between MA and WW3 is in panels c and d, which show a section through the middle of the vortex. The MA action spectrum in panel d is obtained via a \(y\)-dependent shift in \(\mathcal{A}_{\star}(k,\theta)\); there is no change in the intensity of \(\mathcal{A}\) associated with this shift. In panel c, on the other hand, the intensity of the WW3 action spectrum varies with \(y/r_{v}\). We attribute this difference to asymptotically small effects such as the contribution \(\mathbf{U}\boldsymbol{\cdot}\boldsymbol{\nabla}_{\boldsymbol{x}}\mathcal{A}\) to wave-action transport.

In the remainder of this section, we assess the dependence of the significant wave height \(H_{s}\) on the directional spreading parameter \(s\) and flow strength \(U_{m}\). We consider the four different combinations of \(s\) and \(U_{m}\) given in Table 1. The corresponding values of the dimensionless parameters, taken as \[\delta=\delta_{\theta}=\sqrt{2/s}\quad\text{and}\qquad\varepsilon=U_{m}/c_{\star}, \tag{4.8}\] are also in the table.

Figure 2: Wavenumber-integrated action density \(\int\mathcal{A}(x,y,k,\theta)\,\mathrm{d}k\) as a function of \(y\) and \(\theta\) at \(x=-5\,r_{v}\), 0, \(r_{v}\), \(3\,r_{v}\) and \(5\,r_{v}\) from WW3 (left) and MA (Eq. (4.5), right) for swell impinging on a Gaussian vortex with \(U_{m}=0.8\) m s\({}^{-1}\). The directional spreading of the incident spectrum is \(s=40\).

Observations of the directional spreading for swell typically range between \(10^{\circ}\) and \(20^{\circ}\) (Ewans, 2002), which corresponds to a range of \(s\) between 16 and 66. In our experiments, setting \(s=10\) and \(s=40\) leads to directional spreadings of \(24^{\circ}\) and \(12^{\circ}\) respectively, corresponding to very broad and very narrow swells. Figures 3 and 4 show the significant wave height anomaly \[h_{s}(\boldsymbol{x})\stackrel{{\mathrm{def}}}{{=}}H_{s}(\boldsymbol{x})-H_{s\star} \tag{4.9}\] for each combination of \(s\) and \(U_{m}\). Because of our choice \(H_{s\star}=1\) m, \(h_{s}\) in cm can be interpreted as the fractional change in significant wave height expressed as a percentage.

A control run of WW3 in the absence of currents shows that \(h_{s}\) is not exactly zero but decreases slowly with \(x\). This is caused by the finite \(y\)-extent of the computational domain, which leads to a wave forcing with compact support.
To mitigate this numerical artefact, we compute the WW3 significant wave height anomaly as \(h_{s}(\boldsymbol{x})=H_{s}(\boldsymbol{x})-H_{s}^{\mathrm{ctrl}}(\boldsymbol{x})\), where \(H_{s}^{\mathrm{ctrl}}(\boldsymbol{x})\) is the significant wave height of the current-free control run. See Appendix B for details.

\begin{table} \begin{tabular}{l c c c c} \hline \hline \(s\) & \(U_{m}\) (m s\({}^{-1}\)) & \(\delta=\sqrt{2/s}\) & \(\varepsilon=U_{m}/c_{\star}\) & \(\gamma=\varepsilon/\delta\) \\ \hline 10 & 0.4 & 0.447 (25.6\({}^{\circ}\)) & 0.05 & 0.112 \\ 40 & 0.4 & 0.224 (12.8\({}^{\circ}\)) & 0.05 & 0.224 \\ 10 & 0.8 & 0.447 (25.6\({}^{\circ}\)) & 0.1 & 0.224 \\ 40 & 0.8 & 0.224 (12.8\({}^{\circ}\)) & 0.1 & 0.447 \\ \end{tabular} \end{table} Table 1: Parameters corresponding to each configuration in section 4.1, arranged in the order of the rows in figure 3. In all cases the group speed is \(c_{\star}=8\) m s\({}^{-1}\), corresponding to a 166 m wavelength and 10.3 s period. \(U_{m}\) is the maximum vortex velocity and the vortex radius is \(r_{v}=50\) km.

Figure 3: Significant wave height anomaly \(h_{s}(x,y)\) from WW3 (left column) and MA (right column) for swell impinging on a Gaussian vortex. Each row corresponds to the indicated values of the directional spreading parameter \(s\) of the incident wave spectrum and of the maximum velocity \(U_{m}\) (in m s\({}^{-1}\)). The corresponding non-dimensional parameters are given in Table 1. The dashed circle has radius \(r_{v}\) around the vortex centre. The solid lines in the right panels indicate the caustics computed from (D.6). The colourbars differ between rows but are the same within each row. White corresponds to \(h_{s}=0\) in all panels. The notebook that generates panel (h) can be accessed at [https://shorturl.at/fswA3](https://shorturl.at/fswA3).

Figures 3 and 4 show that \(h_{s}\) has a wedge-like pattern in the wake of the vortex resulting from wave focussing and defocussing, with \(h_{s}>0\) mainly for \(y>0\) and \(h_{s}<0\) for \(y<0\). The pattern is not anti-symmetric about \(y=0\), and positive anomalies are larger than negative anomalies. These characteristics, which indicate a nonlinear response, are increasingly marked as \(s\) and \(U_{m}\) increase. Specifically, the parameter \[\gamma=\frac{\varepsilon}{\delta}=\frac{U_{m}}{c_{\star}}\sqrt{\frac{s}{2}} \tag{4.10}\] controls the degree of nonlinearity and hence of asymmetry. We discuss the two limiting regimes \(\gamma\ll 1\) and \(\gamma\gg 1\) in §5.

There is good overall agreement between WW3 and MA, even though, in the case \(s=10\), the parameter \(\delta=0.447\) is only marginally small. The pattern is more diffuse for WW3 than for MA, with a less sharply defined wedge and a non-zero \(h_{s}\) over a larger proportion of the domain. We attribute the differences to the finiteness of \(\delta\) (they are more marked for \(s=10\), \(\delta=0.447\) than for \(s=40\), \(\delta=0.224\)), and to the limited spectral resolution of WW3 (simulations with degraded angular resolution lead to an even more diffuse \(h_{s}\)).

The most conspicuous differences between WW3 and MA appear in the scattering region, where the non-zero \(h_{s}\) obtained with WW3 appears to contradict the MA prediction that \(h_{s}=0\). The non-zero \(h_{s}\) results from \(O(\varepsilon,\,\delta)\) terms neglected by MA. Relaxing some of the approximations leading to (3.24) gives a heuristic correction to MA that captures the bulk of the difference with WW3 in the scattering region. We explain this in Appendix C.
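The MA prediction itself is inexpensive to evaluate. The following minimal sketch (ours, not the WW3 configuration) computes the uniform approximation (4.6) for the Gaussian vortex, combining the deflection (4.3) with the Gaussian directional spectrum of (2.6), for the \(s=40\), \(U_{m}=0.8\) m s\({}^{-1}\) case of Table 1.

```python
import numpy as np

Hs_star, c_star, rv, s = 1.0, 8.0, 50e3, 40
kappa = 0.8 * rv / 0.072                 # circulation giving U_m = 0.8 m/s
dth = np.sqrt(2.0 / s)                   # directional spreading delta_theta

def Delta(y):
    """Deflection (4.3) of the Gaussian vortex."""
    return kappa * np.exp(-y**2 / (2 * rv**2)) / (np.sqrt(2 * np.pi) * rv * c_star)

def Hs(x, y, n=4001):
    """Uniform approximation (4.6) for the significant wave height."""
    th = np.linspace(-8 * dth, 8 * dth, n)
    xp = max(x, 0.0)                     # the x+ of (4.6)
    D = (np.exp(-(th - Delta(y - xp * th))**2 / (2 * dth**2))
         / np.sqrt(2 * np.pi * dth**2))
    return Hs_star * np.sqrt(np.sum(D) * (th[1] - th[0]))

for y in (-2 * rv, -rv, 0.0, rv, 2 * rv):
    hs = 100 * (Hs(5 * rv, y) - Hs_star)
    print(f"y = {y / rv:4.1f} r_v: h_s(5 r_v, y) = {hs:+6.1f} cm")
```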
We explain this in Appendix C. As further demonstration of the MA approach, we provide a Jupyter notebook accessible at [https://shorturl.at/fswA3](https://shorturl.at/fswA3), where users can modify the form of the current and the incoming wave spectrum to experiment with the resulting \(\int\mathcal{A}(x,y,k,\theta)\,\mathrm{d}k\) and \(h_{s}\).

Figure 4: Significant wave height anomaly \(h_{s}\) as a function of \(y\) for \(x=r_{v},5\,r_{v},15\,r_{v}\) (left, centre and right) from WW3 (solid lines) and MA (Eq. (4.5), dashed lines) in the set up of figure 3. Results are shown for two sets of parameters \(s\) and \(U_{m}\) as indicated in the leftmost panels. The range of \(h_{s}\) differs between panels.

### Vortex dipole

A striking feature of the far-field spectrum and hence of \(H_{s}\) is that, according to MA, they depend on the flow only through the deflection \(\Delta(y)\) in (3.17), proportional to the integral of the vorticity along the direction of dominant wave propagation (the \(x\)-direction in our set up). This implies that if the integrated vorticity vanishes because of cancellations between positive and negative contributions, the differences between far-field and incident fields are asymptotically small. This can be interpreted as a form of 'vortex cloaking', whereby an observer positioned well downstream of a flow feature is unable to detect its presence through changes in wave statistics.

We demonstrate this phenomenon by examining the scattering of swell by vortex dipoles. We consider two cases, corresponding to dipoles whose axes (the vector joining the centres of positive and negative vorticity) are, respectively, perpendicular and parallel to the direction of wave propagation. The corresponding vorticity fields are chosen, up to a constant multiple, as the derivative of the Gaussian profile (4.1) with respect to \(y\) or \(x\). Figure 5 shows the significant wave height anomaly obtained for the incident spectrum of §4.1 with \(s=40\) and dipoles with maximum velocity \(U_{m}=0.8\) m s\({}^{-1}\). When the dipole axis is in the \(y\)-direction (top row) the deflection \(\Delta(y)\) does not vanish identically. As a result, \(H_{s}\) is affected by the flow, strongly so for our choice of parameters. This applies to both the MA and WW3 predictions, which match closely in the far field. When the dipole axis is in the \(x\)-direction (bottom row), \(\Delta(y)=0\). The MA prediction is then that \(H_{s}=H_{s\star}\), i.e. \(h_{s}=0\), everywhere. The WW3 simulation is consistent with this, with only a weak signal in \(h_{s}\). In general, for a dipole with axis making an angle \(\alpha\) with the direction of wave propagation, the deflection \(\Delta(y)\) is proportional to \(\sin\alpha\) and the cloaking effect is partial unless \(\alpha=0\). A numerical check of this cloaking argument is sketched below.

Figure 5: Swell impinging on vortex dipoles with axes perpendicular (top) and parallel (bottom) to the dominant direction of wave propagation (\(x\)-axis). The vorticity (colour) and velocity (vectors) are shown (left) together with the significant wave height anomaly \(h_{s}\) from WW3 (middle) and MA (right). The directional spreading parameter is \(s=40\) and the maximum flow velocity is \(0.8\) m s\({}^{-1}\).
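The cloaking argument can be illustrated numerically with the short sketch below. It is our own illustration: the Gaussian vorticity profile and the \(1/c_{\star}\) normalization of the deflection are assumed from (3.17) and (4.1), which are not reproduced here, and the dipoles are built as finite-difference derivatives of the vortex profile, consistent with the "up to a constant multiple" construction above.

```python
import numpy as np

r_v, c_star = 50e3, 8.0   # vortex radius (m) and group speed (m/s)
kappa = 5.6e5             # circulation scale (m^2/s), illustrative value

def zeta_vortex(X, Y):
    # Gaussian vorticity profile of the vortex (cf. Eq. 4.1), up to normalization.
    return kappa / (2 * np.pi * r_v**2) * np.exp(-(X**2 + Y**2) / (2 * r_v**2))

x = np.linspace(-10 * r_v, 10 * r_v, 4001)
y = np.linspace(-3 * r_v, 3 * r_v, 7)
X, Y = np.meshgrid(x, y)
dx = x[1] - x[0]

cases = {
    "vortex": zeta_vortex(X, Y),
    # Dipole axes along y and x: vorticity ~ d(zeta)/dy and d(zeta)/dx.
    "dipole, axis along y": (zeta_vortex(X, Y + dx) - zeta_vortex(X, Y - dx)) / (2 * dx),
    "dipole, axis along x": (zeta_vortex(X + dx, Y) - zeta_vortex(X - dx, Y)) / (2 * dx),
}
for name, zeta in cases.items():
    # Deflection ~ along-x integral of the vorticity, divided by c_star (cf. 3.17).
    Delta = zeta.sum(axis=1) * dx / c_star
    print(f"{name:22s} max |Delta(y)| = {np.abs(Delta).max():.2e}")
```

Only the dipole aligned with the propagation direction yields \(\Delta(y)\equiv 0\), consistent with the \(\sin\alpha\) dependence stated above.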
## 5 Limiting cases

In this section, we return to the far-field asymptotics (3.22) for \(\mathcal{A}\) in terms of the scaled dependent variables in order to examine two limiting regimes characterized by extreme values of \(\gamma=\varepsilon/\delta\). The regime \(\gamma\ll 1\) corresponds to a weak flow and/or relatively broad spectrum, leading to a linear dependence of \(h_{s}\) on the flow fields. The opposite regime \(\gamma\gg 1\) corresponds to strong flow and/or highly directional spectrum. The wave response is then highly nonlinear and, as we show below, controlled by the caustics that exist for pure-plane incident waves (\(\gamma=\infty\)). Heller _et al._ (2008)'s 'freak index', given by \(\varepsilon^{2/3}/\delta\), is the analogue of \(\gamma\) for spatially extended, random currents.

### Linear regime: \(\gamma\ll 1\)

For \(\gamma\ll 1\), we can expand (3.22) in a Taylor series to obtain \[\mathcal{A}(X,y,K,\Theta)=\mathcal{A}_{*}(K,\Theta)-\gamma\Delta(y-X\Theta)\,\partial_{\Theta}\mathcal{A}_{*}(K,\Theta)+O(\gamma^{2}). \tag{5.1}\] This indicates that the flow induces the small correction \(-\gamma\Delta(y-X\Theta)\partial_{\Theta}\mathcal{A}_{*}(K,\Theta)\) to the action of the incident wave. We deduce an approximation for \(H_{s}\) by integrating (5.1) with respect to \(K\) and \(\Theta\) to obtain \(H_{s}^{2}\), followed by a Taylor expansion of the square root. Alternatively, we can carry out a Taylor expansion of the far-field approximation (3.29) of \(H_{s}\), treating \(\Delta(y)\) as small. The result is best expressed in terms of the anomaly \(h_{s}\), found to be \[h_{s}(x,y)=-\frac{H_{s*}}{2}\int D_{*}^{\prime}(\theta)\,\Delta(y-x\theta)\,\mathrm{d}\theta \tag{5.2}\] after reverting to the unscaled variables and setting \(\gamma=1\). This simple expression is readily evaluated once the flow, hence \(\Delta(y)\), and the directional spectrum \(D_{*}(\theta)\) are specified. For the Gaussian vortex of §4.1 and the directional spectrum in (2.6), the integration can be carried out explicitly, yielding \[h_{s}(x,y)=\frac{H_{s*}\kappa}{c_{*}\sqrt{\pi}}\frac{x^{+}y\,\mathrm{e}^{-y^{2}/\left(2r_{v}^{2}+4x^{2}/s\right)}}{(2r_{v}^{2}+4x^{2}/s)^{3/2}}. \tag{5.3}\] This formula makes it plain that \(h_{s}\) depends on space through \((x/\sqrt{s},y)\), is antisymmetric about the \(x\)-axis, and is maximised along the curves \(y=\pm\sqrt{r_{v}^{2}+2x^{2}/s}\). Decay as \(|\mathbf{x}|\to\infty\) is slowest along these curves and proportional to \(x^{-1}\).

We illustrate (5.3) and assess its range of validity by comparing it with MA for two sets of parameters in figure 6. The match is very good for \(s=10\) and \(U_{m}=0.4\) m s\({}^{-1}\) (top row), corresponding to \(\gamma=0.112\). It is less good for \(s=40\) and \(U_{m}=0.8\) m s\({}^{-1}\), unsurprisingly since \(\gamma=0.447\) is not particularly small and the MA prediction is obviously far from linear, with a pronounced asymmetry. The curves \(y=\pm\sqrt{r_{v}^{2}+2x^{2}/s}\) shown in the figure are useful indicators of the structure of \(h_{s}\) for small enough \(\gamma\).

Figure 6: Significant wave height anomaly \(h_{s}(x,y)\) for swell impinging on a Gaussian vortex: comparison between the predictions of MA (left) and its \(\gamma\to 0\) limit ((5.3), right column). The set up is as in figure 3 with parameters \(s\) and \(U_{m}\) (in m s\({}^{-1}\)) as indicated. Dashed lines indicate the curves \(y=\pm\sqrt{r_{v}^{2}+2x^{2}/s}\) where \(h_{s}\) reaches its maximum amplitude according to (5.3).
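As a concrete check of (5.3), the sketch below (ours; the circulation \(\kappa\) is set to an arbitrary illustrative value, since only the spatial structure is examined) verifies that, at fixed \(x>0\), the anomaly peaks at \(y=\pm\sqrt{r_{v}^{2}+2x^{2}/s}\):

```python
import numpy as np

H_s, c_star, r_v = 1.0, 8.0, 50e3   # m, m/s, m
kappa, s = 5.6e5, 10                # m^2/s (illustrative), spreading parameter

def h_s_lin(x, y):
    # Linear-regime anomaly, Eq. (5.3); x^+ = max(x, 0) confines it to the wake.
    xp = np.maximum(x, 0.0)
    var = 2 * r_v**2 + 4 * x**2 / s
    return (H_s * kappa / (c_star * np.sqrt(np.pi))
            * xp * y * np.exp(-y**2 / var) / var**1.5)

x = 5 * r_v
y = np.linspace(0, 5 * r_v, 200001)
y_num = y[np.argmax(h_s_lin(x, y))]      # numerical maximiser
y_th = np.sqrt(r_v**2 + 2 * x**2 / s)    # predicted maximiser
print(f"numerical: y/r_v = {y_num / r_v:.3f},  predicted: {y_th / r_v:.3f}")
```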
### Caustic regime: \(\gamma\gg 1\)

The limit \(\gamma\to\infty\) corresponds to an incident wave field that is almost a plane wave. It is natural to rescale variables according to \(\Theta\mapsto\gamma\Theta\) and \(X\mapsto\gamma^{-1}X\) so that (3.22) becomes \[\mathcal{A}(X,y,K,\Theta)=\mathcal{A}_{*}\left(K,\gamma\mathcal{S}(X,y,\Theta)\right), \tag{5.4}\] where \[\mathcal{S}(X,y,\Theta)\stackrel{{\mathrm{def}}}{{=}}\Theta-\Delta(y-X\Theta). \tag{5.5}\] In \((X,y,\Theta)\)-space, the \(K\)-integrated action is concentrated in a thin \(O(\gamma^{-1})\) layer around the surface \(\mathcal{S}(X,y,\Theta)=0\). Quantities such as \(H_{s}\) obtained by further integrating the action with respect to \(\Theta\) can be obtained by approximating the dependence of the right-hand side of (5.4) on \(\mathcal{S}\) by \(\delta(\mathcal{S})\). This fails, however, when \((X,y,\Theta)\) satisfy both \[\mathcal{S}(X,y,\Theta)=0,\qquad\text{and}\qquad\partial_{\Theta}\mathcal{S}(X,y,\Theta)=1+X\Delta^{\prime}(y-X\Theta)=0. \tag{5.6}\] The corresponding curves in the \((X,y)\) plane are caustics near which \(\int\mathcal{A}(X,y,K,\Theta)\,\mathrm{d}K\mathrm{d}\Theta\) is an order \(\gamma^{1/2}\) larger than elsewhere; correspondingly, \(H_{s}=O(\gamma^{1/4})\). In figure 7 the two caustics meet at a cusp point from opposite sides of a common tangent. The cusp point is located by the condition \(\partial_{\Theta}^{2}\mathcal{S}=0\), and the integrated action at the cusp point is \(O(\gamma^{2/3})\), so that \(H_{s}=O(\gamma^{1/3})\). We have numerically verified these \(\gamma\)-scalings at the caustics and at the cusp point by varying \(s\) in the MA solutions.

For the Gaussian vortex (4.1), the system (5.6) can be solved to obtain an explicit equation for the caustics. This equation is derived in Appendix D and given by (D.6). It describes two curves \(y(x)\) emanating from the cusp point at \(x=x_{c}\) given by (D.5). The caustics (which depend on \(U_{m}\) but not on \(s\)) are indicated on the right panels of figure 3. For the parameters of the figure, the caustics do not map regions of particularly large \(h_{s}\). This is unsurprising since \(\gamma\) is at most \(0.447\). To assess how large \(\gamma\) or equivalently \(s\) needs to be for caustics to be the dominant feature of \(H_{s}\), we show in figure 7 \(h_{s}\) computed from MA for \(U_{m}=0.8\) m s\({}^{-1}\) and \(s=200\) (left panel, \(\gamma=1\)) and \(s=4000\) (right panel, \(\gamma=4.47\)). It is only for \(s=4000\) that the caustics are evidently controlling the significant wave height pattern. We emphasise that \(s=200\) and a fortiori \(s=4000\) are unrealistically large values: observational estimates for \(s\) in the open ocean seldom exceed \(s=80\). We conclude that caustics are unlikely to play a role in real ocean conditions.

With academic rather than practical interest in mind, then, we show in figure 8 the integrated action \(\int\mathcal{A}\,\mathrm{d}k\) as a function of \(y\) for three different values of \(x\) (identified by dashed vertical lines in figure 7). The figure illustrates how caustics emerge from a fold singularity in the surface \(\mathcal{S}(x,y,\theta)=0\) along which action is concentrated in the \((x,y,\theta)\) phase space. For \(x=r_{v}\), the surface is a graph over \((x,y)\) and there are no caustics; for \(x=x_{c}\approx 3r_{v}\), the surface has a single point of vertical tangency (P1 in panel (f) of figure 8) corresponding to the birth of caustics at a cusp in the \((x,y)\)-plane; for \(x=5r_{v}\), there are two points of vertical tangency, P2 and P3 in panel (h), corresponding to the two caustic curves.

Figure 7: Caustics for swell impinging on a Gaussian vortex: the caustics (D.6) (solid lines) are superimposed on the MA prediction of \(h_{s}\) for \(U_{m}=0.8\) m s\({}^{-1}\) and the indicated values of \(s\). The dashed vertical lines correspond to the values of \(x=r_{v}\), \(3\,r_{v}\) and \(5\,r_{v}\) used in figure 8.

Figure 8: Wavenumber-integrated action density \(\int\mathcal{A}(x,y,k,\theta)\,\mathrm{d}k\) as a function of \(y\) and \(\theta\) for \(x=r_{v},3r_{v}\) and \(5\,r_{v}\) corresponding to the significant wave height shown in figure 7 for \(s=200\) (left column) and \(s=4000\) (right column). P1 in panel d corresponds to the values of \((x,y)\) of the cusp from where the caustics emanate; P2 and P3 are associated with points on each of the two caustics.
The picture is increasingly blurred as \(s\) decreases (compare the right panel of figure 8 with the left panels and with figure 2), explaining the diminishing importance of caustics for \(H_{s}\).

## 6 Discussion and conclusion

The main results in this study are obtained by approximate solution of the wave action equation in the four-dimensional position-wavenumber space. In addition to the WKB approximation used to derive the action conservation equation (1), there are two independent approximations involved:

(a) the current speed is much less than the group velocity of the incident swell;

(b) swell with small directional spreading is incident on a region of spatially compact currents, e.g., an axisymmetric vortex or a vortex dipole.

Provided that (a) and (b) are satisfied, the approximate solution of the wave action equation compares well with numerical solutions provided by WW3.

A main organizing principle identified by the analysis is that the scattering of SGWs by spatially compact currents is encapsulated in the deflection function, \(\Delta(y)\) in (3.17). Although \(\Delta\) varies linearly with the vertical vorticity of the currents, \(\Delta\) figures in a nonlinear transformation of the action density. This nonlinear transformation produces the modulation of the significant wave height \(H_{s}\) behind the scattering region, e.g. the expression for \(H_{s}\) in (3.29). Our results show that \(H_{s}\) behind an axisymmetric vortex with parameters in table 1 has spatial variation as large as \(\pm 30\%\) of the incident constant value \(H_{s*}\). Spatial inhomogeneities in \(H_{s}\) of this magnitude are important for wave breaking and for the exchange of momentum, heat and gas between the ocean and atmosphere. For example, airborne observations of the ocean surface by Romero _et al._ (2017) indicate that \(\pm 30\%\) variations in \(H_{s}\) are associated with an order of magnitude increase in whitecap coverage.

Approximation (a) is usually justified. To challenge (a) one must consider current speeds such as 2 m s\({}^{-1}\), e.g. the peak current speed observed in the Agulhas system (Quilfen & Chapron, 2019). Swell with 100 m wavelength has group velocity \(\sim 6\) m s\({}^{-1}\), so that the small parameter in (a) is as large as 1/3. In less extreme cases approximation (a) will be satisfied.

Approximation (b) is less secure: ocean swell is not sufficiently unidirectional to strongly justify (b), e.g. see the \(\delta\)-column in table 1. Over long distances, the continuous scattering by uncorrelated currents leads to a broadening of the angular spectrum. When approximation (a) applies, this broadening is described by the directional diffusion equation for wave action derived by Villas Boas & Young (2020).
This diffusion process is one of the mechanisms that makes swell with very small values of \(\delta\) unlikely. Because of the relatively large directional spreading of ocean swell, the mathematical ideal of a sharp wave caustic is not realized. Instead, the caustic singularity is 'washed out' (Heller _et al._, 2008), and behind a vortex we find an elongated streaky pattern in \(H_{s}\).

The directional diffusion equation of Villas Boas & Young (2020) uses only approximation (a). One does not need to assume that the wave field is strongly unidirectional or that the currents are spatially compact. Moreover, the directional diffusion equation is obtained without detailed consideration of the perturbations to the action spectrum that accompany wave scattering. But there is useful information hiding in these unexamined perturbations to the action spectrum. We are currently engaged in extracting these perturbations, calculating the attendant spatial variability of \(H_{s}\), and relating the statistics of these fluctuations in \(H_{s}\) to those of the surface currents. These future developments promise to explain numerical experiments that identify relations between the spectral slopes of surface-current spectra and those of significant wave height (Villas Boas _et al._, 2020).

###### Acknowledgements.

We thank Victor Shrira for conversations about this work. JV and HW are supported by the UK Natural Environment Research Council (grant NE/W002876/1). WRY is supported by the National Science Foundation award 2048583.

Declaration of interests. The authors report no conflict of interest.

Data availability statement. The WW3 configuration files applied in this work can be found at [https://github.com/biawillasboas/SwellVortex](https://github.com/biawillasboas/SwellVortex). The Jupyter Notebook file demonstrating the matched asymptotics approach is available at [https://shorturl.at/fswA3](https://shorturl.at/fswA3).

Author ORCID. H. Wang, [https://orcid.org/0000-0002-5841-5474](https://orcid.org/0000-0002-5841-5474). A. B. Villas Boas, [https://orcid.org/0000-0001-6767-6556](https://orcid.org/0000-0001-6767-6556). W. R. Young, [https://orcid.org/0000-0002-1842-3197](https://orcid.org/0000-0002-1842-3197). J. Vanneste, [https://orcid.org/0000-0002-0319-589X](https://orcid.org/0000-0002-0319-589X).

## Appendix A Incident spectrum

We use the separable spectrum \[\mathcal{F}_{\star}(k,\theta)=\zeta_{\text{rms}\star}^{2}F_{\star}(k)\,D_{\star}(\theta). \tag{A.1}\] The wavenumber function in (A.1) is \[F_{\star}(k)\stackrel{{\text{def}}}{{=}}\frac{2}{\text{erfc}(-\sigma_{\star}/\sqrt{2}\delta_{\sigma})}\frac{\text{e}^{-(\sigma-\sigma_{\star})^{2}/2\delta_{\sigma}^{2}}}{\sqrt{2\pi\delta_{\sigma}^{2}}}\ \frac{1}{k}\frac{\text{d}\sigma}{\text{d}k}, \tag{A.2}\] where erfc is the complementary error function. It corresponds to a Gaussian spectrum in frequency truncated at \(\sigma=0\). The angular part of the spectrum in (A.1) is \[D_{\star}(\theta)\stackrel{{\text{def}}}{{=}}\frac{\Gamma(s+1)}{2\sqrt{\pi}\Gamma(s+\frac{1}{2})}\ \cos^{2s}\left(\frac{\theta}{2}\right) \tag{A.3}\] (Longuet-Higgins _et al._, 1963), which corresponds to incoming waves spread around \(\theta=0\). The four parameters in this model spectrum are the root mean square sea-surface displacement \(\zeta_{\text{rms}\star}\), the peak radian frequency \(\sigma_{\star}=\sqrt{gk_{\star}}\), the spectral width \(\delta_{\sigma}\) and the directional spreading parameter \(s\).
Normalization is ensured with \[\int_{-\pi}^{\pi}\!\!\!D_{\star}(\theta)\,\text{d}\theta=1\qquad\text{and}\qquad\int_{0}^{\infty}\!\!\!F_{\star}(k)k\,\text{d}k=1. \tag{A.4}\] In the narrow-band limit \(\delta_{\sigma}/\sigma_{\star}\ll 1\) and \(s\gg 1\), the spectrum is approximated by (2.6) with \(\delta_{k}=2\delta_{\sigma}\sqrt{k_{\star}/g}\) and \(\delta_{\theta}=\sqrt{2/s}\). The parameter \(\delta_{\theta}\) captures the standard deviation of the angular distribution, which is the definition of 'directional spreading' (Kuik _et al._, 1988). We note that the expressions for directional spreading are sometimes formally different, but equivalent to our expression for \(\delta_{\theta}\) at large \(s\). For example, another popular way to state the definition for a generic directional distribution is \[\sigma_{\theta}\stackrel{{\rm def}}{{=}}\left[2\left(1-\left(a^{2}+b^{2}\right)^{1/2}\right)\right]^{1/2}, \tag{A.5}\] where \[a=\int\cos\theta\,D_{\star}(\theta)\,\mathrm{d}\theta\quad\text{and}\quad b=\int\sin\theta\,D_{\star}(\theta)\,\mathrm{d}\theta \tag{A.6}\] (Villas Boas _et al._, 2020). Using the expression for \(D_{\star}\) in (A.3), we can compute the integrals in (A.6) analytically, getting \(a=s/(s+1)\) and \(b=0\). Therefore, \[\sigma_{\theta}^{2}=2\left(1-\frac{s}{s+1}\right)=\frac{2}{s+1}\to 2/s\quad\text{as}\ \ s\to\infty. \tag{A.7}\] Thus the definition of \(\sigma_{\theta}\) in (A.5) indeed agrees with the parameter \(\delta_{\theta}\) at large \(s\). A numerical check of these moments is sketched below.
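The following sketch (ours) confirms the normalization of (A.3), the first moment \(a=s/(s+1)\), and the convergence of \(\sigma_{\theta}\) to \(\sqrt{2/s}\) at large \(s\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def D(theta, s):
    # Angular spectrum (A.3); the log-gamma form avoids overflow at large s.
    lognorm = gammaln(s + 1) - gammaln(s + 0.5) - np.log(2 * np.sqrt(np.pi))
    return np.exp(lognorm) * np.cos(theta / 2) ** (2 * s)

for s in (1, 10, 40):
    norm = quad(lambda t: D(t, s), -np.pi, np.pi)[0]
    a = quad(lambda t: np.cos(t) * D(t, s), -np.pi, np.pi)[0]
    sigma_theta = np.sqrt(2 * (1 - a))   # (A.5) with b = 0
    print(f"s={s:2d}  norm={norm:.6f}  a={a:.4f}  s/(s+1)={s / (s + 1):.4f}  "
          f"sigma_theta={sigma_theta:.4f}  sqrt(2/s)={np.sqrt(2 / s):.4f}")
```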
## Appendix B Set up of WAVEWATCH III

We compare our results with numerical simulations from an idealized setup of WW3, which integrates the action balance equation (1). Here, we focus on freely propagating swell-type waves, so the effects of wind forcing, nonlinear interactions and wave breaking are ignored (e.g., Villas Boas _et al._, 2020). We use WW3 version v6.07.1 ([https://github.com/NOAA-EMC/WW3/releases/tag/6.07.1](https://github.com/NOAA-EMC/WW3/releases/tag/6.07.1)) to solve (1) on a 1000 km\(\times\)1000 km Cartesian domain with 5 km grid spacing. To resolve swells with \(s=10\) and 40, the spectral grid has 80 directions and 32 frequencies. Larger values of \(s\) (i.e., narrower directional spreading) would require higher directional resolution for the model to converge. We use a global integration time step of 200 s, a spatial advection time step of 50 s, a spectral advection time step of 12 s, and a minimum source term time step of 5 s. We verified that decreasing the time stepping or the spatial grid spacing does not significantly change the results (not shown).

All simulations are initialized with the narrow-banded wave spectrum in (A.1). Waves enter the domain from the left boundary with initial mean direction \(\theta=0^{\circ}\) (propagating from left to right), directional spreading parameter \(s=10\) or \(s=40\), peak frequency \(\sigma_{\star}=0.61\) rad s\({}^{-1}\) (peak period of 10.3 s), spectral width \(\delta_{\sigma}=0.04\) rad s\({}^{-1}\), and \(H_{s\ast}=1\) m. The boundary condition at the left boundary is kept constant throughout the experiment and each experiment is run until a steady state is reached. As mentioned in §4.1, a control run is conducted in the absence of currents. Although there is no scattering from the currents, a nonuniform \(h_{s}^{\rm ctrl}=H_{s}^{\rm ctrl}-H_{s\ast}\) arises, due to the limited domain size in \(y\), which leads to a reduction of incident wave action from waves arriving from large \(|y|\) -- an effect that is more pronounced at large \(x\). As \(s\) increases, the action density in the incident spectrum is more concentrated in the eastward direction, leading to less leakage of wave action through the top and bottom boundaries and a more spatially uniform \(h_{s}^{\rm ctrl}\). This leakage of wave action corresponds to a reduction of 5% in \(h_{s}^{\rm ctrl}\) for \(s=10\), and 2% for \(s=40\) towards the right-hand side boundary.

## Appendix C MA-WW3 mismatch in the scattering region

We develop a heuristic correction to MA that we show captures the non-zero \(h_{s}\) in the scattering region. First, we note that the non-zero \(h_{s}\) in the scattering region from WW3 appears localized, likely caused by the term proportional to \(\partial_{k}\mathcal{A}\) in (11), as the terms proportional to \(\partial_{\theta}\mathcal{A}\) result in non-local effects. This observation is confirmed by a WW3 run, which we refer to as WW3\({}^{-}\), where the term in \(\partial_{k}\mathcal{A}\) is suppressed in the wave action equation, yielding a more uniform \(h_{s}\) in the scattering region (see panel (d) in Figure 9). We then recall that in the MA solution, the insignificance of the \(\partial_{k}\mathcal{A}\) term is due to the approximation of a single dominant wavenumber in the steps leading to (38). We thus return to the approximation (20) of the wave-action transport equation in the scattering region and relax the approximation of replacing \(k\) by \(k_{\star}\). We focus on the \(\theta\)-integrated action \[\mathcal{B}(\mathbf{x},k)=\int\mathcal{A}(\mathbf{x},\mathbf{k})\,\mathrm{d}\theta. \tag{C.1}\] It satisfies \[c(k)\,\partial_{x}\mathcal{B}-U_{x}(\mathbf{x})k\,\partial_{k}\mathcal{B}=0, \tag{C.2}\] where \(U_{x}=\partial U/\partial x\). Noting that \(c(k)=g^{1/2}k^{-1/2}/2\), we solve this equation using the method of characteristics to find \[\mathcal{B}(\mathbf{x},k)=\mathcal{B}_{\star}\left(\left(k^{-1/2}-g^{-1/2}U(\mathbf{x})\right)^{-2}\right). \tag{C.3}\] The significant wave height is deduced by integration as \[H_{s}(\mathbf{x})=\left(\frac{16}{g^{1/2}}\int\mathcal{B}_{\star}\left(\left(k^{-1/2}-g^{-1/2}U(\mathbf{x})\right)^{-2}\right)k^{3/2}\,\mathrm{d}k\right)^{1/2}. \tag{C.4}\] We now change the integration variable, taking advantage of the localisation of \(\mathcal{B}_{\star}(k)\) to ignore the corresponding change in the lower limit of integration, and obtain \[H_{s}(\mathbf{x}) =\left(\frac{16}{g^{1/2}}\int\mathcal{B}_{\star}(k)\left(k^{-1/2}+g^{-1/2}U(\mathbf{x})\right)^{-6}k^{-3/2}\,\mathrm{d}k\right)^{1/2}\] \[=\left(\frac{16}{g^{1/2}}\int\mathcal{B}_{\star}(k)k^{3/2}\left(1+k^{1/2}g^{-1/2}U(\mathbf{x})\right)^{-6}\,\mathrm{d}k\right)^{1/2}\] \[=\left(\frac{16}{g^{1/2}}\int\mathcal{B}_{\star}(k)k^{3/2}\left(1+\frac{U(\mathbf{x})}{2c(k)}\right)^{-6}\,\mathrm{d}k\right)^{1/2}. \tag{C.5}\] At this point, we can approximate \(c(k)\) by \(c_{\star}\) in the small, \(O(\varepsilon)\) term \(U(\mathbf{x})/(2c(k))\) and use two binomial expansions to obtain \[H_{s}(\mathbf{x})\approx H_{s\star}\left(1-\frac{3U(\mathbf{x})}{2c_{\star}}\right). \tag{C.6}\] We emphasise the heuristic nature of this approximation (MA\({}^{+}\)), which is formally no more accurate than the MA approximation \(H_{s}(\mathbf{x})=H_{s\star}\) since it neglects some, though not all, \(O(\delta)\) terms. Nonetheless, it captures most of the significant wave height anomaly close to the Gaussian vortex, as figure 9 demonstrates under parameters \(s=40\) and \(U_{m}=0.8\) m s\({}^{-1}\).
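The chain of approximations (C.4)-(C.6) is easy to verify numerically. The sketch below (ours) uses a narrow Gaussian for \(\mathcal{B}_{\star}(k)\), with a spectral width that is our own choice, and compares the exact integrand of (C.5) with the binomial result (C.6):

```python
import numpy as np

g, sigma_star = 9.81, 0.61           # m/s^2, rad/s
k_star = sigma_star**2 / g           # deep-water peak wavenumber
c_star = 0.5 * np.sqrt(g / k_star)   # group speed, ~8 m/s
dk = 0.1 * k_star                    # narrow spectral width (assumed)

k = np.linspace(k_star - 5 * dk, k_star + 5 * dk, 20001)
B = np.exp(-((k - k_star) ** 2) / (2 * dk**2))   # B_*(k), unnormalized
c = 0.5 * np.sqrt(g / k)                          # group speed c(k)

def hs_ratio(U):
    # H_s(U)/H_s(0) from the last line of (C.5).
    num = np.sum(B * k**1.5 * (1 + U / (2 * c)) ** (-6))
    return np.sqrt(num / np.sum(B * k**1.5))

for U in (-0.8, 0.8):   # opposing and following current (m/s)
    print(f"U={U:+.1f} m/s  exact={hs_ratio(U):.4f}  "
          f"binomial (C.6)={1 - 3 * U / (2 * c_star):.4f}")
```

For \(U>0\) (a following current) the ratio falls below unity, while an opposing current raises \(H_{s}\), as expected physically.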
## Appendix D Caustics for the Gaussian vortex

In the Gaussian vortex example, we can derive the locations of the caustics in the \((x,y)\) plane analytically. Using expression (4.3) for \(\Delta(y)\) and introducing the functions \[w(x,y)\stackrel{{\mathrm{def}}}{{=}}-(y-x\theta)^{2}/r_{v}^{2}\] (D.1) and \[q(x)\stackrel{{\mathrm{def}}}{{=}}-2\pi r_{v}^{4}c_{\star}^{2}/(x^{2}\kappa^{2}),\] (D.2) we can write equations (5.6) defining the caustics as \[\theta-\frac{\kappa}{\sqrt{2\pi}r_{v}c_{\star}}\mathrm{e}^{w/2}=0\] (D.3) and \[w\mathrm{e}^{w}=q.\] (D.4) Eq. (D.4) relates \(w\) to \(q\), and takes the standard form defining the Lambert \(W\)-functions (see the NIST DLMF, Eq. 4.13.1). This equation has two branches of solutions \(w=W_{i}(q)\), \(i=0\), \(-1\), when \(0<-q<\mathrm{e}^{-1}\) and no solutions when \(-q>\mathrm{e}^{-1}\) (\(q<0\) by definition (D.2)). The two branches meet at \(q=-\mathrm{e}^{-1}\), which corresponds to \[x=x_{c}\stackrel{{\mathrm{def}}}{{=}}\sqrt{2\pi\mathrm{e}}\,r_{v}^{2}c_{\star}/\kappa.\] (D.5) Physically, the two branches \(w=W_{i}(q)\) correspond to two caustic lines in the \((x,y)\) plane that emanate from a cusp point with \(x=x_{c}\). The equation of the caustics is found using (D.1) and (D.3) as \[y=x\frac{\kappa}{\sqrt{2\pi}r_{v}c_{\star}}\mathrm{e}^{W_{i}(q(x))/2}+\sqrt{-W_{i}(q(x))}r_{v},\quad x\geqslant x_{c}.\] (D.6) The cusp point is at \((x,y)=(x_{c},2r_{v})\). The asymptotic form of the caustics for \(x\to\infty\) is readily obtained by noting that \(q(x)\to 0^{-}\) as \(x\to\infty\) and then that \(W_{0}(q)\to 0\) and \(W_{-1}(q)\sim\ln(-q)\). Thus the \(i=0\) caustic asymptotes to a straight line and the \(i=-1\) caustic to \(y\sim r_{v}(2\ln x)^{1/2}\).

Figure 9: Significant wave height anomaly \(h_{s}\) computed from WW3 (a) and MA (b) as in the main text (same as Figure 3, fourth row); Panel (d) shows \(h_{s}\) from the WW3\({}^{-}\) run, where the term proportional to \(\partial_{k}\mathcal{A}\) is switched off. Panel (e) shows the MA\({}^{+}\) solution as in (C.6). Panel (c) shows the difference between (a) and (b), and panel (f) shows the difference between (d) and (e). All panels have the same colorbar.
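Equations (D.5)-(D.6) can be evaluated directly with standard Lambert-\(W\) implementations. A sketch (ours; the value of \(\kappa\) is illustrative, chosen so that the cusp falls near \(x_{c}\approx 3r_{v}\), as quoted in §5.2):

```python
import numpy as np
from scipy.special import lambertw

r_v, c_star = 50e3, 8.0   # m, m/s
kappa = 5.6e5             # m^2/s, illustrative circulation

x_c = np.sqrt(2 * np.pi * np.e) * r_v**2 * c_star / kappa   # cusp location, (D.5)
print(f"x_c = {x_c / r_v:.2f} r_v")

x = np.linspace(1.0001 * x_c, 10 * r_v, 500)
q = -2 * np.pi * r_v**4 * c_star**2 / (x**2 * kappa**2)     # (D.2)
for branch in (0, -1):
    w = lambertw(q, branch).real                            # solves (D.4)
    y = (x * kappa / (np.sqrt(2 * np.pi) * r_v * c_star) * np.exp(w / 2)
         + np.sqrt(-w) * r_v)                               # caustic curve, (D.6)
    print(f"branch i={branch:+d}: y(x -> x_c) = {y[0] / r_v:.2f} r_v")  # both -> 2 r_v
```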
2307.04178
Shock excitation of H$_2$ in the James Webb Space Telescope era
(Abridged) H2 is the most abundant molecule in the Universe. Thanks to its widely spaced energy levels, it predominantly lights up in warm gas, T > 100 K, such as shocked regions, and it is one of the key targets of JWST observations. These include shocks from protostellar outflows, all the way up to starburst galaxies and AGN. Shock models are able to simulate H2 emission. We aim to explore H2 excitation using such models, and to test over which parameter space distinct signatures are produced in H2 emission. We present simulated H2 emission using the Paris-Durham shock code over an extensive grid of 14,000 plane-parallel stationary shock models, a large subset of which are exposed to an external UV radiation field. The grid samples 6 input parameters: preshock density, shock velocity, transverse magnetic field strength, UV radiation field strength, cosmic-ray-ionization rate, and PAH abundance. Physical quantities, such as temperature, density, and width, have been extracted along with H2 integrated line intensities. The strength of the transverse magnetic field, set by the scaling factor, b, plays a key role in the excitation of H2. At low values of b (<~ 0.3, J-type shocks), H2 excitation is dominated by vibrationally excited lines; at higher values (b >~ 1, C-type shocks), rotational lines dominate the spectrum for shocks with an external radiation field comparable to (or lower than) the solar neighborhood. Shocks with b >= 1 can be spatially resolved with JWST for nearby objects. When the input kinetic energy flux increases, the excitation and integrated intensity of H2 increases similarly. An external UV field mainly serves to increase the excitation, particularly for shocks where the input radiation energy is comparable to the input kinetic energy flux. These results provide an overview of the energetic reprocessing of input kinetic energy flux and the resulting H2 line emission.
L. E. Kristensen, B. Godard, P. Guillard, A. Gusdorf, G. Pineau des Forets
2023-07-09T14:02:17Z
http://arxiv.org/abs/2307.04178v1
# Shock excitation of H\({}_{2}\) in the _James Webb_ Space Telescope era+ ###### Abstract Context: Molecular hydrogen, H\({}_{2}\), is the most abundant molecule in the Universe. Thanks to its widely spaced energy levels, it predominantly lights up in warm gas, \(T\gtrsim 10^{2}\) K, such as shocked regions, whether or not they are externally irradiated by interstellar UV photons, and it is one of the prime targets of _James Webb_ Space Telescope (JWST) observations. These may include shocks from protostellar outflows, supernova remnants impinging on molecular clouds, all the way up to starburst galaxies and active galactic nuclei. Aims: Sophisticated shock models are able to simulate H\({}_{2}\) emission from such shocked regions. We aim to explore H\({}_{2}\) excitation using shock models, and to test over which parameter space distinct signatures are produced in H\({}_{2}\) emission. Methods: We here present simulated H\({}_{2}\) emission using the Paris-Durham shock code over an extensive grid of \(\sim\) 14,000 plane-parallel stationary shock models, a large subset of which are exposed to a semi-isotropic external UV radiation field. The grid samples six input parameters: the preshock density, shock velocity, transverse magnetic field strength, UV radiation field strength, the cosmic-ray-ionization rate, and the abundance of polycyclic aromatic hydrocarbons, PAHs. Physical quantities resulting from our self-consistent calculations, such as temperature, density, and width, have been extracted along with H\({}_{2}\) integrated line intensities. These simulations and results are publicly available on the Interstellar Medium Services platform. Results: The strength of the transverse magnetic field, as quantified by the magnetic scaling factor, \(b\), plays a key role in the excitation of H\({}_{2}\). At low values of \(b\) (\(\lesssim 0.3\), J-type shocks), H\({}_{2}\) excitation is dominated by vibrationally excited lines; whereas, at higher values (\(b\gtrsim 1\), C-type shocks), rotational lines dominate the spectrum for shocks with an external radiation field comparable to (or lower than) that of the solar neighborhood. Shocks with \(b\geq 1\) can potentially be spatially resolved with JWST for nearby objects. H\({}_{2}\) is typically the dominant coolant at lower densities (\(\lesssim 10^{4}\) cm\({}^{-3}\)); at higher densities, other molecules such as CO, OH, and H\({}_{2}\)O take over at velocities \(\lesssim 20\) km s\({}^{-1}\), and atoms, for example, H, O, and S, dominate at higher velocities. Together, the velocity and density set the input kinetic energy flux. When this increases, the excitation and integrated intensity of H\({}_{2}\) increases similarly. An external UV field mainly serves to increase the excitation, particularly for shocks where the input radiation energy is comparable to the input kinetic energy flux. These results provide an overview of the energetic reprocessing of input kinetic energy flux and the resulting H\({}_{2}\) line emission. Conclusions: ## 1 Introduction Shocks are inherently out-of-equilibrium, time-dependent phenomena that permeate space. They appear over a wide range of scales, from accretion onto stars or protoplanetary disks, winds and jets driven by accreting (proto)stars, planetary nebulae, and supernova remnants, to starburst galaxies, jets from active galactic nuclei (AGN), and galaxy-galaxy collisions (physical sizes ranging from subastronomical-unit to kiloparsec scales; e.g., Bally 2016; Wright et al. 1993; Mouri 1994; Goldader et al.
1997; Appleton et al. 2006). Common to all these phenomena is that the input kinetic energy flux dissipated by the shock accelerates, heats, and compresses the medium. When the medium cools down, radiation is emitted, which we observe. To understand the physical origin of the emission (e.g., preshock density, shock velocity) and the energetic processing taking place in shocks, it is thus necessary to reverse engineer the observed light. Doing so requires models.

One of the often-used tracers of shocks is molecular hydrogen, H\({}_{2}\) (e.g., Hollenbach & McKee 1989; Kaufman & Neufeld 1996; Rosenthal et al. 2000). This is the most abundant molecule in the interstellar medium, by some four orders of magnitude over CO and H\({}_{2}\)O. The molecule is the lightest, and so it has the most widely spaced rotational levels (\(J=1\) has \(E_{\rm up}/k_{\rm B}=170\) K and \(J=2\) has \(E_{\rm up}/k_{\rm B}=510\) K). As such, it is predominantly excited in warm (\(T\gtrsim 10^{2}\) K) and hot (\(T\gtrsim 10^{3}\) K) molecular gas. The molecule has no permanent dipole moment, and only forbidden electric quadrupole transitions occur, albeit at low probability. H\({}_{2}\) emission is nevertheless bright because of the high abundance of the molecule.

H\({}_{2}\) emission is readily observed from the ground, particularly in the higher-excited rovibrational transitions at near-infrared wavelengths (e.g., Froebrich et al., 2015). The brightest of these is typically the \(v=\) 1-0 S(1) line at 2.12 \(\mu\)m. A few pure rotational lines are also accessible from the ground, and the line profiles may even be velocity resolved on telescopes such as the Very Large Telescope (VLT, Santangelo et al., 2014). However, it is necessary to go above the atmosphere to observe the lower-excited pure rotational transitions of H\({}_{2}\). Space-based telescopes such as the Infrared Space Observatory (ISO) and the _Spitzer_ Space Telescope (_Spitzer_) both observed these transitions toward numerous shocked regions (e.g., Rosenthal et al., 2000; Neufeld et al., 2006; Valentijn & van der Werf, 1999; Lutz et al., 2003; Verma et al., 2005), as did the Stratospheric Observatory For Infrared Astronomy (SOFIA, Reach et al., 2019; Neufeld et al., 2019). Now the _James Webb_ Space Telescope (JWST) is doing the same (e.g., Garcia-Bernete et al., 2022; Berne et al., 2022; Yang et al., 2022; Appleton et al., 2023; Alvarez-Marquez et al., 2022). In particular, the MIRI instrument is observing the rotational H\({}_{2}\) transitions with a gain in sensitivity and spatial resolution of two orders of magnitude compared with _Spitzer_, and an increase in spectral resolution of a factor of five (e.g., Fig. 7 and 8 of Rigby et al., 2023). Similar improvements are reached with the NIRSpec instrument compared with the VLT-SINFONI integral-field unit, allowing deep observations of the rovibrational lines of H\({}_{2}\). The wavelength coverages of NIRSpec, NIRCam, and MIRI are illustrated in Fig. 1, overlaid on a simulated H\({}_{2}\) spectrum.

Planning and interpreting the abovementioned observations is often done with models. With models, it is possible to constrain, for example, the shock velocity and preshock density, which together give the input kinetic energy flux, 1/2 \(\rho\)\(v_{\rm s}^{3}\), where \(\rho\) is the mass density and \(v_{\rm s}\) is the shock velocity.
In molecular shocks, a comparison reveals that up to 50% of the input energy is radiated away in H\({}_{2}\) emission (Kaufman & Neufeld, 1996), depending on shock conditions, making H\({}_{2}\) the dominant coolant in these shocks. _Spitzer_ in particular opened up the characterization of the pure rotational H\({}_{2}\) lines. Observations and subsequent modeling revealed that most H\({}_{2}\) emission could be reproduced by shock models (e.g., in protostellar outflows; Maret et al., 2009; Dionatos et al., 2010). However, when additional constraints, such as the H/H\({}_{2}\) ratio and the cooling length, are included for protostellar outflows, a single shock model no longer reproduces the observations (Nisini et al., 2010). Instead, the observational beam likely captures several shocks, or shock geometries more complex than 1D, which is to be expected; this is the case not just for protostellar outflows, but also for observations of shocks in the diffuse gas of starburst and colliding galaxies (Kristensen et al., 2008; Gustafsson et al., 2010; Lesaffre et al., 2013; Tram et al., 2018; Lehmann et al., 2022).

Irrespective of the specific science case, the first step in comparing observations to models is to have the models available. The Paris-Durham shock code (e.g., Godard et al., 2019, and references therein) has been developed and maintained for more than 35 years (Flower et al., 1985). The code can find either jump (J-type shocks) or continuous (C-type shocks) solutions depending on the input physical parameters. Recent developments include the treatment of an external UV radiation field (Godard et al., 2019), and self-irradiation in high-velocity shocks (\(v_{\rm s}\gtrsim 30\) km s\({}^{-1}\); Lehmann et al., 2022). Here we present the results of running a large grid of simulations of (externally irradiated) shocks with the goal of exploring how the input energy flux (kinetic and radiative) is reprocessed and ultimately results in H\({}_{2}\) emission. These model predictions can be used directly to interpret, for example, JWST observations of shock emission.

The paper is organized as follows. Section 2 describes the shock model and the model grid, with a particular emphasis on H\({}_{2}\) excitation and emission. The section also describes which physical quantities were extracted from the models, and the methodology applied. Section 3 describes the results and provides a discussion of these results. Finally, the main points are summarized in Sect. 4.

## 2 Model and grid description

The current version of the multifluid shock code is extensively described in Godard et al. (2019) and references therein, and only the main relevant points will be described here. These points particularly relate to H\({}_{2}\) emission and other observable diagnostics, but also to how the initial shock conditions are calculated. The code is publicly available1, and the entire grid presented in this paper is also available on the ISM platform2. In Appendix A we provide an introduction to this platform and demonstrate how it can be used.

Footnote 1: [http://ism.obspm.fr/shock.html](http://ism.obspm.fr/shock.html)

Footnote 2: [https://app.ism.obspm.fr/ismdb/](https://app.ism.obspm.fr/ismdb/)

### Initial conditions

The main focus of this paper is on H\({}_{2}\), and so the chemistry considered in this paper and, more importantly, in the models run, is a gas-phase-only chemistry. That is, grain adsorption and desorption processes are not included.
The only exceptions are the formation of H\({}_{2}\) on grains, and grain erosion for the release of elemental Si, Fe, etc. into the gas phase. Photochemistry is included in all steps of the calculation; readers can refer to the text below for more details. Our assumption is that the initial conditions are in equilibrium, that is, thermal and chemical equilibrium with or without an incident radiation field. Running a shock model therefore requires multiple steps, all done using the Paris-Durham code (see Godard et al., 2019, for details). This code simulates steady-state gas equilibrium, photon-dominated regions (PDRs), or shocks. These steps are illustrated in Fig. 2. First, a chemical steady-state calculation is run with the given density and radiation field.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & Fractional & Gas & & Grain \\ Element & abundance & phase & PAHs & cores \\ \hline H & 1.00 & 1.00 & 1.8(–5) & \\ He & 1.00(–1) & 1.00(–1) & & \\ C & 3.55(–4) & 1.38(–4) & 5.4(–5) & 1.63(–4) \\ N & 7.94(–5) & 7.94(–5) & & \\ O & 4.42(–4) & 3.02(–4) & & 1.40(–4) \\ Mg & 3.70(–5) & & & 3.70(–5) \\ Si & 3.67(–5) & 3.00(–6) & & 3.37(–5) \\ S & 1.86(–5) & 1.86(–5) & & \\ Fe & 3.23(–5) & 1.50(–8) & & 3.23(–5) \\ \hline \end{tabular} \end{table} Table 1: Initial fractional elemental abundances, \(n_{\rm x}/n_{\rm H}\).
Depending on the initial conditions, the code either finds a Jump (J-type) solution or a Continuous (C-type) solution (see below, Sect. 3.1 for more details). Throughout this paper, we use two shock models to illustrate differences when changing \(b\) from 0.1 to 1.0; these are referred to as model A and B (Table 3). For the given set of input parameters, model A gives rise to a J-type shock, and model B a C-type shock. Footnote 3: The transverse magnetic field strength scales with the density as \(B=b\times\sqrt{n_{\rm H}}\) (cm\({}^{-3}\)) \(\mu\)G, where \(b\) is a scaling factor. ### Molecular hydrogen Collisional excitation and de-excitation of H\({}_{2}\) is calculated for collisions with H, H\({}_{2}\), and He. The collisional rate coefficients for H\({}_{2}\)-H\({}_{2}\) collisions are adopted from Flower & Roueff (1998a) and for H\({}_{2}\)-He collisions from Flower et al. (1998). In the case \begin{table} \begin{tabular}{l c} \hline \hline Parameter & Values \\ \hline \(n_{\rm H_{2}}\)(cm\({}^{-3}\)) & 10\({}^{\circ}\), 10\({}^{\circ}\), 10\({}^{\circ}\), 10\({}^{\circ}\), 10\({}^{\circ}\), 10\({}^{\circ}\), 10\({}^{\circ}\), 10\({}^{\circ}\) \\ \(b^{(b)}\) & 0.1, 0.3, 1.0, 3.0, 10.0 \\ \(v_{\rm s}\) (km s\({}^{-1}\)), \(b\)=0.1 & 2, 3, 4, 5, 10, 15, 20, 25, 30 \\ \(v_{\rm s}\) (km s\({}^{-1}\)), \(b\)=0.3 & 2, 3, 4, 5, 10, 15, 20, 25, 30 \\ \(v_{\rm s}\) (km s\({}^{-1}\)), \(b\)=10.0 & 2, 3, 4, 5, 10, 15, 20, 25, 30 \\ \(v_{\rm s}\) (km s\({}^{-1}\)), \(b\)=3.0 & 10, 20, 30, 40, 50, 60 \\ \(v_{\rm s}\) (km s\({}^{-1}\)), \(b\)=10.0 & 20, 40, 60, 80, 90 \\ \(G_{0}\)(\({}^{-1}\)) & 0, 10\({}^{-1}\), 10\({}^{\circ}\), 10\({}^{\circ}\), 10\({}^{\circ}\), 10\({}^{\circ}\) \\ \(\xi_{\rm H2}\)(\({}^{4}\)) (s\({}^{-1}\)) & 10\({}^{-17}\), 10\({}^{-16}\), 10\({}^{-15}\) \\ \(X\)(PAH) & 10\({}^{-8}\), 10\({}^{-7}\), 10\({}^{-6}\) \\ \hline \end{tabular} 1 \end{table} Table 2: Shock grid parameters. Figure 1: Synthetic H\({}_{2}\) spectrum produced with a shock model with velocity 30 km s\({}^{-1}\), preshock density 10\({}^{4}\) cm\({}^{-3}\), a transverse magnetic field strength of 10 \(\mu\)G, and no external UV radiation. Wavelength ranges of the NIRSpec and MIRI spectrographs, as well as the wide-, medium-, and narrow-band filters for NIRCam and the MIRI filters on JWST are indicated as black and gray horizontal bars. The colors are for lines with different vibrational upper levels. The resolving power is assumed to be uniform across the wavelength range at \(\lambda\)/\(\Delta\lambda\)=2500. Figure 2: Illustration of the three steps required for running an externally irradiated shock model. The shock model shown here has a preshock density of \(10^{4}\) cm\({}^{-3}\), shock velocity of 20 km s\({}^{-1}\), and it is irradiated by a UV radiation field with \(G_{0}\) of 10. The strength of the transverse magnetic field is 100 \(\mu\)G. First a chemical steady-state model is run, and the thermal, chemical, and excitation output is used as input for a PDR model. The output of the PDR model is then used as input for the shock model. The top row shows the temperature evolution across the model run, the middle row the abundances of H and H\({}_{2}\), while the bottom row shows normalized populations of the first five rotational levels of H\({}_{2}\). The “bump” in the temperature profile at \(t\sim 10^{2}\) years in the chemical steady-state model comes from reformation of a small fraction of H\({}_{2}\) on the grain, and the release of its binding energy. 
In the case of H\({}_{2}\)-H collisions, the rates for the first 49 levels of H\({}_{2}\) are from Flower (1997) and Flower & Roueff (1998b), where the rates have been calculated using a full quantum mechanical approach. For the remaining levels, the rates from Martin & Mandy (1995) are used; they were calculated using a quasi-classical approach. The rates for reactive collisions of H\({}_{2}\) with H are from Le Bourlot et al. (1999). The number of levels has been set to 150 here, and the highest level is \(v\) = 8, \(J\) = 3 (\(E/k_{\rm B}=39,000\) K). The model assumes that there are no levels between the user-set value and the dissociation level. This may be important when calculating the dissociation rate of H\({}_{2}\), since molecules that are already excited have internal energies that are closer to the dissociation limit, and thus require less energy to dissociate. For the models run here, we find that there is no significant difference in H\({}_{2}\) emission when increasing the number of levels.

Depending on the initial conditions, H\({}_{2}\) may dissociate in the shock through collisions. As the post-shock gas cools, H\({}_{2}\) reforms on the grains (Appendix A of Flower & Pineau des Forets 2013) and it is necessary to account for the bond energy released (4.5 eV \(\sim 5.1\times 10^{4}\) K). We assume that approximately one third of the energy goes to internal energy of the molecule. This internal energy distribution follows a Boltzmann distribution with a temperature corresponding to \(\sim\) 17,000 K. The remaining energy is equally split between kinetic energy of the newly formed H\({}_{2}\) molecule, and heating of the grain.

The H\({}_{2}\) level populations are used for calculating the local H\({}_{2}\) line emissivities. This is done under the assumption of optically thin emission, which typically applies to H\({}_{2}\) emission because of its lack of a permanent dipole moment. Of these lines, 1000 are output explicitly and stored as emissivity profiles in this grid. About 900 of these H\({}_{2}\) lines are covered by the JWST instruments MIRI and NIRSpec. These two instruments together cover the wavelength range of 0.6 - 28 \(\mu\)m; that is, the \(v\) = 0-0 S(0) ground-state line at 28.3 \(\mu\)m is not covered (Fig. 1).
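As a quick illustration of this coverage, the sketch below checks a handful of well-known bright H\({}_{2}\) lines against approximate band edges; the wavelengths are standard values, and the edges (NIRSpec \(\approx\) 0.6-5.3 \(\mu\)m, MIRI/MRS \(\approx\) 4.9-27.9 \(\mu\)m) are our assumed round numbers, not values taken from this grid:

```python
# Rest wavelengths (micron) of commonly observed H2 lines (standard values).
lines = {
    "0-0 S(0)": 28.22, "0-0 S(1)": 17.03, "0-0 S(2)": 12.28,
    "0-0 S(3)": 9.66, "0-0 S(7)": 5.51, "1-0 S(1)": 2.12,
}
# Approximate instrument coverage (assumed band edges in micron).
bands = {"NIRSpec": (0.6, 5.3), "MIRI/MRS": (4.9, 27.9)}

for name, wl in lines.items():
    covered = [inst for inst, (lo, hi) in bands.items() if lo <= wl <= hi]
    print(f"{name}  {wl:6.2f} um -> {', '.join(covered) or 'not covered'}")
# Only the 0-0 S(0) ground-state line falls beyond the red edge of MIRI.
```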
### Grid

The total set of grid parameters is presented in Table 2; covering this range of parameter space resulted in \(\sim\) 14,000 simulations in total. Each simulation produces a number of outputs that are all stored in human-readable ASCII files and an HDF5 file for easy extraction4. These include physical properties of the shock (e.g., temperature, density, velocity) as a function of distance and time through the shock, chemical properties (e.g., local densities, charge state, column densities), and the excitation of H\({}_{2}\) (level populations and local emissivities). In this case, the time is calculated as the neutral flow time, \(t_{\rm n}=\int\mathrm{d}z/v_{\rm n}\). In total, more than 2600 quantities are stored as profiles through each shock, and 1400 quantities are stored as integrated values.

Footnote 4: The full model outputs are provided on the ISM platform: [https://app.ism.obspm.fr/ismdb/](https://app.ism.obspm.fr/ismdb/)

The model integrates the gas state far downstream in order to ensure that a steady-state solution is contained within the simulation. Therefore, special care needs to be taken when extracting integrated quantities such as column densities or line intensities. We adopt a criterion for the size of the shock similar to that of Godard et al. (2019), based on radiative energy dissipation: we set the limit at the point where 99.9% of the total radiation has been emitted (see Appendix B). Specifically, this means that the size, \(z_{\rm s}\), is defined as: \[\frac{\Upsilon(z_{\rm s})-\Upsilon(0)}{\Upsilon(\infty)-\Upsilon(0)}=99.9\%\,, \tag{1}\] where \(\Upsilon\) is the sum of the kinetic, magnetic, and thermal energy fluxes.
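In practice, criterion (1) amounts to finding the first point where the cumulative change in \(\Upsilon\) reaches 99.9% of its asymptotic value. A minimal sketch (ours, with a synthetic \(\Upsilon\) profile standing in for an actual model output):

```python
import numpy as np

def shock_size(z, upsilon, fraction=0.999):
    # First z where (Upsilon(z)-Upsilon(0))/(Upsilon(inf)-Upsilon(0)) >= fraction, Eq. (1).
    frac = (upsilon - upsilon[0]) / (upsilon[-1] - upsilon[0])
    return z[np.searchsorted(frac, fraction)]

# Synthetic example: energy flux dissipated over a characteristic scale of 1e15 cm.
z = np.linspace(0.0, 1e17, 200001)        # cm
upsilon = np.exp(-z / 1e15)               # decaying total energy flux (toy profile)
print(f"z_s = {shock_size(z, upsilon):.2e} cm")   # ~6.9e15 cm = ln(1000) x 1e15 cm
```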
If the field is not perpendicular to the direction of motion, the compression will lead to a change in field geometry, as described and discussed in Lehmann & Wardle (2016). These effects are not included here. **Self-irradiation.** The model is best suited for molecular shocks. In shocks where H\({}_{2}\) is dissociated and atomic H is excited, the shocks become self-irradiated. While this self-irradiation can be solved iteratively (Lehmann et al. 2020, 2022), it is not included in the present version of the grid. This limits J-type shocks to \(v_{\rm s}\lesssim 30\) km s\({}^{-1}\). **Stationary shocks.** All the shocks in this paper are stationary shocks. This implies there needs to be enough time for the stationary structure to fully develop. While the code can mimic non-stationary shocks, an additional free parameter, the age of the shock, is needed, and it is deemed beyond the scope of this work to explore the effects of that parameter (e.g., Lesaffre et al. 2004a,b; Gusdorf et al. 2008). **Grain chemistry.** Grain-grain interactions are omitted in this grid. For conditions where the velocity is below \(\sim 25\) km s\({}^{-1}\) and the density is below \(\sim 10^{5}\) cm\({}^{-3}\), this assumption is likely valid (Guillet et al. 2009, 2011). At larger velocities or densities, grains may interact, leading to grain evaporation and fragmentation which changes the size distribution of grains. Finally, in this grid we do not include ice mantles on the grains. ## 3 Results and discussion The shock has an initial kinetic energy flux of 1/2 \(\rho\)\(v_{\rm s}^{3}\), where \(\rho=1.4\)\(n_{\rm H}\)\(m_{\rm H}\) is the mass density; most of this energy is radiated away in the shock. Figure 3 shows how the energy is lost in shocks with \(b=0.1\), velocities of 20 and 30 km s\({}^{-1}\), and densities of \(10^{4}\) and \(10^{6}\) cm\({}^{-3}\). The pie charts are sorted by initial kinetic energy flux going from left to right, and top to bottom. The H\({}_{2}\) fraction decreases with increasing velocity and density because of dissociation. H\({}_{2}\) then reforms on the grains in the postshock gas introducing a heating term which counteracts the cooling of H\({}_{2}\). This is visible in the pie charts as the fraction of H\({}_{2}\) emission decreases monotonically with input kinetic energy flux, from 75% to 0.5%. Figure 3: Energetic reprocessing for four shocks with \(b=0.1\). The pie charts show the percentage of energy lost relative to the input kinetic energy flux. The kinetic energy flux is primarily converted to heat, which goes to exciting the atoms and molecules that then radiate the energy away. This radiation is either from H\({}_{2}\) (rotational and vibrational emission), other molecules (primarily CO, OH and H\({}_{2}\)O), or atoms (primarily H, O, and S). Some kinetic energy goes into compressing the magnetic field (“mag”), dissociating H\({}_{2}\) collisionally (“H\({}_{2}\) chem”), or atoms/molecules thermalizing with grains (“grain”). The percentages are shown in each pie slice, and the input shock parameters inside the pie. The input parameters all result in the shocks being J-type shocks, and model A is marked. Figure 4 is similar to Fig. 3, but for a stronger magnetic field (\(b=1.0\)), i.e., the input kinetic energy fluxes are the same as above. Increasing \(b\) to 1 has the consequence that the two 20-km S\({}^{-1}\) shocks become C-type shocks; the 30-km s\({}^{-1}\) shocks remain J-type shocks. 
Figure 4 is similar to Fig. 3, but for a stronger magnetic field (\(b=1.0\)); that is, the input kinetic energy fluxes are the same as above. Increasing \(b\) to 1 has the consequence that the two 20 km s\({}^{-1}\) shocks become C-type shocks; the 30 km s\({}^{-1}\) shocks remain J-type shocks. The J-type shocks are dissociative, and the H\({}_{2}\) cooling fraction thus decreases significantly, as also illustrated in Fig. 3. The distribution of energy flux into emission lines has been described previously (e.g., Kaufman & Neufeld 1996; Flower & Pineau des Forets 2010, 2015; Lehmann et al. 2020), and a comparison of the H\({}_{2}\) cooling fractions of the total input kinetic energy flux reveals broad agreement between different models and previous versions of the Paris-Durham model.

Figure 4: As Fig. 3 but for \(b=1.0\). For this change in \(b\), the shock models represented in the two left-most pie charts are C-type, while the two right pie charts are J-type shocks; model B is marked.

These pie charts provide a global view of the energetic reprocessing in these shocks. In the following, the role of the different input parameters on the energetic reprocessing will be discussed in more detail, with a specific emphasis on H\({}_{2}\) emission.

### Magnetic field

The strength of the transverse magnetic field, \(B\), sets the ion-magnetosonic speed, \(c_{\rm ims}\), together with the ion mass density, \(\rho_{\rm i}\): \[c_{\rm ims}=\left(c_{\rm s}^{2}+B^{2}/(4\pi\rho_{\rm i})\right)^{1/2}, \tag{2}\] where \(c_{\rm s}\) is the sound speed. For \(v_{\rm s}<c_{\rm ims}\), the ionized and neutral fluids are decoupled and a magnetic precursor is present (Mullan 1971; Draine 1980); the code treats these multiple fluids self-consistently. For \(v_{\rm s}>c_{\rm ims}\), the ionized and neutral fluids are coupled, and there is no magnetic precursor (Fig. 5). We refer to Sect. 2.1 of Lehmann et al. (2022) for a more in-depth description of the differences between J- and C-type shocks. Figure 5 shows where the different shock types occur as a function of \(b\) and \(v_{\rm s}\) for a density of \(10^{4}\) cm\({}^{-3}\), while Fig. 6 shows the shock type for a part of the grid presented in this paper. For low values of \(b\) (\(\lesssim\)0.3), the resulting shocks are J-type shocks, while for \(b\gtrsim 1.0\) the resulting shocks are predominantly C-type shocks.

The effect of the magnetic precursor is that the input kinetic energy flux is deposited over a much larger spatial range (Fig. 5), resulting in lower peak temperatures when compared to shocks with the same input kinetic energy flux but no magnetic precursor. This naturally affects the excitation of H\({}_{2}\), as illustrated in Fig. 7 in the form of the ratio of total integrated intensity to initial kinetic energy flux. The H\({}_{2}\) excitation is illustrated for the two reference shocks (Table 3), both with the same input kinetic energy flux. The figure demonstrates that for both shocks, most of the kinetic energy is radiated away in H\({}_{2}\) emission (see Fig. 3 and 4); the difference in total H\({}_{2}\) integrated intensity between the two shocks is \(\sim 15\%\). However, the integrated intensity from model B (\(b\)=1.0) is dominated by pure rotational emission (\(>\)
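Equation (2) can be evaluated for the two reference models. The sketch below (ours) uses the field scaling of footnote 3; the sound speed and, in particular, the ion mass fraction (taken as 10\({}^{-4}\) of the total mass density) are illustrative assumptions, not grid outputs:

```python
import numpy as np

m_H = 1.6726e-24          # g
n_H, v_s = 1e4, 20e5      # preshock density (cm^-3) and velocity (cm/s), models A/B
c_s = 3e4                 # sound speed ~0.3 km/s in cold gas (assumed)
rho = 1.4 * n_H * m_H     # total mass density (g cm^-3)
rho_i = 1e-4 * rho        # ion mass density: ASSUMED mass fraction of 1e-4

for b in (0.1, 1.0):      # models A and B
    B = b * np.sqrt(n_H) * 1e-6                            # transverse field (G), footnote 3
    c_ims = np.sqrt(c_s**2 + B**2 / (4 * np.pi * rho_i))   # Eq. (2)
    kind = "C-type (magnetic precursor)" if v_s < c_ims else "J-type"
    print(f"b={b}: B = {B * 1e6:.0f} uG, c_ims = {c_ims / 1e5:.0f} km/s -> {kind}")
```

With these assumptions, \(c_{\rm ims}\approx 18\) km s\({}^{-1}\) for \(b=0.1\) and \(\approx 180\) km s\({}^{-1}\) for \(b=1\), bracketing \(v_{\rm s}=20\) km s\({}^{-1}\) and reproducing the J- and C-type classification of models A and B.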
2308.04925
Entanglement degradation as a tool to detect signatures of quantum gravity
We investigate entanglement degradation in the vicinity of a quantum corrected black hole. We consider a bipartite system (Alice-Rob) with Alice freely falling (radially) into the event horizon of a quantum corrected black hole and Rob being in the vicinity of the event horizon of the black hole. We consider a maximally entangled state (in the Fock basis) and start with the basic assumption that Rob is a uniformly accelerated observer. We then give a pedagogical analysis of the relation involving the Minkowski vacuum state and Rindler number states. Following the analogy given in Phys. Rev. D 82 (2010) 064006 (https://link.aps.org/doi/10.1103/PhysRevD.82.064006), we establish the relation between the Hartle-Hawking vacuum state and the Boulware and anti-Boulware number states from the Minkowski-Rindler relation. We then write down the quantum corrected black hole metric by making use of the near horizon approximation in an appropriate form. Next, we obtain the analytical forms of the logarithmic negativity and mutual information and plot them as a function of Rob's distance from the $r=0$ point. We observe that the entanglement degradation slows down due to the incorporation of quantum gravity corrections in the Schwarzschild black hole. This observation may lead to the identification of quantum gravity signatures in future generations of advanced observational scenarios. We can also interpret this effect as a noisy quantum channel with an operator sum representation of a completely positive and trace preserving (CPTP) map. We then finally obtain the entanglement fidelity using this operator sum representation.
Soham Sen, Arnab Mukherjee, Sunandan Gangopadhyay
2023-08-09T12:48:38Z
http://arxiv.org/abs/2308.04925v1
# Entanglement degradation as a tool to detect signatures of quantum gravity ###### Abstract We investigate entanglement degradation in the vicinity of a quantum corrected black hole. We consider a bipartite system (Alice-Rob) with Alice freely falling (radially) into the event horizon of a quantum corrected black hole and Rob being in the vicinity of the event horizon of the black hole. We consider a maximally entangled state (in the Fock basis) and start with the basic assumption that Rob is a uniformly accelerated observer. We then give a pedagogical analysis of the relation involving the Minkowski vacuum state and Rindler number states. Following the analogy given in Phys. Rev. D 82 (2010) 064006, we establish the relation between the Hartle-Hawking vacuum state and the Boulware and anti-Boulware number states from the Minkowski-Rindler relation. We then write down the quantum corrected black hole metric by making use of the near horizon approximation in an appropriate form. Next, we obtain the analytical forms of the logarithmic negativity and mutual information and plot them as a function of Rob's distance from the \(r=0\) point. We observe that the entanglement degradation slows down due to the incorporation of quantum gravity corrections in the Schwarzschild black hole. This observation may lead to the identification of quantum gravity signatures in future generations of advanced observational scenarios. We can also interpret this effect as a noisy quantum channel with an operator sum representation of a completely positive and trace preserving (CPTP) map. We then finally obtain the entanglement fidelity using this operator sum representation. ## I Introduction Our day to day classical information theory is restricted to a binary system where we can make use of 0 and 1 as measures of information stored or communicated. With the advent of quantum mechanics in the first quarter of the twentieth century, the idea of a quantum version of the classical information theory came as a byproduct of the quantum superposition principle. This new branch of physics was later named quantum information theory. The relativistic generalization of quantum information theory, which involves general relativity, quantum field theory and quantum information theory, is also known as relativistic quantum information theory. The study of quantum correlations from a noninertial perspective is a very interesting sector of relativistic quantum information theory [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. In several of these works, the case of an entangled bipartite system was investigated when one of the observers was uniformly accelerated. The idea was to transport the stationary state to the Rindler space in order to truly investigate the effect of acceleration. In all of these cases the entangled states were taken as Fock states and, instead of entanglement between spins, entanglement between number states was considered. In [4], the generic Alice-Rob picture in the Minkowski-Rindler background was transferred to the black hole picture for bosonic fields. This study was inadequate in the sense that the Rindler horizon and the event horizon of a Schwarzschild black hole are very different in nature. The Rindler horizon can only be perceived by an accelerated observer whereas the event horizon exists for all observers. To deal with this problem, in [20] a one-to-one correspondence was observed among different vacua from both the Minkowski and curved spacetimes.
The system consists of two observers, Alice and Rob. Alice is freely falling into the event horizon of a Schwarzschild black hole and Rob is at a fixed radial distance just outside the event horizon of the black hole. Both Alice and Rob are observing a bipartite quantum state and the state is maximally entangled for the freely falling observer. Rob sees a degradation of the state due to the Hawking effect. In their analysis it was shown that the most interesting entanglement behaviours are observed in the vicinity of the event horizon. In the black hole picture, when the observer is on the event horizon of the black hole, it imitates the infinite acceleration case in the Rindler spacetime. In our analysis, we shall consider Alice to be freely falling into the event horizon of a quantum corrected black hole and Rob to be at a fixed distance just outside the event horizon of the same. Our main motivation behind this analysis is to investigate the effects of quantum gravitational corrections on entanglement degradation. The line element of the quantum corrected black hole spacetime following from a renormalization group approach of gravity is given by [23] \[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}d\Omega^{2} \tag{1}\] where \[f(r)=1-\frac{2G(r)M}{r} \tag{2}\] with \[G(r)=\frac{G}{1+\frac{\tilde{\omega}G}{r^{2}}}. \tag{3}\] Throughout our analysis, we have used \(\hbar=c=1\). The metric structure, given above, originates from the well known "_asymptotic safety approach_" to quantum gravity. This formalism revolves around an effective average action and, by taking into consideration all loop effects, this effective action describes all gravitational phenomena [24; 25; 26; 27]. This action satisfies a renormalization group equation which results in a flow of Newton's gravitational constant as a function of the energy scale. Using this flow of Newton's gravitational constant, the metric in eq.(1) was obtained, where the constant \(\tilde{\omega}\) carries the quantum gravity corrections to the black hole geometry arising from this renormalization group approach. At first, we have used the near horizon approximation to cast any static spherically symmetric black hole metric in the well known Rindler form and, following the analysis in [20], we have then obtained three unique timelike Killing vectors. The positive frequency modes associated with these Killing vectors let one define three unique vacuum states (Hartle-Hawking, Boulware and anti-Boulware). Finally, one can obtain the relation between the Hartle-Hawking vacuum state and the Boulware - anti-Boulware Fock space basis. Using this relation, we then calculate the logarithmic negativity and mutual information for the reduced density matrix (where all the anti-Boulware states have been traced out). For the next part of our analysis, we have used the formalism in [22] and shown that the entanglement degradation due to the Hawking effect can be described via a quantum channel with a completely positive and trace preserving (CPTP) map. We then finally compute the entanglement fidelity to investigate how the quantum channel preserves the initial entanglement between the two parties of the bipartite state. It is important to note that we have considered only bosonic field modes in our analysis. The construction of the paper goes as follows. In section (II), we give a brief preview of the Alice-Rob system and obtain the relation between the Minkowski vacuum and the Rindler Fock state basis.
In section (III), we express a static spherically symmetric black hole in the Rindler form and obtain the analogy between several vacuum states. In section (IV), we obtain the analytical forms of the logarithmic negativity and mutual information for a quantum corrected black hole and plot them against the distance of the observer from the \(r=0\) point. In section (V), we investigate the entire process as a quantum channel with a CPTP map and obtain the analytical form of the entanglement fidelity for a quantum corrected black hole. Finally, we conclude our analysis in section (VI). ## II Minkowski-Rindler identification: a brief review In this section, we start by providing a detailed and pedagogical derivation of the expression connecting the Minkowski vacuum state and the product of two mode squeezed states of the Rindler vacuum [4]. The Rindler coordinate system is the one describing a uniformly accelerated observer. The Minkowski coordinates in 3+1-spacetime dimensions are given by \(\{t,x,y,z\}\) and the Rindler coordinates are denoted by \(\{\bar{t},\bar{x},\bar{y},\bar{z}\}\). In region I (right Rindler wedge), we can express the Minkowski coordinates in terms of the Rindler coordinates as \[t=\bar{z}\sinh a\bar{t},\ x=\bar{x},\ y=\bar{y},\ z=\bar{z}\cosh a\bar{t} \tag{4}\] and in region IV (left Rindler wedge) \[t=-\bar{z}\sinh a\bar{t},\ x=\bar{x},\ y=\bar{y},\ z=-\bar{z}\cosh a\bar{t}. \tag{5}\] In writing the coordinate transformation between the two coordinate systems, we have taken the observer to be uniformly accelerating along the \(z\) direction only, with a uniform acceleration \(a\). In order to proceed further, we now come down to a 1+1-dimensional analysis involving the \((t,z)\) coordinates only. The massless Klein-Gordon equation for a scalar field in the Minkowski background reads \[\partial_{\mu}\partial^{\mu}\phi(t,z)=0\implies(\partial_{t}^{2}-\partial_{z}^{2})\phi(t,z)=0. \tag{6}\] In order to fix the normalization constant, we need to write down the Lorentz invariant inner product, which is given by \[(\phi_{1},\phi_{2})=-i\int_{\Sigma}d\Sigma^{\mu}\left(-\phi_{1}^{*}\partial_{\mu}\phi_{2}+\phi_{2}\partial_{\mu}\phi_{1}^{*}\right) \tag{7}\] where \(\Sigma\) is a spacelike hypersurface. Now for a constant time hypersurface, we can simplify the above inner product in the following form (in 1+1-dimensions) \[(\phi_{1},\phi_{2})=-i\int_{z}dz\left(-\phi_{1}^{*}\partial_{t}\phi_{2}+\phi_{2}\partial_{t}\phi_{1}^{*}\right). \tag{8}\] Using the separation of variables method, we can obtain a solution of the Klein-Gordon equation (eq.(6)) and write down the analytical form of the Minkowski field modes as \[u_{k}^{\mathscr{M}}(t,z)=\mathscr{N}(k)e^{-i\omega t+i\omega z} \tag{9}\] where \(\omega=k\) when the speed of light is set equal to unity and \(\mathscr{N}(k)(=\mathscr{N}_{\omega})\) is a real and as yet undetermined normalization constant.
We shall now make use of eq.(8) to determine the normalization constant: \[(u_{k}^{\mathscr{M}},u_{k^{\prime}}^{\mathscr{M}})=-i\int_{-\infty}^{\infty}dz\left[-u_{k}^{\mathscr{M}^{*}}\partial_{t}u_{k^{\prime}}^{\mathscr{M}}+u_{k^{\prime}}^{\mathscr{M}}\partial_{t}u_{k}^{\mathscr{M}^{*}}\right] \tag{10}\] \[\implies\delta(\omega-\omega^{\prime})=2\pi(\omega+\omega^{\prime})\mathscr{N}_{\omega}\mathscr{N}_{\omega^{\prime}}\delta(\omega-\omega^{\prime})=4\pi\omega\mathscr{N}_{\omega}^{2}\delta(\omega-\omega^{\prime})\implies\mathscr{N}_{\omega}=\frac{1}{\sqrt{4\pi\omega}}\.\] Using the above form of the normalization constant, we can finally write down the Minkowski mode solution as \[u_{k}^{\mathscr{M}}(t,z)=\frac{1}{\sqrt{4\pi\omega}}e^{-i\omega t+i\omega z}. \tag{11}\] Next, we shall calculate the Rindler modes in region I. We start by obtaining the Klein-Gordon equation in Rindler coordinates. To do this we write down the relations among the partial derivatives corresponding to the Minkowski and Rindler coordinates: \[\partial_{t}=\frac{\partial\bar{t}}{\partial t}\partial_{\bar{t}}+\frac{\partial\bar{z}}{\partial t}\partial_{\bar{z}}=\frac{1}{a\bar{z}}\cosh a\bar{t}\,\partial_{\bar{t}}-\sinh a\bar{t}\,\partial_{\bar{z}}\, \tag{12}\] \[\partial_{z}=\frac{\partial\bar{t}}{\partial z}\partial_{\bar{t}}+\frac{\partial\bar{z}}{\partial z}\partial_{\bar{z}}=-\frac{1}{a\bar{z}}\sinh a\bar{t}\,\partial_{\bar{t}}+\cosh a\bar{t}\,\partial_{\bar{z}}. \tag{13}\] Using eq.(s)(12,13) back in eq.(6), we obtain the Klein-Gordon equation in the Rindler spacetime to be \[(\partial_{t}^{2}-\partial_{z}^{2})\phi(t,z)=\frac{1}{a^{2}\bar{z}^{2}}\left(\partial_{\bar{t}}^{2}-a^{2}(\bar{z}\partial_{\bar{z}})^{2}\right)\phi(\bar{t},\bar{z})=0. \tag{14}\] Solving eq.(14) and making use of the inner product definition (eq.(8)), we obtain the Rindler mode solutions in region I to be \[u_{k,\pm}^{\mathscr{R}_{I}}=\frac{1}{\sqrt{4\pi\omega}}e^{-i\omega\bar{t}\pm\frac{i\omega}{a}\ln\bar{z}}. \tag{15}\] In this analysis, we shall mainly be considering the \(u_{k,+}^{\mathscr{R}_{I}}\) mode solutions. In terms of the Minkowski coordinates, the Rindler mode solutions in eq.(15) read \[u_{k,\pm}^{\mathscr{R}_{I}}=\sqrt{\frac{a}{4\pi\omega}}\left(\frac{z\mp t}{l_{\omega}}\right)^{\pm\frac{i\omega}{a}}=\frac{1}{\sqrt{4\pi\Omega}}\left(\frac{z\mp t}{l_{\Omega}}\right)^{\pm i\Omega}\equiv u_{\Omega,\pm}^{I} \tag{16}\] where \(\Omega\) (\(=\frac{\omega}{a}\)) is a dimensionless constant, \(l_{\omega}=l_{\Omega}\) has the dimension of length in natural units and \(u_{\Omega,+}^{I}\) denotes field modes which propagate to the right along lines of constant \(z-t\). Similarly, the Rindler mode solutions in region IV read \[u_{k,\pm}^{\mathscr{R}_{IV}}=\frac{1}{\sqrt{4\pi\Omega}}\left(\frac{\pm t-z}{l_{\Omega}}\right)^{\mp i\Omega}\equiv u_{\Omega,\pm}^{IV}. \tag{17}\] As we shall mainly be considering the right moving modes, we shall omit the plus sign while writing down the mode solutions. One can now carry out a second quantization of the classical field \(\phi\), which satisfies the Klein-Gordon equation \(\Box\hat{\phi}=0\).
In terms of the Minkowski mode solutions and the corresponding creation and annihilation operators, we can write down the quantized scalar field as \[\hat{\phi}=\int dk\left(u_{k}^{\mathscr{M}}(t,z)\hat{a}_{k,\mathscr{M}}+{u_{k}^{\mathscr{M}}}^{*}(t,z)\hat{a}_{k,\mathscr{M}}^{\dagger}\right) \tag{18}\] where the creation and annihilation operators satisfy the following commutation relation \[[\hat{a}_{k,\mathscr{M}},\hat{a}_{k^{\prime},\mathscr{M}}^{\dagger}]=\delta(k-k^{\prime}). \tag{19}\] The action of the annihilation operator on the vacuum state corresponding to a fixed field mode is defined as \[\hat{a}_{k,\mathscr{M}}|0\rangle_{\mathscr{M}}^{k}=0 \tag{20}\] and the total Minkowski vacuum state is defined as a product of all the individual vacuum states corresponding to each field mode as \[|0\rangle_{\mathscr{M}}=\prod_{k}|0\rangle_{\mathscr{M}}^{k}. \tag{21}\] It is important to note that the mode solutions in regions I and IV provide a complete set of orthonormal solutions. As a result, one can express the field \(\hat{\phi}\) in terms of the Rindler mode solutions as \[\hat{\phi}=\int d\Omega\left(u_{\Omega}^{I}\hat{a}_{\Omega,I}+{u_{\Omega}^{I}}^{*}\hat{a}_{\Omega,I}^{\dagger}+u_{\Omega}^{IV}\hat{a}_{\Omega,IV}+{u_{\Omega}^{IV}}^{*}\hat{a}_{\Omega,IV}^{\dagger}\right) \tag{22}\] where the creation and the annihilation operators act on the vacuum states of the two Rindler wedges respectively as \[\hat{a}_{\Omega,I}\otimes\mathbb{1}_{IV}|0_{I},0_{IV}\rangle=(\hat{a}_{\Omega,I}|0_{I}\rangle)\otimes(\mathbb{1}_{IV}|0_{IV}\rangle)=0\, \tag{23}\] \[\mathbb{1}_{I}\otimes\hat{a}_{\Omega,IV}|0_{I},0_{IV}\rangle=(\mathbb{1}_{I}|0_{I}\rangle)\otimes(\hat{a}_{\Omega,IV}|0_{IV}\rangle)=0. \tag{24}\] It is to be noted that region I and region IV are causally disconnected, and as a result it is possible to write down the following commutation relations \[[\hat{a}_{\Omega,I},\hat{a}_{\Omega^{\prime},I}^{\dagger}]=[\hat{a}_{\Omega,IV},\hat{a}_{\Omega^{\prime},IV}^{\dagger}]=\delta(\Omega-\Omega^{\prime})\, \tag{25}\] \[[\hat{a}_{\Omega,I},\hat{a}_{\Omega^{\prime},I}]=[\hat{a}_{\Omega,I}^{\dagger},\hat{a}_{\Omega^{\prime},I}^{\dagger}]=[\hat{a}_{\Omega,I},\hat{a}_{\Omega^{\prime},IV}^{\dagger}]=0\,\] (26) \[[\hat{a}_{\Omega,IV},\hat{a}_{\Omega^{\prime},IV}]=[\hat{a}_{\Omega,IV}^{\dagger},\hat{a}_{\Omega^{\prime},IV}^{\dagger}]=[\hat{a}_{\Omega,I}^{\dagger},\hat{a}_{\Omega^{\prime},IV}^{\dagger}]=0. \tag{27}\] We now need to express the creation and annihilation operators of the Minkowski states in terms of the creation and annihilation operators of the Rindler states. Before proceeding with this analysis, we need to remember that the mode solutions \(u_{k}^{\mathscr{M}}\) satisfy the following relations with respect to the inner product defined in eq.(8): \[(u_{k}^{\mathscr{M}},u_{k^{\prime}}^{\mathscr{M}})=\delta(k-k^{\prime})\,\ \ (u_{k}^{\mathscr{M}},{u_{k^{\prime}}^{\mathscr{M}}}^{*})=0. \tag{28}\] Taking the inner product of \(\hat{\phi}\) (for the decomposition of \(\hat{\phi}\) in terms of the Minkowski field modes) with \(u_{k^{\prime}}^{\mathscr{M}}\), we obtain the following relation \[\left(u_{k^{\prime}}^{\mathscr{M}},\hat{\phi}\right)=\int dk\left((u_{k^{\prime}}^{\mathscr{M}},u_{k}^{\mathscr{M}})\hat{a}_{k,\mathscr{M}}+(u_{k^{\prime}}^{\mathscr{M}},{u_{k}^{\mathscr{M}}}^{*})\hat{a}_{k,\mathscr{M}}^{\dagger}\right)=\int dk\,\delta(k-k^{\prime})\hat{a}_{k,\mathscr{M}}=\hat{a}_{k^{\prime},\mathscr{M}}.
\tag{29}\] It is now possible to substitute the mode expansion of \(\hat{\phi}\) from eq.(22) in the left hand side of the above equation, and we can recast eq.(29) in the following form \[\begin{split}\hat{a}_{k,\mathscr{M}}=&\int d\Omega\Big{(}(u_{k}^{\mathscr{M}},u_{\Omega}^{I})\hat{a}_{\Omega,I}+(u_{k}^{\mathscr{M}},u_{\Omega}^{I\;*})\hat{a}_{\Omega,I}^{\dagger}\\ &+(u_{k}^{\mathscr{M}},u_{\Omega}^{IV})\hat{a}_{\Omega,IV}+(u_{k}^{\mathscr{M}},u_{\Omega}^{IV\;*})\hat{a}_{\Omega,IV}^{\dagger}\Big{)}\.\end{split} \tag{30}\] We shall now evaluate all four of the inner products in the above equation. Applying the definition of the inner product, the first one turns out to be \[\begin{split}(u_{k}^{\mathscr{M}},u_{\Omega}^{I})&=-i\int dz\left(-u_{k}^{\mathscr{M}\;*}\partial_{t}u_{\Omega}^{I}+u_{\Omega}^{I}\partial_{t}u_{k}^{\mathscr{M}\;*}\right)\\ &=\frac{1}{4\pi l_{\Omega}^{i\Omega}\sqrt{\omega\Omega}}\int dz\bigg{(}\Omega(z-t)^{i\Omega-1}\\ &+\omega(z-t)^{i\Omega}\bigg{)}e^{-i\omega(z-t)}\.\end{split} \tag{31}\] We shall now make a change of coordinates given by \(z-t=\zeta\); in the Rindler wedge I, \((z-t)>0\), so \(\zeta\) ranges from \(0\) to \(\infty\). We can recast eq.(31) in the following form \[\begin{split}(u_{k}^{\mathscr{M}},u_{\Omega}^{I})&=\frac{1}{4\pi l_{\Omega}^{i\Omega}\sqrt{\omega\Omega}}\int_{0}^{\infty}d\zeta\left(\Omega\zeta^{i\Omega-1}+\omega\zeta^{i\Omega}\right)e^{-i\omega\zeta}\\ &=\frac{\Omega(i\omega)^{-i\Omega}}{2\pi l_{\Omega}^{i\Omega}\sqrt{\omega\Omega}}\Gamma[i\Omega]\\ &=\frac{\Omega(il_{\Omega}\omega)^{-i\Omega}}{2\pi\sqrt{\omega\Omega}}\sqrt{\frac{\pi}{\Omega}}\sqrt{\frac{2}{e^{\pi\Omega}-e^{-\pi\Omega}}}e^{i\arg[\Gamma[i\Omega]]}\\ &=\frac{1}{\sqrt{2\pi\omega}}(l_{\Omega}e^{-\frac{\phi}{\Omega}}\omega)^{-i\Omega}\frac{1}{\sqrt{1-e^{-2\pi\Omega}}}\\ &=\frac{1}{\sqrt{2\pi\omega}}(l\omega)^{-i\Omega}\frac{1}{\sqrt{1-e^{-2\pi\Omega}}}\end{split} \tag{32}\] where \(\phi\equiv\text{Arg}[\Gamma[i\Omega]]\), \(l\equiv l_{\Omega}e^{-\frac{\phi}{\Omega}}\), and \((i)^{-i\Omega}=(e^{\frac{i\pi}{2}})^{-i\Omega}=e^{\frac{\pi\Omega}{2}}\). The next inner product, of \(u_{k}^{\mathscr{M}}\) with \(u_{\Omega}^{I\;*}\), is given as follows \[(u_{k}^{\mathscr{M}},u_{\Omega}^{I\;*})=-\frac{1}{\sqrt{2\pi\omega}}(l\omega)^{i\Omega}\frac{e^{-\pi\Omega}}{\sqrt{1-e^{-2\pi\Omega}}}. \tag{33}\] The final two inner products have the forms \[(u_{k}^{\mathscr{M}},u_{\Omega}^{IV})=\frac{1}{\sqrt{2\pi\omega}}(l\omega)^{i\Omega}\frac{1}{\sqrt{1-e^{-2\pi\Omega}}}\, \tag{34}\] \[(u_{k}^{\mathscr{M}},u_{\Omega}^{IV\;*})=-\frac{1}{\sqrt{2\pi\omega}}(l\omega)^{-i\Omega}\frac{e^{-\pi\Omega}}{\sqrt{1-e^{-2\pi\Omega}}}. \tag{35}\] With the redefinition \(e^{-\pi\Omega}\equiv\tanh r_{\Omega}\), we can recast eq.(30) as \[\begin{split}\hat{a}_{k,\mathscr{M}}=&\int_{0}^{\infty}d\Omega\bigg{(}{\alpha_{\omega,\Omega}^{R\;*}}\left(\cosh r_{\Omega}\hat{a}_{\Omega,I}-\sinh r_{\Omega}\hat{a}_{\Omega,IV}^{\dagger}\right)\\ &+{\alpha_{\omega,\Omega}^{L\;*}}\left(-\sinh r_{\Omega}\hat{a}_{\Omega,I}^{\dagger}+\cosh r_{\Omega}\hat{a}_{\Omega,IV}\right)\bigg{)}\end{split} \tag{36}\] where \({\alpha_{\omega,\Omega}^{R\;*}}=\frac{1}{\sqrt{2\pi\omega}}(l\omega)^{-i\Omega}\) and \({\alpha_{\omega,\Omega}^{L\;*}}=\frac{1}{\sqrt{2\pi\omega}}(l\omega)^{i\Omega}\) are the Bogoliubov coefficients.
We can now express the right and left moving Unruh annihilation operators as \[\hat{a}_{\Omega}^{R}=\cosh r_{\Omega}\hat{a}_{\Omega,I}-\sinh r_{\Omega}\hat{a}_{\Omega,IV}^{\dagger}\, \tag{37}\] \[\hat{a}_{\Omega}^{L}=-\sinh r_{\Omega}\hat{a}_{\Omega,I}^{\dagger}+\cosh r_{\Omega}\hat{a}_{\Omega,IV}. \tag{38}\] By means of eq.(s)(37,38), we can indeed re-express the Minkowski annihilation operator in eq.(36) as \[\hat{a}_{k,\mathscr{M}}=\int_{0}^{\infty}d\Omega\left({\alpha_{\omega,\Omega}^{R\;*}}\hat{a}_{\Omega}^{R}+{\alpha_{\omega,\Omega}^{L\;*}}\hat{a}_{\Omega}^{L}\right). \tag{39}\] It is important to note from eq.(39) that the Minkowski annihilation operator can be expressed as a combination of the Unruh annihilation operators only. As a result, the Unruh annihilation operators annihilate the Minkowski vacuum as well. Hence, we can write down the following relation \[\hat{a}_{k,\mathscr{M}}|0\rangle_{\mathscr{M}}=\hat{a}_{\Omega}^{R}|0\rangle_{\mathscr{M}}=\hat{a}_{\Omega}^{L}|0\rangle_{\mathscr{M}}=0. \tag{40}\] From eq.(40), it is straightforward to conclude that the Minkowski vacuum and the Unruh vacuum are identical, which can be represented in the following form \[|0\rangle_{\mathscr{M}}=|0\rangle_{U}=\prod_{\Omega}|0\rangle_{U}^{\Omega} \tag{41}\] where \(|0\rangle_{U}^{\Omega}\) is the Unruh vacuum corresponding to an individual field mode with frequency \(\Omega\). Now we take an ansatz given by \[|0\rangle_{U}^{\Omega}=\sum_{n}f_{\Omega}(n)|n\rangle_{I}^{\Omega}\otimes|n\rangle_{IV}^{\Omega} \tag{42}\] where \(f_{\Omega}(n)\) is an unknown normalization factor, dependent on the dimensionless number \(\Omega\). Before acting with \(\hat{a}_{\Omega}^{R}\) on both sides of eq.(42), we need to express \(\hat{a}_{\Omega}^{R}\) rigorously as \[\hat{a}_{\Omega}^{R}=\cosh r_{\Omega}\;\hat{a}_{\Omega,I}\otimes\mathbb{1}_{IV}-\sinh r_{\Omega}\;\mathbb{1}_{I}\otimes\hat{a}_{\Omega,IV}^{\dagger}. \tag{43}\] The action of \(\hat{a}_{\Omega}^{R}\) from eq.(43) on both sides of eq.(42) is given by \[\begin{split} 0=\hat{a}_{\Omega}^{R}|0\rangle_{U}^{\Omega}=&\sum_{n}f_{\Omega}(n)\Big{(}\cosh r_{\Omega}\hat{a}_{\Omega,I}|n\rangle_{I}^{\Omega}\otimes|n\rangle_{IV}^{\Omega}\\ &-\sinh r_{\Omega}|n\rangle_{I}^{\Omega}\otimes\hat{a}_{\Omega,IV}^{\dagger}|n\rangle_{IV}^{\Omega}\Big{)}\\ =&\sum_{n}f_{\Omega}(n)\Big{(}\cosh r_{\Omega}\sqrt{n}|n-1\rangle_{I}^{\Omega}\otimes|n\rangle_{IV}^{\Omega}\\ &-\sinh r_{\Omega}\sqrt{n+1}|n\rangle_{I}^{\Omega}\otimes|n+1\rangle_{IV}^{\Omega}\Big{)}\.\end{split} \tag{44}\] Acting from the left with \({}_{I}^{\Omega}\langle m|\otimes{}_{IV}^{\Omega}\langle m+1|\) in the above equation projects out a single pair of terms and yields the recursion relation \[\cosh r_{\Omega}\sqrt{m+1}\,f_{\Omega}(m+1)=\sinh r_{\Omega}\sqrt{m+1}\,f_{\Omega}(m)\implies f_{\Omega}(n)=\tanh^{n}r_{\Omega}\,f_{\Omega}(0).\] The normalization of the vacuum state, \({}_{U}^{\Omega}\langle 0|0\rangle_{U}^{\Omega}=1\), then fixes \(f_{\Omega}(0)^{2}\sum_{n}\tanh^{2n}r_{\Omega}=f_{\Omega}(0)^{2}\cosh^{2}r_{\Omega}=1\), so that \(f_{\Omega}(0)=1/\cosh r_{\Omega}\). Hence the Unruh (Minkowski) vacuum takes the form of a product of two mode squeezed states of the Rindler vacua, \[|0\rangle_{U}^{\Omega}=\frac{1}{\cosh r_{\Omega}}\sum_{n}\tanh^{n}r_{\Omega}\,|n\rangle_{I}^{\Omega}\otimes|n\rangle_{IV}^{\Omega}. \tag{49}\]
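As a quick numerical aside (ours, not part of the original derivation), eq.(49) implies that the reduced state in region I is thermal: the mean particle number seen by the accelerated observer is \(\sinh^{2}r_{\Omega}\), which with \(\tanh r_{\Omega}=e^{-\pi\Omega}\) is exactly a Bose-Einstein distribution at the Unruh temperature (for which \(\omega/T=2\pi\Omega\)). The sketch below checks this identity.

```python
import math

def mean_occupation(Omega):
    """Mean particle number in region I for the vacuum of eq. (49):
    n_bar = sinh^2(r_Omega), with tanh(r_Omega) = exp(-pi * Omega)."""
    r = math.atanh(math.exp(-math.pi * Omega))
    return math.sinh(r) ** 2

def bose_einstein(Omega):
    """Planck occupation 1/(exp(2*pi*Omega) - 1), i.e. a thermal state
    at the Unruh temperature T = a/(2*pi)."""
    return 1.0 / (math.exp(2.0 * math.pi * Omega) - 1.0)

for Omega in (0.1, 0.25, 0.5, 1.0):
    print(f"Omega={Omega}: sinh^2(r) = {mean_occupation(Omega):.6e}, "
          f"Bose-Einstein = {bose_einstein(Omega):.6e}")  # identical
```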
## III Static spherically symmetric black holes in the Rindler form In the near horizon approximation, the lapse function of a static spherically symmetric black hole can be expanded as \(f(r)\simeq(r-r_{+})f^{\prime}(r_{+})\), where \(r_{+}\) denotes the event horizon radius and \(\kappa\equiv f^{\prime}(r_{+})/2\) is the surface gravity. Introducing the coordinate \(\zeta=2\sqrt{(r-r_{+})/f^{\prime}(r_{+})}\), the radial part of the black hole metric takes the Rindler form \[ds^{2}=-\kappa^{2}\zeta^{2}dt^{2}+d\zeta^{2}. \tag{56}\] In terms of the proper time \(\tau\) of an observer at a fixed radial distance \(\mathbf{r}\) (\(d\tau=\sqrt{f(\mathbf{r})}\,dt\)), we can recast eq.(56) as \[ds^{2}=-\frac{\kappa^{2}\zeta^{2}}{f(\mathbf{r})}d\tau^{2}+d\zeta^{2}. \tag{58}\] Now the value of the proper acceleration for an accelerated observer at some \(r\) is defined as \[a=\sqrt{a_{\mu}a^{\mu}} \tag{59}\] where \(a^{\mu}=v^{\beta}\nabla_{\beta}v^{\mu}\) gives the four acceleration, with \(v^{\mu}=\frac{\xi^{\mu}}{|\xi|}\) denoting the four velocity of the observer and \(\xi^{\mu}=\{1,0,0,0\}\) being a timelike Killing vector. It is now straightforward to evaluate the four acceleration of the observer \[a^{\mu}=\left\{0,\frac{1}{2}\partial_{r}f,0,0\right\}\,\ \ a_{\mu}=g_{\nu\mu}a^{\nu}=\left\{0,\frac{1}{2f}\partial_{r}f,0,0\right\}. \tag{60}\] Using eq.(60), we can obtain the proper acceleration of the observer to be of the form \[a(r)=\sqrt{a_{\mu}a^{\mu}}=\frac{\partial_{r}f}{2\sqrt{f(r)}}. \tag{61}\] From eq.(61), it is straightforward to infer that the acceleration becomes infinite when \(r=r_{+}\). In the near horizon approximation, we can evaluate the following relation \[\partial_{r}f(r)\simeq\partial_{r}\left((r-r_{+})f^{\prime}(r_{+})\right)=f^{\prime}(r_{+})=2\kappa. \tag{62}\] Hence, we can write down the proper acceleration for an observer sitting at a distance \(\mathbf{r}\) from the \(r=0\) point to be \[a=a(\mathbf{r})=\frac{\kappa}{\sqrt{f(\mathbf{r})}}. \tag{63}\] Using eq.(63), we can recast eq.(58) as \[ds^{2}=-a^{2}\zeta^{2}d\tau^{2}+d\zeta^{2}. \tag{64}\] Eq.(64) shows that any static spherically symmetric black hole metric can be expressed in the Rindler form by means of the near horizon approximation; the constant acceleration of the Rindler case is now replaced by the proper acceleration of an observer sitting at a fixed radial distance outside, but in the vicinity of, the event horizon of the black hole. Our next aim is to identify the timelike Killing vectors. We start by writing down the null Kruskal-Szekeres coordinates as \[u=-\frac{1}{\kappa}e^{-\kappa\left(t-\int\frac{dr}{f(r)}\right)}\,\ \ v=\frac{1}{\kappa}e^{\kappa\left(t+\int\frac{dr}{f(r)}\right)}. \tag{65}\] Using eq.(65), one can write down the radial part of the black hole metric in the following form \[ds^{2}=-f(r)e^{-2\kappa\int\frac{dr}{f(r)}}du\,dv. \tag{66}\] Very near the horizon, eq.(66) can be expressed as (keeping only leading constant terms and setting the integration constant to \(\frac{1}{2\kappa}\)) \[ds^{2}\simeq-e^{-1}du\,dv. \tag{67}\] This analysis shows (following [20]) that there are three possible timelike Killing vectors.
The first timelike Killing vector is \(\partial_{t}\propto\partial_{u}+\partial_{v}\); this is similar to the timelike Killing vector in the Minkowski spacetime. One can construct a vacuum state out of the positive frequency modes associated with this timelike Killing vector, and this vacuum state is known as the Hartle-Hawking vacuum state. As a result of the analogy between the Killing vectors, we can also claim that the Hartle-Hawking vacuum state is analogous to the Minkowski vacuum state. The Hartle-Hawking vacuum state is generally written as \(|0\rangle_{H}\), and \(|0\rangle_{H}\leftrightarrow|0\rangle_{\mathscr{M}}\). The second Killing vector is \(\partial_{t}\) itself, and it is straightforward to obtain a relation in terms of the \(\{u,v\}\) coordinate system as follows \[\partial_{t}=\frac{\partial u}{\partial t}\partial_{u}+\frac{\partial v}{\partial t}\partial_{v}=\left(-\kappa\right)\left[-\frac{1}{\kappa}e^{-\kappa\left(t-\int\frac{dr}{f(r)}\right)}\right]\partial_{u}+\kappa\left[\frac{1}{\kappa}e^{\kappa\left(t+\int\frac{dr}{f(r)}\right)}\right]\partial_{v}=-\kappa\left(u\partial_{u}-v\partial_{v}\right). \tag{68}\] From the above calculation, we deduce that \(\partial_{t}\propto u\partial_{u}-v\partial_{v}\). \(\partial_{t}\) is a timelike Killing vector for any static spherically symmetric black hole geometry, and the positive frequency modes associated with this timelike Killing vector result in a vacuum state known as the Boulware vacuum state. The Boulware vacuum state is denoted by \(|0\rangle_{B}\), and \(|0\rangle_{B}\leftrightarrow|0\rangle_{I}\), which indicates that the Boulware vacuum state is analogous to the Rindler vacuum state in region I. Another timelike Killing vector which can be defined is \(-\partial_{t}\), and the positive frequency modes associated with this timelike Killing vector result in \(|0\rangle_{\bar{B}}\), also known as the anti-Boulware vacuum state. The anti-Boulware vacuum state is analogous to \(|0\rangle_{IV}\). From the analogy among the Hartle-Hawking (Boulware, anti-Boulware) and Minkowski (Rindler I, Rindler IV) vacuum states, we can rewrite eq.(49) in a static spherically symmetric black hole geometry as \[|0\rangle_{H}^{\omega_{i}}=\frac{1}{\cosh\sigma_{\omega_{i}}}\sum_{n}\tanh^{n}\sigma_{\omega_{i}}|n\rangle_{B}^{\omega_{i}}|n\rangle_{\bar{B}}^{\omega_{i}} \tag{69}\] where \(|0\rangle_{H}=\otimes_{j}|0\rangle_{H}^{\omega_{j}}\) and \[\tanh\sigma_{\omega_{i}}=e^{-\frac{\pi\omega_{i}}{a}}=\exp\left(-\frac{\pi\omega_{i}\sqrt{f(\mathbf{r})}}{\kappa}\right). \tag{70}\] The above result comes from a direct analogy with the corresponding result in the Minkowski-Rindler scenario. For the quantum corrected black hole metric we can recast eq.(70) as \[\tanh\sigma_{\omega_{i}}=\exp\left(-\frac{2\pi\omega_{i}GM\sqrt{1-\frac{2GM\mathbf{r}}{\mathbf{r}^{2}+\tilde{\omega}G}}\left(GM+\sqrt{G^{2}M^{2}-\tilde{\omega}G}\right)^{2}}{G^{2}M^{2}+GM\sqrt{G^{2}M^{2}-\tilde{\omega}G}-\tilde{\omega}G}\right). \tag{71}\] As \(\tilde{\omega}\) carries a quantum gravity correction (which is very small), we can recast eq.(71) in a much simpler form given by \[\tanh\sigma_{\omega_{i}}\simeq e^{-4\pi\omega_{i}GM\sqrt{1-\frac{2GM}{\mathbf{r}}}\left(1+\frac{\tilde{\omega}}{4GM^{2}}+\frac{\tilde{\omega}G^{2}M}{\mathbf{r}^{2}(\mathbf{r}-2GM)}\right)}. \tag{72}\]
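To give these expressions a concrete feel, the following Python sketch (ours) evaluates the horizon radius \(r_{+}=GM+\sqrt{G^{2}M^{2}-\tilde{\omega}G}\), the surface gravity \(\kappa=f^{\prime}(r_{+})/2\), and \(\tanh\sigma_{\omega_{i}}\) from eq.(70) for the quantum corrected lapse function. The parameter values are illustrative assumptions chosen so that \(G^{2}M^{2}>\tilde{\omega}G\) and the horizon is real; they are not the values used for the figures below.

```python
import math

G, M, w, omega_i = 1.0, 1.0, 0.5, 1.0  # w plays the role of tilde-omega

def f(r, w):
    """Quantum corrected lapse function of eqs. (1)-(3):
    f = 1 - 2 G(r) M / r = 1 - 2 G M r / (r^2 + w G)."""
    return 1.0 - 2.0 * G * M * r / (r**2 + w * G)

def r_plus(w):
    """Outer horizon: the larger root of f(r) = 0."""
    return G * M + math.sqrt(G**2 * M**2 - w * G)

def kappa(w):
    """Surface gravity kappa = f'(r_+)/2, using the analytic derivative
    f'(r) = 2 G M (r^2 - w G) / (r^2 + w G)^2."""
    rp = r_plus(w)
    return G * M * (rp**2 - w * G) / (rp**2 + w * G)**2

for ww in (0.0, w):  # ww = 0 recovers Schwarzschild: r_+ = 2GM, kappa = 1/(4GM)
    rp, k = r_plus(ww), kappa(ww)
    r = 1.2 * rp     # Rob sits a little outside the horizon
    tanh_sigma = math.exp(-math.pi * omega_i * math.sqrt(f(r, ww)) / k)
    print(f"w={ww}: r_+ = {rp:.4f}, kappa = {k:.4f}, "
          f"tanh(sigma) at r = 1.2 r_+: {tanh_sigma:.4e}")
```

For these values, the quantum correction shrinks both the horizon radius and \(\tanh\sigma_{\omega_{i}}\), which anticipates the slower entanglement degradation found below.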
For the quantum corrected black hole metric, we redefine the \(\sigma_{\omega_{i}}\) term as \(r_{\tilde{\omega},i}\). The one particle Hartle-Hawking state takes the form (for a quantum corrected black hole) \[|1\rangle_{H}^{\omega_{i}}=\frac{1}{\cosh^{2}r_{\tilde{\omega},i}}\sum_{n=0}^{\infty}\tanh^{n}r_{\tilde{\omega},i}\sqrt{n+1}|n+1\rangle_{B}^{\omega_{i}}|n\rangle_{\bar{B}}^{\omega_{i}}. \tag{73}\] Here, we consider a maximally entangled bipartite state in the basis of an observer freely falling into the event horizon of a black hole as \[|\psi\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle_{A}^{\omega_{i}}|0\rangle_{R}^{\omega_{i}}+|1\rangle_{A}^{\omega_{i}}|1\rangle_{R}^{\omega_{i}}\right). \tag{74}\] The suffix '\(A\)' in the first part of the system (out of the two subsystems) denotes the freely falling Alice, and the second subsystem is for Rob, who is at a distance \(\mathbf{r}\) near the event horizon of the quantum corrected black hole. ## IV Logarithmic negativity and mutual information In this section, we shall obtain the logarithmic negativity and mutual information corresponding to the maximally entangled bipartite state given in eq.(74). Our main aim is to do a side by side comparison of the Schwarzschild and quantum corrected black hole cases to truly investigate the effect of the underlying quantum nature of the black hole. Before proceeding further, it is important to note that \(|0\rangle_{A}\leftrightarrow|0\rangle_{H}\) and \(|0\rangle_{R}\) for a fixed frequency value is described by eq.(69). The Boulware and anti-Boulware states are causally disconnected, so Rob has no access to the anti-Boulware states. As a result, we shall trace over the anti-Boulware states, which leads to a mixed state. The reduced density matrix is given by \[\begin{split}\rho_{AR}=&\sum_{m=0}^{\infty}\ \ {}_{\bar{B}}\!\langle m|\psi\rangle\langle\psi|m\rangle_{\bar{B}}\\ =&\frac{1}{2\cosh^{2}r_{\tilde{\omega},i}}\sum_{n=0}^{\infty}\tanh^{2n}r_{\tilde{\omega},i}\bigg{[}|0\ n\rangle\langle 0\ n|\\ &+\frac{\sqrt{n+1}}{\cosh r_{\tilde{\omega},i}}\left(|1\ n+1\rangle\langle 0\ n|+|0\ n\rangle\langle 1\ n+1|\right)\\ &+\frac{n+1}{\cosh^{2}r_{\tilde{\omega},i}}|1\ n+1\rangle\langle 1\ n+1|\bigg{]}\.\end{split} \tag{75}\] We shall now make use of the partial transpose criterion, which provides a sufficient criterion for entanglement. The \(\{n,n+1\}\) block of the reduced density matrix in eq.(75) is given by \[\left(\frac{1}{2\cosh^{2}r_{\tilde{\omega},i}}\right)\left[\begin{array}{cccc}\mathscr{B}_{0n}^{0n}&\mathscr{B}_{1n}^{0n}&\mathscr{B}_{0n+1}^{0n}&\mathscr{B}_{1n+1}^{0n}\\ \mathscr{B}_{0n}^{1n}&\mathscr{B}_{1n}^{1n}&\mathscr{B}_{0n+1}^{1n}&\mathscr{B}_{1n+1}^{1n}\\ \mathscr{B}_{0n}^{0n+1}&\mathscr{B}_{1n}^{0n+1}&\mathscr{B}_{0n+1}^{0n+1}&\mathscr{B}_{1n+1}^{0n+1}\\ \mathscr{B}_{0n}^{1n+1}&\mathscr{B}_{1n}^{1n+1}&\mathscr{B}_{0n+1}^{1n+1}&\mathscr{B}_{1n+1}^{1n+1}\end{array}\right] \tag{76}\] where \(\mathscr{B}_{cd}^{ab}\) denotes the coefficient associated with the \(|ab\rangle\langle cd|\) state.
After taking the partial transpose of the matrix in eq.(76), we obtain the following matrix \[\left(\frac{1}{2\cosh^{2}r_{\tilde{\omega},i}}\right)\left[\begin{array}{cccc}\mathscr{B}_{0n}^{0n}&\mathscr{B}_{1n}^{0n}&\mathscr{B}_{0n+1}^{0n}&\mathscr{B}_{0n+1}^{1n}\\ \mathscr{B}_{0n}^{1n}&\boxed{\mathscr{B}_{1n}^{1n}}&\boxed{\mathscr{B}_{1n+1}^{0n}}&\mathscr{B}_{1n+1}^{1n}\\ \mathscr{B}_{0n}^{0n+1}&\boxed{\mathscr{B}_{0n}^{1n+1}}&\boxed{\mathscr{B}_{0n+1}^{0n+1}}&\mathscr{B}_{1n+1}^{0n+1}\\ \mathscr{B}_{1n}^{0n+1}&\mathscr{B}_{1n}^{1n+1}&\mathscr{B}_{0n+1}^{1n+1}&\mathscr{B}_{1n+1}^{1n+1}\end{array}\right]. \tag{77}\] The new \(2\times 2\) matrix consisting of the boxed elements of eq.(77) is given by \[\mathscr{P}_{n,n+1}=\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\left[\begin{array}{cc}\tanh^{2}r_{\tilde{\omega},i}&\frac{\sqrt{n+1}}{\cosh r_{\tilde{\omega},i}}\\ \frac{\sqrt{n+1}}{\cosh r_{\tilde{\omega},i}}&\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}\end{array}\right]. \tag{78}\] The eigenvalues of the \(\mathscr{P}_{n,n+1}\) matrix are given by \[\xi_{n,\pm}=\frac{\tanh^{2n}r_{\tilde{\omega},i}}{4\cosh^{2}r_{\tilde{\omega},i}}\Bigg{[}\left(\tanh^{2}r_{\tilde{\omega},i}+\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}\right)\pm\sqrt{\left(\tanh^{2}r_{\tilde{\omega},i}+\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}\right)^{2}+\frac{4}{\cosh^{2}r_{\tilde{\omega},i}}}\Bigg{]}. \tag{79}\] From eq.(79), it is straightforward to infer that \(\xi_{n,-}<0\). The logarithmic negativity is obtained as \[\begin{split}& N(\rho_{AR})=\log_{2}||\rho_{AR}^{T}||\\ =&\log_{2}\left[1+\sum_{n=0}^{\infty}\left(|\xi_{n,-}|-\xi_{n,-}\right)\right]\\ =&\log_{2}\left[1-2\sum_{n=0}^{\infty}\xi_{n,-}\right]\\ =&\log_{2}\biggl{[}1+\sum_{n=0}^{\infty}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\sqrt{\left(\tanh^{2}r_{\tilde{\omega},i}+\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}\right)^{2}+\frac{4}{\cosh^{2}r_{\tilde{\omega},i}}}-\sum_{n=0}^{\infty}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\left(\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}+\tanh^{2}r_{\tilde{\omega},i}\right)\biggr{]}\\ =&\log_{2}\biggl{[}\frac{1}{2\cosh^{2}r_{\tilde{\omega},i}}+\sum_{n=0}^{\infty}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\sqrt{\left(\tanh^{2}r_{\tilde{\omega},i}+\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}\right)^{2}+\frac{4}{\cosh^{2}r_{\tilde{\omega},i}}}\biggr{]}\\ =&\log_{2}\left[\frac{1}{2\cosh^{2}r_{\tilde{\omega},i}}+\Lambda(r_{\tilde{\omega},i})\right]\end{split} \tag{80}\] where \[\Lambda(r_{\tilde{\omega},i})=\sum_{n=0}^{\infty}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\sqrt{\left(\tanh^{2}r_{\tilde{\omega},i}+\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}\right)^{2}+\frac{4}{\cosh^{2}r_{\tilde{\omega},i}}}. \tag{81}\] For an observer at an infinite distance, \(a(r\rightarrow\infty)=0\), leading to \(r_{\tilde{\omega},i}\to 0\) and \(N(\rho_{AR})=1\). When the observer is on the event horizon of the black hole, \(a(r_{+})\rightarrow\infty\), which is identical to the condition \(r_{\tilde{\omega},i}\rightarrow\infty\). To obtain the value of the entanglement negativity at this point, we need to investigate the bound on the negativity in this limit. It is straightforward to obtain a bound on the summation term in the above equation.
We know that \(a^{2}+b^{2}<(a+b)^{2}\) for positive \(a\) and \(b\), and we can write down the following inequality \[\begin{split}\Lambda(r_{\tilde{\omega},i})&<\sum_{n=0}^{\infty}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\biggl{[}\tanh^{2}r_{\tilde{\omega},i}+\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}\\ &+\frac{2}{\cosh r_{\tilde{\omega},i}}\biggr{]}\\ &=\frac{1}{2}\left(1+\frac{2}{\cosh r_{\tilde{\omega},i}}+\tanh^{2}r_{\tilde{\omega},i}\right)\\ &<1+\frac{1}{\cosh r_{\tilde{\omega},i}}\.\end{split} \tag{82}\] Now, in the \(r_{\tilde{\omega},i}\rightarrow\infty\) limit, \(\Lambda\) goes to \(1\). Hence, in the infinite acceleration case, or when the observer is on the event horizon of the black hole, the logarithmic negativity becomes \(0\). In the \(\tilde{\omega}\to 0\) limit, \(r_{\tilde{\omega},i}\rightarrow r_{Sch.,i}\), where "\(Sch.\)" denotes the case of a Schwarzschild black hole. For the next part of our analysis, we shall denote \(r_{Sch.,i}\) by \(\mathcal{R}_{i}\). The logarithmic negativity from eq.(80), expanded to first order in \(\tilde{\omega}\), can be expressed in terms of \(\mathcal{R}_{i}\) as \[\begin{split} N(\rho_{AR})\simeq\log_{2}\biggl{[}&\frac{1}{2\cosh^{2}\mathcal{R}_{i}}\left(1+\tilde{\omega}\mathcal{K}_{i}\sinh^{2}\mathcal{R}_{i}\right)\\ &+\sum_{n=0}^{\infty}\frac{\tanh^{2n}\mathcal{R}_{i}}{2\cosh^{2}\mathcal{R}_{i}}\sqrt{\left(\frac{n}{\sinh^{2}\mathcal{R}_{i}}+\tanh^{2}\mathcal{R}_{i}\right)^{2}+\frac{4}{\cosh^{2}\mathcal{R}_{i}}}\\ &\times\Biggl{(}1+\tilde{\omega}\mathcal{K}_{i}\Biggl{(}\sinh^{2}\mathcal{R}_{i}-n+\biggl{[}2\tanh^{2}\mathcal{R}_{i}+\Bigl{(}\frac{n}{\sinh^{2}\mathcal{R}_{i}}+\tanh^{2}\mathcal{R}_{i}\Bigr{)}\biggr{]}\Big{/}\biggl{[}\Bigl{(}\frac{n}{\sinh^{2}\mathcal{R}_{i}}+\tanh^{2}\mathcal{R}_{i}\Bigr{)}^{2}+\frac{4}{\cosh^{2}\mathcal{R}_{i}}\biggr{]}\Biggr{)}\Biggr{)}\biggr{]}\end{split} \tag{83}\] where \[\mathcal{K}_{i}\equiv 8\pi\omega_{i}GM\sqrt{1-\frac{2GM}{\mathbf{r}}}\left[\frac{1}{4GM^{2}}+\frac{G^{2}M}{\mathbf{r}^{2}(\mathbf{r}-2GM)}\right]. \tag{84}\] Eq.(83) is one of the main results of our paper. In order to plot Fig.(1), we set \(G=0.1\ell_{0}^{2}\), \(M=1.0\ell_{0}^{-1}\), \(\omega_{i}=\frac{1}{\pi}\ell_{0}^{-1}\), and \(\tilde{\omega}=0.9\) with respect to some arbitrary length scale \(\ell_{0}\). Here, the values are chosen in a manner such that the quantum effects get amplified. From Fig.(1), we observe that the negativity for the quantum corrected black hole decreases at a slower rate than for the Schwarzschild black hole. Hence, if an observer finds that at the Schwarzschild radius the logarithmic negativity does not drop to zero, then it will be a direct detection of the quantum nature of the black hole. It is also important to note that the negativity goes to zero at the event horizon radius of the black hole (the points where each curve meets the \(N(\rho_{AR})=0\) axis), which signifies that the states do not possess any distillable entanglement anymore. It is important to note that we have made use of eq.(71) instead of eq.(72) to obtain Fig.(1) (and later Fig.(s)(2,3)). Our next aim is to calculate the mutual information and compare the Schwarzschild and quantum corrected cases. The mutual information gives one an idea of the total amount of correlation between the two parties. The mutual information is given by \[I(\rho_{AR})=S(\rho_{A})+S(\rho_{R})-S(\rho_{AR}) \tag{85}\] where \(S(\rho)=-\mathrm{tr}\left(\rho\log_{2}\rho\right)=-\sum_{n}\rho_{n,n}\log_{2}\rho_{n,n}\), the latter form holding in the eigenbasis of \(\rho\). In eq.(85), \(\rho_{A}\) denotes Alice's density matrix with Rob's states traced out.
The values of the individual entropies can be obtained from the spectra of the respective density matrices as follows \[S(\rho_{A})=1\, \tag{86}\] \[S(\rho_{R})=-\sum_{n=0}^{\infty}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\left(1+\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}\right)\log_{2}\biggl{[}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\left(1+\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}\right)\biggr{]}\, \tag{87}\] \[S(\rho_{AR})=-\sum_{n=0}^{\infty}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\left(1+\frac{n+1}{\cosh^{2}r_{\tilde{\omega},i}}\right)\log_{2}\biggl{[}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\left(1+\frac{n+1}{\cosh^{2}r_{\tilde{\omega},i}}\right)\biggr{]}\, \tag{88}\] where the arguments of the logarithms are the nonzero eigenvalues of \(\rho_{R}\) and \(\rho_{AR}\) respectively. Substituting eq.(s)(86-88) in eq.(85), we obtain the following relation \[I(\rho_{AR})=1-\sum_{n=0}^{\infty}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\left[\left(1+\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}\right)\log_{2}\biggl{[}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\left(1+\frac{n}{\sinh^{2}r_{\tilde{\omega},i}}\right)\biggr{]}-\left(1+\frac{n+1}{\cosh^{2}r_{\tilde{\omega},i}}\right)\log_{2}\biggl{[}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{2\cosh^{2}r_{\tilde{\omega},i}}\left(1+\frac{n+1}{\cosh^{2}r_{\tilde{\omega},i}}\right)\biggr{]}\right]\, \tag{89}\] which, on expanding to first order in \(\tilde{\omega}\) (writing \(r_{\tilde{\omega},i}\) in terms of \(\mathcal{R}_{i}\) and \(\mathcal{K}_{i}\), as in eq.(83)), yields the quantum corrected mutual information plotted in Fig.(2). Eq.(89) is also one of the main results of our paper. We shall now investigate the entanglement degradation for a quantum corrected black hole and compare it with that of the Schwarzschild black hole. Using the same parameters as before, we plot the mutual information vs \(\mathbf{r}\) in Fig.(2). In order to obtain Fig.(2), we have used the value of \(r_{\tilde{\omega},i}\) from eq.(71) (\(\sigma_{\omega_{i}}=r_{\tilde{\omega},i}\) in that equation) instead of eq.(72). It is straightforward to observe that the entanglement degradation becomes significant as the observer approaches the event horizons of the respective black holes. It is again important to notice that for the quantum corrected black hole the mutual information degrades at a slower rate. When the mutual information becomes 1, there is no distillable entanglement left between the two states. It would be a direct detection of quantum gravity signatures if, for an observer at the Schwarzschild radius, the mutual information does not drop to unity, implying that the entanglement has not degraded completely. It is very difficult to construct experimental scenarios in which the degradation of the mutual information is directly observed, but it may become possible in the future with advanced experimental setups. In the next section, we shall describe the entire setup as a quantum channel with a completely positive and trace preserving map and obtain the entanglement fidelity for this channel. Figure 1: Logarithmic negativity vs radial distance (of the observer) plot for a Schwarzschild and a quantum corrected black hole.
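The closed-form sums above are easy to evaluate numerically. The following Python sketch (ours, with a simple truncation of the infinite sums) computes the logarithmic negativity of eq.(80) and the mutual information of eq.(s)(85-88) as functions of the squeezing parameter \(s=r_{\tilde{\omega},i}\); it reproduces the limits \(N\to 1\), \(I\to 2\) as \(s\to 0\) (observer far away) and \(N\to 0\), \(I\to 1\) as \(s\to\infty\) (observer on the horizon).

```python
import math

def weights(s, nmax):
    """c_n = tanh^{2n}(s) / (2 cosh^2 s), for squeezing parameter s > 0."""
    return [math.tanh(s) ** (2 * n) / (2.0 * math.cosh(s) ** 2)
            for n in range(nmax)]

def log_negativity(s, nmax=4000):
    """Logarithmic negativity of eqs. (80)-(81), sum truncated at nmax."""
    th2, sh2, ch2 = math.tanh(s) ** 2, math.sinh(s) ** 2, math.cosh(s) ** 2
    lam = sum(c * math.sqrt((th2 + n / sh2) ** 2 + 4.0 / ch2)
              for n, c in enumerate(weights(s, nmax)))
    return math.log2(1.0 / (2.0 * ch2) + lam)

def mutual_information(s, nmax=4000):
    """I = S(rho_A) + S(rho_R) - S(rho_AR), eqs. (85)-(88), from the
    spectra {c_n (1 + n/sinh^2 s)} and {c_n (1 + (n+1)/cosh^2 s)}."""
    sh2, ch2 = math.sinh(s) ** 2, math.cosh(s) ** 2
    entropy = lambda p: -sum(q * math.log2(q) for q in p if q > 0.0)
    S_R = entropy([c * (1.0 + n / sh2)
                   for n, c in enumerate(weights(s, nmax))])
    S_AR = entropy([c * (1.0 + (n + 1) / ch2)
                    for n, c in enumerate(weights(s, nmax))])
    return 1.0 + S_R - S_AR

for s in (0.1, 0.5, 1.0, 2.0, 3.0):
    print(f"s = {s}: N = {log_negativity(s):.4f}, "
          f"I = {mutual_information(s):.4f}")
```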
## V Noisy quantum channel and entanglement fidelity We start with the initial density matrix (anti-Boulware states traced out) \[\begin{split}\rho_{AR}^{\mathcal{J}}&=|\phi\rangle\langle\phi|\\ &=\frac{1}{2}\left(|00\rangle\langle 00|+|00\rangle\langle 11|+|11\rangle\langle 00|+|11\rangle\langle 11|\right)\end{split} \tag{90}\] where \(|\phi\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\). We need to construct a map such that we obtain \(\rho_{AR}\) in eq.(75) from the above equation. We consider a map of the following form [28] \[\rho_{AR}=\mathscr{E}\left(\rho_{AR}^{\mathcal{J}}\right)=\sum_{n}\mathscr{S}_{n}\rho_{AR}^{\mathcal{J}}\mathscr{S}_{n}^{\dagger}=\sum_{n}\mathscr{S}_{n}|\phi\rangle\langle\phi|\mathscr{S}_{n}^{\dagger}. \tag{91}\] One can obtain the analytical form of \(\mathscr{S}_{n}\) as \[\mathscr{S}_{n}=\frac{1}{\sqrt{n!}}\frac{\tanh^{n}r_{\tilde{\omega},i}}{\cosh r_{\tilde{\omega},i}}(\text{sech}\,r_{\tilde{\omega},i})^{\tilde{N}_{A}}\otimes\left(\hat{a}_{B}^{\dagger}\right)^{n} \tag{92}\] with \(\tilde{N}_{A}\) being the number operator acting on Alice's Hilbert space (Hartle-Hawking states) and \(\hat{a}_{B}^{\dagger}\) being the raising operator for the states measured by Rob (Boulware states). Now the operator \(\mathscr{S}_{n}\) is an operator on the Hilbert space where the density matrix \(\rho_{AR}^{\mathcal{J}}\) is prepared. Hence the map \(\mathscr{E}\) is a positive map [28; 29]. It is also straightforward to check that \(\text{tr}(\rho_{AR}^{\mathcal{J}})=\text{tr}(\rho_{AR})\). Hence, the map \(\mathscr{E}\) is a CPTP map. Our final aim is to investigate how this quantum channel preserves the initial entanglement. For this, we need to calculate the entanglement fidelity, given by [28] \[\mathscr{F}_{\mathscr{E}}=\sum_{n=0}^{\infty}\text{tr}\left[\rho_{AR}^{\mathcal{J}}\mathscr{S}_{n}\right]\text{tr}\left[\rho_{AR}^{\mathcal{J}}\mathscr{S}_{n}^{\dagger}\right]. \tag{93}\] The analytical forms of the two traces are given by \[\begin{split}\text{tr}\left[\rho_{AR}^{\mathcal{J}}\mathscr{S}_{n}\right]&=\frac{\tanh^{n}r_{\tilde{\omega},i}}{2\cosh r_{\tilde{\omega},i}}\left(1+\frac{\sqrt{n+1}}{\cosh r_{\tilde{\omega},i}}\right)\delta_{n,0}\\ &=\frac{1}{2\cosh r_{\tilde{\omega},i}}\left(1+\frac{1}{\cosh r_{\tilde{\omega},i}}\right)\delta_{n,0}\\ &=\text{tr}\left[\rho_{AR}^{\mathcal{J}}\mathscr{S}_{n}^{\dagger}\right]\.\end{split} \tag{94}\] Using eq.(94) in eq.(93), we obtain the entanglement fidelity \[\begin{split}\mathscr{F}_{\mathscr{E}}&=\sum_{n=0}^{\infty}\text{tr}\left[\rho_{AR}^{\mathcal{J}}\mathscr{S}_{n}\right]\text{tr}\left[\rho_{AR}^{\mathcal{J}}\mathscr{S}_{n}^{\dagger}\right]\\ &=\sum_{n=0}^{\infty}\frac{\tanh^{2n}r_{\tilde{\omega},i}}{4\cosh^{2}r_{\tilde{\omega},i}}\left(1+\frac{\sqrt{n+1}}{\cosh r_{\tilde{\omega},i}}\right)^{2}\left(\delta_{n,0}\right)^{2}\\ &=\frac{1}{4\cosh^{2}r_{\tilde{\omega},i}}\left(1+\frac{1}{\cosh r_{\tilde{\omega},i}}\right)^{2}\\ &\simeq\mathscr{F}_{\mathscr{E}}^{Sch.}\left(1+\tilde{\omega}\mathcal{K}_{i}\sinh^{2}\mathcal{R}_{i}\left(1+\frac{1}{1+\cosh\mathcal{R}_{i}}\right)\right)\end{split} \tag{95}\] where \[\mathscr{F}_{\mathscr{E}}^{Sch.}=\frac{1}{4\cosh^{2}\mathcal{R}_{i}}\left(1+\frac{1}{\cosh\mathcal{R}_{i}}\right)^{2} \tag{96}\] denotes the entanglement fidelity for a Schwarzschild black hole.
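As a quick numerical illustration (ours), the closed form in eq.(96) is trivial to evaluate; the sketch below confirms that the entanglement fidelity starts at unity for vanishing squeezing (\(s\to 0\), Rob far away) and decays to zero as \(s\to\infty\) (Rob on the horizon).

```python
import math

def entanglement_fidelity(s):
    """Eq. (96): F = (1 + sech(s))^2 / (4 cosh^2(s)), with s the
    squeezing parameter r_{w,i}."""
    sech = 1.0 / math.cosh(s)
    return (1.0 + sech) ** 2 * sech ** 2 / 4.0

for s in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"s = {s}: F = {entanglement_fidelity(s):.4f}")
# s = 0 gives F = 1 (the channel preserves the maximally entangled state);
# F -> 0 as s grows, mirroring the behaviour shown in Fig. 3.
```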
In Fig.(3), we plot the entanglement fidelity vs the distance of the observer from \(r=0\). It is important to observe from Fig.(3) that near the event horizon of the black hole (which depicts the infinite acceleration limit of the flat spacetime case), the entanglement fidelity approaches zero, and the rate of degradation is slower for the quantum corrected black hole (compared to the Schwarzschild black hole), showing behaviour similar to that of the logarithmic negativity and the mutual information. Figure 3: Entanglement fidelity vs \(\mathbf{r}\) plot for a Schwarzschild and a quantum corrected black hole. Figure 2: Mutual information vs radial distance (of the observer) plot for a Schwarzschild and a quantum corrected black hole. ## VI Conclusion We investigate the phenomenon of entanglement degradation for a quantum corrected black hole, in the vicinity of the event horizon of the same. We observe that in the near horizon approximation, it is possible to write down any static and spherically symmetric metric in a Rindler form, which helps later in identifying three timelike Killing vectors and ultimately in identifying the vacuum modes and their analogy with the flat spacetime case. For the next part of our analysis, we obtain the logarithmic negativity for the quantum corrected black hole using the partial transpose criterion of the reduced density matrix and express it in terms of the Schwarzschild parameters. Then we plot the logarithmic negativity with respect to the change in the position of the observer \(\mathbf{r}\) for a quantum corrected black hole and compare it with that of the Schwarzschild black hole. We observe that the logarithmic negativity asymptotically reaches unity when the observer is sitting very far away from each of the black holes and attains a zero value for an observer sitting on the event horizon of the black hole. It is, however, important to note that while the logarithmic negativity vanishes for an observer sitting at the event horizon radius of the Schwarzschild black hole, it would still be non-zero there if the black hole carries underlying quantum gravity corrections. Next we calculate the mutual information for the Alice-Rob bipartite state for the quantum corrected black hole. We then plot the mutual information with respect to \(\mathbf{r}\) for both black holes and observe that, very near the event horizon radius, the mutual information drops from 2 to very close to unity, reaching unity when the observer is sitting on the event horizon of the black hole. Similar to the previous case, the mutual information falls at a slower rate for the quantum corrected black hole. This affirms that if a black hole has underlying quantum gravity corrections (which are almost impossible to notice for any observer) then, even if the observer is at the Schwarzschild radius, there will still be some distillable entanglement left. This observation may be considered an important quantum gravity signature which can be looked for. Finally, we consider the entire procedure as a quantum channel and obtain a completely positive trace preserving map which translates the initial stationary entangled state to a mixed state in the black hole spacetime. We finally calculate the entanglement fidelity to investigate how the quantum channel preserves entanglement.
We find that the entanglement fidelity degrades in the vicinity of the event horizon of the black hole and, as in the earlier cases, the rate of fall is slower for the quantum corrected black hole. It is then important to conclude that quantum gravity corrections delay entanglement degradation, and that the interesting physics occurs in the vicinity of the event horizon of the black hole. Our future plan involves a rigorous calculation of the Bogoliubov coefficients, obtaining the Hartle-Hawking and Boulware vacuum connection directly from the curved background rather than by analogy.
2307.15075
Manin triples associated to $n$-Lie bialgebras
In this paper, we study the Manin triples associated to $n$-Lie bialgebras. We introduce the concept of operad matrices for $n$-Lie bialgebras. In particular, studying a special case of operad matrices leads to the notion of local cocycle $n$-Lie bialgebras. Furthermore, we establish a one-to-one correspondence between the double of $n$-Lie bialgebras and Manin triples of $n$-Lie algebras.
Ying Chen, Chuangchuang Kang, Jiafeng Lü, Shizhuo Yu
2023-07-14T10:53:12Z
http://arxiv.org/abs/2307.15075v2
# Manin triples associated to \(n\)-Lie bialgebras ###### Abstract. In this paper, we study the Manin triples associated to \(n\)-Lie bialgebras. We develop the method of double constructions as well as operad matrices to make \(n\)-Lie bialgebras into Manin triples. Then, the related Manin triples lead to a natural construction of metric \(n\)-Lie algebras. Moreover, a one-to-one correspondence between the double of \(n\)-Lie bialgebras and Manin triples of \(n\)-Lie algebras is established. Key words and phrases: Manin triples, \(n\)-Lie bialgebras, double of \(n\)-Lie bialgebras, \(n\)-Lie algebras 2010 Mathematics Subject Classification: 17B62, 17A42, 17B37, 17B60 ## 1. Introduction The aim of this paper is to extend the Manin triple structure from Lie algebras to \(n\)-Lie algebras and derive some applications, with the goal of establishing a one-to-one correspondence between the double of \(n\)-Lie bialgebras and Manin triples of \(n\)-Lie algebras. Manin triple structures naturally induce a class of quasi-triangular \(r\)-matrices, which provide a class of examples of Poisson manifolds and Poisson homogeneous spaces in Lie theory [19]. In 1983, Drinfeld [9] introduced the notion of Lie bialgebras, which is well established as the infinitesimalisation of quantum groups [18]. A Lie bialgebra consists of a Lie algebra \(\mathfrak{g}\) and a compatible Lie cobracket \(\delta_{\mathfrak{g}}\), such that the cobracket induces a Lie bracket on the dual space and satisfies the 1-cocycle condition [9, 17]. Moreover, Lie bialgebras exponentiate to Poisson-Lie groups, which have attracted considerable interest from Poisson and symplectic geometers [7]. In fact, if \((\mathfrak{g},\delta_{\mathfrak{g}})\) is a Lie bialgebra, then there exists a canonical Lie bialgebra structure on \(\mathfrak{g}\oplus\mathfrak{g}^{*}\) induced by Manin triples of Lie algebras. The Lie bialgebra on \(\mathfrak{g}\oplus\mathfrak{g}^{*}\) is called the double Lie bialgebra of \(\mathfrak{g}\), and it can be used to construct examples of Poisson manifolds [16]. The usual interpretation of the 1-cocycle condition for Lie bialgebras is that \(\delta_{\mathfrak{g}}\) is a 1-cocycle of \(\mathfrak{g}\) associated to the representation \(\mathrm{ad}\otimes 1+1\otimes\mathrm{ad}\) on the tensor space \(\mathfrak{g}\otimes\mathfrak{g}\). Another way to interpret the 1-cocycle condition is to decompose \(\delta_{\mathfrak{g}}\) into \(\delta_{\mathfrak{g}}^{1}\) and \(\delta_{\mathfrak{g}}^{2}\). Here, \(\delta_{\mathfrak{g}}^{1}\) and \(\delta_{\mathfrak{g}}^{2}\) are 1-cocycles of \(\mathfrak{g}\) associated to \(\mathrm{ad}\otimes 1\) and \(1\otimes\mathrm{ad}\), respectively, satisfying a compatibility condition [4]. This equivalent interpretation leads to the local cocycle condition and can be understood from an operadic point of view [15]. It is natural to extend such structures to \(n\)-Lie bialgebras, that is, to consider the operad matrices of \(n\)-Lie bialgebras. In 1985, Filippov [11] introduced the definition of \(n\)-Lie algebras (also known as Filippov algebras). His paper considered \(n\)-ary multilinear skew-symmetric operations that satisfy a generalized Jacobi identity; such structures appear in many fields of mathematics and mathematical physics [1]. In particular, 3-Lie algebras play an important role in string theory [2, 8, 12, 13, 14]. In 2016, [4] introduced two types of 3-Lie bialgebras, whose compatibility conditions are given by local cocycles and double constructions, respectively.
The notion of the 3-Lie classical Yang-Baxter equation (3-Lie CYBE) is derived from local cocycle 3-Lie bialgebras, and the solutions to this equation give rise to coboundary local cocycle 3-Lie bialgebras. Meanwhile, 3-pre-Lie algebras give rise to solutions of the 3-Lie CYBE. In 2017, [10] classified the double construction 3-Lie bialgebras for complex 3-Lie algebras in dimensions 3 and 4 and provided the corresponding pseudo-metric 3-Lie algebras of dimension 8. In [5], \(n\)-Lie coalgebras of rank \(r\) are defined, their structures are discussed, and \(n\)-Lie bialgebras are introduced and their structures investigated. However, there is currently no known coboundary theory or structure for the double space \(\mathfrak{g}\oplus\mathfrak{g}^{\ast}\) of \(n\)-Lie bialgebras. Inspired by the notions of local cocycle 3-Lie bialgebras and double construction 3-Lie bialgebras, it is natural to consider the analogue of Manin triples associated to \(n\)-Lie bialgebras.

The paper is organized as follows. Theorem 4.23 in Section 4 is the main result, and Sections 2 and 3 prepare for it. Concretely, in Section 2 we introduce some concepts and known results about \(n\)-Lie algebras that will be used later. In Section 3 we summarize the coboundary theory of \(n\)-Lie algebras; we also define \(n\)-Lie bialgebras and show that each \(n\)-Lie bialgebra has a dual \(n\)-Lie bialgebra whose dual is the \(n\)-Lie bialgebra itself. In Section 4 we define an operad matrix of \(n\)-Lie bialgebras and a local cocycle \(n\)-Lie bialgebra, and we establish a one-to-one correspondence between the double of \(n\)-Lie bialgebras and Manin triples of \(n\)-Lie algebras.

Throughout this paper, all algebras are finite-dimensional and over a field \(F\) of characteristic zero.

## 2. Preliminary results on \(n\)-Lie algebras

In this section, we give some preliminaries and basic results on \(n\)-Lie algebras from [11].

**Definition 2.1**.: _An \(n\)_**-Lie algebra** _is a vector space \(\mathfrak{g}\) with a skew-symmetric \(n\)-linear map \([\cdot,\cdots,\cdot]:\otimes^{n}\mathfrak{g}\to\mathfrak{g}\) such that the following Filippov-Jacobi identity holds, for all \(x_{i},y_{i}\in\mathfrak{g},1\leq i\leq n\),_

\[[x_{1},\cdots,x_{n-1},[y_{1},\cdots,y_{n}]]=\sum_{i=1}^{n}[y_{1},\cdots,y_{i-1},[x_{1},\cdots,x_{n-1},y_{i}],y_{i+1},\cdots,y_{n}]. \tag{1}\]

The Filippov-Jacobi identity can be described in another way. For \(X=(x_{1},\cdots,x_{n-1})\in\wedge^{n-1}\mathfrak{g}\), the operator

\[\operatorname{ad}(X):\mathfrak{g}\to\mathfrak{g},\quad\operatorname{ad}(X)(y):=[x_{1},\cdots,x_{n-1},y],\quad\forall\ y\in\mathfrak{g},\]

is a derivation in the sense that

\[\operatorname{ad}(X)([y_{1},\cdots,y_{n}])=\sum_{i=1}^{n}[y_{1},\cdots,y_{i-1},\operatorname{ad}(X)(y_{i}),y_{i+1},\cdots,y_{n}].
\tag{2}\]

**Definition 2.2**.: _A_ **representation** _of an \(n\)-Lie algebra \((\mathfrak{g},[\cdot,\cdots,\cdot])\) on a vector space \(M\) is a skew-symmetric linear map \(\rho:\wedge^{n-1}\mathfrak{g}\to\mathfrak{gl}(M)\) satisfying_

\[[\rho(x_{1},\cdots,x_{n-1}),\rho(y_{1},\cdots,y_{n-1})]=\sum_{i=1}^{n-1}\rho(y_{1},\cdots,y_{i-1},[x_{1},\cdots,x_{n-1},y_{i}],y_{i+1},\cdots,y_{n-1}), \tag{3}\]

\[\rho([x_{1},\cdots,x_{n}],y_{1},\cdots,y_{n-2})=\sum_{i=1}^{n}(-1)^{n-i}\rho(x_{1},\cdots,\hat{x}_{i},\cdots,x_{n})\rho(x_{i},y_{1},\cdots,y_{n-2}), \tag{4}\]

_for all \(x_{i},y_{i}\in\mathfrak{g},\ 1\leq i\leq n\), where \(\hat{x}_{i}\) means that the element \(x_{i}\) is omitted._

We denote the representation by the pair \((M,\rho)\), and say that \(M\) is a \(\mathfrak{g}\)-module as well. When \(\rho=\operatorname{ad}:\wedge^{n-1}\mathfrak{g}\to\mathfrak{gl}(\mathfrak{g})\) is given by

\[\operatorname{ad}(x_{1},\cdots,x_{n-1})(x_{n})=[x_{1},\cdots,x_{n-1},x_{n}],\quad\forall\ x_{1},\cdots,x_{n}\in\mathfrak{g},\]

the pair \((\mathfrak{g},\operatorname{ad})\) is a \(\mathfrak{g}\)-module, called the adjoint module of \(\mathfrak{g}\).

**Definition 2.3**.: _Let \(\mathfrak{g}\) be an \(n\)-Lie algebra over the field \(F\) and let \(A_{1},\cdots,A_{n}\) be subspaces of \(\mathfrak{g}\). Denote by \([A_{1},A_{2},\cdots,A_{n}]\) the subspace of \(\mathfrak{g}\) generated by all vectors \([x_{1},x_{2},\cdots,x_{n}]\), where \(x_{i}\in A_{i}\) for \(i=1,2,\cdots,n\). Let \(\mathfrak{h}\) be a subspace of \(\mathfrak{g}\). If \([\mathfrak{h},\mathfrak{h},\cdots,\mathfrak{h}]_{\mathfrak{g}}\subset\mathfrak{h}\), then \(\mathfrak{h}\) is called an \(n\)**-Lie subalgebra** of \(\mathfrak{g}\)._

**Proposition 2.4**.: _Let \((\mathfrak{g},[\cdot,\cdots,\cdot])\) be an \(n\)-Lie algebra. Then we have_

\[\sum_{i=1}^{n-1}[y_{1},\cdots,y_{i-1},[x_{1},\cdots,x_{n-1},y_{i}],y_{i+1},\cdots,y_{n-1},y_{n}]+\sum_{j=1}^{n-1}[x_{1},\cdots,x_{j-1},[y_{1},\cdots,y_{n-1},x_{j}],x_{j+1},\cdots,x_{n-1},y_{n}]=0. \tag{5}\]

Proof.: By (1), we have

\[[x_{1},\cdots,x_{n-1},[y_{1},\cdots,y_{n}]] = \sum_{i=1}^{n-1}[y_{1},\cdots,y_{i-1},[x_{1},\cdots,x_{n-1},y_{i}],y_{i+1},\cdots,y_{n}]+[y_{1},\cdots,y_{n-1},[x_{1},\cdots,x_{n-1},y_{n}]]\]
\[= \sum_{i=1}^{n-1}[y_{1},\cdots,y_{i-1},[x_{1},\cdots,x_{n-1},y_{i}],y_{i+1},\cdots,y_{n-1},y_{n}]+\sum_{j=1}^{n-1}[x_{1},\cdots,x_{j-1},[y_{1},\cdots,y_{n-1},x_{j}],x_{j+1},\cdots,x_{n-1},y_{n}]+[x_{1},\cdots,x_{n-1},[y_{1},\cdots,y_{n}]],\]

implying (5). 

Let \(V\) be a vector space and \(V^{*}\) its dual space. For each positive integer \(k\), we identify the tensor product \(\otimes^{k}V\) with the space of multi-linear maps \(\underbrace{V^{*}\times\cdots\times V^{*}}_{k\text{-times}}\to F\), such that

\[\langle\xi_{1}\otimes\cdots\otimes\xi_{n},v_{1}\otimes\cdots\otimes v_{n}\rangle=\langle\xi_{1},v_{1}\rangle\cdots\langle\xi_{n},v_{n}\rangle,\quad\forall\ \xi_{1},\cdots,\xi_{n}\in V^{*},v_{1},\cdots,v_{n}\in V,\]

where \(\langle\xi_{i},v_{i}\rangle=\xi_{i}(v_{i})\), \(1\leq i\leq n\). For \(v_{1},\cdots,v_{k}\in V\), define

\[v_{1}\wedge v_{2}\wedge\cdots\wedge v_{k}=\sum_{\sigma\in S_{k}}sgn(\sigma)v_{\sigma(1)}\otimes v_{\sigma(2)}\otimes\cdots\otimes v_{\sigma(k)}\in\wedge^{k}V\subset\otimes^{k}V.\]
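To make Definition 2.1 concrete, the following minimal numerical sketch (ours, not from [11]) checks the Filippov-Jacobi identity (1) for Filippov's standard four-dimensional simple 3-Lie algebra: the ternary cross product on \(\mathbb{R}^{4}\), determined by \(\langle[x,y,z],w\rangle=\det(x,y,z,w)\).

```python
# A minimal numerical check of the Filippov-Jacobi identity (1), n = 3.
import numpy as np

def bracket(x, y, z):
    """Ternary cross product on R^4: <[x, y, z], w> = det(x, y, z, w)."""
    return np.array([np.linalg.det(np.array([x, y, z, e])) for e in np.eye(4)])

rng = np.random.default_rng(0)
x1, x2, y1, y2, y3 = rng.standard_normal((5, 4))

lhs = bracket(x1, x2, bracket(y1, y2, y3))
rhs = (bracket(bracket(x1, x2, y1), y2, y3)
       + bracket(y1, bracket(x1, x2, y2), y3)
       + bracket(y1, y2, bracket(x1, x2, y3)))
assert np.allclose(lhs, rhs)  # the Filippov-Jacobi identity (1) holds
```

The bracket is skew-symmetric because the determinant is alternating in its rows, so this algebra satisfies all the requirements of Definition 2.1.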
## 3. \(n\)-Lie bialgebras

In this section, we first introduce the coboundary theory of \(n\)-Lie algebras. We then give the notions of \(n\)-Lie bialgebras and the coadjoint representation of \(n\)-Lie algebras. Finally, we show that each \(n\)-Lie bialgebra has a dual \(n\)-Lie bialgebra whose dual is the \(n\)-Lie bialgebra itself.

### \(n\)-Lie algebra cohomology

Let \(\mathfrak{g}\) be an \(n\)-Lie algebra over the field \(F\) and let \((M,\rho)\) be a representation of \(\mathfrak{g}\) on \(M\). For all \(X^{1},\cdots,X^{n-1}\in\mathfrak{g}\), denote the element \(X=X^{1}\wedge X^{2}\wedge\cdots\wedge X^{n-1}\) of \(\wedge^{n-1}\mathfrak{g}\) by \(X=(X^{1},\cdots,X^{n-1})\). The representation \(\rho\) gives an action of \(\mathfrak{g}\) on \(M\): for \(a\in M\) we write \(X.a=\rho(X^{1},\cdots,X^{n-1})a\). For example, the action of \(\mathfrak{g}\) on itself is the adjoint representation \((\mathfrak{g},\mathrm{ad})\). More generally, \(\mathfrak{g}\) acts on any tensor power of \(\mathfrak{g}\) in the following way: for decomposable elements \(y_{1}\otimes\cdots\otimes y_{p}\) in \(\otimes^{p}\mathfrak{g}=\mathfrak{g}\otimes\cdots\otimes\mathfrak{g}\) (\(p\) times),

\[X.(y_{1}\otimes\cdots\otimes y_{p}):=\text{ad}^{(p)}_{x_{1},\cdots,x_{n-1}}(y_{1}\otimes\cdots\otimes y_{p})=\text{ad}_{x_{1},\cdots,x_{n-1}}y_{1}\otimes y_{2}\otimes\cdots\otimes y_{p}+y_{1}\otimes\text{ad}_{x_{1},\cdots,x_{n-1}}y_{2}\otimes y_{3}\otimes\cdots\otimes y_{p}+\cdots+y_{1}\otimes y_{2}\otimes\cdots\otimes y_{p-1}\otimes\text{ad}_{x_{1},\cdots,x_{n-1}}y_{p}.\]

**Definition 3.1**.: _Let \(\mathfrak{g}\) be an \(n\)-Lie algebra and let \((M,\rho)\) be a representation of \(\mathfrak{g}\). The set of \(k\)_**-cochains** _on \(\mathfrak{g}\) with values in \(M\) is_

\[C^{k}(\mathfrak{g};M):=\ \{\text{ linear maps }u:\underbrace{\wedge^{n-1}\mathfrak{g}\otimes\cdots\otimes\wedge^{n-1}\mathfrak{g}}_{(k-1)\text{-times}}\otimes\ \mathfrak{g}\to M\ \}.\]

A \(1\)-cochain on \(\mathfrak{g}\) with values in \(M\) is just a linear map \(u\) from \(\mathfrak{g}\) to \(M\), i.e.

\[k=1,\ u:\mathfrak{g}\to M.\]

For all \(X=X^{1}\wedge\cdots\wedge X^{n-1}\in\wedge^{n-1}\mathfrak{g},\ z\in\mathfrak{g}\), the coboundary operator \(\delta:C^{1}(\mathfrak{g};M)\to C^{2}(\mathfrak{g};M)\) of a \(1\)-cochain \(u\) is given by

\[\delta u(X,z) = \rho(X^{1},\cdots,X^{n-1})u(z)+\sum_{i=1}^{n-1}(-1)^{i+1}\rho(X^{1},\cdots,\hat{X}^{i},\cdots,X^{n-1},z)u(X^{i})-u([X^{1},\cdots,X^{n-1},z]).\]

We can deduce that for any \(1\)-cochain \(u\) on \(\mathfrak{g}\) with values in \(M\),

\[\delta(\delta u)=0.\]

In fact, for any \(X_{1},X_{2}\in\wedge^{n-1}\mathfrak{g}\), \(z\in\mathfrak{g}\),

\[(\delta(\delta u))(X_{1},X_{2},z) = \rho(X_{1}^{1},\cdots,X_{1}^{n-1})\delta u(X_{2},z)-\rho(X_{2}^{1},\cdots,X_{2}^{n-1})\delta u(X_{1},z)+\sum_{i=1}^{n-1}(-1)^{n-i+1}\rho(X_{2}^{1},\cdots,\hat{X_{2}^{i}},\cdots,X_{2}^{n-1},z)\delta u(X_{1},X_{2}^{i})-\delta u(X_{2},[X_{1}^{1},\cdots,X_{1}^{n-1},z])+\delta u(X_{1},[X_{2}^{1},\cdots,X_{2}^{n-1},z])-\sum_{m=1}^{n-1}(-1)^{m+1}\delta u([X_{1}^{1},\cdots,X_{1}^{n-1},X_{2}^{m}]\wedge X_{2}^{1}\wedge\cdots\wedge\hat{X_{2}^{m}}\wedge\cdots\wedge X_{2}^{n-1},z).\]

Since \(X\mapsto\rho(X)\) is a representation of \(\mathfrak{g}\) on \(M\), it follows that \(\delta(\delta u)=0\).
**Definition 3.2**.: _The coboundary of a \(k\)-cochain \(u\) on \(\mathfrak{g}\) with values in \(M\) is the \((k+1)\)-cochain \(\delta u\), where \(\delta:C^{k}(\mathfrak{g};M)\to C^{k+1}(\mathfrak{g};M)\) is such that for all \(X_{1},X_{2},\cdots,X_{k}\in\wedge^{n-1}\mathfrak{g}\), \(z\in\mathfrak{g}\),_

\[\delta u(X_{1},X_{2},\cdots,X_{k},z)\]
\[= \sum_{i=1}^{k}(-1)^{i+1}\rho(X_{i}^{1},\cdots,X_{i}^{n-1})u(X_{1},\cdots,\hat{X_{i}},\cdots,X_{k},z)\]
\[+ \sum_{i=1}^{n-1}(-1)^{n+k-i+1}\rho(X_{k}^{1},\cdots,\hat{X_{k}^{i}},\cdots,X_{k}^{n-1},z)u(X_{1},\cdots,X_{k-1},X_{k}^{i})\]
\[+ \sum_{i=1}^{k}(-1)^{i}u(X_{1},\cdots,\hat{X_{i}},\cdots,X_{k},[X_{i}^{1},\cdots,X_{i}^{n-1},z])\]
\[+ \sum_{1\leq i<j\leq k}(-1)^{i}u(X_{1},\cdots,\hat{X_{i}},\cdots,X_{j-1},\sum_{m=1}^{n-1}[X_{i}^{1},\cdots,X_{i}^{n-1},X_{j}^{m}]\wedge X_{j}^{1}\wedge\cdots\wedge\hat{X_{j}^{m}}\wedge\cdots\wedge X_{j}^{n-1},\cdots,X_{k},z),\]

_where \(\hat{X}_{i}\) indicates that the element \(X_{i}\) is omitted._

**Proposition 3.3**.: _[_6_]_ _For any \(k\)-cochain \(u\), \(k\geq 1\), \(\delta(\delta u)=0\)._

This is a standard result, which generalizes the property proved above for \(k=1\).

**Definition 3.4**.: _A \(k\)-cochain \(u\) is called a \(k\)_**-cocycle** _if it satisfies_

\[\delta u=0.\]

_A \(k\)-cochain \(u\) is called a \(k\)_**-coboundary** _(\(k\geq 2\)) if there exists a \((k-1)\)-cochain \(v\) such that_

\[u=\delta v.\]

By Proposition 3.3, any \(k\)-coboundary is a \(k\)-cocycle. The quotient of the vector space of \(k\)-cocycles by the vector space of \(k\)-coboundaries is called the \(k\)-th cohomology vector space of \(\mathfrak{g}\) with values in \(M\). See [20] for more details.

### \(n\)-Lie bialgebras

Now assume that \(\mathfrak{g}\) is an \(n\)-Lie algebra and that \(\gamma\) is a linear map from \(\mathfrak{g}\) to \(\otimes^{n}\mathfrak{g}\) whose transpose is denoted by \({}^{t}\gamma:\otimes^{n}\mathfrak{g}^{*}\rightarrow\mathfrak{g}^{*}\). (If \(\mathfrak{g}\) is infinite-dimensional, \(\otimes^{n}\mathfrak{g}^{*}\) is a subspace of \((\otimes^{n}\mathfrak{g})^{*}\), and what we consider is in fact the restriction of the transpose of \(\gamma\).) Recall that a linear map on \(\otimes^{n}\mathfrak{g}^{*}\) can be identified with an \(n\)-linear map on \(\mathfrak{g}^{*}\).

**Definition 3.5**.: _An \(n\)_**-Lie bialgebra** _is an \(n\)-Lie algebra \(\mathfrak{g}\) with a linear map \(\gamma:\mathfrak{g}\rightarrow\otimes^{n}\mathfrak{g}\) such that_

(i) \({}^{t}\gamma:\otimes^{n}\mathfrak{g}^{*}\rightarrow\mathfrak{g}^{*}\) _defines an_ \(n\)_-Lie bracket on_ \(\mathfrak{g}^{*}\)_, i.e.,_ \({}^{t}\gamma\) _is a skew-symmetric_ \(n\)_-linear map on_ \(\mathfrak{g}^{*}\) _satisfying the Filippov-Jacobi identity;_

(ii) \(\gamma\) _is a_ \(1\)_-cocycle on_ \(\mathfrak{g}\) _with values in_ \(\otimes^{n}\mathfrak{g}\)_, where_ \(\mathfrak{g}\) _acts on_ \(\otimes^{n}\mathfrak{g}\) _by the adjoint representation_ \(\mathrm{ad}^{(n)}\)_._

Condition (ii) means that the \(2\)-cochain \(\delta\gamma\) vanishes, i.e., for all \(x_{1},\cdots,x_{n}\in\mathfrak{g}\),

\[\gamma([x_{1},\cdots,x_{n}])=\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{(n)}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}(\gamma(x_{i})). \tag{6}\]
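As a concrete illustration (a toy example of ours, not taken from the paper), the following sketch checks the 1-cocycle condition (6) numerically for \(n=2\): the two-dimensional Lie algebra with \([e_{1},e_{2}]=e_{1}\), together with the cobracket \(\gamma(e_{1})=0\), \(\gamma(e_{2})=e_{1}\otimes e_{2}-e_{2}\otimes e_{1}\).

```python
# A toy check of the 1-cocycle condition (6) for n = 2 (ordinary Lie bialgebra).
import numpy as np

dim = 2
# structure constants: [e_i, e_j] = sum_k c[i,j,k] e_k, here [e1, e2] = e1
c = np.zeros((dim, dim, dim)); c[0, 1, 0], c[1, 0, 0] = 1.0, -1.0
# cobracket: gamma(e_i) = sum_{j,k} g[i,j,k] e_j (x) e_k
g = np.zeros((dim, dim, dim)); g[1, 0, 1], g[1, 1, 0] = 1.0, -1.0

def ad_matrix(x):
    """Matrix of ad_x: (ad_x)_{kj} = sum_i x_i c[i,j,k]."""
    return np.einsum('i,ijk->kj', x, c)

def ad2(x, t):
    """ad_x (x) 1 + 1 (x) ad_x acting on a 2-tensor t."""
    A = ad_matrix(x)
    return np.einsum('kj,jl->kl', A, t) + np.einsum('lj,kj->kl', A, t)

def gamma(v):
    return np.einsum('i,ijk->jk', v, g)

def bracket(x, y):
    return np.einsum('i,j,ijk->k', x, y, c)

e1, e2 = np.eye(dim)
lhs = gamma(bracket(e1, e2))
rhs = ad2(e1, gamma(e2)) - ad2(e2, gamma(e1))   # (6) for n = 2
assert np.allclose(lhs, rhs)
```

The same toy data is reused in later sketches; the dual bracket it induces via (7)-(8) is \([e_{1}^{*},e_{2}^{*}]_{\mathfrak{g}^{*}}=e_{2}^{*}\).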
Let \([\cdot,\cdots,\cdot]_{\mathfrak{g}^{*}}:\otimes^{n}\mathfrak{g}^{*}\rightarrow\mathfrak{g}^{*}\) be the \(n\)-Lie bracket defined by \(\gamma\); that is, denote \({}^{t}\gamma:\otimes^{n}\mathfrak{g}^{*}\rightarrow\mathfrak{g}^{*}\) by

\[[\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}}={}^{t}\gamma(\xi_{1}\otimes\cdots\otimes\xi_{n}),\quad\forall\ \xi_{1},\cdots,\xi_{n}\in\mathfrak{g}^{*}. \tag{7}\]

Then by Definition 3.5, for all \(x\in\mathfrak{g}\),

\[\langle\ [\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}},x\ \rangle=\langle\ \gamma(x),\xi_{1}\otimes\cdots\otimes\xi_{n}\ \rangle. \tag{8}\]

Condition (i) is equivalent to the following two identities:

\[[\xi_{\sigma(1)},\cdots,\xi_{\sigma(n)}]_{\mathfrak{g}^{*}}=sgn(\sigma)[\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}},\]

\[[\xi_{1},\cdots,\xi_{n-1},[\eta_{1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}]_{\mathfrak{g}^{*}}=\sum_{i=1}^{n}[\eta_{1},\cdots,\eta_{i-1},[\xi_{1},\cdots,\xi_{n-1},\eta_{i}]_{\mathfrak{g}^{*}},\eta_{i+1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}.\]

An alternate way of writing (6) is

\[\langle\ [\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}},[x_{1},\cdots,x_{n}]\ \rangle=\sum_{i=1}^{n}\langle\ \xi_{1}\otimes\cdots\otimes\xi_{n},(\mathrm{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}\otimes 1\otimes\cdots\otimes 1+1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}\otimes 1\otimes\cdots\otimes 1+\cdots+1\otimes\cdots\otimes 1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}})(\gamma(x_{i}))\ \rangle.\]

Using Sweedler's notation, write \(\gamma(x_{i})=z_{1}\otimes z_{2}\otimes\cdots\otimes z_{n}\); then

\[(\mathrm{ad}_{x_{1},\cdots,x_{n-1}}\otimes 1\otimes\cdots\otimes 1+\cdots+1\otimes\cdots\otimes 1\otimes\mathrm{ad}_{x_{1},\cdots,x_{n-1}})(z_{1}\otimes z_{2}\otimes\cdots\otimes z_{n})=[x_{1},\cdots,x_{n-1},z_{1}]\otimes z_{2}\otimes\cdots\otimes z_{n}+z_{1}\otimes[x_{1},\cdots,x_{n-1},z_{2}]\otimes z_{3}\otimes\cdots\otimes z_{n}+\cdots+z_{1}\otimes z_{2}\otimes\cdots\otimes[x_{1},\cdots,x_{n-1},z_{n}].\]

### The coadjoint representation of \(n\)-Lie algebras

Now we introduce the definition of the coadjoint representation of \(n\)-Lie algebras on the dual vector space.

**Proposition 3.6**.: _Let \(\mathfrak{g}\) be a finite-dimensional \(n\)-Lie algebra and \(\mathfrak{g}^{*}\) its dual vector space. Set_

\[\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}=-^{t}(\mathrm{ad}_{x_{1},\cdots,x_{n-1}}):\wedge^{n-1}\mathfrak{g}\to End(\mathfrak{g}^{*}),\quad\forall\ x_{1},\cdots,x_{n-1}\in\mathfrak{g}, \tag{9}\]

_i.e., \(\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\) is the endomorphism of \(\mathfrak{g}^{*}\) satisfying_

\[\langle\ \xi,\mathrm{ad}_{x_{1},\cdots,x_{n-1}}x\ \rangle=-\langle\ \mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\xi,x\ \rangle,\quad\forall\ x\in\mathfrak{g},\xi\in\mathfrak{g}^{*}.
\tag{10}\]

_Then \((\mathfrak{g}^{*},\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}})\) is a representation of \(\mathfrak{g}\) on \(\mathfrak{g}^{*}\)._

Proof.: By (5) and (10), for all \(x_{1},\cdots,x_{n},\ y_{1},\cdots,y_{n-1},\ z\in\mathfrak{g},\ \xi\in\mathfrak{g}^{*}\), we have

\[\langle\ \mathrm{ad}^{*}_{[x_{1},\cdots,x_{n}],y_{1},\cdots,y_{n-2}}\xi,z\ \rangle\]
\[= -\langle\ \xi,[[x_{1},\cdots,x_{n}],y_{1},\cdots,y_{n-2},z]\ \rangle\]
\[= (-1)^{n}\langle\ \xi,[y_{1},\cdots,y_{n-2},z,[x_{1},\cdots,x_{n}]]\ \rangle\]
\[= (-1)^{n}\sum_{i=1}^{n}\langle\ \xi,[x_{1},\cdots,x_{i-1},[y_{1},\cdots,y_{n-2},z,x_{i}],x_{i+1},\cdots,x_{n}]\ \rangle\]
\[= \sum_{i=1}^{n}(-1)^{n-i}\langle\ \xi,[x_{i},y_{1},\cdots,y_{n-2},[x_{1},\cdots,\hat{x}_{i},\cdots,x_{n},z]]\ \rangle\]
\[= \langle\ \sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}\mathrm{ad}^{*}_{x_{i},y_{1},\cdots,y_{n-2}}\xi,z\ \rangle.\]

And

\[\langle\ \mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\mathrm{ad}^{*}_{y_{1},\cdots,y_{n-1}}\xi,z\ \rangle-\langle\ \mathrm{ad}^{*}_{y_{1},\cdots,y_{n-1}}\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\xi,z\ \rangle\]
\[= \langle\ \xi,[y_{1},\cdots,y_{n-1},[x_{1},\cdots,x_{n-1},z]]\ \rangle-\langle\ \xi,[x_{1},\cdots,x_{n-1},[y_{1},\cdots,y_{n-1},z]]\ \rangle\]
\[= \langle\xi,\sum_{i=1}^{n-1}[x_{1},\cdots,x_{i-1},[y_{1},\cdots,y_{n-1},x_{i}],x_{i+1},\cdots,x_{n-1},z]\rangle\]
\[= \langle\ \xi,-\sum_{i=1}^{n-1}[y_{1},\cdots,y_{i-1},[x_{1},\cdots,x_{n-1},y_{i}],y_{i+1},\cdots,y_{n-1},z]\ \rangle\]
\[= \langle\ \sum_{i=1}^{n-1}\mathrm{ad}^{*}_{y_{1},\cdots,y_{i-1},[x_{1},\cdots,x_{n-1},y_{i}],y_{i+1},\cdots,y_{n-1}}\xi,z\ \rangle.\]

Thus, the following two equalities hold:

\[\mathrm{ad}^{*}_{[x_{1},\cdots,x_{n}],y_{1},\cdots,y_{n-2}} = \sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}\mathrm{ad}^{*}_{x_{i},y_{1},\cdots,y_{n-2}},\]
\[\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\mathrm{ad}^{*}_{y_{1},\cdots,y_{n-1}}-\mathrm{ad}^{*}_{y_{1},\cdots,y_{n-1}}\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}} = \sum_{i=1}^{n-1}\mathrm{ad}^{*}_{y_{1},\cdots,y_{i-1},[x_{1},\cdots,x_{n-1},y_{i}],y_{i+1},\cdots,y_{n-1}}.\]

Therefore, \((\mathfrak{g}^{*},\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}})\) is a representation of \(\mathfrak{g}\) on \(\mathfrak{g}^{*}\). 

**Definition 3.7**.: _The representation \((\mathfrak{g}^{*},\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}})\) is called the_ **coadjoint representation** _of \(\mathfrak{g}\)._
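To illustrate Proposition 3.6, the following numerical sketch (ours) reuses the ternary cross product on \(\mathbb{R}^{4}\) from Section 2 and verifies identity (3) for the maps \(\mathrm{ad}^{*}_{x_{1},x_{2}}=-(\mathrm{ad}_{x_{1},x_{2}})^{T}\), written in the dual basis.

```python
# Numerical sketch of Proposition 3.6 for n = 3 on R^4.
import numpy as np

def bracket(x, y, z):
    """Ternary cross product on R^4: <[x, y, z], w> = det(x, y, z, w)."""
    return np.array([np.linalg.det(np.array([x, y, z, e])) for e in np.eye(4)])

def ad(x1, x2):
    """Matrix of ad_{x1,x2} in the basis e1..e4 (columns are images)."""
    return np.column_stack([bracket(x1, x2, e) for e in np.eye(4)])

def ad_star(x1, x2):
    return -ad(x1, x2).T   # the coadjoint action (9) in the dual basis

rng = np.random.default_rng(1)
x1, x2, y1, y2 = rng.standard_normal((4, 4))

# identity (3) for the representation rho = ad*:
lhs = ad_star(x1, x2) @ ad_star(y1, y2) - ad_star(y1, y2) @ ad_star(x1, x2)
rhs = ad_star(bracket(x1, x2, y1), y2) + ad_star(y1, bracket(x1, x2, y2))
assert np.allclose(lhs, rhs)
```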
### The dual of \(n\)-Lie bialgebras

Let \((\mathfrak{g}^{*},\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}})\) be the coadjoint representation of \(\mathfrak{g}\). By (10), for all \(\xi_{1},\cdots,\xi_{n}\in\mathfrak{g}^{*}\) and \(y_{1}\otimes\cdots\otimes y_{n}\in\otimes^{n}\mathfrak{g}\), we have

\[\langle\;\xi_{1}\otimes\cdots\otimes\xi_{n},\,\mathrm{ad}_{x_{1},\cdots,x_{n-1}}^{(n)}(y_{1}\otimes\cdots\otimes y_{n})\;\rangle=-\langle\;\mathrm{ad}_{x_{1},\cdots,x_{n-1}}^{*(n)}(\xi_{1}\otimes\cdots\otimes\xi_{n}),y_{1}\otimes\cdots\otimes y_{n}\;\rangle. \tag{11}\]

Let \((\mathfrak{g},\gamma)\) be an \(n\)-Lie bialgebra and let \([\cdot,\cdots,\cdot]_{\mathfrak{g}^{*}}:\otimes^{n}\mathfrak{g}^{*}\to\mathfrak{g}^{*}\) be the \(n\)-Lie bracket defined by \(\gamma\). By (6) and (11), for all \(\xi_{1},\cdots,\xi_{n}\in\mathfrak{g}^{*}\), \(x_{1},\cdots,x_{n}\in\mathfrak{g}\), we have

\[\langle\;[\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}},[x_{1},\cdots,x_{n}]\;\rangle\]
\[= \langle\;\xi_{1}\otimes\cdots\otimes\xi_{n},\gamma([x_{1},\cdots,x_{n}])\;\rangle\]
\[= \langle\;\xi_{1}\otimes\cdots\otimes\xi_{n},\,\sum_{i=1}^{n}(-1)^{n-i}\operatorname{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}^{(n)}(\gamma(x_{i}))\;\rangle\]
\[= \sum_{i=1}^{n}(-1)^{n-i}\langle\;\xi_{1}\otimes\cdots\otimes\xi_{n},(\operatorname{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}\otimes 1\otimes\cdots\otimes 1+\cdots+1\otimes\cdots\otimes 1\otimes\operatorname{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}})(\gamma(x_{i}))\;\rangle\]
\[= \sum_{i=1}^{n}(-1)^{n-i+1}\langle\;(\operatorname{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}^{*}\otimes 1\otimes\cdots\otimes 1+\cdots+1\otimes\cdots\otimes 1\otimes\operatorname{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}^{*})(\xi_{1}\otimes\cdots\otimes\xi_{n}),\gamma(x_{i})\;\rangle\]
\[= \sum_{i,j=1}^{n}(-1)^{n-i+1}\langle\;[\xi_{1},\cdots,\xi_{j-1},\operatorname{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}^{*}(\xi_{j}),\xi_{j+1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}},x_{i}\;\rangle\]
\[= \sum_{i,j=1}^{n}(-1)^{n-i+1}(-1)^{n-j}\langle\;[\xi_{1},\cdots,\xi_{j-1},\xi_{j+1},\cdots,\xi_{n},\operatorname{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}^{*}(\xi_{j})]_{\mathfrak{g}^{*}},x_{i}\;\rangle\]
\[= \sum_{i,j=1}^{n}(-1)^{i+j-1}\langle\;[\xi_{1},\cdots,\xi_{i-1},\xi_{i+1},\cdots,\xi_{n},\operatorname{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n}}^{*}(\xi_{i})]_{\mathfrak{g}^{*}},x_{j}\;\rangle.\]

Then (6) can be written as

\[\langle\;[\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}},[x_{1},\cdots,x_{n}]\;\rangle=\sum_{i,j=1}^{n}(-1)^{i+j-1}\langle\;[\xi_{1},\cdots,\xi_{i-1},\xi_{i+1},\cdots,\xi_{n},\operatorname{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n}}^{*}(\xi_{i})]_{\mathfrak{g}^{*}},x_{j}\;\rangle. \tag{12}\]

For example, in [4], the \(1\)-cocycle condition of Lie bialgebras can be written as

\[\langle\;[\xi_{1},\xi_{2}]_{\mathfrak{g}^{*}},[x_{1},x_{2}]\rangle = -\langle\;[\xi_{2},\operatorname{ad}_{x_{2}}^{*}(\xi_{1})],x_{1}\;\rangle+\langle\;[\xi_{1},\operatorname{ad}_{x_{2}}^{*}(\xi_{2})],x_{1}\;\rangle+\langle\;[\xi_{2},\operatorname{ad}_{x_{1}}^{*}(\xi_{1})],x_{2}\;\rangle-\langle\;[\xi_{1},\operatorname{ad}_{x_{1}}^{*}(\xi_{2})],x_{2}\;\rangle.\]

Set the adjoint representation \(\mathrm{ad}:\wedge^{n-1}\mathfrak{g}^{*}\to End(\mathfrak{g}^{*})\) of \(\mathfrak{g}^{*}\) by

\[\operatorname{ad}_{\xi_{1},\cdots,\xi_{n-1}}(\xi_{n})=[\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}}.\]

Define a skew-symmetric linear map \(\mathrm{ad}^{*}:\wedge^{n-1}\mathfrak{g}^{*}\to End(\mathfrak{g})\) satisfying

\[\langle\ \mathrm{ad}_{\xi_{1},\cdots,\xi_{n-1}}(\xi_{n}),x\ \rangle=-\langle\ \xi_{n},\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}(x)\ \rangle,\quad\forall\ \xi_{1},\cdots,\xi_{n}\in\mathfrak{g}^{*},x\in\mathfrak{g}. \tag{13}\]

Then, by Proposition 3.6, \((\mathfrak{g},\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}})\) is the coadjoint representation of \(\mathfrak{g}^{*}\) on \(\mathfrak{g}\). By (12) and (13), we have

\[\langle\ [\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}},[x_{1},\cdots,x_{n}]\ \rangle=\sum_{i,j=1}^{n}(-1)^{i+j}\langle\ \mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n}}(\xi_{i}),\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{i},\cdots,\xi_{n}}(x_{j})\ \rangle.
\tag{14}\]

**Remark 3.8**.: _There is a symmetry between \(\mathfrak{g}\) with its \(n\)-Lie bracket \([\cdot,\cdots,\cdot]\) and \(\mathfrak{g}^{*}\) with the \(n\)-Lie bracket \([\cdot,\cdots,\cdot]_{\mathfrak{g}^{*}}\) defined by \(\gamma\). Therefore, \(\mathfrak{g}\) and \(\mathfrak{g}^{*}\) play symmetric roles._

**Proposition 3.9**.: _Let \((\mathfrak{g},\gamma)\) be an \(n\)-Lie bialgebra, let \({}^{t}\gamma\) be the \(n\)-Lie bracket of \(\mathfrak{g}^{*}\), and let \(\mu\) be the \(n\)-Lie bracket of \(\mathfrak{g}\). Then \((\mathfrak{g}^{*},{}^{t}\mu)\) is an \(n\)-Lie bialgebra._

Proof.: Let \(\mu:\otimes^{n}\mathfrak{g}\to\mathfrak{g}\) be the skew-symmetric \(n\)-Lie bracket of \(\mathfrak{g}\). For all \(\xi_{1},\cdots,\xi_{n}\in\mathfrak{g}^{*}\), \(x_{1},\cdots,x_{n}\in\mathfrak{g}\) we have

\[\langle\ [\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}},[x_{1},\cdots,x_{n}]\ \rangle=\langle\ ^{t}\mu([\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}}),x_{1}\otimes\cdots\otimes x_{n}\ \rangle. \tag{15}\]

By (14) and (15), we have

\[\langle\ ^{t}\mu([\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}}),x_{1}\otimes\cdots\otimes x_{n}\ \rangle\]
\[= \sum_{i,j=1}^{n}(-1)^{i+j+1}\langle\ \xi_{i},[x_{1},\cdots,\hat{x}_{j},\cdots,x_{n},\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{i},\cdots,\xi_{n}}x_{j}]\ \rangle\]
\[= \sum_{i,j=1}^{n}(-1)^{i+j+1}\langle\ ^{t}\mu(\xi_{i}),x_{1}\otimes\cdots\otimes\hat{x}_{j}\otimes\cdots\otimes x_{n}\otimes\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{i},\cdots,\xi_{n}}x_{j}\ \rangle\]
\[= \sum_{i,j=1}^{n}(-1)^{i+j+1}(-1)^{n-j}\langle\ ^{t}\mu(\xi_{i}),x_{1}\otimes\cdots\otimes x_{j-1}\otimes\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{i},\cdots,\xi_{n}}x_{j}\otimes x_{j+1}\otimes\cdots\otimes x_{n}\ \rangle\]
\[= \sum_{i=1}^{n}(-1)^{n-i}\langle\ (\mathrm{ad}_{\xi_{1},\cdots,\hat{\xi}_{i},\cdots,\xi_{n}}\otimes 1\otimes\cdots\otimes 1+1\otimes\mathrm{ad}_{\xi_{1},\cdots,\hat{\xi}_{i},\cdots,\xi_{n}}\otimes 1\otimes\cdots\otimes 1+\cdots+1\otimes\cdots\otimes 1\otimes\mathrm{ad}_{\xi_{1},\cdots,\hat{\xi}_{i},\cdots,\xi_{n}})(^{t}\mu(\xi_{i})),x_{1}\otimes\cdots\otimes x_{n}\ \rangle.\]

Therefore,

\[(^{t}\mu)([\xi_{1},\cdots,\xi_{n}]_{\mathfrak{g}^{*}})=\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{(n)}_{\xi_{1},\cdots,\hat{\xi}_{i},\cdots,\xi_{n}}(^{t}\mu(\xi_{i})),\]

which implies that \({}^{t}\mu\) is a \(1\)-cocycle. Then \((\mathfrak{g}^{*},^{t}\mu)\) is an \(n\)-Lie bialgebra. 

**Remark 3.10**.: _By Proposition 3.9, each \(n\)-Lie bialgebra has a dual \(n\)-Lie bialgebra whose dual is the \(n\)-Lie bialgebra itself. The \(n\)-Lie bialgebra \((\mathfrak{g}^{*},^{t}\mu)\) is called the_ **dual of the \(n\)-Lie bialgebra \((\mathfrak{g},\gamma)\)**_._

## 4. The double of \(n\)-Lie bialgebras

In this section, we define an operad matrix of \(n\)-Lie bialgebras and a local cocycle \(n\)-Lie bialgebra; this generalizes the local cocycle 3-Lie bialgebras introduced in [4]. We also establish a one-to-one correspondence between the double of \(n\)-Lie bialgebras and Manin triples of \(n\)-Lie algebras.

### Local cocycle \(n\)-Lie bialgebras

In this subsection, we first introduce the notion of an operad matrix, which can be utilized to represent the 1-cocycle condition of \(n\)-Lie bialgebras. Using the operad matrix, we define \(R_{i}\)-operad \(n\)-Lie bialgebras and \(C_{j}\)-operad \(n\)-Lie bialgebras based on rows and columns, respectively.
We also demonstrate that \(R_{i}\)-operad matrices generalize local cocycle 3-Lie bialgebras to local cocycle \(n\)-Lie bialgebras. Finally, we give the relationship between \(n\)-Lie bialgebras and local cocycle \(n\)-Lie bialgebras in Proposition 4.12.

**Proposition 4.1**.: _Let \((\mathfrak{g},\gamma)\) be an \(n\)-Lie bialgebra. For all \(x_{1},\cdots,x_{n}\in\mathfrak{g}\), define the_ **operad matrix** _of \((\mathfrak{g},\gamma)\) by_

\[A=\left(\begin{array}{cccc}(-1)^{n-1}\mathrm{ad}_{x_{2},x_{3},\cdots,x_{n}}\otimes^{n-1}1&(-1)^{n-2}\mathrm{ad}_{x_{1},x_{3},\cdots,x_{n}}\otimes^{n-1}1&\cdots&\mathrm{ad}_{x_{1},\cdots,x_{n-2},x_{n-1}}\otimes^{n-1}1\\ (-1)^{n-1}1\otimes\mathrm{ad}_{x_{2},x_{3},\cdots,x_{n}}\otimes^{n-2}1&(-1)^{n-2}1\otimes\mathrm{ad}_{x_{1},x_{3},\cdots,x_{n}}\otimes^{n-2}1&\cdots&1\otimes\mathrm{ad}_{x_{1},\cdots,x_{n-2},x_{n-1}}\otimes^{n-2}1\\ \vdots&\vdots&\ddots&\vdots\\ (-1)^{n-1}1\otimes^{n-2}1\otimes\mathrm{ad}_{x_{2},x_{3},\cdots,x_{n}}&(-1)^{n-2}1\otimes^{n-2}1\otimes\mathrm{ad}_{x_{1},x_{3},\cdots,x_{n}}&\cdots&1\otimes^{n-2}1\otimes\mathrm{ad}_{x_{1},\cdots,x_{n-2},x_{n-1}}\end{array}\right),\]

_where \(\mathrm{ad}:\wedge^{n-1}\mathfrak{g}\to End(\mathfrak{g})\) is the adjoint representation of \(\mathfrak{g}\) and \(1\) plays the role of the identity map. Then \(\gamma\) is a 1-cocycle if and only if_

\[\gamma([x_{1},\cdots,x_{n}])=(1,1,\cdots,1)_{1\times n}A(\gamma(x_{1}),\gamma(x_{2}),\cdots,\gamma(x_{n}))^{T}. \tag{16}\]

Proof.: By (6), we have

\[\gamma([x_{1},\cdots,x_{n}])\]
\[= [\gamma(x_{1}),x_{2},\cdots,x_{n}]+[x_{1},\gamma(x_{2}),\cdots,x_{n}]+\cdots+[x_{1},\cdots,x_{n-1},\gamma(x_{n})]\]
\[= (-1)^{n-1}[x_{2},x_{3},\cdots,x_{n},\gamma(x_{1})]+(-1)^{n-2}[x_{1},x_{3},\cdots,x_{n},\gamma(x_{2})]+\cdots+[x_{1},\cdots,x_{n-2},x_{n-1},\gamma(x_{n})]\]
\[= (-1)^{n-1}(\mathrm{ad}_{x_{2},x_{3},\cdots,x_{n}}\otimes^{n-1}1+1\otimes\mathrm{ad}_{x_{2},x_{3},\cdots,x_{n}}\otimes^{n-2}1+\cdots+1\otimes^{n-2}1\otimes\mathrm{ad}_{x_{2},x_{3},\cdots,x_{n}})\gamma(x_{1})\]
\[\quad+(-1)^{n-2}(\mathrm{ad}_{x_{1},x_{3},\cdots,x_{n}}\otimes^{n-1}1+1\otimes\mathrm{ad}_{x_{1},x_{3},\cdots,x_{n}}\otimes^{n-2}1+\cdots+1\otimes^{n-2}1\otimes\mathrm{ad}_{x_{1},x_{3},\cdots,x_{n}})\gamma(x_{2})\]
\[\quad+\cdots+(\mathrm{ad}_{x_{1},\cdots,x_{n-2},x_{n-1}}\otimes^{n-1}1+1\otimes\mathrm{ad}_{x_{1},\cdots,x_{n-2},x_{n-1}}\otimes^{n-2}1+\cdots+1\otimes^{n-2}1\otimes\mathrm{ad}_{x_{1},\cdots,x_{n-2},x_{n-1}})\gamma(x_{n})\]
\[= (1,1,\cdots,1)_{1\times n}A(\gamma(x_{1}),\gamma(x_{2}),\cdots,\gamma(x_{n}))^{T}.\]

This completes the proof. 
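For \(n=2\) the matrix identity (16) is just bookkeeping for the sum in (6). The sketch below (ours) evaluates the row-sum \((1,1)A(\gamma(x_{1}),\gamma(x_{2}))^{T}\) entry by entry, using the \(2\times 2\) operad matrix spelled out in Example 4.2 below, on the toy Lie bialgebra from Section 3.

```python
# Identity (16) for n = 2, checked on the toy Lie bialgebra
# ([e1, e2] = e1, gamma(e2) = e1 (x) e2 - e2 (x) e1).
import numpy as np

dim = 2
c = np.zeros((dim, dim, dim)); c[0, 1, 0], c[1, 0, 0] = 1.0, -1.0
g = np.zeros((dim, dim, dim)); g[1, 0, 1], g[1, 1, 0] = 1.0, -1.0

def ad(x):
    return np.einsum('i,ijk->kj', x, c)
def left(x, t):                 # (ad_x (x) 1) t
    return np.einsum('kj,jl->kl', ad(x), t)
def right(x, t):                # (1 (x) ad_x) t
    return np.einsum('lj,kj->kl', ad(x), t)
def gamma(v):
    return np.einsum('i,ijk->jk', v, g)
def bracket(x, y):
    return np.einsum('i,j,ijk->k', x, y, c)

e1, e2 = np.eye(dim)
# the four entries of A applied to (gamma(x1), gamma(x2)), then summed:
row_sum = (-left(e2, gamma(e1)) + left(e1, gamma(e2))      # first row of A
           - right(e2, gamma(e1)) + right(e1, gamma(e2)))  # second row of A
assert np.allclose(gamma(bracket(e1, e2)), row_sum)        # identity (16)
```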
**Example 4.2**.: _Let \((\mathfrak{g},\gamma)\) be a Lie bialgebra, then \(\gamma\) is a 1-cocycle if and only if_

\[\gamma([x_{1},x_{2}])=(1,1)A(\gamma(x_{1}),\gamma(x_{2}))^{T},\]

_where_

\[A=\left(\begin{array}{cc}-\mathrm{ad}_{x_{2}}\otimes 1&\mathrm{ad}_{x_{1}}\otimes 1\\ -1\otimes\mathrm{ad}_{x_{2}}&1\otimes\mathrm{ad}_{x_{1}}\end{array}\right).\]

**Example 4.3**.: _Let \((\mathfrak{g},\gamma)\) be a 3-Lie bialgebra, then \(\gamma\) is a 1-cocycle if and only if_

\[\gamma([x_{1},x_{2},x_{3}])=(1,1,1)A(\gamma(x_{1}),\gamma(x_{2}),\gamma(x_{3}))^{T},\]

_where_

\[A=\left(\begin{array}{ccc}\mathrm{ad}_{x_{2},x_{3}}\otimes 1\otimes 1&-\mathrm{ad}_{x_{1},x_{3}}\otimes 1\otimes 1&\mathrm{ad}_{x_{1},x_{2}}\otimes 1\otimes 1\\ 1\otimes\mathrm{ad}_{x_{2},x_{3}}\otimes 1&-1\otimes\mathrm{ad}_{x_{1},x_{3}}\otimes 1&1\otimes\mathrm{ad}_{x_{1},x_{2}}\otimes 1\\ 1\otimes 1\otimes\mathrm{ad}_{x_{2},x_{3}}&-1\otimes 1\otimes\mathrm{ad}_{x_{1},x_{3}}&1\otimes 1\otimes\mathrm{ad}_{x_{1},x_{2}}\end{array}\right).\]

**Definition 4.4**.: _Let the operad matrix \(A\) keep only its \(i\)-th row, with all other entries zero, i.e., consider the operad matrix_

\[A_{R_{i}}=\left(\begin{array}{cccc}0&0&\cdots&0\\ \cdots&\cdots&\cdots&\cdots\\ (-1)^{n-1}\otimes^{i-1}1\otimes\mathrm{ad}_{\hat{x}_{1}}\otimes^{n-i}1&(-1)^{n-2}\otimes^{i-1}1\otimes\mathrm{ad}_{\hat{x}_{2}}\otimes^{n-i}1&\cdots&\otimes^{i-1}1\otimes\mathrm{ad}_{\hat{x}_{n}}\otimes^{n-i}1\\ \cdots&\cdots&\cdots&\cdots\\ 0&0&\cdots&0\end{array}\right),\]

_where \(\mathrm{ad}_{\hat{x}_{i}}=\mathrm{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}\), \(i=1,\ 2,\cdots,\ n\). If a linear map \(\gamma_{R}^{i}:\mathfrak{g}\rightarrow\otimes^{n}\mathfrak{g}\) satisfies_

\[\gamma_{R}^{i}([x_{1},\cdots,x_{n}])=(1,1,\cdots,1)_{1\times n}A_{R_{i}}(\gamma_{R}^{i}(x_{1}),\gamma_{R}^{i}(x_{2}),\cdots,\gamma_{R}^{i}(x_{n}))^{T}, \tag{17}\]

_and \({}^{t}\gamma_{R}^{i}:\otimes^{n}\mathfrak{g}^{*}\rightarrow\mathfrak{g}^{*}\) defines an \(n\)-Lie algebra structure on \(\mathfrak{g}^{*}\), then we call \((\mathfrak{g},\gamma_{R}^{i})\) an \(R_{i}\)_**-operad \(n\)-Lie bialgebra**_._

**Example 4.5**.: _[_4_]_ _Let \(\mathfrak{g}\) be a Lie algebra, and let \(\gamma_{R}^{1},\gamma_{R}^{2}:\mathfrak{g}\rightarrow\mathfrak{g}\otimes\mathfrak{g}\) be two linear maps such that_

\[\gamma_{R}^{1}([x_{1},x_{2}]) = (1,1)A_{R_{1}}(\gamma_{R}^{1}(x_{1}),\gamma_{R}^{1}(x_{2}))^{T},\]
\[\gamma_{R}^{2}([x_{1},x_{2}]) = (1,1)A_{R_{2}}(\gamma_{R}^{2}(x_{1}),\gamma_{R}^{2}(x_{2}))^{T},\]

_where_

\[A_{R_{1}}=\left(\begin{array}{cc}-\mathrm{ad}_{x_{2}}\otimes 1&\mathrm{ad}_{x_{1}}\otimes 1\\ 0&0\end{array}\right),\]

_and_

\[A_{R_{2}}=\left(\begin{array}{cc}0&0\\ -1\otimes\mathrm{ad}_{x_{2}}&1\otimes\mathrm{ad}_{x_{1}}\end{array}\right).\]

_If \(\gamma=\gamma_{R}^{1}+\gamma_{R}^{2}\) is such that \({}^{t}\gamma:\mathfrak{g}^{*}\otimes\mathfrak{g}^{*}\rightarrow\mathfrak{g}^{*}\) defines a Lie algebra structure on \(\mathfrak{g}^{*}\), then \((\mathfrak{g},\gamma)\) is called a local cocycle Lie bialgebra._

**Remark 4.6**.: _[_4_]_ _Let \((\mathfrak{g},\gamma)\) be a local cocycle Lie bialgebra. If the following compatibility condition holds:_

\[(1\otimes\mathrm{ad}_{x_{1}})\gamma_{1}(x_{2})+(\mathrm{ad}_{x_{1}}\otimes 1)\gamma_{2}(x_{2})-(1\otimes\mathrm{ad}_{x_{2}})\gamma_{1}(x_{1})-(\mathrm{ad}_{x_{2}}\otimes 1)\gamma_{2}(x_{1})=0, \tag{18}\]

_then \((\mathfrak{g},\gamma)\) is a Lie bialgebra. Conversely, let \((\mathfrak{g},\gamma)\) be a Lie bialgebra.
If \(\gamma=\gamma_{1}+\gamma_{2}\) is such that (18) holds for any \(x_{1},x_{2}\in\mathfrak{g}\), then \((\mathfrak{g},\gamma)\) is a local cocycle Lie bialgebra._

**Definition 4.7**.: _Let the operad matrix \(A\) keep only its \(j\)-th column, with all other entries zero, i.e., consider the operad matrix_

\[A_{C_{j}}=\left(\begin{array}{ccccc}0&\cdots&(-1)^{n-j}\mathrm{ad}_{\hat{x}_{j}}\otimes^{n-1}1&\cdots&0\\ 0&\cdots&(-1)^{n-j}1\otimes\mathrm{ad}_{\hat{x}_{j}}\otimes^{n-2}1&\cdots&0\\ \vdots&\ddots&\vdots&\ddots&\vdots\\ 0&\cdots&(-1)^{n-j}1\otimes^{n-2}1\otimes\mathrm{ad}_{\hat{x}_{j}}&\cdots&0\end{array}\right),\]

_where \(\mathrm{ad}_{\hat{x}_{j}}=\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n}}\). If a linear map \(\gamma_{C}^{j}:\mathfrak{g}\to\otimes^{n}\mathfrak{g}\) satisfies_

\[\gamma_{C}^{j}([x_{1},\cdots,x_{n}])=(1,1,\cdots,1)_{1\times n}A_{C_{j}}(\gamma_{C}^{j}(x_{1}),\gamma_{C}^{j}(x_{2}),\cdots,\gamma_{C}^{j}(x_{n}))^{T}, \tag{19}\]

_and \({}^{t}\gamma_{C}^{j}:\otimes^{n}\mathfrak{g}^{*}\to\mathfrak{g}^{*}\) defines an \(n\)-Lie algebra structure on \(\mathfrak{g}^{*}\), then we call \((\mathfrak{g},\gamma_{C}^{j})\) a \(C_{j}\)**-operad \(n\)-Lie bialgebra**_._

**Example 4.8**.: _Let \(\mathfrak{g}\) be a Lie algebra, and let \(\gamma_{C}^{1},\gamma_{C}^{2}:\mathfrak{g}\to\mathfrak{g}\otimes\mathfrak{g}\) be two linear maps such that_

\[\gamma_{C}^{1}([x_{1},x_{2}]) = (1,1)A_{C_{1}}(\gamma_{C}^{1}(x_{1}),\gamma_{C}^{1}(x_{2}))^{T},\]
\[\gamma_{C}^{2}([x_{1},x_{2}]) = (1,1)A_{C_{2}}(\gamma_{C}^{2}(x_{1}),\gamma_{C}^{2}(x_{2}))^{T},\]

_where_

\[A_{C_{1}}=\left(\begin{array}{cc}-\mathrm{ad}_{x_{2}}\otimes 1&0\\ -1\otimes\mathrm{ad}_{x_{2}}&0\end{array}\right),\]

_and_

\[A_{C_{2}}=\left(\begin{array}{cc}0&\mathrm{ad}_{x_{1}}\otimes 1\\ 0&1\otimes\mathrm{ad}_{x_{1}}\end{array}\right).\]

_If \({}^{t}\gamma_{C}^{1}\) (or \({}^{t}\gamma_{C}^{2}\)) \(:\mathfrak{g}^{*}\otimes\mathfrak{g}^{*}\to\mathfrak{g}^{*}\) defines a Lie algebra structure on \(\mathfrak{g}^{*}\), then \((\mathfrak{g},\gamma_{C}^{1})\) (or \((\mathfrak{g},\gamma_{C}^{2})\)) is called a \(C_{1}\)- (or \(C_{2}\)-)operad Lie bialgebra._

**Remark 4.9**.: _Let \((\mathfrak{g},\gamma_{C}^{1})\) be a \(C_{1}\)-operad Lie bialgebra, and let \((\mathfrak{g},\gamma_{C}^{2})\) be a \(C_{2}\)-operad Lie bialgebra. If the following compatibility condition holds:_

\[(\mathrm{ad}_{x_{1}}\otimes 1)\gamma_{C}^{1}(x_{2})+(1\otimes\mathrm{ad}_{x_{1}})\gamma_{C}^{1}(x_{2})-(\mathrm{ad}_{x_{2}}\otimes 1)\gamma_{C}^{2}(x_{1})-(1\otimes\mathrm{ad}_{x_{2}})\gamma_{C}^{2}(x_{1})=0,\]

_then \((\mathfrak{g},\gamma=\gamma_{C}^{1}+\gamma_{C}^{2})\) is a Lie bialgebra._

**Definition 4.10**.: _Let \(\mathfrak{g}\) be an \(n\)-Lie algebra and let \(\gamma_{1},\gamma_{2},\cdots,\gamma_{n}:\mathfrak{g}\to\otimes^{n}\mathfrak{g}\) be linear maps such that_

\[\gamma_{i}=\gamma_{R}^{i},\quad i=1,2,\cdots,n.
\tag{20}\]

_If \(\gamma=\gamma_{1}+\gamma_{2}+\cdots+\gamma_{n}\) is a linear map such that \({}^{t}\gamma:\otimes^{n}\mathfrak{g}^{*}\to\mathfrak{g}^{*}\) defines an \(n\)-Lie algebra structure on \(\mathfrak{g}^{*}\), then the pair \((\mathfrak{g},\gamma)\) is called a_ **local cocycle** _\(n\)-Lie bialgebra._

For any \(x_{1},\cdots,x_{n}\in\mathfrak{g}\), we can rewrite (20) as

\[\gamma_{1}([x_{1},\cdots,x_{n}]) = \sum_{i=1}^{n}(-1)^{n-i}(\mathrm{ad}_{x_{1},\cdots,x_{i-1},\hat{x}_{i},x_{i+1},\cdots,x_{n}}\otimes 1\otimes\cdots\otimes 1)\gamma_{1}(x_{i}),\]
\[\gamma_{2}([x_{1},\cdots,x_{n}]) = \sum_{i=1}^{n}(-1)^{n-i}(1\otimes\mathrm{ad}_{x_{1},\cdots,x_{i-1},\hat{x}_{i},x_{i+1},\cdots,x_{n}}\otimes\cdots\otimes 1)\gamma_{2}(x_{i}),\]
\[\cdots\]
\[\gamma_{n}([x_{1},\cdots,x_{n}]) = \sum_{i=1}^{n}(-1)^{n-i}(1\otimes\cdots\otimes 1\otimes\mathrm{ad}_{x_{1},\cdots,x_{i-1},\hat{x}_{i},x_{i+1},\cdots,x_{n}})\gamma_{n}(x_{i}).\]

**Example 4.11**.: _Let \(\mathfrak{g}\) be a \(3\)-Lie algebra, and let \(\gamma=\gamma_{1}+\gamma_{2}+\gamma_{3}:\mathfrak{g}\to\mathfrak{g}\otimes\mathfrak{g}\otimes\mathfrak{g}\) be a linear map such that \({}^{t}\gamma:\mathfrak{g}^{*}\otimes\mathfrak{g}^{*}\otimes\mathfrak{g}^{*}\to\mathfrak{g}^{*}\) defines a \(3\)-Lie algebra structure on \(\mathfrak{g}^{*}\), and for any \(x_{1},x_{2},x_{3}\in\mathfrak{g}\), the following conditions are satisfied:_

\[\gamma_{1}([x_{1},x_{2},x_{3}]) = (\mathrm{ad}_{x_{2},x_{3}}\otimes 1\otimes 1)\gamma_{1}(x_{1})-(\mathrm{ad}_{x_{1},x_{3}}\otimes 1\otimes 1)\gamma_{1}(x_{2})+(\mathrm{ad}_{x_{1},x_{2}}\otimes 1\otimes 1)\gamma_{1}(x_{3});\]
\[\gamma_{2}([x_{1},x_{2},x_{3}]) = (1\otimes\mathrm{ad}_{x_{2},x_{3}}\otimes 1)\gamma_{2}(x_{1})-(1\otimes\mathrm{ad}_{x_{1},x_{3}}\otimes 1)\gamma_{2}(x_{2})+(1\otimes\mathrm{ad}_{x_{1},x_{2}}\otimes 1)\gamma_{2}(x_{3});\]
\[\gamma_{3}([x_{1},x_{2},x_{3}]) = (1\otimes 1\otimes\mathrm{ad}_{x_{2},x_{3}})\gamma_{3}(x_{1})-(1\otimes 1\otimes\mathrm{ad}_{x_{1},x_{3}})\gamma_{3}(x_{2})+(1\otimes 1\otimes\mathrm{ad}_{x_{1},x_{2}})\gamma_{3}(x_{3}).\]

_Then the pair \((\mathfrak{g},\gamma)\) is a local cocycle \(3\)-Lie bialgebra._

**Proposition 4.12**.: _Let \((\mathfrak{g},\gamma)\) be a local cocycle \(n\)-Lie bialgebra. If the following compatibility condition holds:_

\[\sum_{k=1}^{n}\sum_{i=1,i\neq k}^{n}\sum_{j=1}^{n}(-1)^{n-j}(\otimes^{k-1}1\otimes\mathrm{ad}_{x_{1},\cdots,x_{j-1},\hat{x}_{j},x_{j+1},\cdots,x_{n}}\otimes^{n-k}1)\gamma_{i}(x_{j})=0, \tag{21}\]

_then the pair \((\mathfrak{g},\gamma)\) is an \(n\)-Lie bialgebra._

Proof.: By (6), (20) and Definition 4.10, we can complete the proof directly. 
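The coboundary construction provides a simple family of examples. The sketch below (ours, for \(n=2\)) takes \(\gamma=\delta r\) on the toy Lie algebra from Section 3, splits it as \(\gamma_{1}(x)=(\mathrm{ad}_{x}\otimes 1)r\) and \(\gamma_{2}(x)=(1\otimes\mathrm{ad}_{x})r\), and checks numerically both the \(R_{i}\)-operad conditions (17) and the compatibility condition (21), which for \(n=2\) reduces to (18).

```python
# Coboundary split of gamma = delta r for n = 2: gamma_1 = (ad (x) 1) r,
# gamma_2 = (1 (x) ad) r, on the 2-dimensional algebra with [e1, e2] = e1.
import numpy as np

dim = 2
c = np.zeros((dim, dim, dim)); c[0, 1, 0], c[1, 0, 0] = 1.0, -1.0

def ad(x):
    return np.einsum('i,ijk->kj', x, c)
def left(x, t):                    # (ad_x (x) 1) t
    return np.einsum('kj,jl->kl', ad(x), t)
def right(x, t):                   # (1 (x) ad_x) t
    return np.einsum('lj,kj->kl', ad(x), t)
def bracket(x, y):
    return np.einsum('i,j,ijk->k', x, y, c)

r = np.array([[0., -1.], [1., 0.]])      # r = e2 (x) e1 - e1 (x) e2
g1 = lambda x: left(x, r)
g2 = lambda x: right(x, r)

e1, e2 = np.eye(dim)
# R_1- and R_2-operad conditions (17):
assert np.allclose(g1(bracket(e1, e2)), -left(e2, g1(e1)) + left(e1, g1(e2)))
assert np.allclose(g2(bracket(e1, e2)), -right(e2, g2(e1)) + right(e1, g2(e2)))
# compatibility condition (21) for n = 2, cf. (18):
assert np.allclose(right(e1, g1(e2)) + left(e1, g2(e2))
                   - right(e2, g1(e1)) - left(e2, g2(e1)), np.zeros((2, 2)))
```

For coboundary \(\gamma\), the mixed terms in (18) cancel in pairs, which is why the compatibility condition here holds identically.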
**Example 4.13**.: _Let \((\mathfrak{g},\gamma)\) be a local cocycle Lie bialgebra such that the compatibility condition holds: for any \(x_{1},x_{2}\in\mathfrak{g}\),_

\[\sum_{j=1}^{2}(-1)^{2-j}(\mathrm{ad}_{\hat{x}_{j}}\otimes 1)\gamma_{2}(x_{j})+\sum_{j=1}^{2}(-1)^{2-j}(1\otimes\mathrm{ad}_{\hat{x}_{j}})\gamma_{1}(x_{j})\]
\[= -(\mathrm{ad}_{x_{2}}\otimes 1)\gamma_{2}(x_{1})+(\mathrm{ad}_{x_{1}}\otimes 1)\gamma_{2}(x_{2})-(1\otimes\mathrm{ad}_{x_{2}})\gamma_{1}(x_{1})+(1\otimes\mathrm{ad}_{x_{1}})\gamma_{1}(x_{2})\]
\[= 0.\]

_Then the pair \((\mathfrak{g},\gamma)\) is a Lie bialgebra._

**Example 4.14**.: _Let \((\mathfrak{g},\gamma)\) be a local cocycle \(3\)-Lie bialgebra such that the compatibility condition holds: for any \(x_{1},x_{2},x_{3}\in\mathfrak{g}\),_

\[\sum_{j=1}^{3}(-1)^{3-j}(\mathrm{ad}_{\hat{x}_{j}}\otimes 1\otimes 1)\gamma_{2}(x_{j})+\sum_{j=1}^{3}(-1)^{3-j}(\mathrm{ad}_{\hat{x}_{j}}\otimes 1\otimes 1)\gamma_{3}(x_{j})\]
\[+ \sum_{j=1}^{3}(-1)^{3-j}(1\otimes\mathrm{ad}_{\hat{x}_{j}}\otimes 1)\gamma_{1}(x_{j})+\sum_{j=1}^{3}(-1)^{3-j}(1\otimes\mathrm{ad}_{\hat{x}_{j}}\otimes 1)\gamma_{3}(x_{j})\]
\[+ \sum_{j=1}^{3}(-1)^{3-j}(1\otimes 1\otimes\mathrm{ad}_{\hat{x}_{j}})\gamma_{1}(x_{j})+\sum_{j=1}^{3}(-1)^{3-j}(1\otimes 1\otimes\mathrm{ad}_{\hat{x}_{j}})\gamma_{2}(x_{j})\]
\[= (\mathrm{ad}_{x_{2},x_{3}}\otimes 1\otimes 1)\gamma_{2}(x_{1})-(\mathrm{ad}_{x_{1},x_{3}}\otimes 1\otimes 1)\gamma_{2}(x_{2})+(\mathrm{ad}_{x_{1},x_{2}}\otimes 1\otimes 1)\gamma_{2}(x_{3})\]
\[+ (\mathrm{ad}_{x_{2},x_{3}}\otimes 1\otimes 1)\gamma_{3}(x_{1})-(\mathrm{ad}_{x_{1},x_{3}}\otimes 1\otimes 1)\gamma_{3}(x_{2})+(\mathrm{ad}_{x_{1},x_{2}}\otimes 1\otimes 1)\gamma_{3}(x_{3})\]
\[+ (1\otimes\mathrm{ad}_{x_{2},x_{3}}\otimes 1)\gamma_{1}(x_{1})-(1\otimes\mathrm{ad}_{x_{1},x_{3}}\otimes 1)\gamma_{1}(x_{2})+(1\otimes\mathrm{ad}_{x_{1},x_{2}}\otimes 1)\gamma_{1}(x_{3})\]
\[+ (1\otimes\mathrm{ad}_{x_{2},x_{3}}\otimes 1)\gamma_{3}(x_{1})-(1\otimes\mathrm{ad}_{x_{1},x_{3}}\otimes 1)\gamma_{3}(x_{2})+(1\otimes\mathrm{ad}_{x_{1},x_{2}}\otimes 1)\gamma_{3}(x_{3})\]
\[+ (1\otimes 1\otimes\mathrm{ad}_{x_{2},x_{3}})\gamma_{1}(x_{1})-(1\otimes 1\otimes\mathrm{ad}_{x_{1},x_{3}})\gamma_{1}(x_{2})+(1\otimes 1\otimes\mathrm{ad}_{x_{1},x_{2}})\gamma_{1}(x_{3})\]
\[+ (1\otimes 1\otimes\mathrm{ad}_{x_{2},x_{3}})\gamma_{2}(x_{1})-(1\otimes 1\otimes\mathrm{ad}_{x_{1},x_{3}})\gamma_{2}(x_{2})+(1\otimes 1\otimes\mathrm{ad}_{x_{1},x_{2}})\gamma_{2}(x_{3})\]
\[= 0.\]

_Then the pair \((\mathfrak{g},\gamma)\) is a \(3\)-Lie bialgebra._

**Remark 4.15**.: _Let \((\mathfrak{g},\gamma)\) be a local cocycle \(n\)-Lie bialgebra, let \(\mu\) be the \(n\)-Lie bracket of \(\mathfrak{g}\), and let \({}^{t}\gamma\) be the \(n\)-Lie bracket of \(\mathfrak{g}^{*}\). We cannot conclude that \((\mathfrak{g}^{*},{}^{t}\mu)\) is also a local cocycle \(n\)-Lie bialgebra. For example, the dual of a local cocycle Lie bialgebra need not be a local cocycle Lie bialgebra._

### Manin triples of \(n\)-Lie algebras and the double of \(n\)-Lie bialgebras

We now show that there is a one-to-one correspondence between the double of \(n\)-Lie bialgebras and Manin triples of \(n\)-Lie algebras.
**Definition 4.16**.: _A_ **metric \(n\)-Lie algebra** _is a triple \((\mathfrak{g},[\cdot,\cdots,\cdot],(\cdot,\cdot))\), where \(\mathfrak{g}\) is an \(n\)-Lie algebra with the \(n\)-Lie bracket \([\cdot,\cdots,\cdot]\) and \((\cdot,\cdot):\mathfrak{g}\times\mathfrak{g}\rightarrow F\) is a non-degenerate symmetric bilinear form satisfying the invariance condition:_

\[([x_{1},\cdots,x_{n-1},x_{n}],t)+(x_{n},[x_{1},\cdots,x_{n-1},t])=0,\quad\forall x_{1},\cdots,x_{n},t\in\mathfrak{g}. \tag{22}\]

**Definition 4.17**.: _Let \((\mathfrak{g},[\cdot,\cdots,\cdot],(\cdot,\cdot))\) be a metric \(n\)-Lie algebra. A_ **Manin triple of \(n\)-Lie algebras** _is a triple \(((\mathfrak{g},[\cdot,\cdots,\cdot],(\cdot,\cdot)),\mathfrak{g}_{1},\mathfrak{g}_{2})\) such that_

1. \(\mathfrak{g}_{1}\)_,_ \(\mathfrak{g}_{2}\) _are isotropic_ \(n\)_-Lie subalgebras of_ \(\mathfrak{g}\)_, such that_ \(\mathfrak{g}=\mathfrak{g}_{1}\oplus\mathfrak{g}_{2}\) _as vector spaces._
2. _For all_ \(x_{1},\cdots,x_{n-1}\in\mathfrak{g}_{1}\)_,_ \(\xi_{1},\cdots,\xi_{n-1}\in\mathfrak{g}_{2}\) _and_ \(x\in\mathfrak{g}\)_, the following conditions hold:_

(23) \[(\xi_{2},[x_{1},\cdots,x_{n-1},\xi_{1}]) =0,\]
(24) \[(x_{2},[\xi_{1},\cdots,\xi_{n-1},x_{1}]) =0,\]

_and_

(25) \[\begin{cases}(x,[x_{1},\cdots,x_{n-2},\xi_{1},\xi_{2}])=0,\\ \hskip 14.226378pt\vdots\\ (x,[x_{1},x_{2},\xi_{1},\cdots,\xi_{n-2}])=0.\end{cases}\]

Let \((\mathfrak{g},\gamma_{\mathfrak{g}})\) be an \(R_{1}\)-operad \(n\)-Lie bialgebra, where \(\gamma_{\mathfrak{g}}:\mathfrak{g}\to\otimes^{n}\mathfrak{g}\) is a skew-symmetric linear map such that \([\cdot,\cdots,\cdot]_{\mathfrak{g}^{*}}:\otimes^{n}\mathfrak{g}^{*}\to\mathfrak{g}^{*}\) defines an \(n\)-Lie bracket on \(\mathfrak{g}^{*}\). Set \(\mathfrak{b}=\mathfrak{g}\oplus\mathfrak{g}^{*}\) and denote its elements by \(x+\xi\), where \(x\in\mathfrak{g}\), \(\xi\in\mathfrak{g}^{*}\). Define a linear map \([\cdot,\cdots,\cdot]_{\mathfrak{b}}:\wedge^{n}\mathfrak{b}\to\mathfrak{b}\) by

\[[x_{1}+\xi_{1},x_{2}+\xi_{2},\cdots,x_{n}+\xi_{n}]_{\mathfrak{b}}=[x_{1},x_{2},\cdots,x_{n}]+\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}^{*}\xi_{i}+[\xi_{1},\xi_{2},\cdots,\xi_{n}]_{\mathfrak{g}^{*}}+\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}_{\xi_{1},\cdots,\hat{\xi}_{i},\cdots,\xi_{n}}^{*}x_{i},\quad\forall\ x_{1},\cdots,x_{n}\in\mathfrak{g},\xi_{1},\cdots,\xi_{n}\in\mathfrak{g}^{*}, \tag{26}\]

where \(\mathrm{ad}^{*}\) denotes both the coadjoint action of \((\mathfrak{g},[\cdot,\cdots,\cdot])\) on \((\mathfrak{g}^{*},[\cdot,\cdots,\cdot]_{\mathfrak{g}^{*}})\) and that of \((\mathfrak{g}^{*},[\cdot,\cdots,\cdot]_{\mathfrak{g}^{*}})\) on \((\mathfrak{g},[\cdot,\cdots,\cdot])\), i.e., for all \(x,x_{1},\cdots,x_{n}\in\mathfrak{g},\xi,\xi_{1},\cdots,\xi_{n}\in\mathfrak{g}^{*}\),

\[\langle\ \mathrm{ad}_{x_{1},\cdots,x_{n-1}}^{*}(\xi),x_{n}\ \rangle = -\langle\ \xi,[x_{1},\cdots,x_{n-1},x_{n}]\ \rangle, \tag{27}\]
\[\langle\ \mathrm{ad}_{\xi_{1},\cdots,\xi_{n-1}}^{*}(x),\xi_{n}\ \rangle = -\langle\ x,[\xi_{1},\cdots,\xi_{n-1},\xi_{n}]_{\mathfrak{g}^{*}}\ \rangle. \tag{28}\]
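Before turning to the general statements, a small numerical sketch (ours) illustrates the construction for \(n=2\): on the double \(\mathfrak{b}=\mathfrak{g}\oplus\mathfrak{g}^{*}\) of the toy Lie bialgebra from Section 3, the bracket (26) satisfies the Jacobi identity, the canonical pairing \(B(x+\xi,y+\eta)=\langle\xi,y\rangle+\langle\eta,x\rangle\) satisfies the invariance condition (22), and \(\mathfrak{g}\), \(\mathfrak{g}^{*}\) are isotropic, as in Definition 4.17.

```python
# The double b = g (+) g* for the toy Lie bialgebra (n = 2), bracket (26).
import numpy as np

dim = 2
c = np.zeros((dim, dim, dim)); c[0, 1, 0], c[1, 0, 0] = 1.0, -1.0    # [e1,e2] = e1
cs = np.zeros((dim, dim, dim)); cs[0, 1, 1], cs[1, 0, 1] = 1.0, -1.0  # [e1*,e2*] = e2*

def ad(v, cc):                         # matrix of ad_v for structure constants cc
    return np.einsum('i,ijk->kj', v, cc)

def br(u, v):
    """The bracket (26) on pairs u = (x, xi), v = (y, eta); ad* = -ad^T."""
    (x, xi), (y, eta) = u, v
    vec = ad(x, c) @ y - ad(xi, cs).T @ y + ad(eta, cs).T @ x
    cov = ad(xi, cs) @ eta - ad(x, c).T @ eta + ad(y, c).T @ xi
    return (vec, cov)

def B(u, v):                           # B(x + xi, y + eta) = <xi, y> + <eta, x>
    return u[1] @ v[0] + v[1] @ u[0]

rng = np.random.default_rng(2)
u, v, w = [(rng.standard_normal(dim), rng.standard_normal(dim)) for _ in range(3)]

# Jacobi identity on b:
jac = tuple(a - b_ - c_ for a, b_, c_ in zip(br(u, br(v, w)),
                                             br(br(u, v), w),
                                             br(v, br(u, w))))
assert all(np.allclose(t, 0) for t in jac)
# invariance (22):  B([u,v]_b, w) + B(v, [u,w]_b) = 0
assert np.isclose(B(br(u, v), w) + B(v, br(u, w)), 0)
# g is isotropic for B (and likewise g*):
assert B((u[0], np.zeros(dim)), (v[0], np.zeros(dim))) == 0
```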
**Definition 4.18**.: _Let \((\mathfrak{g},[\cdot,\cdots,\cdot])\) be an \(n\)-Lie algebra and \(\gamma:\mathfrak{g}\to\otimes^{n}\mathfrak{g}\) be a linear map. If \(\gamma\) satisfies_

\[\gamma([x_{1},x_{2},\cdots,x_{n}])=[\gamma(x_{1}),x_{2},\cdots,x_{n}],\quad\forall\ x_{1},\cdots,x_{n}\in\mathfrak{g}, \tag{29}\]

_then we call \(\gamma\) a_ **centroid** _map._

**Definition 4.19**.: _Let \((\mathfrak{g},[\cdot,\cdots,\cdot])\) be an \(n\)-Lie algebra and \(\gamma:\mathfrak{g}\to\otimes^{n}\mathfrak{g}\) be a linear map. For all \(x_{1},\cdots,x_{n}\in\mathfrak{g}\), if \(\gamma\) satisfies_

\[(\otimes^{j-1}1\otimes\mathrm{ad}_{x_{2},x_{3},\cdots,x_{n}}\otimes^{n-j}1+\otimes^{n-1}1\otimes\mathrm{ad}_{x_{2},x_{3},\cdots,x_{n}})\gamma(x_{1})=0, \tag{30}\]
\[(\otimes^{i-1}1\otimes\mathrm{ad}_{x_{2},x_{3},\cdots,x_{n-1},x_{n}}\otimes^{n-i}1)\gamma(x_{1})+(\otimes^{k-1}1\otimes\mathrm{ad}_{x_{2},x_{3},\cdots,x_{n-1},x_{1}}\otimes^{n-k}1)\gamma(x_{n})=0, \tag{31}\]

_where \(1\leq j\leq n-1\) and \(1\leq i,k\leq n,\ i\neq k\), then we call \(\gamma\) a_ **local operad map**_._

**Proposition 4.20**.: _Let \((\mathfrak{g},[\cdot,\cdots,\cdot],\gamma_{\mathfrak{g}})\) be a local centroid \(R_{1}\)-operad \(n\)-Lie bialgebra and define a linear map \([\cdot,\cdots,\cdot]_{\mathfrak{b}}:\wedge^{n}\mathfrak{b}\to\mathfrak{b}\) by (26). Then \((\mathfrak{b},[\cdot,\cdots,\cdot]_{\mathfrak{b}})\) is an \(n\)-Lie algebra._

Proof.: By the definition of an \(R_{1}\)-operad \(n\)-Lie bialgebra, we have

\[\gamma_{\mathfrak{g}}([x_{1},\cdots,x_{n}])=\sum_{i=1}^{n}(-1)^{n-i}(\mathrm{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}\otimes 1\otimes\cdots\otimes 1)\gamma_{\mathfrak{g}}(x_{i}). \tag{32}\]

On the one hand, for the tensor \(\xi_{1}\otimes\cdots\otimes\xi_{n}\in\otimes^{n}\mathfrak{g}^{*}\), we have

\[\langle\gamma_{\mathfrak{g}}([x_{1},\cdots,x_{n}]),\xi_{1}\otimes\cdots\otimes\xi_{n}\rangle = -\langle\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}([x_{1},\cdots,x_{n}]),\xi_{n}\rangle.\]

On the other hand, by (8) and (27) we have

\[\sum_{i=1}^{n}(-1)^{n-i}\langle(\mathrm{ad}_{x_{1},\cdots,\hat{x}_{i},\cdots,x_{n}}\otimes 1\otimes\cdots\otimes 1)\gamma_{\mathfrak{g}}(x_{i}),\xi_{1}\otimes\cdots\otimes\xi_{n}\rangle.\]

Comparing both sides and using (27) and (28), for all \(x_{1},\cdots,x_{n-1},y_{1},\cdots,y_{n}\in\mathfrak{g}\) and \(\eta_{1},\cdots,\eta_{n}\in\mathfrak{g}^{*}\) we obtain

\[[x_{1},\cdots,x_{n-1},\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}y_{i}]\]
\[= \sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}([x_{1},\cdots,x_{n-1},y_{i}])\]
\[+ \sum_{i=1}^{n}\sum_{k=1}^{i-1}(-1)^{n-k}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{k},\cdots,\eta_{i-1},\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\eta_{i},\eta_{i+1},\cdots,\eta_{n}}y_{k}\]
\[+ \sum_{i=1}^{n}\sum_{k=i+1}^{n}(-1)^{n-k}\mathrm{ad}^{*}_{\eta_{1},\cdots,\eta_{i-1},\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\eta_{i},\eta_{i+1},\cdots,\hat{\eta}_{k},\cdots,\eta_{n}}y_{k}.
\tag{35}\]

By (30), for each integer \(1\leq i\leq n\) and \(y_{i}\in\mathfrak{g}\), we can get the following \(n(n-1)\) equations:

\[(\mathrm{ad}_{y_{1},\cdots,y_{i-1},\hat{y}_{i},y_{i+1},\cdots,y_{n}}\otimes^{n-1}1+\otimes^{n-1}1\otimes\mathrm{ad}_{y_{1},\cdots,y_{i-1},\hat{y}_{i},y_{i+1},\cdots,y_{n}})\gamma(x_{1}) = 0,\]
\[(1\otimes\mathrm{ad}_{y_{1},\cdots,y_{i-1},\hat{y}_{i},y_{i+1},\cdots,y_{n}}\otimes^{n-2}1+\otimes^{n-1}1\otimes\mathrm{ad}_{y_{1},\cdots,y_{i-1},\hat{y}_{i},y_{i+1},\cdots,y_{n}})\gamma(x_{2}) = 0,\]
\[\cdots\]
\[(\otimes^{n-2}1\otimes\mathrm{ad}_{y_{1},\cdots,y_{i-1},\hat{y}_{i},y_{i+1},\cdots,y_{n}}\otimes 1+\otimes^{n-1}1\otimes\mathrm{ad}_{y_{1},\cdots,y_{i-1},\hat{y}_{i},y_{i+1},\cdots,y_{n}})\gamma(x_{n-1}) = 0.\]

Summing over the above equations, we obtain

\[\sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-i}\big{(}(\otimes^{j-1}1\otimes\mathrm{ad}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}\otimes^{n-j}1)+(\otimes^{n-1}1\otimes\mathrm{ad}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}})\big{)}\gamma(x_{j})=0. \tag{36}\]

For the tensor \(\xi_{1}\otimes\cdots\otimes\xi_{n-1}\otimes\eta_{i}\in\otimes^{n}\mathfrak{g}^{*}\), the left-hand side of (36) is equal to

\[\langle\ \sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-i}\big{(}(\otimes^{j-1}1\otimes\mathrm{ad}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}\otimes^{n-j}1)+(\otimes^{n-1}1\otimes\mathrm{ad}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}})\big{)}\gamma(x_{j}),\xi_{1}\otimes\cdots\otimes\xi_{n-1}\otimes\eta_{i}\ \rangle\]
\[= -\sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-j}\langle\ [y_{1},\cdots,y_{i-1},\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{j-1},\hat{\xi}_{j},\xi_{j+1},\cdots,\xi_{n-1},\eta_{i}}x_{j},y_{i+1},\cdots,y_{n}],\xi_{j}\ \rangle\]
\[\quad+\sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-i}(-1)^{n-j}\langle\ \mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{j-1},\hat{\xi}_{j},\xi_{j+1},\cdots,\xi_{n-1},\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}\eta_{i}}x_{j},\xi_{j}\ \rangle.\]

The right-hand side of (36) is equal to \(0\), and then we have

\[\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\,\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}\eta_{i}}\,x_{j}=\sum_{i=1}^{n}[y_{1},\cdots,y_{i-1},\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\eta_{i}}x_{j},y_{i+1},\cdots,y_{n}].
\tag{37}\]

By (31), for each integer \(1\leq j\leq n\) and \(y_{i}\in\mathfrak{g}\), \(i\neq k\), we can get the following \(n(n-1)\) equations:

\[(\otimes^{i-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\otimes^{n-i}1)\gamma(y_{1})+(\otimes^{k-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{1}}\otimes^{n-k}1)\gamma(y_{i}) = 0,\]
\[(\otimes^{i-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{1}}\otimes^{n-i}1)\gamma(y_{2})+(\otimes^{k-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{2}}\otimes^{n-k}1)\gamma(y_{i}) = 0,\]
\[\cdots\]
\[(\otimes^{i-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{1}}\otimes^{n-i}1)\gamma(y_{n})+(\otimes^{k-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{n}}\otimes^{n-k}1)\gamma(y_{i}) = 0.\]

Summing over the above equations, for all \(i=1,2,\cdots,n\), we obtain

\[\sum_{j=1}^{n-1}\sum_{k=1,k\neq i}^{n}\big{(}(\otimes^{i-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\otimes^{n-i}1)\gamma(y_{1})+(\otimes^{k-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{1}}\otimes^{n-k}1)\gamma(y_{i})\big{)}=0;\]

thus, we have

\[\sum_{i=1}^{n}\sum_{j=1}^{n-1}\sum_{k=1}^{n}(-1)^{n-j}(\otimes^{i-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\otimes^{n-i}1)\gamma(y_{k})=\sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-j}(\otimes^{i-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\otimes^{n-i}1)\gamma(y_{i}). \tag{38}\]

For the tensor \(\eta_{1}\otimes\cdots\otimes\eta_{i-1}\otimes\xi_{j}\otimes\eta_{i+1}\otimes\cdots\otimes\eta_{n}\in\otimes^{n}\mathfrak{g}^{*}\), the left-hand side of (38) is equal to

\[\sum_{i=1}^{n}\sum_{j=1}^{n-1}\sum_{k=1}^{n}(-1)^{n-j}\langle\ (\otimes^{i-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\otimes^{n-i}1)\gamma(y_{k}),\eta_{1}\otimes\cdots\otimes\eta_{i-1}\otimes\xi_{j}\otimes\eta_{i+1}\otimes\cdots\otimes\eta_{n}\ \rangle\]
\[= \sum_{i=1}^{n}\sum_{j=1}^{n-1}\sum_{k=1}^{i-1}(-1)^{n-j}(-1)^{n-k}\langle\ \mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{k},\cdots,\eta_{i-1},\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\xi_{j},\eta_{i+1},\cdots,\eta_{n}}y_{k},\eta_{k}\ \rangle\]
\[\quad+\sum_{i=1}^{n}\sum_{j=1}^{n-1}\sum_{k=i+1}^{n}(-1)^{n-j}(-1)^{n-k}\langle\ \mathrm{ad}^{*}_{\eta_{1},\cdots,\eta_{i-1},\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\xi_{j},\eta_{i+1},\cdots,\hat{\eta}_{k},\cdots,\eta_{n}}y_{k},\eta_{k}\ \rangle\]
\[\quad-\sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-j}\langle\ y_{i},[\eta_{1},\cdots,\eta_{i-1},\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\xi_{j},\eta_{i+1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}\ \rangle,\]

and the right-hand side of (38) is equal to

\[\sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-j}\langle\ (\otimes^{i-1}1\otimes\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\otimes^{n-i}1)\gamma(y_{i}),\eta_{1}\otimes\cdots\otimes\eta_{i-1}\otimes\xi_{j}\otimes\eta_{i+1}\otimes\cdots\otimes\eta_{n}\ \rangle=-\sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-j}\langle\ y_{i},[\eta_{1},\cdots,\eta_{i-1},\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\xi_{j},\eta_{i+1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}\ \rangle.\]

Then we have

\[\sum_{i=1}^{n}\sum_{k=1}^{i-1}(-1)^{n-k}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{k},\cdots,\eta_{i-1},\,\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\xi_{j},\,\eta_{i+1},\cdots,\eta_{n}}\,y_{k}+\sum_{i=1}^{n}\sum_{k=i+1}^{n}(-1)^{n-k}\mathrm{ad}^{*}_{\eta_{1},\cdots,\eta_{i-1},\,\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\xi_{j},\,\eta_{i+1},\cdots,\hat{\eta}_{k},\cdots,\eta_{n}}\,y_{k}=0. \tag{39}\]
Similarly, we can obtain

\[\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}([\eta_{1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}})=\sum_{i=1}^{n}[\eta_{1},\cdots,\eta_{i-1},\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\eta_{i},\eta_{i+1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}; \tag{40}\]

\[[\xi_{1},\cdots,\xi_{n-1},\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}\eta_{i}]_{\mathfrak{g}^{*}}=\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}([\xi_{1},\cdots,\xi_{n-1},\eta_{i}]_{\mathfrak{g}^{*}})+\sum_{i=1}^{n}\sum_{k=1}^{i-1}(-1)^{n-k}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{k},\cdots,y_{i-1},\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}y_{i},y_{i+1},\cdots,y_{n}}\eta_{k}+\sum_{i=1}^{n}\sum_{k=i+1}^{n}(-1)^{n-k}\mathrm{ad}^{*}_{y_{1},\cdots,y_{i-1},\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}y_{i},y_{i+1},\cdots,\hat{y}_{k},\cdots,y_{n}}\eta_{k}; \tag{41}\]

\[\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},\,\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}y_{i}}\,\xi_{j}=\sum_{i=1}^{n}[\eta_{1},\cdots,\eta_{i-1},\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\xi_{j},\eta_{i+1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}; \tag{42}\]

\[\sum_{i=1}^{n}\sum_{k=1}^{i-1}(-1)^{n-k}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{k},\cdots,y_{i-1},\,\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\eta_{i}}x_{j},\,y_{i+1},\cdots,y_{n}}\,\eta_{k}+\sum_{i=1}^{n}\sum_{k=i+1}^{n}(-1)^{n-k}\mathrm{ad}^{*}_{y_{1},\cdots,y_{i-1},\,\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\eta_{i}}x_{j},\,y_{i+1},\cdots,\hat{y}_{k},\cdots,y_{n}}\,\eta_{k}=0. \tag{43}\]

By (1), for all \(y_{1},\cdots,y_{n}\in\mathfrak{g},\ \xi_{1},\cdots,\xi_{n-1},\ \eta_{1},\cdots,\eta_{n}\in\mathfrak{g}^{*}\) we have

\[\langle\,\sum_{i=1}^{n}y_{i},[\xi_{1},\cdots,\xi_{n-1},[\eta_{1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}]_{\mathfrak{g}^{*}}\,\rangle=\langle\,\sum_{i=1}^{n}y_{i},\sum_{k=1}^{n}[\eta_{1},\cdots,\eta_{k-1},[\xi_{1},\cdots,\xi_{n-1},\eta_{k}]_{\mathfrak{g}^{*}},\eta_{k+1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}\,\rangle.
\tag{44}\]

The left-hand side of (44) is equal to

\[\langle\,\sum_{i=1}^{n}y_{i},[\xi_{1},\cdots,\xi_{n-1},[\eta_{1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}]_{\mathfrak{g}^{*}}\,\rangle=\langle\,\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}(\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}y_{i}),\eta_{i}\,\rangle,\]

and the right-hand side of (44) is equal to

\[\langle\,\sum_{i=1}^{n}y_{i},\sum_{k=1}^{n}[\eta_{1},\cdots,\eta_{k-1},[\xi_{1},\cdots,\xi_{n-1},\eta_{k}]_{\mathfrak{g}^{*}},\eta_{k+1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}\,\rangle\]
\[= \langle\,\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}(\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}y_{i}),\eta_{i}\,\rangle\]
\[\quad-\langle\,\sum_{i=1}^{n}\sum_{k=1}^{i-1}(-1)^{n-k}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{k},\cdots,\eta_{i-1},[\xi_{1},\cdots,\xi_{n-1},\eta_{i}]_{\mathfrak{g}^{*}},\eta_{i+1},\cdots,\eta_{n}}y_{k},\eta_{k}\,\rangle\]
\[\quad-\langle\,\sum_{i=1}^{n}\sum_{k=i+1}^{n}(-1)^{n-k}\mathrm{ad}^{*}_{\eta_{1},\cdots,\eta_{i-1},[\xi_{1},\cdots,\xi_{n-1},\eta_{i}]_{\mathfrak{g}^{*}},\eta_{i+1},\cdots,\hat{\eta}_{k},\cdots,\eta_{n}}y_{k},\eta_{k}\,\rangle.\]

Then we have

\[\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}(\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}y_{i})=\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}(\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}y_{i})+\sum_{i=1}^{n}\sum_{k=1}^{i-1}(-1)^{n-k}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{k},\cdots,\eta_{i-1},[\xi_{1},\cdots,\xi_{n-1},\eta_{i}]_{\mathfrak{g}^{*}},\eta_{i+1},\cdots,\eta_{n}}y_{k}+\sum_{i=1}^{n}\sum_{k=i+1}^{n}(-1)^{n-k}\mathrm{ad}^{*}_{\eta_{1},\cdots,\eta_{i-1},[\xi_{1},\cdots,\xi_{n-1},\eta_{i}]_{\mathfrak{g}^{*}},\eta_{i+1},\cdots,\hat{\eta}_{k},\cdots,\eta_{n}}y_{k}. \tag{45}\]

Similarly, we can obtain

\[\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}(\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}\eta_{i})=\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}(\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\eta_{i})+\sum_{i=1}^{n}\sum_{k=1}^{i-1}(-1)^{n-k}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{k},\cdots,y_{i-1},[x_{1},\cdots,x_{n-1},y_{i}],y_{i+1},\cdots,y_{n}}\eta_{k}+\sum_{i=1}^{n}\sum_{k=i+1}^{n}(-1)^{n-k}\mathrm{ad}^{*}_{y_{1},\cdots,y_{i-1},[x_{1},\cdots,x_{n-1},y_{i}],y_{i+1},\cdots,\hat{y}_{k},\cdots,y_{n}}\eta_{k}.\]

By (1), we have

\[[\xi_{1},\cdots,\xi_{n-1},[\eta_{1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}]_{\mathfrak{g}^{*}}=\sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-i}(-1)^{n-j}[\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\eta_{i},[\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n},\xi_{j}]_{\mathfrak{g}^{*}}]_{\mathfrak{g}^{*}}+\sum_{i=1}^{n}[\xi_{1},\cdots,\xi_{n-1},[\eta_{1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}]_{\mathfrak{g}^{*}};\]

thus, we obtain

\[-(n-1)[\xi_{1},\cdots,\xi_{n-1},[\eta_{1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}]_{\mathfrak{g}^{*}}=\sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-i}(-1)^{n-j}[\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\eta_{i},[\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n},\xi_{j}]_{\mathfrak{g}^{*}}]_{\mathfrak{g}^{*}}.
\tag{46}\] By (46), we have \[-\sum_{j=1}^{n-1}\langle\ [\xi_{1},\cdots,\xi_{n-1},[\eta_{1},\cdots,\eta_ {n}]_{\mathbb{S}^{*}}]_{\mathbb{S}^{*}},x_{j}\ \rangle \tag{47}\] \[=\sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-i}(-1)^{n-j}\langle\ [\xi_{1}, \cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\eta_{i},[\eta_{1},\cdots,\eta_{i-1},\hat{ \eta}_{i},\eta_{i+1},\cdots,\eta_{n},\xi_{j}]_{\mathfrak{q}^{*}},x_{j}\ \rangle,\] the left-hand side of (47) is equal to \[-\sum_{j=1}^{n-1}\langle\ [\xi_{1},\cdots,\xi_{n-1},[\eta_{1},\cdots,\eta_{n }]_{\mathfrak{q}^{*}}]_{\mathfrak{q}^{*}},x_{j}\ \rangle=\sum_{j=1}^{n-1}(-1)^{n-j}\langle\ \xi_{j},\mathrm{ad}_{\xi_{1},\cdots,\xi_{j-1},\hat{\xi}_{j},\xi_{j+1},\cdots, \xi_{n-1},[\eta_{1},\cdots,\eta_{n}]_{\mathfrak{q}^{*}}}^{*}x_{j}\ \rangle,\] the right-hand side of (47) is equal to \[\sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-i}(-1)^{n-j}\langle\ [\xi_{1}, \cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\eta_{i},[\eta_{1},\cdots,\eta_{i-1}, \hat{\eta}_{i},\eta_{i+1},\cdots,\eta_{n},\xi_{j}]_{\mathfrak{q}^{*}}]_{ \mathfrak{q}^{*}},x_{j}\ \rangle\] \[= \sum_{i=1}^{n}\sum_{j=1}^{n-1}(-1)^{n-i}(-1)^{n-j}\langle\ \xi_{j}, \mathrm{ad}_{\eta_{1},\cdots,\eta_{i-1},\hat{\eta}_{i},\eta_{i+1},\cdots,\eta_ {n}}^{*}(\mathrm{ad}_{\xi_{1},\cdots,\xi_{j-1},\hat{\xi}_{j},\xi_{j+1},\cdots, \xi_{n-1},\eta_{i}}^{*}x_{j})\ \rangle.\] Then we have \[\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}_{\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi _{n-1},[\eta_{1},\cdots,\eta_{n}]_{\mathfrak{q}^{*}}}^{*}x_{j}=\sum_{i=1}^{n}( -1)^{n-i}\mathrm{ad}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}^{*}(\sum _{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}_{\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n- 1},\eta_{i}}^{*}x_{j}). \tag{48}\] Similarly, we can obtain \[\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1 },[y_{1},\cdots,y_{n}]}^{*}\xi_{j}=\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}_{y_{1}, \cdots,\hat{y}_{i},\cdots,y_{n}}^{*}(\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}_{x_ {1},\cdots,\hat{x}_{j},\cdots,x_{n-1},\eta_{i}}^{*}\xi_{j}). \tag{49}\] Then by (33),(35),(37),(39)-(43),(45),(46),(48),(49) and the fact that \(\mathfrak{g}\), \(\mathfrak{g}^{*}\) are \(n\)-Lie algebras, next we can proof the Filippov-Jacobi Identity on \(\mathfrak{g}\oplus\mathfrak{g}^{*}\). 
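For orientation, it may help to record the lowest case before carrying out the general verification (assuming, as expected, that the bracket (26) specializes in the standard way for \(n=2\)):

\[[x+\xi,y+\eta]_{\mathfrak{b}}=[x,y]+\mathrm{ad}^{*}_{x}\eta-\mathrm{ad}^{*}_{y}\xi+[\xi,\eta]_{\mathfrak{g}^{*}}+\mathrm{ad}^{*}_{\xi}y-\mathrm{ad}^{*}_{\eta}x,\]

which is precisely the bracket of the classical Drinfeld double of a Lie bialgebra; the computation below is the \(n\)-ary analogue of the usual verification of the Jacobi identity on the double.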
\[[x_{1}+\xi_{1},\cdots,x_{n-1}+\xi_{n-1},[y_{1}+\eta_{1},\cdots,y_{n}+\eta_{n}]_{\mathfrak{b}}]_{\mathfrak{b}}\] \[= [x_{1},\cdots,x_{n-1},[y_{1},\cdots,y_{n}]]+[x_{1},\cdots,x_{n-1},\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}y_{i}]\] \[+ \mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}([y_{1},\cdots,y_{n}]+\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}y_{i})\] \[+ \sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\,[\eta_{1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}+\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}\eta_{i}}\,x_{j}\] \[+ \sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},\,[y_{1},\cdots,y_{n}]+\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}y_{i}}\,\xi_{j}\] \[+ \mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}([\eta_{1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}+\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}\eta_{i})\] \[+ [\xi_{1},\cdots,\xi_{n-1},[\eta_{1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}]_{\mathfrak{g}^{*}}+[\xi_{1},\cdots,\xi_{n-1},\sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}\eta_{i}]_{\mathfrak{g}^{*}}.\]

\[\sum_{i=1}^{n}\big{[}y_{1}+\eta_{1},\cdots,y_{i-1}+\eta_{i-1},[x_{1}+\xi_{1},\cdots,x_{n-1}+\xi_{n-1},y_{i}+\eta_{i}]_{\mathfrak{b}},y_{i+1}+\eta_{i+1},\cdots,y_{n}+\eta_{n}\big{]}_{\mathfrak{b}}\] \[= \sum_{i=1}^{n}\big{[}y_{1},\cdots,y_{i-1},[x_{1},\cdots,x_{n-1},y_{i}],y_{i+1},\cdots,y_{n}\big{]}+\sum_{i=1}^{n}[y_{1},\cdots,y_{i-1},\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}y_{i},y_{i+1},\cdots,y_{n}]\] \[+ \sum_{i=1}^{n}[y_{1},\cdots,y_{i-1},\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\eta_{i}}x_{j},y_{i+1},\cdots,y_{n}]\] \[+ \sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{i},\cdots,\eta_{n}}([x_{1},\cdots,x_{n-1},y_{i}]+\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\eta_{i}}x_{j}+\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}y_{i})\] \[+ \sum_{i=1}^{n}\sum_{k=1}^{i-1}(-1)^{n-k}\mathrm{ad}^{*}_{\eta_{1},\cdots,\hat{\eta}_{k},\cdots,\eta_{i-1},\,[\xi_{1},\cdots,\xi_{n-1},\eta_{i}]_{\mathfrak{g}^{*}}+\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\xi_{j}+\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\eta_{i},\,\eta_{i+1},\cdots,\eta_{n}}\,y_{k}\] \[+ \sum_{i=1}^{n}\sum_{k=i+1}^{n}(-1)^{n-k}\mathrm{ad}^{*}_{\eta_{1},\cdots,\eta_{i-1},\,[\xi_{1},\cdots,\xi_{n-1},\eta_{i}]_{\mathfrak{g}^{*}}+\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\xi_{j}+\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\eta_{i},\,\eta_{i+1},\cdots,\hat{\eta}_{k},\cdots,\eta_{n}}\,y_{k}\] \[+ \sum_{i=1}^{n}\big{[}\eta_{1},\cdots,\eta_{i-1},[\xi_{1},\cdots,\xi_{n-1},\eta_{i}]_{\mathfrak{g}^{*}},\eta_{i+1},\cdots,\eta_{n}\big{]}_{\mathfrak{g}^{*}}+\sum_{i=1}^{n}[\eta_{1},\cdots,\eta_{i-1},\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\eta_{i},\eta_{i+1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}\] \[+ \sum_{i=1}^{n}[\eta_{1},\cdots,\eta_{i-1},\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\xi_{j},\eta_{i+1},\cdots,\eta_{n}]_{\mathfrak{g}^{*}}\] \[+ \sum_{i=1}^{n}(-1)^{n-i}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{i},\cdots,y_{n}}([\xi_{1},\cdots,\xi_{n-1},\eta_{i}]_{\mathfrak{g}^{*}}+\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{x_{1},\cdots,\hat{x}_{j},\cdots,x_{n-1},y_{i}}\xi_{j}+\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\eta_{i})\] \[+ \sum_{i=1}^{n}\sum_{k=1}^{i-1}(-1)^{n-k}\mathrm{ad}^{*}_{y_{1},\cdots,\hat{y}_{k},\cdots,y_{i-1},\,[x_{1},\cdots,x_{n-1},y_{i}]+\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\eta_{i}}x_{j}+\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}y_{i},\,y_{i+1},\cdots,y_{n}}\,\eta_{k}\] \[+ \sum_{i=1}^{n}\sum_{k=i+1}^{n}(-1)^{n-k}\mathrm{ad}^{*}_{y_{1},\cdots,y_{i-1},\,[x_{1},\cdots,x_{n-1},y_{i}]+\sum_{j=1}^{n-1}(-1)^{n-j}\mathrm{ad}^{*}_{\xi_{1},\cdots,\hat{\xi}_{j},\cdots,\xi_{n-1},\eta_{i}}x_{j}+\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}y_{i},\,y_{i+1},\cdots,\hat{y}_{k},\cdots,y_{n}}\,\eta_{k}.\]

Comparing the two expansions term by term and using (33), (35), (37), (39)-(43), (45), (46), (48) and (49), all terms coincide; hence the Filippov-Jacobi identity holds on \(\mathfrak{g}\oplus\mathfrak{g}^{*}\), which completes the verification.

**Lemma 4.21**.: _Let \((\mathfrak{g},[\cdot,\cdots,\cdot])\) and \((\mathfrak{g}^{*},[\cdot,\cdots,\cdot]_{\mathfrak{g}^{*}})\) be \(n\)-Lie algebras. If, for all \(x,x_{1},x_{2},\cdots,x_{n-1}\in\mathfrak{g}\) and \(\xi,\xi_{1},\xi_{2},\cdots,\xi_{n-1}\in\mathfrak{g}^{*}\),_

\[\langle\xi_{2},[x_{1},\cdots,x_{n-1},\xi_{1}]_{\mathfrak{b}}\rangle_{\mathfrak{b}}=0, \tag{51}\]

\[\langle x_{2},[\xi_{1},\cdots,\xi_{n-1},x_{1}]_{\mathfrak{b}}\rangle_{\mathfrak{b}}=0, \tag{52}\]

_and_

\[\begin{cases}\langle x+\xi,[x_{1},\cdots,x_{n-2},\xi_{1},\xi_{2}]_{\mathfrak{b}}\rangle_{\mathfrak{b}}=0,\\ \qquad\qquad\qquad\vdots\\ \langle x+\xi,[x_{1},x_{2},\xi_{1},\cdots,\xi_{n-2}]_{\mathfrak{b}}\rangle_{\mathfrak{b}}=0,\end{cases} \tag{53}\]

_where \([\cdot,\cdots,\cdot]_{\mathfrak{b}}\) is given by (26), then \([\cdot,\cdots,\cdot]_{\mathfrak{b}}\) is the unique \(n\)-Lie bracket such that \((\mathfrak{g},[\cdot,\cdots,\cdot])\) and \((\mathfrak{g}^{*},[\cdot,\cdots,\cdot]_{\mathfrak{g}^{*}})\) are \(n\)-Lie subalgebras of \((\mathfrak{b},[\cdot,\cdots,\cdot]_{\mathfrak{b}})\) and the symmetric bilinear form \(\langle\cdot,\cdot\rangle_{\mathfrak{b}}\) is invariant._

Proof.: It is straightforward to deduce that \(\langle\cdot,\cdot\rangle_{\mathfrak{b}}\) is a non-degenerate symmetric bilinear form.
Since \((\mathfrak{g},[\cdot,\cdots,\cdot])\) is an \(n\)-Lie subalgebra of \((\mathfrak{b},[\cdot,\cdots,\cdot]_{\mathfrak{b}})\), by the invariance condition (22), for all \(x_{1},\cdots,x_{n}\in\mathfrak{g},\xi\in\mathfrak{g}^{*}\), we have \[\langle x_{n},[x_{1},\cdots,x_{n-1},\xi]_{\mathfrak{b}}\rangle_{\mathfrak{b}}=-\langle[x_{1},\cdots,x_{n-1},x_{n}]_{\mathfrak{b}},\xi\rangle_{\mathfrak{b}}=-\langle[x_{1},\cdots,x_{n-1},x_{n}],\xi\rangle_{\mathfrak{b}}.\] By (50) and (27), the right-hand side of the above equation is equal to \[-\langle\ \xi,[x_{1},\cdots,x_{n-1},x_{n}]\ \rangle=\langle\ \mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\xi,x_{n}\ \rangle=\langle x_{n},\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\xi\rangle_{\mathfrak{b}}.\] By (51) and the non-degeneracy of \(\langle\cdot,\cdot\rangle_{\mathfrak{b}}\), we have \[[x_{1},\cdots,x_{n-1},\xi]_{\mathfrak{b}}=\mathrm{ad}^{*}_{x_{1},\cdots,x_{n-1}}\xi. \tag{54}\] Similarly, for all \(\xi_{1},\cdots,\xi_{n}\in\mathfrak{g}^{*},x\in\mathfrak{g}\), by (50) and (28) we have \[\langle\xi_{n},[\xi_{1},\cdots,\xi_{n-1},x]_{\mathfrak{b}}\rangle_{\mathfrak{b}}=\langle\xi_{n},\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}x\rangle_{\mathfrak{b}}.\] Thus, by (52), we have \[[\xi_{1},\cdots,\xi_{n-1},x]_{\mathfrak{b}}=\mathrm{ad}^{*}_{\xi_{1},\cdots,\xi_{n-1}}x. \tag{55}\] By (53), we have \[[x_{1},\cdots,x_{n-2},\xi_{1},\xi_{2}]_{\mathfrak{b}}=0,\cdots,[x_{1},x_{2},\xi_{1},\cdots,\xi_{n-2}]_{\mathfrak{b}}=0. \tag{56}\] Therefore, for all \(x_{1},\cdots,x_{n}\in\mathfrak{g},\ \xi_{1},\cdots,\xi_{n}\in\mathfrak{g}^{*}\), we obtain the unique linear map \([\cdot,\cdots,\cdot]_{\mathfrak{b}}:\wedge^{n}\mathfrak{b}\to\mathfrak{b}\) satisfying (26). The conclusion then follows. **Definition 4.22**.: _Let \((\mathfrak{g},[\cdot,\cdots,\cdot],\gamma_{\mathfrak{g}})\) be a local centroid \(R_{1}\)-operad \(n\)-Lie bialgebra, where \(\gamma_{\mathfrak{g}}:\mathfrak{g}\to\otimes^{n}\mathfrak{g}\) is a linear map that defines an \(n\)-Lie bracket on \(\mathfrak{g}^{*}\) through the dual map \(\gamma_{\mathfrak{g}}^{*}:\otimes^{n}\mathfrak{g}^{*}\to\mathfrak{g}^{*}\). Then we call \((\mathfrak{g},\gamma_{\mathfrak{g}})\) a_ **double construction \(n\)-Lie bialgebra.** **Theorem 4.23**.: _Let \(\langle\cdot,\cdot\rangle_{\mathfrak{b}}\) be the bilinear form defined by (50) and let \([\cdot,\cdots,\cdot]_{\mathfrak{b}}\) be the linear map defined by (26). Then \(((\mathfrak{g}\oplus\mathfrak{g}^{*},[\cdot,\cdots,\cdot]_{\mathfrak{b}},\langle\cdot,\cdot\rangle_{\mathfrak{b}}),\mathfrak{g},\mathfrak{g}^{*})\) is a Manin triple if and only if \((\mathfrak{g},\gamma_{\mathfrak{g}})\) is a double construction \(n\)-Lie bialgebra._ Proof.: It is obtained directly from Proposition 4.20 and Lemma 4.21. **Proposition 4.24**.: _Let \((\mathfrak{g},\gamma_{\mathfrak{g}})\) be a double construction \(n\)-Lie bialgebra with a basis \(\{e_{1},\cdots,e_{n}\}\)._
_For positive integers \(1\leq a_{1},\cdots,a_{n},s_{1},\cdots,s_{n},i,k\leq n\) and structure constants \(T^{k}_{a_{1},\cdots,a_{n}},C^{s_{1},\cdots,s_{n}}_{i}\in F\), set_ \[[e_{a_{1}},\cdots,e_{a_{n}}]=\sum_{k=1}^{n}T^{k}_{a_{1},\cdots,a_{n}}e_{k},\quad\gamma_{\mathfrak{g}}(e_{i})=\sum_{s_{1},\cdots,s_{n}=1}^{n}C^{s_{1},\cdots,s_{n}}_{i}e_{s_{1}}\otimes\cdots\otimes e_{s_{n}},\] _then we have_ _(1) \(\gamma_{\mathfrak{g}}\) satisfies (29) if and only if the following equation holds:_ \[\sum_{k=1}^{n}\Big{(}T^{k}_{a_{1},\cdots,a_{n}}C^{s_{1},\cdots,s_{n}}_{k}-\sum_{i=1}^{n}(-1)^{n-1}T^{s_{i}}_{a_{2},\cdots,a_{n},k}C^{s_{1},\cdots,s_{i-1},k,s_{i+1},\cdots,s_{n}}_{a_{1}}\Big{)}=0. \tag{57}\] _(2) \(\gamma_{\mathfrak{g}}\) satisfies (30) if and only if the following equations hold:_ \[\sum_{s_{1},\cdots,s_{n},k=1}^{n}T^{k}_{a_{2},\cdots,a_{n},s_{j}}C^{s_{1},\cdots,s_{n}}_{a_{1}}=\sum_{s_{1},\cdots,s_{n},k=1}^{n}T^{k}_{a_{2},\cdots,a_{n},s_{n}}C^{s_{1},\cdots,s_{n}}_{a_{1}}=0,\quad\forall j=1,2,\cdots,n-1. \tag{58}\] _(3) \(\gamma_{\mathfrak{g}}\) satisfies (31) if and only if the following equations hold:_ \[\sum_{s_{1},\cdots,s_{n},j=1}^{n}T^{j}_{a_{2},\cdots,a_{n},s_{1}}C^{s_{1},\cdots,s_{n}}_{a_{1}}=\sum_{s_{1},\cdots,s_{n},j=1}^{n}T^{j}_{a_{1},\cdots,a_{n-1},s_{1}}C^{s_{1},\cdots,s_{n}}_{a_{n}}=0,\quad\forall i,k=1,2,\cdots,n;\ i\neq k. \tag{59}\] Proof.: It is obtained by a straightforward computation of (29)-(31), followed by a comparison of coefficients. **Acknowledgements:** The authors would like to thank the referees for helpful comments. The fourth author acknowledges support from the NSF China (12101328).
2306.02620
On the feasibility of performing quantum chemistry calculations on quantum computers
Quantum chemistry is envisioned as an early and disruptive application for quantum computers. Yet, closer scrutiny of the proposed algorithms shows that there are considerable difficulties along the way. Here, we propose two criteria for evaluating two leading quantum approaches for finding the ground state of molecules. The first criterion applies to the variational quantum eigensolver (VQE) algorithm. It sets an upper bound to the level of imprecision/decoherence that can be tolerated in quantum hardware as a function of the targeted precision, the number of gates and the typical energy contribution from states populated by decoherence processes. We find that decoherence is highly detrimental to the accuracy of VQE, and performing relevant chemistry calculations would require a level of performance expected of fault-tolerant quantum computers, not mere noisy hardware, even with advanced error mitigation techniques. Physically, the sensitivity of VQE to decoherence originates from the fact that, in VQE, the spectrum of the studied molecule has no correlation with the spectrum of the quantum hardware used to perform the computation. The second criterion applies to the quantum phase estimation (QPE) algorithm, which is often presented as the go-to replacement of VQE upon availability of (noiseless) fault-tolerant quantum computers. QPE requires an input state with a large enough overlap with the sought-after ground state. We provide a criterion to estimate this overlap quantitatively from the energy and the energy variance of said input state. Using input states from a variety of state-of-the-art classical methods, we show that the scaling of this overlap with system size does display the standard orthogonality catastrophe, namely an exponential suppression with system size. This in turn leads to an exponentially reduced QPE success probability.
Thibaud Louvet, Thomas Ayral, Xavier Waintal
2023-06-05T06:41:22Z
http://arxiv.org/abs/2306.02620v3
# Go-No go criteria for performing quantum chemistry calculations on quantum computers ###### Abstract Quantum chemistry is envisioned as an early and disruptive application where quantum computers would provide a genuine advantage with respect to purely classical approaches. In this work, we propose two criteria for evaluating the potential of the two leading quantum approaches for this class of problems. The first criterion applies to the Variational Quantum Eigensolver (VQE) algorithm and sets an upper bound to the level of noise that can be tolerated in quantum hardware as a function of the target precision and problem size. We find a crippling effect of noise, with an overall scaling of the precision that is generically _less_ favourable than in the corresponding classical algorithms. This is due to the studied molecule being unrelated to the hardware dynamics, and hence to its noise; conversely, the hardware noise populates states of the studied molecule of arbitrary energy. The second criterion applies to the Quantum Phase Estimation (QPE) algorithm that is often presented as the go-to replacement of VQE upon availability of (noiseless) fault-tolerant quantum computers. QPE suffers from the phenomenon known as the orthogonality catastrophe, which generically leads to an exponentially small success probability as the size of the problem grows. Our criterion allows one to estimate the importance of this phenomenon quantitatively from the knowledge of the variance of the energy of the input state used in the calculation. There exists a hierarchy in the applications that have been proposed for quantum computers, from the celebrated Shor algorithm [1] (exponentially faster than its known classical counterparts) to the Grover algorithm [2] (up to quadratically faster than its classical counterparts [3] but less specialized) to near-term algorithms such as the Variational Quantum Eigensolver (VQE) [4], which have no a priori parametric advantage over their classical counterparts but might have a practical one. This hierarchy ends with quantum (analog) simulations, where one gives up on the quantum gate model, i.e. on programmability. Going down this hierarchy, one gives up on the advantage provided by the quantum computer in terms of the degree of generality of the applications and, arguably, the expected provable speedup. In turn, the requirements on the hardware become less drastic, perhaps allowing one to obtain useful results without the need for a fault-tolerant approach [5]. This article focuses on one field that has been put forward as a possible near-term application for quantum computers: quantum chemistry. Quantum approaches to this problem have been blooming recently, with several good reviews that analyze the various aspects of the algorithms [6; 7] or the applicability of the approach to solve real problems in VQE [8; 9; 10; 11] or in its fault-tolerant counterpart, the Quantum Phase Estimation (QPE) algorithm [12; 13; 14]. Despite great expectations, it is a very difficult exercise to extrapolate the existing hardware capabilities to estimate whether a quantum advantage will eventually be reached. In this letter, we take a somewhat reverse approach and derive _necessary_ conditions to obtain such an advantage, thereby defining constraints that the hardware must fulfill if an advantage is to be obtained. _A criterion for Variational Quantum Eigensolving_. We start with an analysis of VQE and the level of quantum noise that it can sustain.
The VQE approach is very close in spirit to the classical algorithm of variational Monte-Carlo (VMC) [15]. One constructs a variational ansatz \(|\Psi_{V}\rangle=U(\vec{\theta})|0\rangle\) by applying a quantum circuit \(U(\vec{\theta})\) to an initial state \(|0\rangle\) of a system made of \(n\) qubits. The variables \(\vec{\theta}\) parametrize the ansatz. In a second step, one estimates the energy \(E_{V}=\langle\Psi_{V}|H|\Psi_{V}\rangle\) of the molecule. Here \(H\) is the Hamiltonian of the studied molecule, encoded in a form suitable for qubits. The ground state of \(H\) is \(|\Psi_{0}\rangle\), with a ground state energy \(E_{0}\), and one seeks to find \(E_{V}\geq E_{0}\) as close to \(E_{0}\) as possible. The energy estimation is performed by running the circuit \(N_{S}\) times until the statistical uncertainty of the measured quantities \(\eta_{S}\propto 1/\sqrt{N_{S}}\) is smaller than the desired precision. Then the parameters \(\vec{\theta}\) are updated in order to decrease the variational energy \(E_{V}\). The process is repeated until the energy has reached convergence. A central question for VQE is the ability of the algorithm to provide accurate results despite the presence of hardware imperfections such as noise or decoherence. Noise is known to lead to exponentially vanishing gradients [16] (hence a more difficult optimization) and quite stringent lower bounds on the lowest achievable variational energy [17]. Important efforts are ongoing to try and mitigate the effect of this noise [18; 19], but these techniques are generically also plagued with an exponential complexity [20; 21; 22; 23]. The effect of the noise in real hardware can be measured by the fidelity \(F=\langle\Psi_{V}|\rho|\Psi_{V}\rangle\). The fidelity expresses how the density matrix \(\rho\) of the quantum computer after state preparation differs from the expected one, \(|\Psi_{V}\rangle\langle\Psi_{V}|\). A fidelity smaller than one implies that \(\rho=F|\Psi_{V}\rangle\langle\Psi_{V}|+(1-F)\rho_{\rm noise}\), where \(\rho_{\rm noise}\) is the part of the density matrix that results from decoherence. The resulting energy is given by \(E=E_{V}+\Delta E\) with a noise-induced error \(\Delta E\) defined as \[\Delta E=(1-F)[E_{\rm noise}-E_{V}], \tag{1}\] with \(E_{\rm noise}={\rm Tr}(\rho_{\rm noise}H)\). A large corpus of experiments and theory [24], including the seminal "quantum supremacy" experiment by Google [25], indicates that the fidelity decays exponentially with the total number of applied gates \(N_{g}\), \[F\approx e^{-\epsilon N_{g}} \tag{2}\] where \(\epsilon\) is the average error per gate. In the leading quantum hardware, this error is dominated by the two-qubit gates, for which \(\epsilon\leq 1\%\). We now argue that the energy scale \(E_{\rm noise}-E_{V}\) generically scales as the square of the number of electrons, a very unfavourable scaling. Indeed, in general, the target Hamiltonian \(H\) is very different from the Hamiltonian that describes the hardware. Therefore, the sought-after \(|\Psi_{V}\rangle\) is generically a high-energy state of the hardware Hamiltonian. Conversely, hardware noise shares no structure with the studied molecule and it will typically populate eigenstates of \(H\) of _arbitrarily_ large energies. For instance, one of the simplest noise channels, the depolarizing noise, maps \(\rho\rightarrow(1-\epsilon)\rho+\epsilon I_{d}/2^{n}\), where \(I_{d}\) is the identity matrix.
It follows that \(E_{\rm noise}=E_{\infty}\) for this model, where \(E_{\infty}\) is the equilibrium energy of the Hamiltonian \(H\) at _infinite_ temperature. The same energy scale would be found using local Pauli errors, which have the same fixed point [16]. A first consequence of the above discussion lies in the scaling of the noise-induced error due to the long-range nature of the Coulomb interaction. Indeed, the ground-state energy of a molecule is generically an extensive quantity, i.e. it is proportional to the number \(N\) of electrons in the system: \(E_{0}\propto N\). Likewise, any reasonable variational ansatz will share this property, \(E_{V}\propto N\). The spectrum of \(H\), however, generically contains contributions from Coulomb interaction terms that scale as \(N^{2}\). For low-energy states, these terms are not significant because the electrons screen the nuclear electric field and no macroscopic dipole arises. However, in VQE, the Hamiltonian of the quantum hardware is distinct and independent from the physical Hamiltonian, so that noise can mix the target states with arbitrarily high-energy states. For example, we can build an extremely high-energy state for a given molecule by piling all the electrons on one atom and leaving the orbitals of the other atoms unoccupied. Such a state has an energy that scales as \(N^{2}\) (the classical charging energy of a capacitor). It follows that the error scales unfavourably as \[E_{\rm noise}=aN+bN^{2}. \tag{3}\] This is very different from e.g. VMC, which does not suffer from this problem (unless a particularly ill-chosen variational ansatz is used). Interestingly, quantum simulators (_aka_ analog quantum computers) do not generically suffer from this problem either. There, the Hamiltonian of the physical system (the hardware) is supposed to match as closely as possible the Hamiltonian that one wants to study. It follows that imperfections such as energy relaxation are not necessarily problematic: they are part of the hardware and perhaps have a counterpart in the problem that one wants to study. The quadratic scaling of the VQE error is a direct consequence of its added programmability with respect to an analog simulation. We now construct a quantitative criterion for using VQE on a given hardware for a given molecule. A generally accepted minimum accuracy necessary for a quantum chemistry calculation to be of real use is the so-called chemical accuracy \(\eta_{\rm chem}=1\,{\rm kcal/mol}\approx 1.6\,{\rm mHa}\approx 500\,{\rm K}\). The energy \(E_{\rm noise}\), on the other hand, will have a typical value of the order of 1 Ha or higher (the typical scale of the Hamiltonian matrix elements in the atomic orbitals). For instance, for the \(H_{2}\) molecule in a minimal basis set of just two spin-orbitals per atom (STO-3G), one gets \(E_{\infty}-E_{0}=1.02\) Ha. This energy quickly climbs to tens of Hartrees when one uses larger basis sets, necessary for achieving chemical accuracy.

Figure 1: Hydrogen chain energy scales. Hartree-Fock ground-state energy \(E_{HF}\) and thermalized energy \(E_{\infty}\) per atom, for the "STO-3G" (2 qubits per atom), "6-31G" (4 qubits per atom) [26], "cc-pVDZ" (10 qubits per atom) and "cc-pVTZ" (28 qubits per atom) [27] bases. For the "STO-3G" basis we have also plotted the highest excited-state energy \(E_{max}\) up to \(N=10\) (the regime where exact diagonalization is possible). For all curves, the zero offset for the energies per atom is \(E_{0}(N=2)/2\), where \(E_{0}(N=2)\) comes from exact diagonalization of the H\({}_{2}\) Hamiltonian in the "STO-3G" basis.

We can thus construct our first criterion: it simply reads \(\Delta E\leq\eta_{\rm chem}\), and translates [using Eqs. (1) and (2)] into \[\epsilon\leq\frac{\eta_{\rm chem}}{(E_{\rm noise}-E_{0})N_{g}}. \tag{4}\] It follows that the error level must be very low, especially if an expressive ansatz with a large number of gates is to be used, another step that is needed to reach chemical accuracy. What does this criterion imply in practice? Let us consider a recent blind-test benchmark on the benzene molecule [28]. Benzene is a non-trivial calculation for classical approaches. Yet [28] showcased that a variety of classical techniques arrived at chemical precision using 30 electrons distributed on 108 orbitals. Using the UCC ansatz, inspired by the successful coupled-cluster approach used in quantum chemistry, would require including at least single, double and triple excitations (quadruples would probably be needed too), which translates into \(\sim N^{6}\) gates. One arrives at a noise level \(\epsilon\leq 10^{-12}\) or lower, that is, many orders of magnitude below the best existing quantum hardware. To further illustrate the unfavourable behaviour of the noise-induced error, we have computed the energy \(E_{\infty}-E_{0}\) of a chain of hydrogen atoms in various basis sets of increasing accuracy, using the PySCF package [29]. Figure 1 shows the energy per atom \(E_{\infty}/N\) (counted from \(E_{0}/N\), which at this scale does not depend on \(N\)) for various standard basis sets of increasing accuracy (typically only the largest one can reach chemical accuracy). To put the energy scales into perspective, the right axis shows the energies in Kelvin: one quickly arrives at Sun-core levels of temperature. Also shown are the Hartree-Fock energy (empty squares) and the maximum energy of \(H\) for the minimal basis set STO-3G (blue stars). As soon as one steps away from the minimal basis set and/or the smallest chains, one finds that the energy \(E_{\infty}\) becomes very large, which leads to an explosion of the noise-induced error and rules out any practical computation. A last consequence of the noise-induced error concerns the statistical precision of the calculation. In VQE, one does not measure the energy \(E\) directly but rather its different subterms, separately, from the one- and two-body reduced density matrices. As a result, even if \(|\Psi_{V}\rangle\) is close to the actual ground state (hence the total energy has a low variance), these subterms taken separately have large standard deviations (\(|\Psi_{V}\rangle\) is not an eigenstate of any of them), of the order of 1 Ha or larger. This implies that a large number of shots \(N_{S}\) is needed to reach high precision. To this problem, the noise adds a contribution to the variance of the order of \((\Delta E)^{2}\), which will have a drastic impact on the measurement time needed, e.g., to implement any noise-mitigation scheme. Note that classical methods typically do not suffer from this problem. In a VMC calculation, the statistical error \(\eta_{S}\) is given by \(\eta_{S}=\sigma_{V}/\sqrt{N_{S}}\) where \(\sigma_{V}^{2}=\langle\Psi_{V}|H^{2}|\Psi_{V}\rangle-E_{V}^{2}\) is the variance of the energy of the ansatz. Hence, when the ansatz is properly chosen, it has a low variance so that one can reach high precision at an affordable \(N_{S}\).
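Criterion (4) is simple enough that its practical consequences can be checked with a few lines of code. The following sketch is illustrative only: the random toy matrix merely verifies that a depolarizing channel yields \(E_{\rm noise}={\rm Tr}(H)/2^{n}=E_{\infty}\), and the benzene-like numbers (\(N_{g}\sim N^{6}\) with \(N=30\) electrons, \(E_{\rm noise}-E_{0}\approx 2\) Ha) are our own order-of-magnitude assumptions, not values taken from Ref. [28].

```python
import numpy as np

ETA_CHEM = 1.6e-3  # chemical accuracy, in Hartree

def max_error_per_gate(e_noise_minus_e0, n_gates):
    """Criterion (4): largest error per gate compatible with chemical accuracy."""
    return ETA_CHEM / (e_noise_minus_e0 * n_gates)

def noise_induced_error(eps, n_gates, e_noise, e_v):
    """Eqs. (1)-(2): Delta E = (1 - F)(E_noise - E_V), with F = exp(-eps * N_g)."""
    fidelity = np.exp(-eps * n_gates)
    return (1.0 - fidelity) * (e_noise - e_v)

# Depolarizing-noise sanity check on a toy random "Hamiltonian":
# the fully mixed state I/2^n gives E_noise = Tr(H)/2^n, i.e. E_infinity.
rng = np.random.default_rng(0)
n_qubits = 6
dim = 2**n_qubits
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2
E_inf = np.trace(H) / dim  # = Tr(rho_noise * H) for rho_noise = I/dim

# Illustrative benzene-like numbers (assumptions, see text):
N = 30           # number of electrons
n_gates = N**6   # UCC-type gate count scaling ~ N^6
print(f"toy E_infinity                : {E_inf:.3f}")
print(f"max eps for chemical accuracy : {max_error_per_gate(2.0, n_gates):.1e}")
print(f"Delta E at eps = 1e-3 (Ha)    : {noise_induced_error(1e-3, n_gates, 2.0, 0.0):.3f}")
```

With these assumptions the bound lands at \(\epsilon\sim 10^{-12}\), consistent with the estimate quoted above.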
_A criterion for Quantum Phase Estimation._ We now turn to the second criterion, relevant for the Quantum Phase Estimation (QPE) algorithm. QPE starts from a guess input state \(|\Psi_{V}\rangle\) and applies the quantum phase estimation algorithm to project the state onto the eigenvectors of \(e^{-iHt}\), i.e. the eigenvectors of \(H\), and extract the corresponding eigenenergy \(E_{0}\). QPE is much more demanding than VQE and it is assumed that one is in possession of a hypothetical [31] (noiseless) fault-tolerant quantum computer. The probability to obtain the ground state \(|\Psi_{0}\rangle\) of \(H\) is proportional to the overlap \(\Omega\) of the initial input state with the ground state, i.e. \(\Omega=|\langle\Psi_{V}|\Psi_{0}\rangle|^{2}\).

Figure 2: (a) Sketch of the energy \(E(\tau)\) versus imaginary time. The area \(\kappa\) under the energy curve directly provides the overlap \(\Omega=e^{-\kappa}\). We approximate \(\kappa\) with the more easily accessible orange shaded area \(I_{\Omega}\). (b), (c) and (d): Energy \(E_{V}\) versus optimization step in VQE simulations; the "error bars" correspond to the standard deviation \(\sigma_{V}\). (e), (f) and (g): Overlap \(\Omega\) and \(e^{-I_{\Omega}}\) versus optimization step.

It is thus of prime importance to be able to estimate what value of \(\Omega\) one can hope for in quantum chemistry in order to determine if QPE could become useful. In condensed matter, the overlap between two slightly different states is usually believed to decrease exponentially with system size, a phenomenon referred to as the orthogonality catastrophe [32]. This phenomenon holds even for states that share very similar energies. For instance, the difference of energy per particle between superconducting aluminium and normal-metal aluminium is extremely tiny, \(\sim(\Delta/E_{F})^{2}\approx 10^{-8}\) (\(\Delta\): superconducting gap, \(E_{F}\): Fermi energy), yet the two states behave drastically differently. In quantum chemistry, the situation has been studied in less detail: Tubman _et al._ [33] looked at small molecules with small static correlation and concluded that \(\Omega\) could be kept to relatively high values, while [34], which looked at more correlated molecules, gave indications of a relatively fast decay of the overlap. We note that calculations on small molecules may be artificially optimistic. For instance, in a method such as CCSD, which spans an \(N^{4}\)-dimensional space, the crossover \(N^{4}\approx 2^{N}\) happens for \(N=16\). Hence, the ansatz is overexpressive for small \(N\), indicating that the exact ground state can likely be represented by the variational ansatz, while the situation deteriorates abruptly as \(N\) increases. Also, molecules with small static correlations are good targets for classical calculations. Below we show that \(\Omega\) can actually be estimated from quantities accessible in standard quantum chemistry calculations, therefore allowing one to estimate the success probability that QPE would have on a perfect quantum computer. Let us assume that the initial state \(|\Psi_{V}\rangle\) fed to QPE has been obtained using a variational computation like VQE. Then, we use a theorem proved by one of us in [35] to estimate the overlap \(\Omega\) of this state with the ground state \(|\Psi_{0}\rangle\). Let us consider the wavefunction \(|\Psi(\tau)\rangle=\frac{1}{\sqrt{Z}}e^{-H\tau}|\Psi_{V}\rangle\), where the factor \(Z=\langle\Psi_{V}|e^{-2H\tau}|\Psi_{V}\rangle\) ensures normalization.
This wavefunction appears in various techniques (e.g. diffusion Monte-Carlo or Green-function Monte-Carlo) that project the variational wavefunction onto the ground state, since \(\lim_{\tau\rightarrow\infty}|\Psi(\tau)\rangle=|\Psi_{0}\rangle\). A typical output of these methods is the energy \(E(\tau)=\langle\Psi(\tau)|H|\Psi(\tau)\rangle\) as a function of \(\tau\), as sketched in Fig. 2 (left panel). The success probability of QPE is simply related to the area \(\kappa\) under this curve as [35], \[\Omega=\exp(-\kappa),\text{ with }\kappa=\int\limits_{0}^{\infty}d\tau(E(\tau)-E_{0}). \tag{5}\] While the area \(\kappa\) is not necessarily easy to compute, a good proxy can be obtained by considering the area \(I_{\Omega}\) of the dashed triangle in the left panel of Fig. 2. This area can be calculated from the knowledge of the variational energy \(E_{V}=E(\tau=0)\), the energy variance of the variational ansatz \(\sigma_{V}^{2}=-\partial_{\tau}E(\tau=0)\) and an estimate (which need not be very accurate) of the ground-state energy \(E_{0}\): \[I_{\Omega}\equiv\frac{(E_{V}-E_{0})^{2}}{2\sigma_{V}^{2}}. \tag{6}\] We call \(I_{\Omega}\) the "overlap index" of the variational ansatz \(|\Psi_{V}\rangle\). The overlap index provides an estimate of the success probability \(\Omega\) as \[\Omega\approx e^{-I_{\Omega}}, \tag{7}\] and the associated criterion is naturally \(\Omega\sim 1\). This criterion depends on the energy and variance of the ansatz and is totally independent of the imaginary-time evolution used for its derivation. To corroborate the validity of this overlap estimation, we have performed VQE simulations for several molecules, computing both the variational energy and the variance. For these small systems, the exact ground-state energy can be calculated as well. This allows one to compute the exact overlap \(\Omega\) and check if the estimate Eq. (7) holds. (On bigger systems, one would rely on estimates such as those currently used in quantum chemistry, where one extrapolates from a sequence of increasingly accurate calculations, e.g. CCSD, CCSDT and CCSDTQ.) We use the myQLM-fermion package [36], a one-layer UCC ansatz and a minimal basis set (STO-3G for the H\({}_{2}\) and H\({}_{4}\) molecules; the 6-31G basis with an active-space selection that reduces the problem from 22 to 4 qubits for LiH). The convergence of the results versus the optimization step is shown in the right panels of Figure 2. Note that the "error bars" in the upper panels stand for the standard deviation \(\sigma_{V}\) of the variational ansatz. In the lower panels, we observe a very good match between the right- and left-hand sides of Eq. (7), which shows that the overlap index can indeed be used to estimate the overlap \(\Omega\). This is an important point of this second criterion: adding the calculation of the variance \(\sigma_{V}^{2}\) provides, together with the energy, very valuable information, and we argue that variational calculations should report the variance. Note that in the H\({}_{4}\) simulation, which uses 8 qubits, in contrast to H\({}_{2}\) and LiH where the number of qubits is only 4, the convergence is much slower.

Figure 3: Scaling of the Hartree-Fock energy (a) and the energy variance (b) versus the number of atoms in a hydrogen chain. (c): Overlap index \(I_{\Omega}\) versus energy error \(|E-E_{0}|\) for the variational ansätze of the [30] data set. The dashed line is a linear fit \(I_{\Omega}\approx 27.8|E-E_{0}|\).
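The estimate (6)-(7) is simple enough to check numerically outside of any quantum-chemistry stack. The sketch below builds a random toy Hamiltonian, perturbs its exact ground state to mimic a converged variational ansatz, and compares \(e^{-I_{\Omega}}\) with the exact overlap; the toy model and its parameters are our own assumptions, used only to illustrate how \(I_{\Omega}\) is computed from \(E_{V}\), \(\sigma_{V}^{2}\) and \(E_{0}\).

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 64

# Toy Hermitian "Hamiltonian" (stand-in for a molecular H)
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2
evals, evecs = np.linalg.eigh(H)
E0, psi0 = evals[0], evecs[:, 0]

# Trial state = ground state + small random admixture,
# mimicking a well-converged variational ansatz
psi_v = psi0 + 0.02 * rng.normal(size=dim)
psi_v /= np.linalg.norm(psi_v)

E_v = psi_v @ H @ psi_v                  # variational energy E(tau = 0)
var_v = psi_v @ H @ H @ psi_v - E_v**2   # energy variance sigma_V^2

I_omega = (E_v - E0) ** 2 / (2 * var_v)  # overlap index, Eq. (6)
omega_exact = (psi0 @ psi_v) ** 2        # exact QPE success probability

print(f"estimate exp(-I_Omega) = {np.exp(-I_omega):.4f}")
print(f"exact overlap Omega    = {omega_exact:.4f}")
```

For this near-converged trial state the two numbers agree at the percent level; as the admixture grows, \(e^{-I_{\Omega}}\) remains an order-of-magnitude estimate rather than a tight bound, which is all criterion (7) requires.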
We end this letter with a discussion of the scaling that one may expect for \(\Omega\). A reasonable variational energy is an extensive quantity, \(E_{V}\propto N\). Likewise, the variance is also likely to be extensive, \(\sigma^{2}\propto N\) (this is true if the energy is roughly the sum of local terms). It follows that the overlap index is generically an extensive quantity, \(I_{\Omega}=\alpha N\), from which one concludes that the overlap decreases exponentially, \(\Omega\approx e^{-\alpha N}\). This is the orthogonality catastrophe in the context of variational calculations. To illustrate the above statements, the top panels of Figure 3 show the Hartree-Fock energy (left) and variance (right) of hydrogen chains of up to \(N=128\) atoms in the STO-3G basis set. Both indeed scale linearly with \(N\), as advertised. In a slightly different context, a recent work [30] has aggregated a large data set of energies and variances of variational ansätze for various condensed-matter systems of various sizes, using various methods. The bottom panel of Figure 3 shows \(I_{\Omega}\) versus \(E_{V}-E_{0}\) for the data set of [30]. We find that \(I_{\Omega}\) is well fitted by a linear law \(I_{\Omega}=C|E-E_{0}|\), with \(C=27.8\pm 0.1\). Since \(|E-E_{0}|\) is an extensive quantity, this again implies the exponential decay associated with the orthogonality catastrophe. To conclude, we have proposed two criteria, one for VQE (noisy hardware) and one for QPE (fault-tolerant hardware), that are easily accessible and provide necessary conditions for the possibility of doing genuinely relevant chemistry calculations on quantum hardware. Our preliminary estimates imply that this possibility is unlikely with the approaches and technologies that are currently pursued, unless important paradigm shifts take place. ## Acknowledgements We acknowledge funding from the French ANR QPEG and the Plan France 2030 ANR-22-PETQ-0007 "EPIQ".
2304.03347
Towards Interpretable Mental Health Analysis with Large Language Models
The latest large language models (LLMs), such as ChatGPT, exhibit strong capabilities in automated mental health analysis. However, existing relevant studies bear several limitations, including inadequate evaluations, a lack of prompting strategies, and little exploration of LLMs for explainability. To bridge these gaps, we comprehensively evaluate the mental health analysis and emotional reasoning abilities of LLMs on 11 datasets across 5 tasks. We explore the effects of different prompting strategies with unsupervised and distantly supervised emotional information. Based on these prompts, we explore LLMs for interpretable mental health analysis by instructing them to generate explanations for each of their decisions. We conduct strict human evaluations to assess the quality of the generated explanations, leading to a novel dataset with 163 human-assessed explanations. We benchmark existing automatic evaluation metrics on this dataset to guide future related works. According to the results, ChatGPT shows strong in-context learning ability but still has a significant gap with advanced task-specific methods. Careful prompt engineering with emotional cues and expert-written few-shot examples can also effectively improve performance on mental health analysis. In addition, ChatGPT generates explanations that approach human performance, showing its great potential in explainable mental health analysis.
Kailai Yang, Shaoxiong Ji, Tianlin Zhang, Qianqian Xie, Ziyan Kuang, Sophia Ananiadou
2023-04-06T19:53:59Z
http://arxiv.org/abs/2304.03347v4
# Towards Interpretable Mental Health Analysis with ChatGPT ###### Abstract Automated mental health analysis shows great potential for enhancing the efficiency and accessibility of mental health care, with recent methods adopting pre-trained language models (PLMs) and incorporating emotional information. The latest large language models (LLMs), such as ChatGPT, exhibit remarkable capabilities on diverse natural language processing tasks. However, existing studies on ChatGPT for mental health analysis suffer from inadequate evaluations, neglect of emotional information, and a lack of explainability. To bridge these gaps, we comprehensively evaluate the mental health analysis and emotional reasoning ability of ChatGPT on 11 datasets across 5 tasks, and analyze the effects of various emotion-based prompting strategies. Based on these prompts, we further explore LLMs for interpretable mental health analysis by instructing them to also generate explanations for each of their decisions. With an annotation protocol designed by domain experts, we conduct human evaluations to assess the quality of explanations generated by ChatGPT and GPT-3. The annotated corpus will be released for future research. Experimental results show that ChatGPT outperforms traditional neural network-based methods but still has a significant gap with advanced task-specific methods. Prompt engineering with emotional cues can be effective in improving performance on mental health analysis but suffers from a lack of robustness and inaccurate reasoning. In addition, ChatGPT significantly outperforms GPT-3 on all criteria in human evaluations of the explanations and approaches human performance, showing its great potential in explainable mental health analysis. ## 1 Introduction WARNING: This paper contains examples and descriptions which are depressive in nature. Mental health conditions such as depression and suicidal ideation pose serious challenges to global health care (Evans-Lacko et al., 2018; Zhang et al., 2022). Researchers have devoted much effort to automatic mental health analysis methods with natural language processing (NLP) techniques (Skaik and Inkpen, 2020; Gkotsis et al., 2016). The mainstream method leverages the strong context modeling ability of pre-trained language models (PLMs) to enhance the post representations, which facilitates the detection of mental health conditions (Ji et al., 2022; Yang et al., 2022; Abed-Esfahani et al., 2019; Murarka et al., 2020). In addition, emotional cues are proven to be closely coupled with the mental states of patients and are widely utilized as useful features (Sawhney et al., 2020; Turcan et al., 2021; Zhang et al., 2023). Recent years have witnessed the emerging techniques of Large Language Models (LLMs) and their fast iterations, such as GPT-3 (Brown et al., 2020), InstructGPT (Ouyang et al., 2022), and most recently, ChatGPT and GPT-4 (OpenAI, 2023). LLMs, especially ChatGPT and GPT-4, have exhibited strong general language processing ability (Wei et al., 2022; Kojima et al., 2022; Luo et al., 2023). In mental health analysis, Lamichhane (2023) performed a simple evaluation of ChatGPT on stress, depression, and suicide detection, offering a glimpse of its strong ability to understand mental health-related texts. Amin et al. (2023) compared the zero-shot performance of ChatGPT on suicide and depression detection with previous fine-tuning-based methods such as PLMs and word embeddings.
The comparable results further suggested a promising future for a new LLM-based paradigm in mental health analysis. In evaluating the emotional reasoning ability of ChatGPT, existing works (Amin et al., 2023; Qin et al., 2023; Zhong et al., 2023) mostly tested its zero-shot performance on simple single-sentence binary sentiment analysis tasks, indicating the fundamental sentiment reasoning ability of ChatGPT and other LLMs. Though previous works depict a promising future for ChatGPT in mental health analysis, several issues remain unresolved. Firstly, mental health condition detection is a safety-critical task requiring careful evaluation and high transparency for any predictions (Zhang et al., 2022), while these works simply tested ChatGPT on a few binary mental health condition detection tasks and lack explainability for the detection results. Moreover, other important mental health analysis tasks, such as the cause/factor detection of mental health conditions (Mauriello et al., 2021; Garg et al., 2022), were ignored. Secondly, previous works mostly design prompts to directly detect mental health conditions with ChatGPT. These vanilla methods ignore useful information, such as emotional cues, which are proven useful for mental health analysis. By extension, ChatGPT's emotional reasoning ability in complex scenarios, such as in conversations (Poria et al., 2019, 2021), is also not well evaluated. However, this ability is crucial for mining emotional cues in the dialogue-based interaction mode of ChatGPT. We believe that a comprehensive exploration and evaluation of the ability and explainability of LLMs for mental health analysis is required, spanning mental health detection, emotional reasoning, and cause detection of mental health conditions, among other tasks. Therefore, we raise the following three research questions (RQ): * **RQ 1**: How are the generalized mental health analysis and emotional reasoning abilities of ChatGPT in the zero-shot setting? * **RQ 2**: How do different prompting strategies and emotional cues impact the mental health analysis ability of ChatGPT? * **RQ 3**: How well can ChatGPT generate explanations for its decisions on mental health analysis? Based on these research questions, we first conduct a preliminary study of how ChatGPT performs in a zero-shot setting on 11 datasets across 5 tasks to broadly evaluate its mental health analysis and emotional reasoning abilities, including the tasks of binary and multi-class mental health condition detection, cause/factor detection of mental health conditions, emotion recognition in conversations, and causal emotion entailment. We then systematically analyze the effectiveness of different prompting strategies to aid mental health analysis, including zero-shot prompting, zero-shot Chain-of-Thought (CoT) prompting (Kojima et al., 2022), and emotion-enhanced zero-shot CoT prompting with supervised and unsupervised multi-granularity emotional information. Finally, we perform human evaluations to assess the quality of explanations from ChatGPT and GPT-3, following a strict human annotation protocol designed by a domain expert in mental health analysis. We will release the annotated corpus for future research. Based on our experimental results, we conclude our findings as follows: * **Overall performance.** ChatGPT outperforms traditional neural networks such as CNN and GRU, showing its potential in mental health analysis and emotional reasoning in conversations.
However, it significantly underperforms advanced supervised methods on all tasks, highlighting the challenges that emotion-related subjective tasks pose for ChatGPT. * **Zero-shot prompting.** In contrast to the results observed in other NLP tasks, ChatGPT's performance using zero-shot CoT prompting is comparable to, or even worse than, its performance with vanilla zero-shot prompting. This suggests that a simple trigger sentence, without considering additional valuable information like emotional cues, is ineffective for mental health analysis. * **Emotion-enhanced prompting.** ChatGPT with unsupervised emotion-enhanced zero-shot CoT prompting achieves the best performance. However, when introducing distantly supervised emotion and sentiment lexicon information, the emotion-enhanced zero-shot prompting results in decreased performance and can even underperform vanilla zero-shot prompting. These results highlight the importance of appropriate prompt engineering in leveraging emotional cues for mental health analysis. * **Explainability.** The human evaluation results of the explanations show that ChatGPT significantly outperforms GPT-3 on all criteria and generates near-human explanations for its classifications, indicating its potential to enhance the transparency of mental health analysis. * **Limitations.** Besides the gap with advanced methods on quantitative metrics, ChatGPT suffers from unstable predictions, caused by excessive sensitivity to minor prompt alterations, and from inaccurate reasoning. ## 2 Methodology In this section, we introduce the details of instructing ChatGPT with prompting strategies for emotional reasoning and mental health analysis, and how we enhance prompts with chain-of-thought and emotional cues to improve the efficiency and transparency of ChatGPT for mental health analysis. ### ChatGPT ChatGPT is an LLM developed by OpenAI that interacts with its users through dialogue. This interactive mode enables users to convert almost any NLP task into a natural-language format, known as a prompt, and get flexible answers from ChatGPT. ChatGPT was originally trained based on InstructGPT (Ouyang et al., 2022) but is continually optimized through reinforcement learning from human feedback (RLHF) (Stiennon et al., 2020). ### Emotional Reasoning Tasks. We evaluate the emotional reasoning ability of ChatGPT in complex scenarios on the following two widely studied tasks: emotion recognition in conversations (ERC) and causal emotion entailment (CEE). ERC aims at recognizing the emotion of each utterance within a conversation from a fixed emotion category set, which is often modeled as a multi-class text classification task (Poria et al., 2019). Given an utterance with a non-neutral emotion, CEE aims to identify the causal utterances for this emotion in the previous conversation history. CEE is usually modeled as a binary classification between the candidate utterance and the target utterance. Prompts. We perform _direct guidance_ in exploring the ability of ChatGPT on both tasks, designing zero-shot prompts that directly ask for a classification result in the response of ChatGPT. Specifically, the prompt for ERC is designed as follows: _Context: "[Previous Dialogue]". Consider this context to assign one emotion label to this utterance "[Target]". Only from this emotion list:_ _[Emotion List]. Only return the assigned word._ where the bracketed slots are the required inputs.
_[Previous Dialogue]_ denotes the previous dialogue history of the target utterance, where each utterance is prepended with its speaker and then concatenated in sequence order. _[Target]_ denotes the target utterance, and _[Emotion List]_ denotes the predefined emotion category set of the corresponding dataset, as listed in Table 1. Similarly, the prompt for the CEE task is designed as follows: _Context with emotion labels: "[Previous Dialogue]". Consider this context to answer the question: Did this utterance "[Query]" caused the [Target Emotion] emotion of the target utterance "[Target]"? Only return Yes or No._ where _[Previous Dialogue]_ still denotes the dialogue history with speakers, but each utterance is also appended with its emotion label. _[Query]_ is the candidate utterance. _[Target]_ and _[Target Emotion]_ are the target utterance and its emotion label. ### Mental Health Analysis Tasks. We conduct broad tests of ChatGPT's mental health analysis ability on the following three tasks: binary mental health condition detection, multi-class mental health condition detection, and cause/factor detection of mental health conditions. Binary mental health condition detection is modeled as a yes/no classification for a mental health condition, such as depression or stress, from a post, while multi-class detection identifies one label from multiple mental health conditions. Cause/factor detection aims at recognizing one potential cause of a mental health condition from multiple causes. Prompts. We systematically explore different prompting strategies for mental health analysis. Considering the wide application of emotional cues in mental health analysis (Sawhney et al., 2020; Turcan et al., 2021; Zhang et al., 2023), we also leverage multi-grained and multi-form emotion infusion methods to further enhance the prompts. * **Zero-shot prompting**. Firstly, we perform zero-shot prompting by directly guiding ChatGPT for mental health analysis. Specifically, for binary mental health condition detection, we design the following prompt: _Post: "[Post]". Consider this post to answer the question: Is the poster likely to suffer from very severe [Condition]? Only return Yes or No._ For multi-class mental health detection, we use the following prompt: _Post: "[Post]". Consider this post to assign only one mental disorder label to this post from this list: [List]. Only return the assigned label._ For cause/factor detection, the prompt is: _Post: "[Post]". Consider this post and assign a label that causes its [Condition]. Only return answers from one of the labels: [List]._ where _[Post]_ denotes the target post, _[Condition]_ denotes the target mental health condition such as depression or stress, and _[List]_ denotes the predefined labels presented in Table 2. * **Unsupervised emotion-enhanced zero-shot CoT prompting**. Secondly, we perform emotion infusion by designing unsupervised emotion-enhanced zero-shot Chain-of-Thought (CoT) prompts, where the emotion-related part inspires the LLM to concentrate on the emotional cues in the post, and the CoT part guides the LLM to generate step-by-step explanations for its decision. Specifically, for the binary mental health condition detection task, we modify the zero-shot prompt as follows: _Post: "[Post]". Consider the emotions expressed from this post to answer the question: Is the poster likely to suffer from very severe [Condition]? Only return Yes or No, then explain your reasoning step by step._ where the trailing trigger sentence ("then explain your reasoning step by step") is the zero-shot CoT enhancement, and the emotion clause ("Consider the emotions expressed from this post") is _further_ added on top of the zero-shot CoT prompt to obtain the emotion-enhanced prompt. Similar modifications are performed on the prompts for multi-class detection and cause/factor detection. * **Supervised emotion-enhanced zero-shot CoT prompting**. In addition, we propose a distantly supervised emotion fusion method using sentiment and emotion lexicons. To perform distantly supervised fusion of sentiment information, we utilize the VADER Hutto and Gilbert (2014) and NRC EmoLex Mohammad and Turney (2010, 2013) lexicons to assign a sentiment score to each post and convert the score to one of the labels: _{positive, negative, neutral}_. NRC EmoLex also contains emotion annotations that were assigned from the following emotion list: _anger, anticipation, disgust, fear, joy, sadness, surprise, trust_. We regard the emotion category with the maximum emotion score as the emotion label of the input text. The details are described in Appendix A. We design the supervised emotion-enhanced zero-shot Chain-of-Thought (CoT) prompts by adding the sentiment/emotion labels to the zero-shot prompt. For example, we modify the prompt for multi-class mental health condition detection as follows: _Post: "[Post]". Alice thinks it is [Sentiment/Emotion]. Consider this post to assign only one mental disorder label to this post from this list: [List]. Only return the assigned label._ where the inserted sentence ("Alice thinks it is [Sentiment/Emotion]") is the modification for distantly supervised emotion infusion, and _[Sentiment/Emotion]_ denotes the corresponding sentiment/emotion label. Modifications for other tasks are similar. ## 3 Experimental Settings In this section, we first introduce the benchmark datasets, baseline models, and automatic evaluation metrics for the classification results of emotional reasoning and mental health analysis. For the human evaluation of explainability, we also describe the details of the annotation protocols and the aggregation process. ### Emotional Reasoning Datasets. For ERC, we select four widely utilized benchmark datasets: IEMOCAP Busso et al. (2008), MELD Poria et al. (2019), EmoryNLP Zahiri and Choi (2017), and DailyDialog Li et al. (2017). For CEE, we select the dataset RECCON Poria et al. (2021). More information about these datasets is listed in Table 1. Baseline Models. Since there are no previous zero-shot methods for either task, we compare the performance of ChatGPT with that of supervised baseline models. For ERC, we select CNN Kim (2014), cLSTM Zhou et al. (2015), CNN+LSTM Poria et al. (2017), DialogueRNN Majumder et al. (2019), KET Zhong et al. (2019), BERT-Base Devlin et al. (2019), RoBERTa-Base Liu et al. (2019), XLNet Yang et al. (2019), DialogXL Shen et al. (2021), KIST Xie et al. (2021), SCCL Yang et al. (2023), and SPCL Song et al. (2022). For CEE, we select RankCP Wei et al. (2020), RoBERTa-Base/Large, KEC Li et al. (2022), and KBCIN Zhao et al. (2022). Details about these baseline models are in Appendix C.1. Metrics. We use the weighted-F1 measure as the evaluation metric for the IEMOCAP, MELD, and EmoryNLP datasets. Since _neutral_ occupies most of DailyDialog, we use micro-F1 for this dataset and ignore the label _neutral_ when calculating the results, as in previous works Shen et al. (2021); Xie et al. (2021); Yang et al. (2023). For RECCON, we report the F1 scores of both negative and positive causal pairs, and the macro-F1 scores as a whole.
### Mental Health Analysis

Datasets. For binary mental health condition detection, we select two depression detection datasets, Depression_Reddit (DR) Pirina and Coltekin (2018) and CLPsych15 Coppersmith et al. (2015), and a stress detection dataset, Dreaddit Turcan and McKeown (2019). For multi-class mental health condition detection, we utilize the dataset T-SID Ji et al. (2022). For cause/factor detection of mental health conditions, we use a stress cause detection dataset called SAD Mauriello et al. (2021) and a depression/suicide cause detection dataset CAMS Garg et al. (2022). More details of these datasets are presented in Table 2.

Baseline Models. We select the following baseline models: CNN Kim (2014), GRU Cho et al. (2014), BiLSTM_Att Zhou et al. (2016), fastText Joulin et al. (2017), BERT/RoBERTa Devlin et al. (2019); Liu et al. (2019), and MentalBERT/MentalRoBERTa Ji et al. (2022). Details about these baseline models are in Appendix C.2.

Metrics. We evaluate the model performance using the recall and weighted-F1 scores as the evaluation metrics for all mental health datasets. Due to imbalanced classes in some datasets such as DR, CLPsych15 and T-SID, we use weighted-F1 scores following previous methods. In addition, it is crucial to minimize false negatives, which refer to cases where the model fails to identify individuals with mental disorders. Therefore, we also report the recall scores.

### Human Evaluation for Explainability

We examine the quality of the generated explanations with human evaluation on the binary mental health condition detection task. To compare the performance of ChatGPT with other LLMs, we utilize ChatGPT and GPT-3 (_curie-instruct-beta_) to simultaneously generate explanations for the same posts with the same emotion-enhanced CoT prompts.

\begin{table} \begin{tabular}{c l l l l} \hline \hline Task & Data Source & Dataset & Conv./Utter. & Emotion Category Set \\ \hline ERC & Acted Script & IEMOCAP & 31/1,622 & _neutral, sad, anger, happy, frustrated, excited_ \\ ERC & TV Show Scripts & MELD & 280/2,610 & _neutral, sad, anger, disgust, fear, happy, surprise_ \\ ERC & TV Show Scripts & EmoryNLP & 85/1,328 & _neutral, sad, mad, scared, powerful, peaceful, joyful_ \\ ERC & Human Written Scripts & DailyDialog & 1,000/7,740 & _neutral, happy, surprise, sad, anger, disgust, fear_ \\ CEE & Human Written Scripts & RECCON & 225/2,405 & _neutral, happy, surprise, sad, anger, disgust, fear_ \\ \hline \hline \end{tabular} \end{table} Table 1: A summary of datasets for emotional reasoning in conversations. Conv. and Utter. denote conversation and utterance numbers. Data statistics are on the test set.

\begin{table} \begin{tabular}{l l l l l} \hline \hline Condition & Platform & Dataset & Post Num. & Labels \\ \hline Depression & Reddit & DR & 406 & _Yes, No_ \\ Depression & Reddit & CLPsych15 & 300 & _Yes, No_ \\ Stress & Reddit & Dreaddit & 715 & _Yes, No_ \\ Suicide & Twitter & T-SID & 960 & _None, Suicide, Depression, PTSD_ \\ Stress & SMS & SAD & 685 & _School, Finance, Family, Social Relation,_ \\ & & & & _Work, Health, Emotion, Decision, Others_ \\ Depression/Suicide & Reddit & CAMS & 626 & _None, Bias, Job, Medication, Relation, Alienation_ \\ \hline \hline \end{tabular} \end{table} Table 2: A summary of datasets for mental health tasks. Note we test the zero-shot performance on the test set.

We select 121 results that are correctly classified by both ChatGPT and GPT-3 to enable fair comparisons of their explanations.
42 more responses that are incorrectly classified by ChatGPT are also collected for error analysis (refer to Sec. 4.4). To standardize the human evaluation process, we invite a domain expert in mental health analysis to determine four key aspects for assessment:

* **Fluency**: the coherence and readability of the explanation.
* **Reliability**: the trustworthiness of the generated explanations to support the detection results.
* **Completeness**: how well the generated explanations cover all relevant aspects of the original post.
* **Overall**: the general effectiveness of the generated explanation.

Based on the above definitions, the expert further determines the assessment criteria, where each aspect is divided into four standards rated from 0 to 3. Higher ratings reflect more satisfactory performance in the corresponding aspect, and 3 denotes approaching human performance. Details of the criteria are described in Appendix D. Strictly based on these criteria, each response is assigned a score by three annotators for each corresponding aspect, followed by the examination of the expert. We further evaluate the quality of the annotations by calculating the inter-evaluator agreement: Fleiss' Kappa statistics [10] for each aspect (a computational sketch is given below). Any annotations with a majority vote are considered as reaching an agreement.

## 4 Results and Analysis

We conduct all experiments using the ChatGPT API provided by OpenAI. Each prompt is fed independently to avoid the effects of dialogue history.

### Mental Health Analysis

The experimental results of mental health analysis are presented in Table 5. We first compare the results of ChatGPT with the zero-shot prompting to gain a direct view of ChatGPT's potential in mental health analysis, then analyze its performance with other prompts enhanced by emotional information.

Zero-shot Prompting. According to the results, ChatGPT\({}_{ZS}\) performs significantly better than traditional lightweight neural network-based methods such as CNN, GRU, and BiLSTM\(\_\)Att on binary mental health condition detection. For example, it outperforms GRU by over 10% on DR, CLPsych15, and Dreaddit. In addition, ChatGPT\({}_{ZS}\) also outperforms CNN and RNN-based methods on the cause/factor detection datasets SAD and CAMS, showing its good potential in cause analysis for mental health-related texts. However, ChatGPT\({}_{ZS}\) still struggles to achieve comparable performance to PLM-based fine-tuning methods such as MentalBERT and MentalRoBERTa. Particularly, ChatGPT\({}_{ZS}\) achieves much worse performance than all baselines on the multi-class detection dataset T-SID. We notice that T-SID collects mostly short posts from Twitter, which contain many usernames, hashtags, and slang words. The huge gap between the posts and ChatGPT's training data makes zero-shot detection difficult. In addition, we notice that recent efforts (Kocon et al., 2023) have a similar finding that ChatGPT performs very poorly on emoji, sentiment, and stance detection on Twitter data. Moreover, although zero-shot CoT prompting has been proven effective in improving ChatGPT on most NLP tasks (Zhong et al., 2023; Wei et al., 2022; Kojima et al., 2022), we surprisingly find that ChatGPT\({}_{CoT}\) performs comparably to or even worse than ChatGPT\({}_{ZS}\). This illustrates that the simple trigger sentence "explain your reasoning step by step" is not effective in prompting ChatGPT on mental health analysis.
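As referenced in Sec. 3.3, the inter-evaluator agreement statistics can be computed as in the following sketch, which assumes the `statsmodels` package and uses hypothetical random ratings in place of the actual annotation data.

```python
# A minimal sketch of the Sec. 3.3 agreement computation (assumes statsmodels).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def aspect_agreement(ratings: np.ndarray) -> tuple[float, float]:
    """Fleiss' kappa and majority-vote agreement for one evaluation aspect.

    `ratings` is a (samples x 3 annotators) array of 0-3 scores.
    """
    table, _ = aggregate_raters(ratings)        # (samples x categories) counts
    kappa = fleiss_kappa(table, method="fleiss")
    majority = float(np.mean(table.max(axis=1) >= 2))  # 2 of 3 annotators agree
    return kappa, majority

# Hypothetical ratings for 121 samples; real scores come from the annotators.
rng = np.random.default_rng(0)
kappa, agreement = aspect_agreement(rng.integers(0, 4, size=(121, 3)))
print(f"Fleiss' kappa: {kappa:.2f}, agreement: {agreement:.1%}")
```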
Overall, although ChatGPT exhibits some generalized ability to recognize mental health conditions and analyze their causes, it still underperforms PLM-based methods with task-specific fine-tuning, leaving a huge gap for future work to further explore the mental health detection ability of LLMs.

Emotion-enhanced Prompting. We further test multi-grained, multi-form emotion-enhanced prompts on all datasets. Firstly, we infuse the distantly supervised sentiment information from the VADER and NRC EmoLex lexicons. Surprisingly, we notice that ChatGPT\({}_{V}\) and ChatGPT\({}_{N\_sen}\) perform worse than ChatGPT\({}_{ZS}\) on most datasets, showing that these prompts are not effective in enhancing the performance of ChatGPT with the sentiment information from the VADER and NRC EmoLex lexicons. A possible reason is that a coarse-grained sentiment classification based on the two lexicons cannot describe the complex emotions expressed in the posts. We believe more accurate emotional information is required to improve performance. Based on this, we incorporate fine-grained emotion labels from NRC EmoLex into the zero-shot prompt. The results show that ChatGPT\({}_{N\_emo}\) outperforms ChatGPT\({}_{N\_sen}\) on most datasets, especially on CAMS (a 7.89% improvement). However, ChatGPT\({}_{N\_emo}\) still underperforms ChatGPT\({}_{ZS}\) on most datasets. This may be because both the coarse-grained sentiment and the fine-grained emotion labels from the two lexicons are not accurate enough and misguide ChatGPT, since multiple emotions can co-exist in a post. We believe a more flexible way of integrating emotional cues may be required for guiding ChatGPT.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **Neg. F1** & **Pos. F1** & **Macro F1** \\ \hline RankCP & **97.30** & 33.00 & 65.15 \\ RoBERTa\({}_{Base}\) & 88.74 & 64.28 & 76.51 \\ RoBERTa\({}_{Large}\) & 87.89 & 66.23 & 77.06 \\ KEC & 88.85 & 66.55 & 77.70 \\ KBCIN & 89.65 & **68.59** & **79.12** \\ \hline ChatGPT\({}_{ZS}\) & 67.18 & 51.35 & 59.26 \\ \hline \hline \end{tabular} \end{table} Table 4: Test results on the CEE task. Best values: bold. The results of baseline methods are referenced from Zhao et al. (2022).

\begin{table} \begin{tabular}{l c c c c} \hline \hline **ERC** & **IEMOCAP** & **MELD** & **DailyDialog** & **EmoryNLP** \\ \hline Model & Weighted F1 & Weighted F1 & Micro F1 & Weighted F1 \\ \hline CNN (Kim, 2014) & 52.18 & 55.86 & 49.34 & 32.59 \\ cLSTM (Zhou et al., 2015) & 34.84 & 49.72 & 49.90 & 26.01 \\ CNN+LSTM (Poria et al., 2017) & 55.87 & 56.87 & 50.24 & 32.89 \\ DialogueRNN (Majumder et al., 2019) & 61.21 & 56.27 & 50.65 & 31.70 \\ KET (Zhong et al., 2019) & 59.56 & 58.18 & 53.37 & 34.39 \\ BERT\({}_{Base}\) (Devlin et al., 2019) & 61.19 & 56.21 & 53.12 & 33.15 \\ RoBERTa\({}_{Base}\) (Liu et al., 2019) & 55.67 & 62.75 & 55.16 & 37.0 \\ XLNet (Yang et al., 2019) & 61.33 & 61.65 & 53.62 & 34.13 \\ DialogXL (Shen et al., 2021a) & 65.94 & 62.41 & 54.93 & 34.73 \\ KI-Net (Xie et al., 2021) & 66.98 & 63.24 & 57.3 & – \\ SCCL (Yang et al., 2023) & **69.88** & 65.70 & **62.51** & 38.75 \\ SPCL (Song et al., 2022) & 69.74 & **67.25** & – & **40.94** \\ \hline ChatGPT\({}_{ZS}\) & 53.35 & 61.18 & 43.27 & 32.64 \\ \hline \hline \end{tabular} \end{table} Table 3: Test results on ERC task. ChatGPT\({}_{ZS}\) denotes the method using the zero-shot prompt. Best values: bold. The results of some baseline methods are referenced from (Zhong et al., 2019; Song et al., 2022).
Therefore, we explore the unsupervised emotion-enhanced prompts using the CoT process to inspire the emotion-related reasoning of ChatGPT. As a result, ChatGPT\({}_{CoT\_emo}\) outperforms all other prompt-based methods on most datasets. For example, ChatGPT\({}_{CoT\_emo}\) achieves 42.29% on CAMS, which outperforms ChatGPT\({}_{ZS}\) by 8.44% and yields comparable performance to the strong PLM-based methods. This proves that emotion-enhanced CoT prompting is an effective method of leveraging emotional cues to enhance the ability of ChatGPT on mental health analysis. We provide more emotion-enhanced CoT cases in Appendix E.1.

### Human Evaluation for Explainability

In the above subsection, we have shown that emotion-enhanced CoT prompts can enhance ChatGPT's zero-shot performance in mental health analysis. Moreover, such prompting can also lead ChatGPT to provide an explanation of its step-by-step reasoning for each response, which significantly improves the explainability of the predictions and is a key advantage compared with most previous black-box methods. In this subsection, we provide carefully designed human evaluations to gain a clear view of ChatGPT's explainability on its detection results. The Fleiss' Kappa results and agreement percentages are presented in Table 6. We aggregate each score by averaging the three assignments, and the distributions are presented in Figure 1. Firstly, the three annotators reach an agreement in most cases of evaluation. Over 95% of ChatGPT evaluations and 89.9% of GPT-3 results reach agreement. According to the widely utilized interpretation criterion 2, all Fleiss' Kappa statistics achieve at least fair agreement (\(\geq\)0.21) and 10 out of 16 results reach at least moderate agreement (\(\geq\)0.41). These outcomes further prove the quality of the human evaluation results.

Footnote 2: [https://en.wikipedia.org/wiki/Fleiss%27_kappa](https://en.wikipedia.org/wiki/Fleiss%27_kappa)

\begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**DR**} & \multicolumn{2}{c}{**CLPsych15**} & \multicolumn{2}{c}{**Dreaddit**} & \multicolumn{2}{c}{**T-SID**} & \multicolumn{2}{c}{**SAD**} & \multicolumn{2}{c}{**CAMS**} \\ & Rec. & F1 & Rec. & F1 & Rec. & F1 & Rec. & F1 & Rec. & F1 & Rec.
& F1 \\ \hline CNN & 80.54 & 79.78 & 51.67 & 40.28 & 65.31 & 64.99 & 71.88 & 71.77 & 39.71 & 38.45 & 36.26 & 34.63 \\ GRU & 61.72 & 62.13 & 50.00 & 46.76 & 55.52 & 54.92 & 67.50 & 67.35 & 35.91 & 34.79 & 34.19 & 29.33 \\ BiLSTM\_Att & 79.56 & 79.41 & 51.33 & 39.20 & 63.22 & 62.88 & 66.04 & 65.77 & 37.23 & 38.50 & 34.98 & 29.49 \\ fastText & 83.99 & 83.94 & 58.00 & 56.48 & 66.99 & 66.92 & 69.17 & 69.09 & 38.98 & 38.32 & 40.10 & 34.92 \\ BERT & 91.13 & 90.90 & 64.67 & 62.75 & 78.46 & 78.26 & 88.44 & 88.51 & 62.77 & 62.72 & 40.26 & 34.92 \\ RoBERTa & **95.07** & **95.11** & 67.67 & 66.07 & 80.56 & 80.56 & 88.75 & 88.76 & 66.86 & 67.53 & 41.18 & 36.54 \\ MentalBERT & 94.58 & 94.62 & 64.67 & 62.63 & 80.28 & 80.04 & 88.65 & 88.61 & 67.45 & 67.34 & 45.69 & 39.73 \\ MentalRoBERTa & 94.33 & 94.23 & **70.33** & **69.71** & **81.82** & **81.76** & **88.96** & **89.01** & **68.61** & **68.44** & **50.48** & **47.62** \\ \hline ChatGPT\({}_{ZS}\) & 82.76 & 82.41 & 60.33 & 56.31 & 72.72 & 71.79 & 39.79 & 33.30 & 55.91 & 54.05 & 32.43 & 33.85 \\ ChatGPT\({}_{V}\) & 79.51 & 78.01 & 59.20 & 56.34 & 74.23 & 73.99 & **40.04** & **33.38** & 52.49 & 50.29 & 28.48 & 29.00 \\ ChatGPT\({}_{N\_sen}\) & 80.00 & 78.86 & 58.19 & 55.50 & 70.87 & 70.21 & 39.00 & 32.02 & 52.92 & 51.38 & 26.88 & 27.22 \\ ChatGPT\({}_{N\_emo}\) & 79.51 & 78.41 & 58.19 & 53.87 & 73.25 & 73.08 & 39.00 & 32.25 & 54.82 & 52.57 & 35.20 & 35.11 \\ ChatGPT\({}_{CoT}\) & 82.72 & 82.9 & 56.19 & 50.47 & 70.97 & 70.87 & 37.66 & 32.89 & 55.18 & 52.92 & 39.19 & 38.76 \\ ChatGPT\({}_{CoT\_emo}\) & **83.17** & **83.10** & **61.41** & **58.24** & **75.07** & **74.83** & 34.76 & 27.71 & **58.31** & **56.68** & **43.11** & **42.29** \\ \hline \hline \end{tabular} \end{table} Table 5: Test results on the mental health analysis tasks. ChatGPT\({}_{V}\), ChatGPT\({}_{N\_sen}\), and ChatGPT\({}_{N\_emo}\) denote the emotion-enhanced prompting methods with VADER sentiments, NRC EmoLex sentiments, and NRC EmoLex emotions. ChatGPT\({}_{CoT}\) and ChatGPT\({}_{CoT\_emo}\) denote the zero-shot and emotion-enhanced Chain-of-Thought methods on the corresponding task. Best values: bold. The results of baseline methods are referenced from Ji et al. (2022).

According to the box plot of the aggregated scores, ChatGPT\({}_{true}\) almost achieves an average score of 3.0 in fluency and stably maintains outstanding performance, which shows that ChatGPT can consistently generate human-level responses regarding coherence and readability. On the other hand, GPT-3 achieves much worse performance in fluency, with a median score of 0 and an average score of less than 1.0. These results prove ChatGPT to be a fluent explanation generator for mental health analysis. In reliability, ChatGPT\({}_{true}\) achieves a median score of 3 and an average score of over 2.7, showing ChatGPT to be a very trustworthy reasoner in supporting its classifications. Only a few of GPT-3's explanations provide moderately reliable information, while most of them are unreliable. The main reason is that ChatGPT can understand and respond to complex CoT prompts due to its advanced instruction tuning and RLHF, while GPT-3 outputs no appropriate explanations at all in many cases. For completeness, ChatGPT\({}_{true}\) obtains an average score of over 2.5, indicating that ChatGPT can cover most of the relevant content in the posts to explain its classifications, while GPT-3 ignores key aspects, obtaining less than 0.5 on average.
Overall, ChatGPT\({}_{true}\) has an average score of over 2.5, proving that ChatGPT can generate human-level explanations for correct classifications regarding fluency, reliability, and completeness, and significantly outperforms previous LLMs such as GPT-3. We provide some cases to further demonstrate the explainability of ChatGPT in Appendix E.2.

\begin{table} \begin{tabular}{l c|c c c c c c} \hline \hline **Model** & **Sample Num.** & **Avg. Token Num.** & **Agreement** & **Fluency** & **Reliability** & **Completeness** & **Overall** \\ \hline ChatGPT & 163 & 237 & 96.6\% & 0.94 & 0.53 & 0.39 & 0.36 \\ ChatGPT\({}_{true}\) & 121 & 203 & 95.9\% & 0.95 & 0.58 & 0.40 & 0.38 \\ ChatGPT\({}_{false}\) & 42 & 335 & 98.8\% & 0.91 & 0.34 & 0.38 & 0.28 \\ \hline GPT-3 & 121 & 203 & 89.9\% & 0.55 & 0.58 & 0.63 & 0.62 \\ \hline \hline \end{tabular} \end{table} Table 6: Fleiss' Kappa and other statistics of human evaluations on ChatGPT and GPT-3 results for the four aspects. ChatGPT\({}_{true}\) and ChatGPT\({}_{false}\) denote the correctly and incorrectly classified results of ChatGPT. "Sample Num." and "Avg. Token Num." denote the sample numbers and average token numbers of the posts. "Agreement" denotes the percentages of results that reached a final agreement with a majority vote from the three assignments.

Figure 1: Box plots of the aggregated human evaluation scores for each aspect. Orange lines denote the median scores and green lines denote the average scores.

### Error Analysis

We further analyze some typical errors during our experiments to inspire future efforts of improving ChatGPT and emotion-enhanced prompts for mental health analysis. We provide quantitative analysis and cases to help illustrate these errors.

Unstable Predictions. We notice that ChatGPT's performance on mental health analysis can vary drastically with the change of a few keywords in the prompt, especially on binary mental health condition detection. While keywords describing the tasks are easy to control, some other words such as adjectives are hard to optimize. For example, we replace the adjective describing the degree of the mental health condition in the zero-shot prompt for binary mental health detection: _...Is the poster likely to suffer from [Adjective of Degree] [Condition]?..._ where the adjective (marked red) is replaced with one keyword from {_any_, _some_, _very severe_}, and the results on three binary detection datasets are shown in Table 7.

\begin{table} \begin{tabular}{l c c c} \hline \hline ChatGPT\({}_{ZS}\) & **DR** & **CLPsych15** & **Dreaddit** \\ \hline _any_ & **82.41** & 56.31 & 53.10 \\ _some_ & 74.44 & **56.59** & 50.62 \\ _very severe_ & 78.65 & 47.55 & **71.79** \\ \hline \hline \end{tabular} \end{table} Table 7: Change of ChatGPT\({}_{ZS}\)'s weighted-F1 score with adjectives showing different degrees of depression/stress in the prompt. Best values: bold.

As shown, the performance on Dreaddit drops 21.17% with the change from _very severe_ to _some_. On CLPsych15 it drops 9.04% from _some_ to _very severe_. The optimal adjective changes with the datasets and is hard to determine. This sensitivity makes ChatGPT's performance very unstable even with slightly different prompts. We believe this problem is due to the subjective nature of mental health conditions. The human annotations only answer Yes/No for each post, which makes the human criteria of predictions hard to learn for ChatGPT in a zero-shot setting.
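The adjective-sensitivity probe above is easy to reproduce. The sketch below assumes the pre-1.0 `openai` Python client; the model name follows the gpt-3.5-turbo setting mentioned in the Limitations, and the usage shown is illustrative rather than the exact experimental harness.

```python
# A hedged sketch of the degree-adjective probe (Table 7 setup).
# Assumption: pre-1.0 `openai` client; usage shown is illustrative only.
import openai

ADJECTIVES = ["any", "some", "very severe"]
TEMPLATE = ('Post: "{post}". Consider this post to answer the question: '
            "Is the poster likely to suffer from {degree} {condition}? "
            "Only return Yes or No.")

def probe_adjectives(post: str, condition: str) -> dict[str, str]:
    """Ask the same yes/no question under each degree adjective."""
    answers = {}
    for degree in ADJECTIVES:
        prompt = TEMPLATE.format(post=post, degree=degree, condition=condition)
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # reduce sampling variance between probes
        )
        answers[degree] = resp["choices"][0]["message"]["content"].strip()
    return answers
```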
Inaccurate Reasoning. Though ChatGPT is proven capable of generating explanations for its classifications, there are still many cases of inaccurate reasoning leading to incorrect results. To investigate the contributing factors behind these mistakes, we further compare the human evaluation results between the correctly and incorrectly classified results, ChatGPT\({}_{true}\) and ChatGPT\({}_{false}\). The results are presented in Figure 1. As shown, ChatGPT\({}_{false}\) still achieves fluency scores comparable to ChatGPT\({}_{true}\) but performs worse on both completeness and reliability. For completeness, the average score of ChatGPT\({}_{false}\) drops below 2.0. We also notice that the average token number of ChatGPT\({}_{false}\) reaches 335 (Table 6), which exceeds that of ChatGPT\({}_{true}\) by over 130 tokens. These results indicate that ChatGPT struggles to cover all relevant aspects of long-context posts. For reliability, more than half of the ChatGPT\({}_{false}\) results give unreliable or inconsistent explanations (below 1.0), possibly due to the lack of mental health-related knowledge. A few ChatGPT\({}_{false}\) samples provide mostly reliable reasoning (above 2.0) but miss key information due to the lack of completeness. Overall, the mistakes of ChatGPT are mainly caused by ignorance of relevant information in long posts and an unreliable reasoning process. Therefore, future works should improve ChatGPT's long-context modeling ability and introduce more mental health-related knowledge to benefit its performance. More cases of inaccurate reasoning are provided in Appendix E.3.

## 5 Conclusion

In this work, we comprehensively studied ChatGPT's zero-shot mental health analysis ability and the impact of different emotion-enhanced prompting strategies. We also explored the potential of ChatGPT in explaining its decisions via CoT prompting. We developed a reliable annotation protocol and performed strict human evaluations to assess the quality of explanations generated by ChatGPT and GPT-3. Experimental results demonstrate that subjective tasks like mental health analysis and conversational emotional reasoning are still challenging for ChatGPT, but emotional information with proper prompt engineering can better trigger its ability. In addition, the human evaluation results show that ChatGPT can generate reliable explanations for its decisions and possesses great potential in enhancing the explainability of mental health analysis. We also quantitatively analyzed several critical limitations of ChatGPT, including unstable predictions and inaccurate reasoning. We believe addressing these limitations is crucial to approaching realistic mental health care with ChatGPT.

### Limitations

Unexpected Responses. Though ChatGPT makes predictions in most of its responses as requested by the prompts, there are a few cases where it refuses to make a classification. There are two main reasons: 1) the lack of evidence from the post to make a prediction; 2) the post contains content that violates the content policy of OpenAI3. For example, ChatGPT can respond: "As an AI language model, I cannot accurately diagnose mental illnesses or predict what may have caused them in this post." In our experiments, we directly exclude these responses because they are very rare, but future efforts are needed to alleviate these problems.
Footnote 3: [https://openai.com/policies/usage-policies](https://openai.com/policies/usage-policies)

Limitations of Lexicons. The motivation for using sentiment and emotion lexicons is to provide additional context with distant supervision for the prompts; this approach, however, has several limitations. The two lexicons we used, VADER Hutto and Gilbert (2014) and NRC EmoLex Mohammad and Turney (2010, 2013), were developed a decade ago with human annotation using social media data. It is inevitable that they suffer from annotation bias in the sentiment/emotion scores and only reflect the language used when they were developed. The Internet language evolves rapidly, and our experiments also use some recent datasets such as T-SID (Ji et al., 2022) and CAMS (Garg et al., 2022). Besides, these lexicons have limited vocabularies, and the manual rules used to aggregate sentence-level sentiment and emotions could be underspecified. Prompt engineering with other advanced resources carrying extra emotional information can be explored in future work. We also see limitations in the data. Ji (2022) showed that the sentiment distribution has no significant difference in the binary case of the T-SID dataset. Although the sentiment-enhanced prompt with VADER gains slightly better performance than other prompts on the T-SID dataset, we cannot clearly explain whether the choice of lexicon contributes to the improvement due to the black-box nature of ChatGPT.

Limitations of Evaluation. Due to cost constraints, we only evaluate the zero-shot performance of one representative LLM, ChatGPT (gpt-3.5-turbo). LLMs evolve fast, and there are many other representative LLMs such as the GPT-3.5 series (text-davinci-002, code-davinci-002, text-davinci-003) and the latest GPT-4. We believe a comprehensive analysis of the abilities of different LLMs in mental health analysis is required in the future. This can help us understand the advantages and limitations of different LLMs and provide useful insights for future efforts. Moreover, we only utilize zero-shot prompting in our experiments. More effective prompt engineering, such as few-shot prompting, which has been shown effective in improving the performance of sentiment analysis in recent efforts (Qin et al., 2023; Zhong et al., 2023; Chen et al., 2023), can be explored in the future.

## Ethical Considerations

Although the datasets used are anonymously posted, our study adheres to strict privacy protocols (Benton et al., 2017; Nicholas et al., 2020) and minimizes privacy impact as much as possible, as social media datasets can reveal posters' thoughts and may contain sensitive personal information. We use social posts that are manifestly public from Reddit and Twitter. The SMS-like SAD dataset (Mauriello et al., 2021) has been released publicly on GitHub by the authors. All examples presented in our paper have been paraphrased and obfuscated using the moderate disguising scheme (Bruckman, 2002) to avoid misuse. We do not use user profiles on social media, identify the users, or interact with them. Our study aims to use social media as an early source of information to assist researchers or psychiatrists in detecting mental health conditions for non-clinical use. The model predictions cannot replace psychiatric diagnoses, and we recommend individuals with mental health issues seek professional help from social workers or psychiatrists.
In addition, we recognize that some mental disorders are subjective (Keilp et al., 2012), and the interpretation of our analysis may differ (Puschman, 2017), because we do not understand the actual intentions behind the posts.
2305.08113
Modelling Quasi-Orthographic Captures for Surface Imaging
Surveillance and surveying are two important applications of empirical research. A major part of terrain modelling is supported by photographic surveys which are used for capturing expansive natural surfaces using a wide range of sensors -- visual, infrared, ultrasonic, radio, etc. A natural surface is non-smooth, unpredictable and fast-varying, and it is difficult to capture all features and reconstruct them accurately. An orthographic image of a surface provides a detailed holistic view capturing its relevant features. In a perfect orthographic reconstruction, images must be captured normal to each point on the surface, which is practically impossible. In this paper, a detailed analysis of the constraints on imaging distance is provided. A novel method is formulated to determine an approximate orthographic region on a surface surrounding the point of focus, and additionally, some methods for approximating the orthographic boundary for faster computation are proposed. The approximation methods have been compared in terms of computational efficiency and accuracy.
Maniratnam Mandal, Venkatesh K. Subramanian
2023-05-14T09:59:42Z
http://arxiv.org/abs/2305.08113v1
# Modelling Quasi-Orthographic Captures for Surface Imaging

###### Abstract

Surveillance and surveying are two important applications of empirical research. A major part of terrain modelling is supported by photographic surveys which are used for capturing expansive natural surfaces using a wide range of sensors -- visual, infrared, ultrasonic, radio, etc. A natural surface is non-smooth, unpredictable and fast-varying, and it is difficult to capture all features and reconstruct them accurately. An orthographic image of a surface provides a detailed holistic view capturing its relevant features. In a perfect orthographic reconstruction, images must be captured normal to each point on the surface, which is practically impossible. In this paper, a detailed analysis of the constraints on imaging distance is provided. A novel method is formulated to determine an approximate orthographic region on a surface surrounding the point of focus, and additionally, some methods for approximating the orthographic boundary for faster computation are proposed. The approximation methods have been compared in terms of computational efficiency and accuracy.

Maniratnam Mandal and Venkatesh K. Subramanian, Department of Electrical Engineering, IIT Kanpur

Footnote †: Thanks to Computer Vision Lab, IIT Kanpur.

Digital Elevation Map, Field of View, Surface Imaging, Orthography, Curvatures, Imaging Surface

## 1 Introduction

Visual representation of topographical data, or Digital Terrain Modelling, has widespread applications, be it Google Maps, navigation, geological surveys, agriculture, disaster management, astronomical studies and mapping of extraterrestrial surfaces, archaeological studies, sociological studies or even deciding national policies. The capturing technology has progressed a lot over the past decade, and nowadays a mixture of visual, radio, infrared, laser and radar sensors is used to capture terrain information [3]. However, natural terrains are highly uneven, and for vast surfaces, when very small features are the focus of studies, it is easy to lose them during reconstruction or even during capture. An orthographic projection or image of a surface is a holistic representation of all its features. To construct an idealistic orthographic projection of the whole surface, captures at all points need to be taken separately. This approach is highly impractical and infeasible due to resource limitations. Thus an approximation of the local orthographic region needs to be formulated for practical purposes, and methods to determine capture points for the coverage of the surface are also needed. The physical parameters of an imaging system are governed by the optics and the sensors of the device [2], among which _Field of View (FOV)_ and _Working Distance (WD)_ are important parameters to be considered for orthographic imaging. Usually, an _object-space telecentric lens_ [1], with the entrance pupil at infinity, is used for eliminating the perception of depth and creating orthographic images. A more commonly used technology is _orthophotography_. Orthophotographs are commonly used in _geographic information systems (GIS)_ as they are 'map-accurate'. A _digital elevation model (DEM)_ is often required to create accurate orthophotos. An orthophoto is an aerial or satellite image of a terrain or surface which has been geometrically corrected, or _orthorectified_, such that the image is essentially an orthographic projection of the terrain.
An orthophotograph can be used to measure distances accurately because it is a near-accurate depiction of the Earth's surface, adjusted for topographic relief, perspective distortion and camera tilt [4]. In this paper, a mathematical approach is taken to generate orthographic captures of surfaces. In section 2, the derivation and analysis of _imaging surfaces_ are given and the mathematical bounds on working distance are formulated. Section 3 deals with the formulation of approximate orthography, and in section 4 the algorithms for computing orthographic bounds for curves and orthographic regions for surfaces are provided. In section 5, several methods for approximating the orthographic boundary for faster computation are proposed and compared, and in section 6, the contributions of this paper are summarized along with potential future extensions.

Figure 1: **a.** A gray-scale DEM denoting a terrain. (Source: Creating Heightfields and Details on Terrain RAW Files, wiki.secondlife.com). **b.** Surface plot generated in Matlab.

## 2 Imaging Surface

### Generating Surface Topographies

A digital terrain elevation map (DTEM) is a digital image of a terrain or a topography map where the pixel intensity at a point gives the relative elevation of the point. There are various ways of allocating pixel values (gray-scale or RGB colormap) in a DTEM. The images used for the purpose of this paper are gray-scale DTEMs, where the elevation of a point or location in the digital map can range from \(0\) to \(255\), the white intensity pixels denoting the highest elevation points and the black intensity pixels denoting the lowest elevation points. Fig. 1(a) is an example of a gray-scale digital elevation map. The obtained image of the height-map is first smoothed out because rough surfaces with abrupt changes create difficulties in further processing. The double precision matrix \(I\) is used as a topographical surface for later processing. The surface generated in Matlab for the DTEM in figure 1(a) is shown in figure 1(b).

### Formulation

Let the working distance be denoted as \(d\), i.e., it is assumed that to capture a point \(P(x,y,z)\) on the surface \(S\) (given by the bi-variate function \(f(x,y)\)), the camera needs to be placed at \(P^{\prime}(x^{\prime},y^{\prime},z^{\prime})\) at a height \(d\) along the normal to the surface at \(P\). So, \[z=f(x,y). \tag{1}\] Then the surface normal at point \(P(x,y,z)\) is given as \[\vec{n}=\left[\frac{\partial f(x,y)}{\partial x},\frac{\partial f(x,y)}{\partial y},-1\right]. \tag{2}\] If variables \(p\) and \(q\) are defined as \[p=\frac{\partial f(x,y)}{\partial x}\;\;and\;\;q=\frac{\partial f(x,y)}{\partial y}, \tag{3}\] then the surface normal can be written as \([p,q,-1]\). The unit normal vector at \(P\) is \[\begin{split}\hat{n}&=\frac{\vec{n}}{|\vec{n}|}=\frac{\vec{n}}{\sqrt{p^{2}+q^{2}+1}}\\ &=\left[\frac{p}{\sqrt{p^{2}+q^{2}+1}},\frac{q}{\sqrt{p^{2}+q^{2}+1}},\frac{-1}{\sqrt{p^{2}+q^{2}+1}}\right].\end{split} \tag{4}\]
Now, using Eq. 4 as derived above, the corresponding imaging point, \(P^{\prime}(x^{\prime},y^{\prime},z^{\prime})\), at a height \(d\) from point \(P\) and along the unit surface normal \(\hat{n}\), is given by \[\begin{split}\vec{P^{\prime}}&=\vec{P}+d\cdot\hat{n}\\ &=\vec{P}+\left[\frac{d\cdot p}{\sqrt{p^{2}+q^{2}+1}},\frac{d\cdot q}{\sqrt{p^{2}+q^{2}+1}},\frac{-d}{\sqrt{p^{2}+q^{2}+1}}\right].\end{split} \tag{5}\] Therefore the co-ordinates of point \(P^{\prime}(x^{\prime},y^{\prime},z^{\prime})\) can be derived using Eq. 5. The imaging surface \(S^{\prime}\) at imaging distance \(d\) can be parameterized in terms of \(x\) and \(y\) as shown in Eq. 6, where \(p=\frac{\partial f(x,y)}{\partial x}\) and \(q=\frac{\partial f(x,y)}{\partial y}\): \[\vec{S^{\prime}}=\left[\begin{array}{c}x+\frac{d\cdot p}{\sqrt{p^{2}+q^{2}+1}}\\ y+\frac{d\cdot q}{\sqrt{p^{2}+q^{2}+1}}\\ f(x,y)-\frac{d}{\sqrt{p^{2}+q^{2}+1}}\end{array}\right] \tag{6}\] The surface plots in Fig. 2 demonstrate the imaging surfaces for the surface \(S\) given by \(f(x,y)=cos(x)+cos(y)\) in the range \((-5\leq x\leq 5)\) and \((-5\leq y\leq 5)\), calculated and plotted at different values of \(d\). In case the surface \(S\) is represented as a double precision matrix (\(I\)) as described in section 2.1, then instead of calculating the analytical gradients (\(\frac{\partial f(x,y)}{\partial x}\) and \(\frac{\partial f(x,y)}{\partial y}\)) for finding surface normals, the numerical gradients can be calculated as an approximation.

Figure 2: The surface \(S\) given by \(f(x,y)=cos(x)+cos(y)\) and the imaging surfaces \(S^{\prime}\) plotted in Matlab.

### Analysis of Imaging Curves

If any point of \(S^{\prime}\) lies below the surface \(S\), that point is inaccessible and hence cannot be used as an imaging point, i.e. if \(z^{\prime}<f(x^{\prime},y^{\prime})\), \(P^{\prime}(x^{\prime},y^{\prime},z^{\prime})\) cannot be an imaging point for \(P(x,y,z)\) at height \(d\). Visualizing the variation of the _imaging surface_ with imaging height \(d\) is difficult for bivariate functions. Hence the following analysis is done for imaging curves, and virtual bounds on \(d\) are derived. Let \(C\) be the target curve given by \(y=f(x)\). The curve can be parametrized by vector \(\vec{P}\) as \[\vec{P}(x)=\left[\begin{array}{c}x\\ y\end{array}\right]=\left[\begin{array}{c}x\\ f(x)\end{array}\right]. \tag{7}\] Proceeding similarly as in section 2.2, the coordinates (\(x^{\prime}\) and \(y^{\prime}\)) of the imaging curve \(C^{\prime}\) located at distance \(d\) can be parametrized in terms of \(x\) (Eq. 8): \[\begin{split} x^{\prime}&=x-\frac{d\cdot f^{\prime}(x)}{\sqrt{1+(f^{\prime}(x))^{2}}}\\ \text{and}&y^{\prime}=f(x)+\frac{d}{\sqrt{1+(f^{\prime}(x))^{2}}}. \end{split} \tag{8}\] If a point \(P^{\prime}(x^{\prime},y^{\prime})\) on curve \(C^{\prime}\) satisfies \(y^{\prime}(x)<f(x^{\prime})\), then it lies below the curve \(C\) and hence cannot be accepted as a valid imaging point. In other terms, for \(d\) to be valid, the curves \(C\) and \(C^{\prime}\) should not intersect at any point. This gives mathematical bounds for the imaging height \(d\):

* \(d>0\)
* \(d<D\) such that \(\forall\ d\geq D,\ \exists\) some \(x\) in \(dom(f)\), s.t. \(y^{\prime}(x)<f(x^{\prime})\), where \(x^{\prime}\) and \(y^{\prime}\) are as given in Eq. 8.

The mathematical upper bound \(D\) depends on the curvature or nature of the function \(f(x)\) and also on the imaging range, i.e., the range of values of \(x\) that is to be imaged.
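The imaging-curve construction of Eq. 8 and the validity test above can be sketched numerically as follows; this is an illustration in Python/numpy under our own naming, not the authors' Matlab implementation. For gridded DEM data, evaluating \(f(x^{\prime})\) would additionally require interpolation.

```python
# A minimal numpy sketch of Eq. 8 and the imaging-curve validity test.
import numpy as np

def imaging_curve(f, fprime, x, d):
    """Eq. 8: imaging curve C' at height d along the normals of y = f(x)."""
    norm = np.sqrt(1.0 + fprime(x) ** 2)
    xp = x - d * fprime(x) / norm
    yp = f(x) + d / norm
    return xp, yp

def is_valid_height(f, fprime, x, d):
    """True if no imaging point falls below C, i.e. y'(x) >= f(x') everywhere."""
    xp, yp = imaging_curve(f, fprime, x, d)
    return bool(np.all(yp >= f(xp)))

# Example: y = sin(x) on [-5, 5]; d = 0.5 is below the minimum radius of
# curvature of the sine (which is 1), so the imaging curve should be valid.
x = np.linspace(-5.0, 5.0, 2001)
print(is_valid_height(np.sin, np.cos, x, d=0.5))
```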
\(D\) can be calculated numerically by solving Eq. 9 and applying the _bisection algorithm_: \[\begin{split} y^{\prime}(x)&=f(x^{\prime})\\ or,\,f(x^{\prime})&=f(x)+\frac{d}{\sqrt{1+(f^{\prime}(x))^{2}}}\end{split} \tag{9}\] For \(d<D\), the above equation will have no solution, and for \(d\geq D\), the above equation will have one or more solution(s). For some smooth functions, there may not be any upper limit on \(d\) (i.e. \(D=\infty\)). For those functions, Eq. 9 does not have a solution for any \(d>0\), i.e. \(y^{\prime}(x)>f(x^{\prime})\) for all positive values of \(d\) and all \(x\) in \(dom(f)\). However, the practical upper bound depends on the limitations of resolution of the capturing device and also on the concerned application. The imaging curves for some target curves have been computed and plotted in Matlab as shown in Fig. 3. The target curves (\(C\)) are shown in blue along with the imaging curves (\(C^{\prime}\)) for different values of \(d\). It is interesting to note that not all non-smooth curves generate invalid imaging curves. For example, \(f(x)=|mx|\) is non-differentiable at \(x=0\), but generates valid imaging curves for all \(m\in(0,1]\). A bisection sketch for computing \(D\) is given at the end of this section.

Figure 3: **a.** As \(d\) increases, \(C^{\prime}\) moves further away from \(C\); thus the upper bound on \(d\) is \(D=\infty\). **b.** As \(d\) increases, \(C^{\prime}\) moves further away from \(C\). For lower values of \(d\), the \(C^{\prime}\)s do not intersect \(C\) and therefore are valid imaging curves. But for bigger values of \(d\), they intersect \(C\) and hence are invalid. Here the upper bound on \(d\) is \(D\approx 2.6\). **c.** Here \(C\) is non-smooth and is non-differentiable at \(x=0\). In this case, the \(C^{\prime}\) generated for any \(d>0\) is invalid, as it always intersects \(C\), i.e. there are always points around \(0\) which generate invalid imaging points. Also, as \(d\) increases, the number of invalid imaging points increases.

## 3 Modelling Orthographic Imaging

### Definition and Assumptions

A practical approximation of orthography is to consider a very small (\(\epsilon\)) angular _field of view (FOV)_ and the points on the surface within this \(\epsilon\)-FOV to be roughly orthographic; the boundary of the region consisting of such points is the _orthographic boundary_. Here \(\epsilon\) is a very small angle (\(\sim~{}10^{\circ}-20^{\circ}\)). Also, for orthographic imaging of a surface point \(P\), the imaging point must be along the surface normal at \(P\) and at a height \(d\). The region in a capture which lies within the orthographic boundary is defined as the \(\epsilon\)_-Orthographic Image_.

### Circular Case

For a circle \(C\) (Fig. 4(a)), the radius to a point on the circumference is always orthogonal to the tangent at that point. Consequently, the centre \(O\) of the circle satisfies the properties of a valid imaging point for any point \(P\) on the circle. So the imaging point at \(O\) can be used to capture a length of \(2\pi R\), or the entire circumference, as shown in Fig. 4. The total number of captures required to cover the entire circle is \(\lceil\frac{2\pi}{\epsilon}\rceil\). Here the imaging height is \(d=R\). For an imaging point located at an eccentric point \(Q\) (Fig. 4(b)) at a distance \(x\) from \(O\), the normals from only two diametrically opposite points on the circumference pass through it. In this case the total length of \(C\) that can be imaged can be proved to be \(2\epsilon R\).
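Returning to the upper bound \(D\), the bisection search described at the beginning of this section can be sketched as follows; the sampling range, density and the initial bracket `d_hi` are assumptions of this illustration.

```python
# A hedged sketch of the bisection search for the upper bound D (Eq. 9).
import numpy as np

def valid(f, fprime, x, d):
    """Eq. 8 plus the validity test: every point must satisfy y'(x) >= f(x')."""
    norm = np.sqrt(1.0 + fprime(x) ** 2)
    xp, yp = x - d * fprime(x) / norm, f(x) + d / norm
    return bool(np.all(yp >= f(xp)))

def upper_bound_D(f, fprime, x, d_hi=100.0, tol=1e-4):
    """Bisect between a valid and an invalid imaging height to approximate D."""
    if valid(f, fprime, x, d_hi):
        return np.inf            # no invalid height found below the bracket
    d_lo = 0.0
    while d_hi - d_lo > tol:
        mid = 0.5 * (d_lo + d_hi)
        if valid(f, fprime, x, mid):
            d_lo = mid           # Eq. 9 has no solution yet: D lies above mid
        else:
            d_hi = mid
    return 0.5 * (d_lo + d_hi)
```

For curves like that of Fig. 3(a), the search returns `np.inf` up to the chosen bracket, consistent with the \(D=\infty\) case.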
### Derivation of \(\epsilon\)-Orthography

#### 3.3.1 Bounds for Curves

Let us consider a curve \(C\) (Fig. 5) given by a univariate function \(f(x)\). The tangent (\(\vec{T}\)) and normal (\(\vec{N}\)) vectors at point \(P(x,f(x))\) are given as \[\vec{T}(x)=\left[\begin{array}{c}1\\ f^{\prime}(x)\end{array}\right]\quad\vec{N}(x)=\left[\begin{array}{c}-f^{\prime}(x)\\ 1\end{array}\right]. \tag{10}\] Let point \(P^{\prime}(x^{\prime},f(x^{\prime}))\) be situated at a small distance \(\Delta x\) to the left of \(x\). Let \(p=f^{\prime}(x)\) and \(p^{\prime}=f^{\prime}(x^{\prime})\). If \(\Delta x\) is very small, then \(f(x^{\prime})\) and \(f^{\prime}(x^{\prime})\) can be approximated as \[\begin{split} f(x^{\prime})&=f(x-\Delta x)\approx f(x)-\Delta x\cdot f^{\prime}(x)\\ f^{\prime}(x^{\prime})&=f^{\prime}(x-\Delta x)\approx f^{\prime}(x)-\Delta x\cdot f^{\prime\prime}(x),\end{split} \tag{11}\] therefore, \[\begin{split} p^{\prime}&=p+\Delta p\\ \Delta p&\approx-\Delta x\cdot f^{\prime\prime}(x)\end{split} \tag{12}\] Tangent \(\vec{T^{\prime}}\) and normal \(\vec{N^{\prime}}\) are constructed at \(P^{\prime}\). The normals \(\vec{N}\) and \(\vec{N^{\prime}}\) intersect at \(Q\) at an angle \(\phi\). Therefore, \[\begin{split} cos(\phi)&=\frac{\vec{N}\cdot\vec{N^{\prime}}}{|\vec{N}|\cdot|\vec{N^{\prime}}|}\\ &=\frac{1}{\sqrt{p^{2}+1}\sqrt{p^{\prime 2}+1}}\cdot\left[\begin{array}{c}-p\\ 1\end{array}\right]\cdot\left[\begin{array}{c}-p^{\prime}\\ 1\end{array}\right]\\ &=\frac{pp^{\prime}+1}{\sqrt{p^{2}+1}\sqrt{p^{\prime 2}+1}}.\end{split} \tag{13}\] So, \[\phi=cos^{-1}\Big{(}\frac{pp^{\prime}+1}{\sqrt{p^{2}+1}\sqrt{p^{\prime 2}+1}}\Big{)}. \tag{14}\] Also, if the line joining \(P^{\prime}\) and the imaging point \(O\) intersects \(\overline{OP}\) at an angle \(\theta\), then since \(\Delta x\) is much smaller than \(d\), \[\begin{split} tan(\theta)&=\frac{\Delta x}{d}\\ or,&\theta=tan^{-1}\big{(}\frac{\Delta x}{d}\big{)}.\end{split} \tag{15}\] The \(\epsilon\)-orthographic bounds are dependent on both the FOV and the curvature at the concerned point. For a point \(P^{\prime}\) on \(C\) to lie within the \(\epsilon\)-orthographic region for capturing point \(O\) at a height \(d\) from the point \(P\), it must satisfy: **1.** \(\theta\leq\epsilon\), and **2.** \(\phi\leq\epsilon\). _Condition 1_ is necessary so that the point \(P^{\prime}\) lies within the \(\epsilon\)-FOV. _Condition 2_ is required because, as the curvature of \(C\) increases around \(P\), a point close to it may remain within the \(\epsilon\)-FOV bound, but the high curvature means that only a very small region around \(P\) is approximately orthographic. With reference to Fig. 5, if the curvature at \(P\) increases, \(OP\) and \(OP^{\prime}\) may differ significantly, and then \(P\) and \(P^{\prime}\) cannot be considered in the same orthographic region.

#### 3.3.2 Boundary for Surfaces

The derivation is very similar to that for curves. A surface \(S\) is given by a bi-variate function, \(z=f(x,y)\). Given a central point \(P(x,y,z)\) on the surface, an imaging height of \(d\) and useful FOV \(\epsilon\), the goal is to find the orthographic boundary surrounding \(P\), or in other words, the area around \(P\) which can be considered as an approximate orthographic image.

Figure 4: The case of a circle as the target curve.

Figure 5: Illustrating the variables for the derivation.

From Eqs. 2 and 3, the surface normal at point \(P(x,y,z)\) is \([p,q,-1]\), where \(p=\frac{\partial f(x,y)}{\partial x}\) and \(q=\frac{\partial f(x,y)}{\partial y}\).
The Hessian matrix of \(f(x,y)\) is \[H=\left[\begin{array}{cc}\frac{\partial^{2}f(x,y)}{\partial x^{2}}&\frac{\partial^{2}f(x,y)}{\partial x\partial y}\\ \frac{\partial^{2}f(x,y)}{\partial y\partial x}&\frac{\partial^{2}f(x,y)}{\partial y^{2}}\end{array}\right]. \tag{16}\] Let us take a point \(P^{\prime}(x^{\prime},y^{\prime},z^{\prime})\) very close to \(P\) such that \[\begin{array}{l}x^{\prime}=x+\Delta x\\ y^{\prime}=y+\Delta y.\end{array} \tag{17}\] As \(\Delta x\) and \(\Delta y\) are very small quantities, the change in the surface normal vector is also small. So the surface normal at \(P^{\prime}\) is \(\vec{N^{\prime}}=[p^{\prime},q^{\prime},-1]\), and it can be approximated using \[\left[\begin{array}{c}\Delta p\\ \Delta q\end{array}\right]=H\cdot\left[\begin{array}{c}\Delta x\\ \Delta y\end{array}\right], \tag{18}\] \[p^{\prime}=p+\Delta p,\] \[q^{\prime}=q+\Delta q.\] Similarly to Eqs. 13 and 15, \(\phi\) and \(\theta\) are calculated as \[\begin{split} cos(\phi)&=\hat{N}\cdot\hat{N^{\prime}}=\frac{\vec{N}\cdot\vec{N^{\prime}}}{|\vec{N}|\cdot|\vec{N^{\prime}}|}\\ &=\frac{1}{\sqrt{p^{2}+q^{2}+1}\sqrt{p^{\prime 2}+q^{\prime 2}+1}}\left[\begin{array}{c}p\\ q\\ -1\end{array}\right]\cdot\left[\begin{array}{c}p^{\prime}\\ q^{\prime}\\ -1\end{array}\right]\\ &=\frac{pp^{\prime}+qq^{\prime}+1}{\sqrt{p^{2}+q^{2}+1}\sqrt{p^{\prime 2}+q^{\prime 2}+1}}.\end{split} \tag{19}\] So, \[\phi=cos^{-1}\Big{(}\frac{pp^{\prime}+qq^{\prime}+1}{\sqrt{p^{2}+q^{2}+1}\sqrt{p^{\prime 2}+q^{\prime 2}+1}}\Big{)} \tag{20}\] and \[\begin{array}{l}tan(\theta)=\frac{\sqrt{\Delta x^{2}+\Delta y^{2}}}{d}\\ or,\,\theta=tan^{-1}\Big{(}\frac{\sqrt{\Delta x^{2}+\Delta y^{2}}}{d}\Big{)}.\end{array} \tag{21}\] Now, if \(P^{\prime}\) belongs to the orthographic region around point \(P\) for an imaging height \(d\), then both \(\theta\leq\epsilon\) and \(\phi\leq\epsilon\). In all the aforementioned derivations, the gradient components at \(P^{\prime}\) have been calculated by approximations for fast computation. Where computational capability is not an issue, the actual gradients can be calculated by differentiating the function \(f(x,y)\).

## 4 Implementation

### Algorithms

The algorithm for computing the orthographic bounds for a smooth curve \(f(x)\) at a central point \(P_{0}(x_{0},f(x_{0}))\), for \(\epsilon\) angular FOV, resolution \(dx\) and imaging height \(d\), is stated below (Algorithm 2).

```
1:  procedure LeftOrthographicBound(\(x_{0},\epsilon,d,dx\))
2:    Find \(p_{0}=f^{\prime}(x_{0})\)
3:    Set \(x=x_{0}-dx\)
4:    Find \(p^{\prime}=f^{\prime}(x)\) and calculate \(\phi\) using Eq. 14
5:    Set \(\Delta x=|x-x_{0}|\)
6:    Calculate \(\theta\) using Eq. 15
7:    while \(\phi\leq\epsilon\) and \(\theta\leq\epsilon\) do
8:      Set \(x=x-dx\)
9:      Find \(p^{\prime}=f^{\prime}(x)\) and calculate \(\phi^{\prime}\) using Eq. 14
10:     \(\phi=\phi^{\prime}\)
11:     Set \(\Delta x=|x-x_{0}|\)
12:     Calculate \(\theta^{\prime}\) using Eq. 15
13:     \(\theta=\theta^{\prime}\)
14:   return \(x=x+dx\)
15: procedure RightOrthographicBound(\(x_{0},\epsilon,d,dx\))
16:   Find \(p_{0}=f^{\prime}(x_{0})\)
17:   Set \(x=x_{0}+dx\)
18:   Find \(p^{\prime}=f^{\prime}(x)\) and calculate \(\phi\) using Eq. 14
19:   Set \(\Delta x=|x-x_{0}|\)
20:   Calculate \(\theta\) using Eq. 15
21:   while \(\phi\leq\epsilon\) and \(\theta\leq\epsilon\) do
22:     Set \(x=x+dx\)
23:     Find \(p^{\prime}=f^{\prime}(x)\) and calculate \(\phi^{\prime}\) using Eq. 14
24:     \(\phi=\phi^{\prime}\)
25:     Set \(\Delta x=|x-x_{0}|\)
26:     Calculate \(\theta^{\prime}\) using Eq. 15
27:     \(\theta=\theta^{\prime}\)
28:   return \(x=x-dx\)
```

**Algorithm 2** Finding Orthographic Bounds of a Curve

Algorithm 2 has been implemented in Matlab, and the bounds have been calculated and plotted for a convex curve and a concave curve as shown in Fig. 6. It is to be noted that although \(d\) keeps increasing, the bounds do not spread after a point. If the bounds were a function of \(\epsilon\) only, then with increase in \(d\) they would have spread apart indefinitely, which is not a true characteristic of orthography. This shows that \(\epsilon\)-orthography depends not only on the FOV but also on the curvature.

Figure 6: The orthographic bounds for convex and concave parts of a sine curve for increasing \(d\).

The \(\epsilon\)-orthographic boundary can be numerically calculated using Algorithm 3 for a surface \(S\) (\(z=f(x,y)\)) at a central point \(P_{0}(x_{0},y_{0},z_{0})\), for \(\epsilon\) angular FOV, resolutions \(dx\) and \(dy\), and imaging height \(d\).

```
1:  Find surface normal components \(p_{0}\) and \(q_{0}\) at \(P_{0}\)
2:  Create an empty point set \(P\) for storing the eligible points inside the orthographic region
3:  Append \(P_{0}\) to \(P\)
4:  Compute the number of \(x\) or \(y\) co-ordinates in the grid: \(n_{x}=(r_{H}-r_{L})/dx=R_{x}/dx\) and \(n_{y}=R_{y}/dy\)
5:  Set \(s=max(n_{x},n_{y})\)
6:  Set \(buff=0\)
7:  for \(n=1:s\) do
8:    Vector \(out=PairGen(n)\)
9:    Set \(count=0\)
10:   for \(j=1:length(out)\) do
11:     \(x_{1}=x_{0}+dx\cdot out(1,j)\)
12:     \(y_{1}=y_{0}+dy\cdot out(2,j)\)
13:     Compute the surface normal vector at \(P_{1}(x_{1},y_{1},z_{1})\)
14:     Calculate \(\phi\) using Eq. 20
15:     Calculate \(\theta\) using Eq. 21
16:     if \(\theta\leq\epsilon\) and \(\phi\leq\epsilon\) then
17:       Append \(P_{1}(x_{1},y_{1},z_{1})\) to \(P\)
18:       \(count=count+1\)
19:   if \(count=0\) then
20:     \(buff=buff+1\)
21:   if \(buff>3\) then
22:     Break from loop
```

**Algorithm 3** Finding Orthographic Boundary of a Surface

The function _PairGen_ generates a vector of all pairs of integers \(n_{1}\) and \(n_{2}\) such that \(|n_{1}|+|n_{2}|=n\), i.e. all co-ordinates located at absolute distance \(n\). Algorithm 3 has been implemented in Matlab to generate \(\epsilon\)-orthographic regions on smooth surfaces (Fig. 7). This algorithm is also valid for non-smooth surfaces, for which numerical gradients can be calculated at non-differentiable points.

### Special Surfaces

_Conjecture:_ Points on surfaces of constant Gaussian curvature [6] form \(\epsilon\)-orthographic regions of the same area for constant imaging height \(d\). The upper bound on \(d\) depends on the nature (parameters) of such surfaces. Surfaces of constant curvature can be classified into the following three classes:

#### 4.2.1 **Zero Curvature Surfaces**

A surface with Gaussian curvature (\(\kappa\)) equal to zero at all points is a plane.
For a plane, which is inherently orthographic, the calculated region is thus of the same shape as the _FOV_. As the _FOV_ is considered circular in all our calculations, the orthographic region is thus circular for a planar surface, the radius of which depends on the imaging height as given by Eq. 22. Thus the problem of finding optimal capture points is reduced to a _Circle Packing_ problem.

#### 4.2.2 **Positive Curvature Surfaces**

A surface with equal positive Gaussian curvature (\(\kappa\)) at all points is a sphere. Using the definition of \(\epsilon\)-orthography, it can be shown that for a sphere, the orthographic regions are also circular and of constant radii, dependent on the imaging height \(d\) (Fig. 8a-b). This is due to the fact that the two principal curvatures (\(\kappa_{1}\) and \(\kappa_{2}\)) at any point on the sphere are equal and constant. However, unlike a plane, a sphere is not inherently orthographic but behaves like one.

#### 4.2.3 **Negative Curvature Surfaces**

A surface with equal negative Gaussian curvature (\(\kappa\)) at all points is a pseudosphere [5]. Unlike the other two cases, for a pseudosphere, the orthographic boundaries are not circular, and the limit on imaging height \(d\) is dependent on the radius \(a\) (Fig. 8c-d). Among the two principal curvatures (\(\kappa_{1}\) and \(\kappa_{2}\)) calculated at any point on the surface, one is positive and the other is negative. As we move along the surface, from the flat region to the narrow region, the magnitude of the positive principal curvature increases and that of the negative one decreases, such that the product of the two (the Gaussian curvature) remains the same. As a consequence of this property, it has been empirically observed that the size of the \(\epsilon\)-orthographic regions remains the same, although the shape may vary (Fig. 8c-d).

### Limitations

The calculation of orthographic regions is computationally expensive, especially at higher resolutions. Consequently, finding the overlap between two regions is also expensive. Also, the entire orthographic region needs to be calculated to find the boundary. For non-smooth surfaces, gradients cannot be calculated at non-differentiable points, and for natural surfaces with fast-varying curvatures, orthographic boundaries are difficult to calculate. Moreover, the exact boundaries cannot be used to calculate optimal capture points, where the boundaries need to be computed at multiple points simultaneously, which is repetitive and slow.

Figure 7: Orthographic regions drawn on the surface \(f(x,y)=cos^{2}(x)+cos^{2}(y)\) shown in white. The figures on the right show the boundary shape. The central point \((x_{0},y_{0})\) is plotted in red. (Here \(\epsilon=10^{\circ}\))

## 5 Approximation of Orthographic Boundary

### Approaches

As pointed out in the limitations (Sec. 4.3), calculating the exact orthographic region, and hence the boundary, is not practical because it is computationally expensive and thus time consuming. However, instead of considering exact boundaries, they can be approximated by some regular shapes for faster boundary computation as well as for calculating the overlap between regions, as explored in the following approaches.

#### 5.1 **Polygonal Approximation**

In this approach, \(N\) points are calculated in \(N\) different directions from the central point.
By setting \(\Delta x\) and \(\Delta y\) in Eq. 20 according to the direction of calculation, calculating \(\theta\) and \(\phi\) using Eqs. 20 and 21, and checking at each step whether they maintain the \(\epsilon\) constraint, the boundary point in the concerned direction can be computed. The directions in which the boundary points must be calculated should be at equal angles to each other at the central point \((x_{0},y_{0})\), i.e. the directions should be at an angle \(\theta=\frac{360^{\circ}}{N}\) from each other. The \(N\) boundary points are joined to form the \(N\)-polygonal boundary. Fig. 9(a) shows an example of polygonal approximation of the boundary. Here the boundary points \(P_{i}\) are evaluated in 8 directions centered at \(O(x_{0},y_{0})\). Obviously, the larger the number of directions taken, the better the approximate boundary. This approach may lead to both over-estimation and under-estimation of the boundary, depending on the convexity of the boundary curve.

#### 5.2 **Elliptical Approximation**

This is an extension or further approximation of the _polygonal approximation_ approach, but here only even-sided polygons are considered. Boundary points are calculated in \(N\) different equiangularly spaced directions. Hence, we get \(N\) boundary points \(P_{i},\;\;i=1,2,...,N\). Now, the distances between the diagonally opposite boundary points are calculated, and thus we have \(N/2\) diagonals (\(d_{i}\)). The maximum and minimum diagonals are considered, \(d_{max}=max(d_{i})\) and \(d_{min}=min(d_{i})\). The boundary is approximated as an ellipse with the major axis as \(d_{max}\) and the minor axis as \(d_{min}\), and the major axis is aligned along the longest diagonal. In Fig. 9(b), the maximum length diagonal is \(\overline{P_{4}P_{8}}\) and the minimum length diagonal is \(\overline{P_{2}P_{6}}\). The major axis of the constructed ellipse (\(B\)) is \(\overline{P_{4}P_{8}}\) and the minor axis is of the same length as \(\overline{P_{2}P_{6}}\). It is to be noted that the central point \(O(x_{0},y_{0})\) is not the centre of the ellipse.

#### 5.3 **Circular Approximation-I**

This is a further simplification of the _elliptical approximation_. In this case, the orthographic boundary is approximated as a circle. Boundary points are calculated in \(N\) different equiangularly spaced directions, as discussed in the _polygonal_ case. Hence, we get \(N\) boundary points \(P_{i},\;\;i=1,2,...,N\). Now, the distances of the boundary points from the central point \((x_{0},y_{0})\) are calculated, and thus we have \(N\) distances (\(d_{i}\)). The average of all the distances is calculated, \(d_{avg}=\frac{1}{N}\Sigma d_{i}\). The boundary is approximated as a circle centered at \((x_{0},y_{0})\) and of radius \(R=d_{avg}\). In Fig. 9(c), \(d_{i}=\overline{OP_{i}}\), and the average of all 8 \(\overline{OP_{i}}\)'s is calculated. The boundary circle \(B\) is constructed with centre at \(O(x_{0},y_{0})\) and radius equal to the average of the \(OP_{i}\)'s.

Figure 8: **a–b.** \(\epsilon\)-Orthographic regions plotted on a sphere, a surface of constant positive Gaussian curvature. **c–d.** \(\epsilon\)-Orthographic regions plotted on a pseudosphere, a surface of constant negative Gaussian curvature (large regions are plotted for demonstration).

Figure 9: **a.** Boundary points detected for \(N=8\). Here \(\theta=45^{\circ}\). Actual boundary \(C\) shown in bold. **b.** Actual boundary \(C\) and approximated elliptical boundary \(B\).
#### 5.4 **Circular Approximation-II** This approach is the simplest and most intuitive among the ones discussed (a numerical sketch of Eqs. 22–24 is given after the comparison below). It is based on the observation that on a smooth surface, the orthographic regions are small at places of high _Absolute Gaussian Curvature_ [6] and relatively larger at places where it is low. If the orthographic region or boundary is estimated by a circle, an inverse relation between the radius of the boundary and the curvature at the central point can be formulated. Also, for a planar surface, i.e. a zero curvature surface, the orthographic region is circular with radius \[R=d\cdot\tan(\epsilon), \tag{22}\] where \(d\) is the imaging distance and \(\epsilon\) is the useful FOV as discussed in the derivation of \(\epsilon\)-orthography. For any point on a non-planar surface having an absolute curvature \(|K|>0\), the boundary will shrink from this circle. Therefore, if the region boundary is approximated by a circle of radius \(r\), then \(r\leq R\). Now, let us consider a surface \(S\) and its Gaussian curvature (\(K\)). Given the bounds of the surface, the maximum absolute curvature is calculated: \[K_{max}=\max_{x,y}|K(x,y)|,\quad(x,y)\,\in\,Bound(S). \tag{23}\] Fix a ratio (\(m\)) between the largest radius possible \((r_{max}=R)\), for the points of least absolute curvature, and the least radius possible \((r_{min})\), for the point having curvature \(K_{max}\). So, \(m=\frac{r_{max}}{r_{min}}\). Therefore, the radius of the approximated circular boundary can be expressed as a function of the point \((x,y)\) as \[r(x,y)=R-\frac{|K(x,y)|}{K_{max}}\cdot R\cdot\left(1-\frac{1}{m}\right). \tag{24}\] The value of \(m\) can be tuned by experimental observations. ### Comparison and Analysis The four different approaches for approximating the orthographic boundary can be compared in terms of accuracy of approximation and computation time. For _Approach 1_ (_Polygonal Approximation_), _2_ (_Elliptical Approximation_) and _3_ (_Circular Approximation-I_), the computation time depends on the number of directions \(N\), i.e. the number of boundary points used for approximation, and grows with \(N\). However, in _Approach 2_, the diagonals have to be compared and the equation of the ellipse has to be calculated, which makes it the most time consuming among the four. For _Approaches 1_ and _2_, the calculation of overlap among the regions is the most complicated because of their polygonal and elliptical shapes respectively, whereas in _Approaches 3_ and _4_ it is much easier due to their circular shape. The accuracy of approximation decreases from _Approach 1_ to _Approach 4_, the polygonal one being the most accurate and the second circular approximation being the crudest. However, _Approach 4_ is the most intuitive and easiest to compute and can be used for further optimization, but it has the highest error, especially for natural or fast-varying surfaces.
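A minimal sketch of _Circular Approximation-II_ for a height field \(z=f(x,y)\) sampled on a grid is given below. The Gaussian curvature formula for a graph surface, \(K=(f_{xx}f_{yy}-f_{xy}^{2})/(1+f_{x}^{2}+f_{y}^{2})^{2}\), and the choice \(m=4\) are assumptions for illustration; \(m\) would be tuned experimentally as noted above.

```python
import numpy as np

def gaussian_curvature(Z, h):
    """Gaussian curvature of a height field z = f(x, y) on a square grid of
    spacing h: K = (f_xx*f_yy - f_xy^2) / (1 + f_x^2 + f_y^2)^2."""
    fy, fx = np.gradient(Z, h)        # rows = y, columns = x
    fxy, fxx = np.gradient(fx, h)
    fyy, _ = np.gradient(fy, h)
    return (fxx * fyy - fxy**2) / (1.0 + fx**2 + fy**2) ** 2

def circular_approx_II(Z, h, d, eps_deg, m=4.0):
    """Per-point boundary radius of Eq. 24, shrinking the planar radius
    R = d*tan(eps) (Eq. 22) according to the normalised |K| (Eq. 23)."""
    K = np.abs(gaussian_curvature(Z, h))
    R = d * np.tan(np.radians(eps_deg))
    return R - (K / K.max()) * R * (1.0 - 1.0 / m)

# Example on the surface of figure 7, f(x, y) = cos^2(x) + cos^2(y):
x = y = np.linspace(-np.pi, np.pi, 201)
X, Y = np.meshgrid(x, y)
r = circular_approx_II(np.cos(X)**2 + np.cos(Y)**2, x[1] - x[0],
                       d=10.0, eps_deg=10.0)
```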
## 6 Conclusion and Future Work Orthographic imaging is a crucial tool for terrain survey and terrain mapping. Although wide technological improvements have been made in the devices that capture visual or other sensor data of a surface, a proper and efficient algorithm for reconstructing the surface topography and creating an orthographic projection of the terrain is lacking. This paper studies and analyzes this problem and proposes novel methods for solving it. A technique for generating topographical surfaces from elevation maps has been proposed. A detailed study of imaging surfaces and bounds on the imaging height has been presented. The effects of imaging height and angular field of view on capturing orthographic views have been formulated and analyzed in detail. A novel method for calculating orthographic boundaries has been proposed and demonstrated. Different methods of approximating the orthographic boundaries have been proposed and compared. As a future extension of this study, better approximations of orthographic boundaries should be explored, together with how and by how much the results are affected by such approximations. Methods for computing orthographic views by combining visual data with data from non-visual sensors can be explored and incorporated. Faster algorithms for real-time computation of orthographic boundaries should be explored for large-scale computation problems. The approximation can be used for computing the optimal number of capture points to cover the whole surface with minimum overlap among them.
2307.06747
Helicon waves in a converging-diverging magnetoplasma
Waves propagating along a converging-diverging rf magnetoplasma having the characteristics of a bounded m=0 helicon mode are reported and characterised. The discharge features a 30 cm separation between the region of radiofrequency energy deposition by a single loop antenna and the region of maximum magnetic field applied by a pair of coils. With 200 W of rf input power, the resulting plasma exhibits a strong axial plasma density gradient peaking at the magnetic mirror throat where an Ar II blue-core is observed. Two dimensional B-dot probe measurements show that the rf magnetic fields are closely guided by the converging-diverging geometry. The wave is characterised as a m=0 mode satisfying the helicon dispersion relation on-axis with radial boundary conditions approximately matching the radii of the plasma column. Analysis of the wave phase velocity and wave axial damping failed to identify collisionless or collisional wave-plasma coupling mechanisms. Instead, the wave axial amplitude variations can be explained by local wave resonances and possible reflections from localised rapid changes of the refractive index. A Venturi-like effect owing to the funnel-shaped magnetoplasma and conservation of the wave energy may also explain some level of amplitude variations.
Félicien Filleul, Antonella Caldarelli, Kazunori Takahashi, Rod Boswell, Christine Charles, John Cater, Nicholas Rattenbury
2023-07-13T13:38:01Z
http://arxiv.org/abs/2307.06747v2
# Helicon waves in a converging-diverging magnetoplasma ###### Abstract Waves propagating along a converging-diverging rf magnetoplasma having the characteristics of a bounded \(m=0\) helicon mode are reported and characterised. The discharge features a 30 cm separation between the region of radiofrequency energy deposition by a single loop antenna and the region of maximum magnetic field applied by a pair of coils. With 200 W of rf input power, the resulting plasma exhibits a strong axial plasma density gradient peaking at the magnetic mirror throat where an Ar II blue-core is observed. Two dimensional B-dot probe measurements show that the rf magnetic fields are closely guided by the converging-diverging geometry. The wave is characterised as a \(m=0\) mode satisfying the helicon dispersion relation on-axis with radial boundary conditions approximately matching the radii of the plasma column. Analysis of the wave phase velocity and wave axial damping failed to identify collisionless or collisional wave-plasma coupling mechanisms. Instead, the wave axial amplitude variations can be explained by local wave resonances and possible reflections from localised rapid changes of the refractive index. A Venturi-like effect owing to the funnel-shaped magnetoplasma and conservation of the wave energy may also explain some level of amplitude variations. + Footnote †: _Plasma Sources Sci. Technol._ _Keywords_: Helicon waves, blue-core, wave-plasma coupling, magnetic funnel ## 1 Introduction The high degree of plasma ionisation associated with helicon modes is desirable for a variety of applications, from space plasma propulsion to power generation, semiconductor manufacturing and fundamental plasma physics [1, 2, 3, 4, 5]. The coupling mechanisms of helicon waves have been debated since their first association with efficient high density plasma generation (\(\geq 10^{12}\) cm\({}^{-3}\)) in the 1960s, and this question remains a dynamic field of research today [6, 7, 8]. The well-known helicon dispersion relation shows that in general the wavelength is proportional to the ratio of the intensity of the applied magnetic field B\({}_{0}\) over the plasma density [9]. It also appears that discharges in which the contribution of helicon waves has been verified were operating in regimes for which several wavelengths could fit within the plasma characteristic dimensions [10, 11, 12, 13]. These and other studies have observed the collisionless and/or collisional contributions of helicon waves to plasma generation in two general categories of operating conditions. The first occurs for moderately high densities (\(10^{11}-10^{12}\) cm\({}^{-3}\)) for which the dominant coupling mechanisms appear to be collisionless [10, 14, 11]. At the radiofrequencies commonly used, these densities require applied magnetic fields B\({}_{0}<\) 100 G in order to fit several wavelengths within the discharge. Under these conditions, it was found that the plasma density is maximised when the helicon wave phase velocity is of the order of the electron thermal speed and/or the speed of electrons most likely to ionise, e.g. \(1-3\times 10^{8}\) cm s\({}^{-1}\) for a 3 eV Maxwellian electron population [15, 10]. In this wave mode, the plasma density is seen to increase as \(n_{0}\propto\mathrm{P_{rf}^{x}}\) with \(x>1\) [10]. This is likely owing to electrons being trapped and accelerated by the helicon waves' axial electric field [15, 10, 16, 17].
The second category of discharges typically concerns densities \(\sim 10^{13}\) cm\({}^{-3}\) in which electron-neutral and electron-ion collisions alone can explain most of the wave energy deposition [18, 12]. At these densities, higher magnetic fields of B\({}_{0}>\) 300 G can be used without excessively increasing the wavelength, and the higher densities could also partially result from reduced plasma-wall losses. Moreover, in favourable experimental configurations, the beating of standing helicon waves could also increase the wave-electron coupling [19, 13, 20, 21]. Finally, Trivelpiece-Gould (TG) waves are often regarded as yet another channel for power deposition of bounded whistler modes [22, 23, 24]. However, the typically extremely short wavelengths of TG waves make it particularly challenging to experimentally verify their presence when B\({}_{0}>\) 100 G. The high damping of the TG waves also rules them out as potential contributors to rf power deposition within the plasma core, which is one of the desirable features of helicon waves since it allows them to overcome the limited penetration of the rf fields of inductively coupled discharges. The observation of a blue-core or of step density changes is often interpreted as a sign of wave-driven modes. However, observations seem to indicate that these features are neither sufficient nor necessary conditions for wave-heated regimes [25, 26, 8]. In particular, an experiment using a double-saddle antenna at 13.56 MHz in a strong magnetic mirror configuration has generated blue-core plasmas of densities \(\geq 10^{12}\) cm\({}^{-3}\) at moderate rf powers while no helicon waves were detected [26]. This motivated conducting wave measurements with a B-dot probe in an experiment reproducing all parameters of the former experiment except for the type of rf antenna and operating frequency, i.e. using a single-loop antenna at 27.12 MHz. While the new experiment closely reproduced the plasma parameters obtained in the former [27], rf magnetic waves identified as \(m=0\) helicon modes were detected with the B-dot probe. These measurements are presented and characterised here. The main purpose of this study is to identify the nature of the rf magnetic waves with known wave theory and to identify potential wave contributions to the plasma generation. The work is organised as follows: the background of bounded whistler waves is summarised in section 2 and the experimental apparatus and diagnostics are described in section 3. The experimental data is presented and analysed in the scope of cold plasma wave theory in section 4. Possible coupling mechanisms and explanations for the observed wave damping are considered in section 5. ## 2 Background The purpose of this section is to summarise the notions of plasma wave theory which are necessary to interpret the data of this study. ### Dispersion relations In the limit of a homogeneous magnetised infinite cold-plasma, the Fourier transformed wave equation for a plasma wave electric field \(\mathbf{E}\) of angular frequency \(\omega\) and wavevector \(\mathbf{k}\) is [28] \[\mathbf{k}\times(\mathbf{k}\times\mathbf{E})+\frac{\omega^{2}}{c^{2}}\epsilon \cdot\mathbf{E}=\mathbf{T}\cdot\mathbf{E}=0\, \tag{1}\] with \(c\) the speed of light in vacuum and \(\epsilon\) the cold-plasma dielectric tensor.
The general cold-plasma dispersion relation for plasma waves propagating at an angle \(\theta\) to the applied magnetic field \(\mathbf{B_{0}}\) is found by taking the determinant of the tensor \(\mathbf{T}\) and writes \[\mathrm{A}n^{4}-\mathrm{B}n^{2}+\mathrm{C}=0\, \tag{2}\] where \(n=\frac{|\mathbf{k}|c}{\omega}\) is the complex index of refraction, and A, B and C are terms combining the ion and electron cyclotron frequencies (\(\omega_{\rm ci}\), \(\omega_{\rm ce}\)), the plasma frequencies (\(\omega_{\rm pi}\), \(\omega_{\rm pe}\)) as well as the sine and cosine of \(\theta\) [28]. In what follows, \(k=|\mathbf{k}|\) is the modulus of the wavevector, i.e. the wavenumber. Neglecting the ion mass and restricting the wave frequency to \(\omega_{\rm ci}\ll\omega\leq\omega_{\rm ce}\ll\omega_{\rm pe}\), equation 2 reduces to the dispersion relation of whistler waves [29] \[\frac{k^{2}c^{2}}{\omega^{2}}=\frac{\omega_{\rm pe}^{2}}{\omega\left(\omega_{ \rm ce}\cos\theta-\omega-i\nu_{\rm eff}\right)}\, \tag{3}\] where \(\nu_{\rm eff}\) is the effective electron collision frequency, which is only relevant when \(\nu_{\rm eff}/\omega>0.1\). When \(\omega\ll\omega_{\rm ce}\cos\theta\) and in the collisionless limit (\(\nu_{\rm eff}/\omega\ll 1\)), equation 3 simplifies to \[\frac{k^{2}c^{2}}{\omega^{2}}=\frac{\omega_{\rm pe}^{2}}{\omega\omega_{\rm ce }\cos\theta}\Longleftrightarrow k_{\parallel}k=\frac{e\mu_{0}n_{\rm e}\omega}{\rm B_{0}}. \tag{4}\] Here, \(k_{\parallel}=k\cos\theta\) is the wavenumber component parallel to \(\mathbf{B_{0}}\) (while \(k_{\perp}=k\sin\theta\)), \(e\) is the elementary charge, \(\mu_{0}\) the vacuum permeability and \(n_{\rm e}\) the electron density. From equation 4, it can be seen that in this limit the electron inertia is not taken into account. If \(k_{\perp}\gg k_{\parallel}\), the wave collision absorption length \(\delta_{\parallel}\), resulting from the imaginary part of equation 4 modified to include collisions, writes [30] \[\delta_{\parallel}=\frac{\omega_{\rm ce}}{k_{\perp}\,\nu_{\rm eff}}. \tag{5}\] Figure 1 shows equation 3 and equation 4 for conditions B\({}_{0}=150\) G, \(n_{\rm e}=5\times 10^{11}\) cm\({}^{-3}\) and a 27.12 MHz wave (\(\omega/\omega_{\rm ce}\simeq 0.06\)). It can be seen that the whistler dispersion relation has two branches for small and large \(k_{\perp}\). For small \(k_{\perp}\), equation 4 approximates equation 3 well, the wave propagates nearly parallel to the field and is electromagnetic in nature [23]. For large \(k_{\perp}\), the electron inertia effects need to be taken into account and equation 3 asymptotically approaches a straight line characterised by the angle \(\cos\theta_{\rm res}=\omega/\omega_{\rm ce}\) for which the whistler wave refractive index goes to infinity [31]. This angle is known as the phase velocity resonance angle, beyond which the whistler wave is evanescent. Whistler modes are therefore bound to propagate within a resonance cone whose main axis is along \(\mathbf{B_{0}}\) and whose half-angle is \(\theta_{\rm res}\). For \(\theta\) approaching \(\theta_{\rm res}\) (typically for \(\omega>0.5\omega_{\rm ce}\)), the wave is purely electrostatic and corresponds to the electron cyclotron wave [32]. Since this study treats bounded, non-uniform magnetised plasmas, the infinite plane wave concepts introduced so far are expected to only provide an approximate quantification of the measured waves' properties. A model describing the inhomogeneous plasma density and magnetic field is beyond the scope of this work. Taking into account the effects of the limited spatial extent of the plasma can however greatly improve the model.
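As a quick numerical illustration of these relations, the sketch below evaluates the collisionless form of equation 3 and the simplified relation of equation 4 for the conditions of figure 1; both curves are conveniently parametrised by the propagation angle \(\theta<\theta_{\rm res}\). This is a minimal sketch, with SI constants hard-coded for brevity.

```python
import numpy as np

e, me, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

# Conditions of figure 1: B0 = 150 G, ne = 5e11 cm^-3, f = 27.12 MHz
B0, ne, f = 150e-4, 5e11 * 1e6, 27.12e6
w = 2.0 * np.pi * f
wce = e * B0 / me
wpe = np.sqrt(ne * e**2 / (eps0 * me))

theta_res = np.arccos(w / wce)                   # resonance-cone half-angle
theta = np.linspace(0.0, 0.999 * theta_res, 500)

# Collisionless equation 3: k^2 = wpe^2 * w / (c^2 * (wce*cos(theta) - w))
k3 = np.sqrt(wpe**2 * w / (c**2 * (wce * np.cos(theta) - w)))
kperp3, kpar3 = k3 * np.sin(theta), k3 * np.cos(theta)

# Equation 4 (no electron inertia): k_par * k = e*mu0*ne*w / B0
alpha = wpe**2 * w / (c**2 * wce)                # equals e*mu0*ne*w / B0
k4 = np.sqrt(alpha / np.cos(theta))
kperp4, kpar4 = k4 * np.sin(theta), k4 * np.cos(theta)

print(f"w/wce = {w / wce:.3f}, theta_res = {np.degrees(theta_res):.1f} deg")
```

Plotting `kpar` against `kperp` for the two sets reproduces the two branches of figure 1: the curves agree at small \(k_{\perp}\) and diverge as \(\theta\) approaches \(\theta_{\rm res}\), where electron inertia matters.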
### Boundary conditions With the inclusion of boundary conditions, the free-space whistler waves develop cavity eigenmodes, namely helicon waves and Trivelpiece-Gould (TG) waves [9, 29, 23]. Equation 4 is often called the simplified helicon dispersion relation. TG waves can be understood as the result of constructive interference of reflections of the whistler wave resonance cones from the boundaries (plasma edges or physical surfaces) [23]. As illustrated in figure 1, TG waves have very short wavelengths, making them challenging to characterise experimentally [24, 31]. For helicon waves, some studies have found that \(k_{\parallel}\) takes discrete values determined by the cylindrical antenna length or the vacuum chamber dimensions [19], while others have observed a continuous \(k_{\parallel}\) spectrum [13]. When a single loop antenna is used, \(k_{\parallel}\) is expected to have a continuous spectrum since the antenna has an ill-defined axial extent. From Ohm's law and for waves of the form \(\sim\exp\left(i(kz-\omega t+m\theta)\right)\), the magnetic components of the bounded helicon wave \(\mathbf{B}\) can be expressed as Bessel functions [7]. For example, the component of \(\mathbf{B}\) parallel to \(\mathbf{B_{0}}\) is proportional to J\({}_{m}(rk_{\perp})\), with J\({}_{m}\) the \(m^{th}\) order Bessel function of the first kind, and \(m\) the helicon wave azimuthal mode number. For bounded helicon waves satisfying the boundary conditions inside a cylinder of radius \(r=r_{0}\), \(k_{\perp}\) obeys \[mkJ_{m}(k_{\perp}r_{0})+k_{\parallel}J^{\prime}_{m}(k_{\perp}r_{0})=0\, \tag{6}\] from which \(k_{\perp}\) can be deduced depending on the azimuthal mode number, e.g. \(k_{\perp}=3.83/r_{0}\) for the \(m=0\) azimuthal mode. Therefore, with \(r_{0}\), \(n_{\rm e}\) and B\({}_{0}\), equation 4 and equation 6 can be used to compute the axial wavenumber \(k_{\parallel}\) of the helicon wave for a given azimuthal mode \(m\). In axially and radially bounded systems, density step increases have been observed for constant B\({}_{0}\) and increasing rf power, owing to resonant cavity modes resulting in discrete values of \(k_{\parallel}\) set by the system's characteristic dimensions [19]. The helicon axial wavelength and phase velocity can then be calculated as \(\lambda_{\parallel}=2\pi/k_{\parallel}\) and \(v_{\phi}=f\lambda_{\parallel}\), respectively (with \(f=\omega/2\pi\)). The group velocity \(v_{\mathrm{g}}=\partial\omega/\partial\mathbf{k}\) can be calculated from the dispersion relations.

Figure 1: The dispersion relations of equation 3 (continuous line), equation 4 (dotted line) and the electrostatic limit (dashed line). The conditions are B\({}_{0}=150\) G, \(n_{\rm e}=5\times 10^{11}\) cm\({}^{-3}\) and \(f=27.12\) MHz, for which \(\omega/\omega_{\rm ce}\simeq 0.06\).

## 3 Experimental arrangement and diagnostics ### Apparatus The measurements were performed in a linear plasma device illustrated in figure 2. The active region of the apparatus consists of a 150 cm long, 9 cm inner diameter borosilicate glass tube connected to stainless steel vacuum chambers onto which vacuum pumps and gauges are installed to reach a base pressure of \(\sim 10^{-7}\) Torr. Argon is injected from the chamber onto which the turbo pump inlet is mounted in order to minimise axial neutral pressure gradients [27].
The argon working pressure ranging from 0.1 to 10 mTorr is set with a mass-flow controller. The rf antenna is a 1-1/3 turn loop antenna wound around the glass tube. RF power is delivered from a variable frequency RF generator through an L-type matching network made from two vacuum capacitors. The working frequency is set to 27.12 MHz and the RF reflected power kept \(\leq\) 1% at all times. The antenna centre marks the origin of the (\(r\),\(\phi\),\(z\)) laboratory reference frame used throughout this work. The magnetic field is applied by a pair of movable Helmholtz coils placed concentrically around the glass tube. The solenoids' position \(z_{\mathrm{B}}\) is kept fixed, so as to place the magnetic mirror throat 30 cm away from the antenna. The solenoids produce a peak magnetic field strength B\({}_{0}\) of 25 G/A on-axis (see figure 2 (b)), setting a magnetic mirror ratio of R\({}_{\mathrm{m}}\) = 6.86 from the antenna location to the throat, as shown in figure 3. A magnetic surface of interest is the funnel-shaped one delimited by the most radial streamlines to intersect the antenna plane (the white continuous lines in figure 3). ### Plasma diagnostics The in-situ probes used in this study are mounted at the extremity of a 1.5 m long, 1/4" steel shaft encapsulated in a glass tube in order to ensure the continuity of the dielectric plasma boundary [27]. This arrangement is shown in figure 2 (a). The shaft slides at the bottom of the glass tube and can be rotated around \(\phi\) in order for the probe tips to effectively reach every location inside the apparatus, owing to its axial symmetry. A planar Langmuir probe and an rf compensated cylindrical Langmuir probe (LP) have been used to measure the ion density from the ion saturation method and the electron temperature from the Druyvesteyn method, as previously described in detail [27, 33]. The B-dot probe is made out of 6 loops of 0.2 mm copper wire forming a single coil of 4 mm diameter [34]. The coil is mounted at the extremity of a ceramic probe holder so as to measure time-varying magnetic fields along the \(\hat{\mathbf{z}}\) direction. A 6 mm outer diameter borosilicate glass enclosure is placed around the coil to protect it from direct plasma exposure. The coil's twisted pair leads then run along the probe shaft which acts as a coaxial shield. The leads are connected to a hybrid combiner which suppresses common-mode signals associated with electrostatic pick-up and preserves the differential magnetic signal [35]. The common-mode rejection of the hybrid coupler was tested and found to be close to 98%, i.e. when a common-mode 27.12 MHz signal is picked up, \(\sim\) 2% of the signal leaked as a differential signal. A second B-dot probe and hybrid combiner are used to measure the rf field of the loop antenna in atmosphere to provide a phase reference for the mobile B-dot's signal. The outputs of the hybrid combiners are recorded and digitised by a 200 MHz bandwidth oscilloscope and Fast-Fourier Transform (FFT) post-processed to filter out harmonics and extract the 27.12 MHz magnetic rf wave amplitude and phase (a sketch of this extraction is given below). Argon I and II emissions at 750.4 nm and 488 nm are measured with a previously described arrangement made out of two 10 nm narrow-band-pass filters and a calibrated CMOS sensor [36].
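The FFT post-processing step mentioned above can be sketched as a single-bin projection of each digitised trace onto the 27.12 MHz tone, which both rejects harmonics and yields the amplitude and the phase relative to the reference B-dot. This is a minimal sketch; the sampling rate and trace names are illustrative assumptions (a single-bin projection is exact only for an integer number of rf periods in the record).

```python
import numpy as np

def tone_amp_phase(trace, fs, f0=27.12e6):
    """Amplitude and phase (deg) of the f0 component of a digitised trace,
    obtained by projecting onto a complex exponential (single-bin DFT)."""
    t = np.arange(len(trace)) / fs
    z = 2.0 * np.mean(trace * np.exp(-1j * 2.0 * np.pi * f0 * t))
    return np.abs(z), np.degrees(np.angle(z))

# Hypothetical usage with a mobile-probe trace and the antenna reference:
# amp, ph = tone_amp_phase(probe_trace, fs=1.0e9)
# amp_ref, ph_ref = tone_amp_phase(reference_trace, fs=1.0e9)
# delta_phi = ph - ph_ref      # phase shift used in section 4
```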
A feature of optical emission spectroscopy (OES) is that the intensity of emission lines can be interpreted in terms of the density and temperature of the particles contributing to their excitation. The emission rate coefficients can be determined from the so-called corona model or from a collisional radiative model, depending on the relevant density range [37]. Ar I emission at 750.4 nm is excited by electron-neutral impact from the ground state and its intensity can be modelled as \[\mathrm{I_{750nm}}\propto\mathrm{K_{750nm}}(\mathrm{T_{e}})n_{\mathrm{e}}n_{ \mathrm{g}}\,, \tag{7}\] where K\({}_{750\mathrm{nm}}\) is the emission rate coefficient and \(n_{\mathrm{g}}\) the neutral argon density [38]. The 488 nm Ar II line is preferentially excited by electron-ion interaction [39] \[\mathrm{I_{488nm}}\propto\mathrm{K_{488nm}}(\mathrm{T_{e}})n_{\mathrm{e}}n_{ \mathrm{i}}\,, \tag{8}\] with K\({}_{488\mathrm{nm}}\) the corresponding emission rate coefficient. From quasi-neutrality, it follows that \(\mathrm{I_{488nm}}\propto\mathrm{K_{488nm}}(\mathrm{T_{e}})n_{\mathrm{e}}^{2}\). In this study the filtered CMOS sensor was placed on a viewport at \(z=-140\) cm such that the recorded intensities are integrated along the plasma column axial line of sight. Finally, an _Octiv_ I-V probe placed in-line between the matching box and the loop antenna is used to measure the plasma resistance R\({}_{\mathrm{p}}\) from the current, voltage and respective phase at the antenna terminal. The circuit resistance \(\mathrm{R}_{\mathrm{ant}}=0.34\ \Omega\) was measured when operating the apparatus with no plasma, to allow calculation of the power coupling efficiency \(\eta\) \[\eta=\frac{\mathrm{R}_{\mathrm{p}}}{\mathrm{R}_{\mathrm{ant}}+\mathrm{R}_{ \mathrm{p}}}\;. \tag{9}\] ## 4 Results For continuity with previous studies conducted in similar apparatuses, the rf power, argon pressure and solenoids position were kept fixed at 200 W, 1 mTorr (0.13 Pa) and \(z_{\mathrm{B}}=30\) cm, respectively, throughout this work. Only the applied magnetic field intensity \(\mathrm{B}_{0}\) was varied and its impact on the plasma and rf magnetic waves characterised. The properties of the plasma discharge are summarised here to allow the comparison between the measured wave features and the dispersion relations. For now, the plasma is assumed to be a medium carrying the waves and the focus is on the characterisation of the waves propagating across the converging-diverging magnetic field. Possible wave-plasma coupling mechanisms will be discussed in the following section. ### Plasma discharge characteristics The plasma resulting from the experimental conditions was first characterised in [26] with an apparatus of identical dimensions and configuration to the one used for this study, but employing a double-saddle antenna at 13.56 MHz.

Figure 2: Sketch of the experimental apparatus in the configuration used in this study, i.e. with \(z_{\mathrm{B}}=30\) cm and the 1-1/3 turn loop antenna at \(z=0\) cm (a). Magnetic field strength on-axis \(\mathrm{B}_{0}\) measured with a Hall effect probe (circles) and compared with an analytical model (curve) (b).

Figure 3: 2D contours of \(\mathrm{B}_{0}\) such that \(\mathrm{B}_{0}=300\ \mathrm{G}\) at \((r,z)=(0,30)\) cm. The two continuous lines represent the most radial magnetic streamlines to cross the antenna (orange half-dots), such as to intersect the glass tube (turquoise rectangles) inner surface at \(z=0\) cm. \(\theta_{\mathrm{B_{0}}}\) is defined as the angle between \(\mathbf{B_{0}}\) and \(\hat{\mathbf{z}}\).

With the antenna located upstream
of the magnetic mirror, a converging-diverging plasma column is obtained whose density peaks under the solenoids when B\({}_{0}\) is greater than a threshold value. For B\({}_{0}\) below this threshold, the axial density profile is bimodal with one maximum under the antenna and one under the solenoids. It was shown that these features and the absolute values of the plasma parameters were closely maintained when changing the antenna and radiofrequency to a single-loop copper wire antenna driven at 27.12 MHz [27]. As suggested in [26], the transition from double to single peaked density profiles is highly correlated with the level of ion magnetisation under the antenna [36]. For the present case of \(z_{\rm B}=\) 30 cm, the transition was estimated to occur for B\({}_{0}\simeq\) 250 G and was correlated with anisotropic charging of the glass tube under the antenna [36]. Figure 4 shows the single peaked axial ion density profiles for B\({}_{0}=\) 300 G, 600 G and 900 G. The measured standard deviations on the ion saturation currents and a typical error on the electron temperature of 0.5 eV are used to propagate the uncertainties and compute the error bars. The profiles are roughly symmetrical with respect to the solenoids (dashed vertical line) and the maximum density monotonically increases under the solenoids while the density under the antenna (\(z=\) 0 cm, vertical dotted line) remains approximately constant. As previously observed, this increase under the solenoids is well explained by the reduction of plasma losses to the walls, as the maximum density approaches the value expected from the geometrical reduction of the magnetic field cross-section (see figure 3) [26, 27]. This convergence can be further noted in figure 5 from the curve showing the ion density measured with the Langmuir probe under the solenoids at \(z=\) 30 cm for increasing B\({}_{0}\). Not shown here, the electron temperature measured on-axis under the solenoids was observed to be approximately unchanged by the change in B\({}_{0}\).

Figure 4: Axial plasma density measured with the LP for increasing B\({}_{0}\). The dashed vertical line marks the position of the centre of the solenoids and the dotted one the position of the antenna.

Figure 5: Left-axis: ion density under the solenoids on-axis (\(z=\) 30 cm) measured with the LP (star markers) and the corresponding axial average density (continuous line) for increasing B\({}_{0}\). The square-root of the axially integrated Ar II line emission (blue dot markers) is scaled to the axially averaged \(n_{\rm i}\) at 300 G. Right-axis: measured axial average wavelengths \(\lambda_{\parallel}\) (square markers). The error bars are of the same nature as in figure 4.

Figure 6: Power transfer efficiency \(\eta\) (left-axis) and ratio of Ar II over Ar I intensities along the centreline of the plasma column (right-axis) for increasing B\({}_{0}\).

In what follows, the wave features are analysed from both the local and macroscopic (axially averaged) perspectives. Since the medium is non-uniform, the measured macroscopic axial wavelengths are best interpreted with dispersion relations using the axially averaged plasma densities and magnetic fields. Unfortunately, the axial plasma density profiles were only measured with the LP for the B\({}_{0}\) cases in figure 4. To remedy this, the axially integrated Ar II intensities I\({}_{488\mathrm{nm}}\) were recorded for all considered values of B\({}_{0}\).
From equation 8, taking isothermal electrons, the Ar II intensity scales quadratically with \(n_{\mathrm{i}}\). As such, the square-root of I\({}_{488\mathrm{nm}}\) should be a good qualitative estimate of \(<n_{\mathrm{i}}>\), the plasma density axial average. The absolute value of \(<n_{\mathrm{i}}>\) at 300 G is computed from figure 4 and compared to the respective value of \(\sqrt{I_{488\mathrm{nm}}}\) to obtain a scaling factor. This factor is applied to all B\({}_{0}\) cases and the optically determined values of \(<n_{\mathrm{i}}>\) are plotted in figure 5 as a function of B\({}_{0}\) (dot markers); a sketch of this scaling is given at the end of this subsection. To validate this approach, the trend of \(<n_{\mathrm{i}}>\) deduced from \(\sqrt{I_{488\mathrm{nm}}}\) for increasing B\({}_{0}\) is compared with the same trend of \(<n_{\mathrm{i}}>\) estimated from the Langmuir probe data only. The LP \(<n_{\mathrm{i}}>\) values for all B\({}_{0}\) cases are computed by scaling the on-axis LP measured density \(n_{\mathrm{i}}\) at \(z=30\) cm from figure 5 (star markers) to the axial average values \(<n_{\mathrm{i}}>\) calculated from the three B\({}_{0}\) cases of figure 4. This produces the continuous line in figure 5. The good agreement between the OES \(<n_{\mathrm{i}}>\) and the Langmuir probe \(<n_{\mathrm{i}}>\) gives confidence in the OES deduced \(<n_{\mathrm{i}}>\) values. Along with the increase in plasma density, the ratio of I\({}_{488\mathrm{nm}}\) to the 750 nm emission line I\({}_{750\mathrm{nm}}\) along the apparatus centreline is also seen to monotonically increase with B\({}_{0}\), as shown in figure 6. This trend is not only the result of the increasing I\({}_{488\mathrm{nm}}\) with B\({}_{0}\), which appears to plateau for high values of B\({}_{0}\), but also of the decreasing I\({}_{750\mathrm{nm}}\) on-axis, likely owing to neutral depletion [40]. Indeed, assuming neutrals at room temperature, the ionisation fraction is already \(\sim 2.4\%\) for B\({}_{0}=300\) G. This means that the neutral density in the centre could amount to 20% of the neutral density at the edge of the column [30]. At 400 G, I\({}_{488\mathrm{nm}}\) roughly equals a quarter of I\({}_{750\mathrm{nm}}\) and a blue-core is observed. Finally, figure 6 shows the antenna-plasma coupling efficiency \(\eta\) for increasing B\({}_{0}\), calculated from equation 9. Above B\({}_{0}=300\) G the coupling efficiency is seen to stabilise at \(\sim 85\%\). This shows that beyond 300 G, a constant amount of power gets deposited into the plasma from the antenna and that the change in density is likely not the result of a mode transition but rather of reduced losses from improved magnetic confinement. While it is expected for the density to increase with B\({}_{0}\), the mechanisms enabling it are still not fully understood. A simple consideration can show that the ions under the solenoids and downstream are the result of non-local ionisation, i.e., ionisation taking place elsewhere than under the antenna. The Ar II 488 nm excited state lifetime of 8.5 ns [41] is 5 orders of magnitude smaller than the time it would take an ion to travel the distance between the antenna and the solenoids (on the order of a few milliseconds for room temperature ions). The ion-neutral mean-free path of \(\sim 1.2\) cm at 1 mTorr further slows the ion diffusion down. Therefore, ions must be non-locally excited by electrons with energy of at least 20 eV [41].
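The OES-based density estimate described above amounts to one multiplicative calibration. A minimal sketch, with hypothetical array names and a placeholder calibration value, is:

```python
import numpy as np

def ni_avg_from_oes(I488, B0_values, ni_avg_cal, B0_cal=300):
    """Axially averaged density from the Ar II line: <n_i> ~ sqrt(I_488nm)
    (equation 8, isothermal electrons), scaled to the LP-derived axial
    average at the calibration field B0_cal."""
    root = np.sqrt(np.asarray(I488, dtype=float))
    scale = ni_avg_cal / root[list(B0_values).index(B0_cal)]
    return scale * root

# Hypothetical usage: I488 recorded for each B0, LP average known at 300 G.
# ni_avg = ni_avg_from_oes(I488, B0_values=[150, 300, 450, 600, 750, 900],
#                          ni_avg_cal=3.0e11)   # cm^-3, placeholder value
```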
### Spatio-temporal wave features #### 4.2.1 Two dimensional characteristics The B-dot probe was incrementally moved by 2 cm steps in \(\mathbf{\hat{z}}\) and by \(10^{\circ}\) in \(\mathbf{\hat{\phi}}\), to obtain time-resolved 2D features of B\({}_{\parallel}\). Due to the eccentric placement of the probe shaft, a \(10^{\circ}\) increment equals a \(\simeq 6\) mm step of the probe tip in the \(\pm\hat{\mathbf{r}}\) direction. The rf magnetic pick-up coil of the B-dot probe is always oriented with its axis along the laboratory frame's \(\mathbf{\hat{z}}\) direction. The converging-diverging \(\mathbf{B_{0}}\) streamlines make a maximum angle of \(\theta_{\mathrm{B_{0}}}=\pm 12^{\circ}\) with respect to \(\mathbf{\hat{z}}\) within the glass-tube volume of interest. As a result, at most 2.2% of the amplitude of the rf magnetic field measured by the B-dot probe also comprises a B\({}_{\perp}\) component. This was corrected by multiplying the measured signals by the local value of \(\cos\theta_{\mathrm{B_{0}}}\). On-axis, B\({}_{\parallel}\) is purely along \(\mathbf{\hat{z}}\). The 2D measurements were carried out for three magnetic field intensities: B\({}_{0}=150\) G, 300 G and 600 G. The resulting maximum amplitudes of B\({}_{\parallel}\) are shown in figure 7. The region corresponding to \(z<6\) cm was not plotted in order to increase the contrast of the wave features. The magnitude of the antenna rf magnetic field indeed dominates the spontaneous rf wave field under the antenna (as can be seen in figure 10). For the three values of B\({}_{0}\), the rf magnetic waves were picked up by the B-dot probe out to between \(z=50\) cm and \(z=70\) cm. A few wave features are of interest. The first is the waveguide-like effect of the magnetised plasma on the spontaneous rf waves. This is probably mainly the result of the well-known ability of whistler waves to propagate along the magnetic streamlines. Outside of the funnel-shaped volume delimited by the most radial streamlines, the plasma density is much lower than within the funnel, and the appropriate wave dispersion relation might not be locally satisfied. As such, the wave might be evanescent outside of the funnel-shaped volume, explaining the wave amplitude nearing zero at these loci in figure 7. The second interesting feature is that the overall appearance of the 2D profiles of B\({}_{\parallel}\) closely resembles that of an \(m=0\) helicon mode [31, 42]. In particular, the radial profiles of B\({}_{\parallel}\) are well fitted by Bessel functions of the first kind, J\({}_{0}(k_{\perp}r)\). This is shown in figure 9 (a) at \(z=20\) cm for the B\({}_{0}=150\) G case. From the boundary condition equation 6 for a \(m=0\) azimuthal mode, it can be found that \(k_{\perp}=3.83/r_{0}\), where \(r_{0}\) is the radial coordinate of the plasma boundary [43]. As such, \(r_{0}\) is used as a free parameter in the fittings of J\({}_{0}(k_{\perp}r)\) to the radial profiles of B\({}_{\parallel}\). From the best fit at each \(z\), axial profiles of \(k_{\perp}\) can be obtained and will be used later.

Figure 7: Two dimensional profiles of \(|\mathrm{B}_{\parallel}|\) for \(\mathrm{B}_{0}=150\) G (a), \(\mathrm{B}_{0}=300\) G (b) and \(\mathrm{B}_{0}=600\) G (c). The white lines show the most radial streamlines.

Figure 8: Temporal evolution of \(\mathrm{B}_{\parallel}\) for four quarters of an rf period T (a–d) illustrating the travelling-wave behaviour of \(\mathrm{B}_{z}\) at the conditions of figure 7 (b), i.e. when \(\mathrm{B}_{0}=300\) G. The black star serves as a guide to follow one wavefront.
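The radial fit described above reduces, at each axial position, to a one-parameter least-squares problem. A minimal sketch (hypothetical array names; an amplitude factor is fitted alongside \(r_{0}\)) could read:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j0

def m0_profile(r, r0, A):
    """m = 0 helicon radial profile: B_par(r) = A * J0(k_perp * r),
    with k_perp = 3.83 / r0 from the boundary condition (equation 6)."""
    return A * j0(3.83 * r / r0)

def fit_r0(r, B_par, r0_guess=4.5):
    """Best-fit plasma boundary radius r0 (cm) at one axial position."""
    popt, _ = curve_fit(m0_profile, r, B_par,
                        p0=[r0_guess, np.max(np.abs(B_par))])
    return popt[0]

# Applied at each z, this yields the axial profile r0(z) of figure 9 (b)
# and hence k_perp(z) = 3.83 / r0(z).
```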
As a testament to the soundness of this procedure, the values of \(r_{0}\) giving the best fit were consistent with the funnel shape of the wave profiles, as can be seen in figure 9 (b). \(r_{0}\) is roughly symmetrical with respect to the solenoids up to \(z\simeq 50\) cm, beyond which it increases to values greater than the glass-tube inner radius (4.5 cm). It could be that this behaviour is the result of the wave boundary conditions changing from being a radial density gradient between \(z=10\) cm and \(z=50\) cm, to being the dielectric surface of the glass tube elsewhere. Figure 8 shows the apparent travelling wave feature of the measured B\({}_{\parallel}\) for \(\mathrm{B}_{0}=300\) G by displaying the spatial variation of B\({}_{\parallel}\) at four times of an rf period. The data plotted in this figure is the raw data from the output of the hybrid combiner, i.e. the FFT was not performed. This illustrates that the harmonic content of the measured signal was weak, typically lower than 10%, as can be seen in figure 10. The wavefront can be followed moving along \(\mathbf{\hat{z}}\). From the distance separating two crests, a parallel wavelength of approximately 50 cm can be estimated. #### 4.2.2 Wave axial behaviour. Figure 10 shows the change in the axial \(|\mathrm{B}_{\parallel}|\) profile between operating the plasma device without magnetic field and when \(\mathrm{B}_{0}=300\) G. In both cases, the rf power was set to 200 W. When \(\mathrm{B}_{0}=0\) G, the measured wave profile is solely owing to the antenna's rf induced field and exponentially decays away from \(z=0\) cm. When \(\mathrm{B}_{0}=300\) G, the intensity of the antenna's induced field is reduced as a result of the higher plasma density under the antenna, which causes a smaller rf skin-depth. Moreover, the spontaneous rf magnetic wave can be measured up to \(z=70\) cm in this case. Using the exponential decay rate of the antenna's induced field when \(\mathrm{B}_{0}=0\) G, it is found that the spontaneous wave amplitude dominates the antenna's field for \(z>8\) cm. As such and in what follows, the analysis of the axial features of the spontaneous waves was conducted from \(z=10\) cm to \(z=70\) cm. The evolution of \(|\mathrm{B}_{\parallel}|\) on-axis for increasing values of \(\mathrm{B}_{0}\) is shown in figure 11 (a). The \(\mathrm{B}_{0}=150\) G case stands out as being more strongly damped than the rest, which themselves appear to all behave similarly. The axial plasma density for \(\mathrm{B}_{0}=150\) G being bimodal and of overall much lower amplitude than the profiles for \(\mathrm{B}_{0}\geq 300\) G (see figure 5) is thought to be the cause for the difference in wave damping and will be discussed below. The roughly equal wave amplitude profiles for increasing \(\mathrm{B}_{0}\) would indicate that wave-plasma coupling is not significantly affected by the change of \(\mathrm{B}_{0}\) and the resulting expected increase in wavelength. This hints that the increase in plasma density and in Ar II emission with increasing \(\mathrm{B}_{0}\) might not be correlated with the presence of the wave. Figure 11 (b) shows the phase shift \(\Delta\phi\) resulting from the wave propagating away from \(z=10\) cm, obtained from the FFT processing. Standing wave profiles are usually characterised by \(180^{\circ}\) phase jumps coinciding with local minima of \(|\mathrm{B}_{\parallel}|\) [10, 19].
Since such features are not observed and the phase instead continuously increases, it can be concluded that the measured waves have the dominant features of travelling waves. The fact that for some magnetic field intensities the wave amplitudes feature local minima at \(z=50\) cm is discussed in section 5.

Figure 9: Normalised \(|\mathrm{B}_{\parallel}|\) radial profiles taken from figure 7 (a) at \(z=20\) cm (dot markers) with the fitted Bessel function (a). Fitting parameter \(r_{0}\) versus \(z\) for \(\mathrm{B}_{0}=300\) G (dot markers) and radial coordinates of the most radial streamline (continuous curve) (b). The horizontal line represents the inner surface of the glass-tube.

From figure 11 (b), an estimate of the parallel wavelength \(\lambda_{\parallel}\) can be calculated from the distance it takes the phase to change by 360\({}^{\circ}\), i.e. \[\lambda_{\parallel}=360\frac{\Delta z}{\Delta\phi}\,. \tag{10}\] The values obtained this way from figure 11 (b) are plotted in figure 5 and reported on its right-axis. \(\lambda_{\parallel}\) increases linearly with B\({}_{0}\) and the value for B\({}_{0}=300\) G, \(\lambda_{\parallel}\simeq 52\) cm, matches the visual assessment made in figure 8. Finally, the instantaneous parallel wavenumber can be computed from \(k_{\parallel}=\partial\phi/\partial z\). ### Dispersion relation The local parallel wavelength can be obtained from \(k_{\parallel}\) and is compared with the local value of \(\sqrt{\mathrm{B}_{0}/n_{\mathrm{i}}}\) in figure 12. It can be seen that the two quantities follow a similar trend until \(z\simeq 50\) cm. This indicates that locally the wave seems to follow the simplified helicon dispersion relation, equation 4. Taking the arithmetic mean of the axial local values of \(\lambda_{\parallel}\) obtained this way gives \(\simeq 52\) cm, i.e. similar to the axial macroscopic wavelength found from the phase-shift through equation 10 and the corresponding curve in figure 11 (b). Figure 13 shows the three different dispersion relations, equation 3 (EM+ES), equation 4 (EM) and \(\cos\theta_{\mathrm{res}}=\omega/\omega_{\mathrm{ce}}\) (ES), plotted together with the measured axial macroscopic quantities \(\lambda_{\parallel}\) and \(\sqrt{\mathrm{B}_{0}/n_{\mathrm{i}}}\) for the cases of figure 11. The axially averaged value of \(r_{0}\) obtained from the Bessel function fitting shown in figure 9 (b) gives a macroscopic value of \(k_{\perp}\simeq 3.83/4.1\) cm\({}^{-1}\) used in the three dispersion relation equations. It is observed that the simplified helicon dispersion relation (EM) fits the measured data well, whilst the dispersion relations including the electron inertia (EM+ES and ES) do not follow the same trend as the data. This strongly supports that the rf magnetic waves on-axis are indeed an \(m=0\) helicon mode over the explored range of B\({}_{0}\) values.

Figure 11: Axial amplitude (a) and phase (b) of B\({}_{\parallel}\) for increasing B\({}_{0}\). The error bars are equal to 2% of the electrostatic pick-up not rejected by the hybrid combiner.

Figure 10: Comparison of \(|\mathrm{B}_{\parallel}|\) measured on-axis for B\({}_{0}=0\) G (open circles) and for B\({}_{0}=300\) G (filled circles). The first and second harmonic components of \(|\mathrm{B}_{\parallel}|\) are also shown for the B\({}_{0}=300\) G case.
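The macroscopic and local wavelength estimates of equation 10 and of \(k_{\parallel}=\partial\phi/\partial z\) can be sketched as below (hypothetical array names; the measured phase is assumed in degrees, as in figure 11 (b)):

```python
import numpy as np

def axial_wavelengths(z_cm, phase_deg, f=27.12e6):
    """Macroscopic wavelength from the accumulated phase (equation 10),
    local wavelength from k_par = d(phi)/dz, and the phase velocity."""
    lam_macro = 360.0 * (z_cm[-1] - z_cm[0]) / (phase_deg[-1] - phase_deg[0])
    k_par = np.gradient(np.radians(phase_deg), z_cm)   # rad/cm
    lam_local = 2.0 * np.pi / k_par                    # cm
    v_phi = f * lam_macro                              # cm/s
    return lam_macro, lam_local, v_phi

# e.g. lam ~ 52 cm at B0 = 300 G gives v_phi ~ 1.4e9 cm/s.
```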
## 5 Discussion ### Wave-plasma coupling The rf magnetic wave having been identified as a helicon wave, one can now check whether the damping observed on-axis is owed to collisionless or collisional wave-plasma coupling. Among other studies, [15] and [10] have found strong correlations between enhanced plasma generation and the presence of helicon waves having phase velocities close to the electrons' thermal speed, or to the speed of the electrons most likely to ionise, respectively [17]. The phase velocities \(v_{\phi}\) corresponding to the measured values of \(\lambda_{\parallel}\) in figure 13 are plotted in figure 14 for increasing B\({}_{0}\), and therefore for increasing plasma density. Also plotted is a normalised Maxwellian speed distribution function \(f_{s}(v)\) at 4.75 eV, the axially averaged value of the effective electron temperature obtained from integrating the measured electron-energy probability functions (EEPFs) for B\({}_{0}\) = 300 G and 600 G. The vertical dotted line in figure 14 shows the distribution's thermal speed. The measured values of \(v_{\phi}\) are at least one order of magnitude larger than the thermal speed, thus the wave-electron coupling mechanism detailed in [15] can be ruled out as a potential cause of the wave damping on-axis. Given \(f_{s}(v)\) and the argon ionisation cross-section \(\sigma_{\rm iz}\), the ionisation rate constant is [43] \[\mathrm{K_{iz}}=\int_{0}^{\infty}vf_{s}(v)\sigma_{\rm iz}\,dv\;. \tag{11}\] Figure 14 shows \(\sigma_{\rm iz}\) values taken from [44] alongside the integrand of equation 11. The maximum of this integrand gives the electrons having the speed most likely to cause ionisation for a 4.75 eV Maxwellian population, i.e. \(\sim 2.8\times 10^{8}\) cm s\({}^{-1}\), or 22.4 eV. In figure 14, discharges with higher plasma densities (higher B\({}_{0}\)) yielded higher values of \(v_{\phi}\). This trend took place despite \(v_{\phi}\) moving further away from the speed of electrons most likely to ionise. Moreover, at energies corresponding to the measured values of \(v_{\phi}\) (e.g. \(>\) 376 eV), the electron number densities are expected to be negligible and are unlikely to cause any quantifiable wave damping. These observations indicate that wave-trapping as described in [10] is unlikely to be the cause of the observed wave damping on-axis.

Figure 12: Local wavelength (square markers) obtained from the axial derivative of the phase shift of figure 11 (b) for the case B\({}_{0}\) = 300 G. The square root of B\({}_{0}\) divided by the density is shown for comparison (dot markers). The continuous curve is a B-spline fitting to highlight the data trends.

Figure 13: Comparison between the axially averaged measured wave properties (red squares) and the helicon dispersion relation (equation 4) for a radially bounded wave with \(k_{\perp}\simeq 3.83/4.1\) cm\({}^{-1}\) (red dotted line). The whistler and pure electrostatic dispersion relations for the fixed \(k_{\perp}\) value are also given for comparison (continuous orange and blue dashed lines, respectively). The data points are taken from figure 5 with the abscissa as axially averaged quantities.

Figure 14: Comparison between the velocities of electrons most likely to ionise for a 4.75 eV Maxwellian speed distribution function (continuous line), and the measured helicon wave phase velocities \(v_{\phi}\) (square markers). \(v_{\phi}\) is plotted against the normalised averaged plasma density as B\({}_{0}\) increases in the direction indicated by the arrow.
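The integrand of equation 11 and its maximum can be evaluated numerically as below. This is a minimal sketch: the tabulated argon ionisation cross-section of [44] is replaced by a short hypothetical placeholder table (threshold at 15.76 eV), to be substituted with the published data.

```python
import numpy as np

me, e = 9.109e-31, 1.602e-19

def maxwell_speed_pdf(v, Te_eV):
    """Normalised Maxwellian speed distribution f_s(v) for temperature Te."""
    a = me / (2.0 * e * Te_eV)
    return 4.0 * np.pi * v**2 * (a / np.pi) ** 1.5 * np.exp(-a * v**2)

# Placeholder (energy in eV, sigma in m^2) standing in for the data of [44]
E_tab = np.array([15.76, 20.0, 30.0, 50.0, 100.0, 400.0])
sig_tab = np.array([0.0, 1.0, 2.2, 2.9, 2.8, 1.6]) * 1e-20

v = np.linspace(1e5, 2e7, 4000)            # m/s
E_of_v = 0.5 * me * v**2 / e               # eV
integrand = v * maxwell_speed_pdf(v, 4.75) * np.interp(E_of_v, E_tab, sig_tab)

K_iz = np.trapz(integrand, v)              # equation 11, m^3/s
v_star = v[np.argmax(integrand)]           # speed most likely to ionise
print(f"K_iz ~ {K_iz:.2e} m^3 s^-1, v* ~ {v_star * 100:.2e} cm s^-1")
```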
With the collisionless processes likely ruled out, the wave collision absorption length \(\delta_{\parallel}\) from equation 5 can be calculated using \(\nu_{\mathrm{eff}}=\nu_{\mathrm{ei}}+\nu_{\mathrm{en}}\), where the two terms on the right-hand side are the electron-ion and electron-neutral collision rates, respectively. They can be calculated by rewriting equation 11 with the appropriate cross-section and multiplying the integral by the neutral or ion density. For B\({}_{0}\) = 300 G, using the EEPFs measured on-axis, and not accounting for ion-pumping for simplicity, \(\nu_{\mathrm{eff}}\) was calculated for each \(z\) position and plotted in figure 15. With \(\nu_{\mathrm{eff}}\sim 7\) MHz, \(\nu_{\mathrm{eff}}/\omega\simeq 0.04\) and therefore collisional processes cannot account for the axial wave damping. To further illustrate this, \(\delta_{\parallel}\) can be estimated to be \(\sim 4.7\) m, and used with \[\mathrm{B}_{\parallel}(z)=\mathrm{B}_{\parallel}(z_{0})\exp\left(-(z-z_{0})/ \delta_{\parallel}\right) \tag{12}\] to produce the continuous curve in figure 15. This curve represents the approximate wave damping if a collisional process were the principal cause of it. Interestingly, dividing \(\delta_{\parallel}\) by 10 produces the dashed curve, which approximates the measured damping well. Similar observations of anomalous damping were made in [7] and spawned the quest for collisionless coupling mechanisms. Here, equation 12 was used rather than the ray-tracing Wentzel-Kramers-Brillouin (WKB) method to estimate the overall axial wave attenuation from wave-plasma coupling in this inhomogeneous plasma. This is because the WKB method's validity condition \(\left|\frac{\partial n/\partial z}{n}\right|^{-1}\gg\lambda_{\parallel}\), i.e., that the wavelength needs to be much smaller than the gradient scale length of the refractive index, was found to be fulfilled only over limited axial extents (typically \(\lesssim 5\) cm) for the conditions encountered in this study [28, 45]. Resorting to complex wave optics formalisms is beyond the scope of this study. ### Reflections and resonances Reflections of helicon waves at the axial conducting boundaries of plasma columns have been associated with standing wave patterns when the reflected wave interacts constructively with the forward wave, and such situations seem to sometimes improve the wave-plasma coupling [19, 13]. Local variations in plasma densities from changes in the applied magnetic fields are also known to affect the wave propagation and coupling [17, 20]. In the present apparatus, the plasma density drops by several orders of magnitude before reaching the normal conducting surface at the downstream end of the apparatus, and it can be seen from figure 7 that the wave amplitudes go to zero well before even exiting the glass tube. Nevertheless, for B\({}_{0}\) > 150 G, the wave axial amplitudes feature a clear local maximum at \(z\simeq 50\) cm (see figure 11 (a)). This might indicate some amount of wave reflection caused by a rapidly changing index of refraction \(n\) [28, 45]. The plasma column is highly inhomogeneous on-axis and such changes in \(n\) can be expected. Note that electron cyclotron resonance surfaces are well outside of the volume of interest for the operating conditions considered.
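The local refractive index of equation 2, used in the next paragraph, can be evaluated as sketched below. Since the paper does not spell out the coefficients A, B and C, the standard cold-plasma (Stix) forms are assumed here, with singly charged argon ions.

```python
import numpy as np

e, me, mi, eps0 = 1.602e-19, 9.109e-31, 39.95 * 1.661e-27, 8.854e-12

def refractive_index(ne, B0, f, theta):
    """Both roots n of A*n^4 - B*n^2 + C = 0 (equation 2), assuming the
    standard Stix cold-plasma coefficients. Imaginary n marks evanescence."""
    w = 2.0 * np.pi * f
    wce, wci = e * B0 / me, e * B0 / mi
    wpe2, wpi2 = ne * e**2 / (eps0 * me), ne * e**2 / (eps0 * mi)
    R = 1.0 - wpe2 / (w * (w - wce)) - wpi2 / (w * (w + wci))
    L = 1.0 - wpe2 / (w * (w + wce)) - wpi2 / (w * (w - wci))
    P = 1.0 - (wpe2 + wpi2) / w**2
    S = 0.5 * (R + L)
    A = S * np.sin(theta)**2 + P * np.cos(theta)**2
    B = R * L * np.sin(theta)**2 + P * S * (1.0 + np.cos(theta)**2)
    C = P * R * L
    disc = np.sqrt(B**2 - 4.0 * A * C + 0j)
    return np.sqrt((B + disc) / (2.0 * A)), np.sqrt((B - disc) / (2.0 * A))

# Applied point-by-point to the measured n_i(r, z) and the calculated
# B0(r, z) at theta = 79.6 deg, this reproduces maps like figure 16, e.g.:
# n_p, n_m = refractive_index(5e11 * 1e6, 300e-4, 27.12e6, np.radians(79.6))
```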
Following the steps in [20], the local values of \(n\) are calculated from equation 2 for the B\({}_{0}\) = 300 G case by using the 2D measured plasma density, the calculated \(\mathbf{B_{0}}\), and for a wave travelling at \(\theta\) = 79.6\({}^{\circ}\) with respect to the local \(\mathbf{B_{0}}\). This value of \(\theta\) is the axial average value taken from figure 17 (b). This approximation of the 2D refractive index does not include the effects of the boundary conditions but can still provide some valuable insights, especially on-axis. Figure 16 shows the resulting 2D contours of \(n\), and as expected, significant gradients of \(n\) exist. On-axis, the refractive index evolves from a plateau at \(n\simeq 140\) in the centre of the column to \(n>200\) for \(z<10\) cm and \(z\geq 47\) cm. This would cause reflections and/or absorption as the WKB approximation breaks down (while it is locally valid around \(z=30\) cm) and might explain the wave axial damping which is especially pronounced for \(z<10\) cm and \(z>50\) cm (see figure 10) [20]. Interestingly, figure 16 features a conical front of resonance beyond which \(n\) is imaginary and the wave is evanescent (cutoff). These regions roughly coincide with loci in figure 7 (b) where \(|\mathrm{B}_{\parallel}|\) is at the noise level. The fact that the conical front is non-continuous is an artefact due to the discrete measurement steps. The measured phase shifts having the characteristics of travelling waves, wave reflections at the loci of resonance are probably weak, and wave absorption possibly dominates between the resonance and the cutoff regions, where signs of heated electrons will be looked for in a future work. Next, the values of \(\theta\) calculated from \(k_{\perp}\) and \(k_{\parallel}\) on-axis are compared with the resonance angle \(\theta_{\mathrm{res}}\) in figure 17. At several loci, \(\theta\geq\theta_{\mathrm{res}}\), most notably \(z\simeq 30\) cm for \(\mathrm{B}_{0}=150\) G and \(z>60\) cm when \(\mathrm{B}_{0}=300\) G and \(600\) G. The wave locally approaching the \(v_{\phi}\) resonance angle can therefore explain the anomalous "U" shaped profile of \(|\mathrm{B}_{\parallel}|\) when \(\mathrm{B}_{0}=150\) G (see figure 7 and figure 11 (a)) as well as contribute to the wave damping for \(z>60\) cm in the two other \(\mathrm{B}_{0}\) cases. \(\theta\geq\theta_{\mathrm{res}}\) could be the reason for \(n\to\infty\) in figure 16, suggesting that the measured wave features are consistent with the properties of the plasma medium. ### Energy conservation Reflections and resonances might account for some of the wave damping, especially at the magnetic mirror throat and downstream, depending on the value of \(\mathrm{B}_{0}\). However, the damping of \(|\mathrm{B}_{\parallel}|\) for \(z<30\) cm is still unexplained when \(\mathrm{B}_{0}\geq 150\) G. One particularity of the discharge has been previously overlooked: its converging-diverging geometry. From figure 7, the magnetically confined plasma appears to act on the wave as a symmetric funnel of varying cross-sectional area. As such, conservation of the wave's intensity should be considered as it propagates downstream from the antenna.

Figure 15: B\({}_{0}\) = 300 G case \(|\mathrm{B}_{\parallel}|\) (dot markers) and wave damping estimated from equation 5 (continuous curve) and with \(\delta_{\parallel}/10\) (dashed line). The computed effective collision frequencies \(\nu_{\mathrm{eff}}\) (star markers) are reported on the right-hand axis.
The intensity is equal to the time-average of the Poynting vector normal to the cross-section, which is equal to the group velocity \(v_{\mathrm{g}}\) times the energy density (which is \(\propto\mathrm{B}^{2}\)) [28]. In a uniform medium, \(v_{\mathrm{g}}\) would be constant and, with a decreasing cross-sectional area, one would expect the energy density to locally increase. This is not what is observed. In the present case, the medium is non-uniform and \(v_{\mathrm{g}}\) varies. From equation 4, the components of \(v_{\mathrm{g}}\) in polar coordinates \((\mathbf{\hat{k}},\mathbf{\hat{\theta}})\) are \[\left\{\begin{aligned} & v_{\mathrm{g}k}=2kc^{2}\omega_{\mathrm{ce}} \cos\theta/\omega_{\mathrm{pe}}^{2}\;,\\ & v_{\mathrm{g}\theta}=-kc^{2}\omega_{\mathrm{ce}}\sin\theta/ \omega_{\mathrm{pe}}^{2}\;.\end{aligned}\right. \tag{13}\] Figure 18 (a) shows the magnitude of \(v_{\mathrm{g}}\) and \(\psi\) on-axis for \(\mathrm{B}_{0}=300\) G, with \(\psi\) being the angle between \(\mathbf{v}_{\mathrm{g}}\) and \(\mathbf{B_{0}}\). The group velocity vector stays nearly parallel to \(\mathbf{\hat{z}}\) and its magnitude is symmetrical with respect to the solenoids. The direction of the Poynting vector is approximately along \(\mathbf{\hat{z}}\), and for the intensity to be conserved it is expected that as \(v_{\mathrm{g}}\) increases towards \(z=30\) cm, the wave energy density proportional to \(\mathrm{B}^{2}\) should decrease. This situation is analogous to a Venturi effect where the energy density is a proxy for the pressure of a fluid flowing at \(v_{\mathrm{g}}\) through a constriction. As such, it is tempting to fit a Bernoulli-like equation of the form \[\mathrm{B}(z_{2})^{2}-\mathrm{B}(z_{1})^{2}=\mathrm{C}\left(v_{\mathrm{g}}(z _{2})^{2}-v_{\mathrm{g}}(z_{1})^{2}\right)\;, \tag{14}\] to the axial profile of \(|\mathrm{B}_{\parallel}|\). Here \(z_{1}\) and \(z_{2}\) are two axial positions and \(\mathrm{C}\) is an arbitrary constant. It is noted that for a \(m=0\) mode, \(\mathrm{B}_{\perp}=0\) on-axis, and it is reasonable to expect that \(\mathrm{B}\simeq\mathrm{B}_{\parallel}\). Taking \(z_{1}=10\) cm, equation 14 was fitted to the measured profile in figure 18 (b). It can be seen that this equation reproduces the wave damping up to \(z\simeq 25\) cm, beyond which the amplitude would increase again, as per the fluid analogue. This suggests that the local increase in the wave amplitude around \(z=50\) cm for \(\mathrm{B}_{0}>150\) G could be a result of a Venturi-like effect combined with local wave resonance and reflection.

Figure 16: Index of refraction \(n\) calculated with equation 2 using the measured volume plasma density \(n_{\mathrm{i}}\) and the calculated \(\mathrm{B}_{0}\). For this example, the wave is set to travel at \(\theta=79.6^{\circ}\) with respect to the local \(\mathbf{B_{0}}\), the axial average of figure 17 (b). Regions in black show where \(n\) is imaginary.
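Equation 13 and the Bernoulli-like fit of equation 14 can be sketched as follows, with the constant C obtained by least squares (hypothetical array names for the measured profiles):

```python
import numpy as np

e, me, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

def group_velocity(ne, B0, k, theta):
    """Equation 13: group velocity components in the (k, theta) polar basis."""
    wce = e * B0 / me
    wpe2 = ne * e**2 / (eps0 * me)
    v_gk = 2.0 * k * c**2 * wce * np.cos(theta) / wpe2
    v_gtheta = -k * c**2 * wce * np.sin(theta) / wpe2
    return v_gk, v_gtheta

def bernoulli_profile(B_par, v_g, i1=0):
    """Least-squares C of equation 14 with z1 = z[i1], and the corresponding
    modelled amplitude profile |B(z)| from energy conservation alone."""
    dB2 = B_par**2 - B_par[i1]**2
    dv2 = v_g**2 - v_g[i1]**2
    C = np.sum(dB2 * dv2) / np.sum(dv2**2)
    return C, np.sqrt(np.maximum(B_par[i1]**2 + C * dv2, 0.0))
```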
## 6 Conclusions

In this study, the 2D features of rf magnetic waves excited by a single loop antenna are characterised and identified as an \(m=0\) helicon mode throughout the funnel-shaped magnetised plasma. Two-dimensional mappings of the wave amplitude show that the wave is somewhat guided by the converging-diverging plasma column. On-axis, it is shown that the waves are best modelled by the dispersion relation neglecting electron inertia and including the plasma column radial boundary condition. This is verified at the local and global levels of description of the wave, for all reported values of B\({}_{0}\), and despite the simplicity of the employed model. Analysis to find potential wave-plasma couplings to account for the observed wave damping on-axis revealed that neither previously described collisional nor collisionless mechanisms seem to play a significant role.

Figure 17: Axial variation of the wavenumber angle \(\theta\) (markers) and its resonance angle \(\theta_{\rm res}\) (continuous line) for B\({}_{0}\) increasing from 150 G to 600 G (a)-(c).

Figure 18: Magnitude of the group velocity vector \(v_{\rm g}\) and its angle \(\psi\) with respect to \(\hat{\mathbf{z}}\) for B\({}_{0}=300\) G (a). Comparison between the measured amplitudes \(|\mathrm{B}_{\parallel}|\) (dotted line) and the amplitude \(|\mathrm{B}_{\parallel}|\) from energy conservation alone (continuous line) for B\({}_{0}=300\) G (b).

An interesting result, in accordance with wave theory, is that some level of damping could be explained by local resonances of the wave at loci where its wavevector takes an angle greater than the phase velocity resonance cone. There are good correlations for the B\({}_{0}\) = 150 G case, explaining the "U"-shaped wave amplitude profile, and for the B\({}_{0}\neq 150\) G cases, explaining the severe damping at both extremities of the plasma column. An unexpected outcome is that the wave damping from \(z=10\) cm to the magnetic funnel can be modelled by a Bernoulli-like equation. As the cross-sectional area of the funnel decreases, the group velocity is seen to increase, resulting in a decreasing wave energy density. This does not constitute a novel helicon wave property, but rather is a feature emerging from the specific discharge geometry and the inhomogeneous plasma density. This raises the question of whether considerations of wave energy conservation and wave resonances have sometimes been overlooked in past studies which considered helicon wave damping. For the conditions reported here, phase velocity resonance is the only identified mechanism through which helicon waves could be a vector of energy transfer between the antenna and remote electrons. However, the precise coupling of such resonances is still unclear, as is the question of whether the locally deposited power is significant to the global plasma dynamics. Finally, this work provides an example of a situation where high magnetic field intensities and moderate plasma densities (\(10^{11}-10^{12}\) cm\({}^{-3}\)) result in unambiguously characterised helicon waves in blue-core discharges whilst contributions of the waves to the plasma generation could not be identified. Since waves are often assumed to play a role in the power balance of so-called helicon devices, these results make a case for the importance of rigorous analyses of the wave features and coupling to the plasma. The lack of such analysis can induce confusion in distinguishing between plasma generation owing to magnetically enhanced capacitive / inductive mechanisms and wave-heated regimes.

## Data availability statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## Acknowledgments

The authors would like to thank Jim Chung for his help with B-dot measurements during his visit to Aotearoa, as well as Philippe Gutittienne for our insightful conversations. This work was partially supported by the New Zealand Space Agency under Grant No. MBIR#00008060.
## Conflict of Interest The authors have no conflicts to disclose.
2310.05859
Choices of HKR isomorphisms
In this short note we record the fact that the set of multiplicative HKR natural equivalences defined simultaneously for all derived schemes, functorially splitting the HKR-filtration and rendering the circle action compatible with the de Rham differential, is, via Cartier duality, in a natural bijection with the set of filtered formal exponential maps $ \widehat{\mathbb{G}_a}\to \widehat{\mathbb{G}_m}$. In particular, when the base $k$ is a field of characteristic zero, the set of choices is $k^\ast$.
Marco Robalo
2023-10-09T16:53:04Z
http://arxiv.org/abs/2310.05859v1
# Choices of HKR isomorphisms

###### Abstract

In this short note we record the fact that the set of multiplicative HKR natural equivalences defined simultaneously for all derived schemes, functorially splitting the HKR-filtration and rendering the circle action compatible with the de Rham differential, is, via Cartier duality, in a natural bijection with the set of filtered formal exponential maps \(\widehat{\mathbb{G}_{\mathsf{a}}}\to\widehat{\mathbb{G}_{\mathrm{m}}}\). In particular, when the base \(\mathsf{k}\) is a field of characteristic zero, the set of choices is \(\mathsf{k}^{*}\).

###### Contents

* 1 Introduction
* 2 The space of functorial HKR isomorphisms
* 3 Computation

## 1 Introduction

Let \(\mathsf{k}\) be a commutative ring. The Hochschild-Kostant-Rosenberg (HKR) theorem [13] establishes for any smooth \(\mathsf{k}\)-scheme \(X=\mathsf{Spec}(R)\) an identification of the Hochschild homology groups \(\mathsf{HH}_{i}(R/\mathsf{k}):=\mathrm{Tor}^{i}_{R\otimes_{\mathsf{k}}R}(R,R)\) with the modules of \(i\)-differential forms \(\Omega^{i}_{X/\mathsf{k}}\), given by the anti-symmetrization map

\[\Omega^{i}_{R/\mathsf{k}}\to\mathsf{HH}_{i}(R/\mathsf{k})\quad r_{0}.dr_{1}\wedge\cdots\wedge dr_{i}\mapsto\sum_{\sigma\in\Sigma_{i}}(-1)^{\mathsf{sign}(\sigma)}[r_{0}\otimes r_{\sigma(1)}\otimes\cdots\otimes r_{\sigma(i)}]\]

The groups \(\mathrm{HH}_{i}(R/\mathsf{k})\) are actually defined for every derived \(\mathsf{k}\)-algebra \(R\in\mathsf{SCR}_{\mathsf{k}}\) as the homology groups of the derived tensor product of \(\mathsf{k}\)-algebras \(\mathsf{HH}(R/\mathsf{k}):=R\otimes^{\mathbb{L}}_{R\overset{\mathbb{L}}{\underset{\mathsf{k}}{\otimes}}R}R\), where \(R\) is seen as an \(R\overset{\mathbb{L}}{\underset{\mathsf{k}}{\otimes}}R\)-algebra using the multiplication map \(R\overset{\mathbb{L}}{\underset{\mathsf{k}}{\otimes}}R\to R\). In particular, this shows that \(\mathsf{HH}(R/\mathsf{k})\) carries the structure of an object in \(\mathsf{SCR}_{\mathsf{k}}\). Also, for a general \(R\in\mathsf{SCR}_{\mathsf{k}}\) we replace \(\Omega^{1}_{R/\mathsf{k}}\) by the _cotangent complex_ \(\mathbb{L}_{R/\mathsf{k}}\) and, independently of the characteristic of \(\mathsf{k}\), we have the HKR filtration on \(\mathsf{HH}(R/\mathsf{k})\) that has \((\Lambda^{i}\mathbb{L}_{R/\mathsf{k}})[i]\) as associated graded piece of weight \(i\) (see [18: IV. 4.1]). When \(\mathsf{k}\) is a field with \(\mathrm{char}(\mathsf{k})=0\), the anti-symmetrization map induces a splitting of the HKR filtration and gives a \(\mathsf{k}\)-linear quasi-isomorphism [10: Prop. 1.3.16, Remark 3.2.3, Prop. 5.4.6]

\[\mathsf{HH}(R/\mathsf{k})\mathop{\longrightarrow}\limits^{\sim}\bigoplus_{i}\left(\Lambda^{i}\mathbb{L}_{R/\mathsf{k}}\right)[i] \tag{1}\]

Derived geometry [11; 1] offers another perspective: since in \(\mathsf{SCR}_{\mathsf{k}}\) derived tensor products are pushouts, we find an equivalence in \(\mathsf{SCR}_{\mathsf{k}}\), \(\mathsf{HH}(R/\mathsf{k})\simeq R\overset{\mathbb{L}}{\underset{\mathsf{k}}{\otimes}}\mathsf{S}^{1}\), that presents \(\mathsf{HH}(R/\mathsf{k})\) as the derived ring of functions \(\mathscr{O}_{\mathsf{L}X}\) on the derived loop scheme \(\mathsf{L}X:=\mathbb{R}\mathsf{Map}(\mathsf{S}^{1},X)\).
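As a concrete illustration (a standard example, not taken from this note): for the smooth algebra \(R=\mathsf{k}[x]\) one has

\[\mathsf{HH}_{0}(R/\mathsf{k})\simeq R,\qquad\mathsf{HH}_{1}(R/\mathsf{k})\simeq\Omega^{1}_{R/\mathsf{k}}=R\,dx,\qquad\mathsf{HH}_{i}(R/\mathsf{k})=0\ \text{for}\ i\geq 2,\]

and in degree one the anti-symmetrization map reduces to \(f\,dg\mapsto[f\otimes g]\). Since \(R\) is smooth, \(\mathbb{L}_{R/\mathsf{k}}\simeq\Omega^{1}_{R/\mathsf{k}}\), so the splitting (1) can be checked completely by hand in this case.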
Similarly, derived geometry offers a geometric incarnation for \(\bigoplus_{i}\left(\Lambda^{i}\mathbb{L}_{R/\mathsf{k}}\right)[i]\) as the derived ring of functions of the shifted tangent bundle \(\mathsf{T}[1]X=\mathsf{Spec}(\mathsf{Sym}^{\Delta}(\mathbb{L}_{X/\mathsf{k}}[1]))\), with the \(\mathbb{G}_{\mathrm{m}\,\mathsf{k}}\)-action corresponding to the natural grading. When \(\mathsf{k}\) is a \(\mathbb{Q}\)-algebra, for any affine derived scheme \(X\), the results of [11; 1] provide an isomorphism of derived schemes, functorial in \(X\),

\[\mathsf{T}[1]X\mathop{\longrightarrow}\limits^{\sim}\mathsf{L}X \tag{2}\]

that recovers a quasi-isomorphism of the type (1) after passing to global functions. However, it is unclear if the equivalence obtained through derived geometry coincides with the anti-symmetrization map of (1).

**Remark 1.1**.: When \(\mathrm{char}(\mathsf{k})=0\), Kapranov [10] explains another way to produce HKR isomorphisms (1) by considering smooth schemes \(X\) with a torsion-free flat connection \(\nabla\) on their tangent bundle. In this case the connection provides a formal exponential \(\mathsf{exp}^{\nabla}:\widehat{\mathsf{T}X}\simeq\widehat{\Delta}\) establishing an isomorphism between the formal completion of \(X\times X\) along the diagonal and the formal completion of \(\mathsf{T}X\) along the zero section. Passing to the self-intersections with \(X\), we obtain another equivalence of derived schemes of the type (2).

## 2 The space of functorial HKR isomorphisms

The goal of this note is to describe the space of HKR isomorphisms (1). Note, however, that without further assumptions this space can be significantly large: Remark 1.1 shows that every torsion-free flat connection on a scheme \(X\) provides one, and the space of connections is affine. But clearly, connection-induced HKR isomorphisms are not functorial unless the maps preserve the connection. Therefore we will only consider HKR isomorphisms that:

* (i) are defined as part of a natural equivalence of \(\infty\)-functors on the \(\infty\)-category of derived schemes \[\mathsf{T}[1](-)\xrightarrow{\sim}\mathsf{L}(-)\tag{3}\]
* (ii) define functorial splittings of the HKR-filtration;
* (iii) functorially match the circle action on loop spaces with the de Rham differential on forms.

Derived algebraic geometry again helps in understanding these properties: using the formalism of affine stacks [10], it is shown in the combined results of [11, 10, 12] that over any commutative ring \(\mathsf{k}\) there exists a flat affine filtered abelian group stack (underived)

\[\mathsf{S}^{1}_{\mathsf{Fil}}\to[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\rm m\,\mathsf{k}}]\]

which we call the filtered circle, such that for any derived scheme \(X\) the relative derived mapping stack

\[\mathbb{R}\underline{\mathsf{Map}}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\rm m\,\mathsf{k}}]}(\mathsf{S}^{1}_{\mathsf{Fil}},X\times[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\rm m\,\mathsf{k}}])\to[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\rm m\,\mathsf{k}}]\]

provides the HKR-filtration on the derived loop space \(\mathsf{L}X\), with associated graded stack given by \((\mathsf{T}[1]X)/\mathbb{G}_{\rm m\,\mathsf{k}}\to\mathsf{B}\mathbb{G}_{\rm m\,\mathsf{k}}\).
As a consequence, asking for HKR-isomorphisms respecting (i)-(iii) is to ask for _splittings_ of the filtered circle compatible with the group structure:

**Construction 2.1**.: Let \(q:[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\rm m\,\mathsf{k}}]\to\mathsf{B}\mathbb{G}_{\rm m\,\mathsf{k}}\) be the map induced by the projection \(\mathbb{A}^{1}_{\mathsf{k}}\to\mathsf{Spec}(\mathsf{k})\) and let \(Y\) be a stack endowed with a \(\mathbb{G}_{\rm m}\)-action. Take \(Z=[Y/\mathbb{G}_{\rm m}]\to\mathsf{B}\mathbb{G}_{\rm m\,\mathsf{k}}\). We define the associated _split_ filtered stack \(Z^{\mathsf{split}}\to[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\rm m\,\mathsf{k}}]\) to be the pullback of \(Z\) along \(q\),

\[Z^{\mathsf{split}}:=Z\times_{\mathsf{B}\mathbb{G}_{\rm m\,\mathsf{k}}}[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\rm m\,\mathsf{k}}].\]

By construction, it is equivalent to the quotient stack \([Y\times\mathbb{A}^{1}/\mathbb{G}_{\rm m}]\), where we let \(\mathbb{G}_{\rm m}\) act on the product coordinate-wise. The associated graded stack \((Z^{\mathsf{split}})^{\mathsf{gr}}\) is canonically equivalent to \(Z\) because \(q\) is a right inverse to the inclusion \(0:\mathsf{B}\mathbb{G}_{\rm m\,\mathsf{k}}\to[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\rm m\,\mathsf{k}}]\). Finally, when \(S\to[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\rm m\,\mathsf{k}}]\) is a filtered stack, we denote by \(S^{\rm triv}:=(S^{\mathsf{gr}})^{\mathsf{split}}\) the associated split filtered stack, where \(S^{\mathsf{gr}}\) is the pullback of \(S\) along the inclusion \(\mathsf{B}\mathbb{G}_{\rm m\,\mathsf{k}}\to[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\rm m\,\mathsf{k}}]\).

Since the Construction 2.1 is monoidal with respect to cartesian products, \((\mathsf{S}^{1}_{\mathsf{Fil}})^{\mathrm{triv}}\) is still a group. We finally narrow down the choices of HKR-isomorphisms:

**Definition 2.2**.: The space of HKR-isomorphisms compatible with (i)-(iii) is the space of invertible maps of group (higher) stacks

\[\mathsf{Map}^{\mathrm{inv}}_{\mathrm{group},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big{(}\mathsf{S}^{1}_{\mathsf{Fil}}\,,\,(\mathsf{S}^{1}_{\mathsf{Fil}})^{\mathrm{triv}}\big{)}\]

i.e., universal splittings of the HKR filtration compatible with the action of the filtered circle.

**Remark 2.3**.: Given a splitting \(\mathsf{S}^{1}_{\mathsf{Fil}}\simeq(\mathsf{S}^{1}_{\mathsf{Fil}})^{\mathrm{triv}}\) as in Definition 2.2, we obtain the associated HKR natural transformation (3) by pre-composition with the relative derived mapping spaces over \([\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\)

\[\mathbb{R}\underline{\mathsf{Map}}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}((\mathsf{S}^{1}_{\mathsf{Fil}})^{\mathrm{triv}},X\times[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}])\xrightarrow{\sim}\mathbb{R}\underline{\mathsf{Map}}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}(\mathsf{S}^{1}_{\mathsf{Fil}},X\times[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}])\]

and extracting the fibers over \(1:\mathsf{Spec}(\mathsf{k})=[\mathbb{G}_{\mathrm{m}\,\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\to[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\).

## 3 Computation

We are interested in computing \(\pi_{0}\) of the space in Definition 2.2.
Thanks to [14: Theorem 1.7] we have an explicit formula for the filtered group circle in terms of the relative Cartier dual of the relative formal group scheme over \([\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\) given by the total space \(\mathrm{Def}\to[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\) of the deformation to the normal bundle at the unit, from the formal group \(\widehat{\mathbb{G}_{\mathrm{m}\,\mathsf{k}}}\) to its Lie algebra \(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\):

\[\mathsf{S}^{1}_{\mathsf{Fil}}\simeq\mathsf{B}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}(\mathrm{Def}^{\vee})\]

Here, Cartier duality is given by the \([\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\)-relative construction

\[(-)^{\vee}:=\mathrm{Hom}_{\mathrm{groups}}(-,\widehat{\mathbb{G}_{\mathrm{m}\,\mathsf{k}}})\]

(the hom is taken inside classical group schemes, not as derived schemes) and \(\widehat{\mathbb{G}_{\mathrm{m}\,\mathsf{k}}}\) is the multiplicative formal group. Since the construction of Cartier duality is the relative one, we can freely interchange

\[(\mathrm{Def}^{\vee})^{\mathrm{triv}}\simeq(\mathrm{Def}^{\mathrm{triv}})^{\vee}\qquad\mathsf{B}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}(\mathrm{Def}^{\vee})^{\mathrm{triv}}\simeq\mathsf{B}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}((\mathrm{Def}^{\vee})^{\mathrm{triv}})\]

As a consequence, the space of HKR-isomorphisms of Definition 2.2 is equivalent to

\[\mathsf{Map}^{\mathrm{inv}}_{\mathrm{group},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big(\mathsf{B}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}(\mathrm{Def}^{\vee})\,,\,\mathsf{B}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}((\mathrm{Def}^{\mathrm{triv}})^{\vee})\big)\]

Since all group stacks being used are abelian, the Eckmann-Hilton delooping at the unit provides a map

\[\Omega_{*}:\mathsf{Map}^{\mathrm{inv}}_{\mathrm{group},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big(\mathsf{B}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}(\mathrm{Def}^{\vee})\,,\,\mathsf{B}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}((\mathrm{Def}^{\mathrm{triv}})^{\vee})\big)\to\mathsf{Map}^{\mathrm{inv}}_{\mathrm{group},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big(\mathrm{Def}^{\vee}\,,\,(\mathrm{Def}^{\mathrm{triv}})^{\vee}\big)\]

which induces an isomorphism on \(\pi_{0}\), with inverse given by the \(\mathsf{B}\)-construction. Notice that a priori the map \(\Omega_{*}\) lands in maps of group stacks. By [11, 1.4.2 and 1.4.5] these are the same as maps of relative formal groups \(\mathsf{FG}\):

\[\mathsf{Map}^{\mathrm{inv}}_{\mathrm{group},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big(\mathrm{Def}^{\vee}\,,\,(\mathrm{Def}^{\mathrm{triv}})^{\vee}\big)\simeq\mathsf{Map}^{\mathrm{inv}}_{\mathsf{FG},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big(\mathrm{Def}^{\vee}\,,\,(\mathrm{Def}^{\mathrm{triv}})^{\vee}\big)\]

Finally, we consider the map induced by the functor of Cartier duality

\[(-)^{\vee}:\mathsf{Map}^{\mathrm{inv}}_{\mathsf{FG},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big(\mathrm{Def}^{\mathrm{triv}}\,,\,\mathrm{Def}\big)\to\mathsf{Map}^{\mathrm{inv}}_{\mathsf{FG},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big(\mathrm{Def}^{\vee}\,,\,(\mathrm{Def}^{\mathrm{triv}})^{\vee}\big)\]

which is an equivalence, thanks to the full faithfulness of Cartier duality [12, 3.12 and Const 3.16-(i)]. Notice that, independently of \(\mathrm{char}(\mathsf{k})\), both mapping spaces in the last formula are discrete.
Since \(\mathrm{Def}\to[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\) is a smooth formal group relative to \([\mathbb{A}^{1}/\mathbb{G}_{\mathrm{m}}]\), we can identify the trivial filtration \(\mathrm{Def}^{\mathrm{triv}}\to[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\) with the affine linear formal group associated to its relative Lie algebra. In particular, following Construction 2.1, it is given by the constant family over \([\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\)

\[\mathrm{Def}^{\mathrm{triv}}\simeq[(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\times\mathbb{A}^{1}_{\mathsf{k}})/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\]

and the set of functorial HKR-isomorphisms is given by the set of filtered formal exponentials

\[\mathsf{Map}^{\mathrm{inv}}_{\mathsf{FG},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big([(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\times\mathbb{A}^{1}_{\mathsf{k}})/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\,,\,\mathrm{Def}\big)\]

**Remark 3.1**.: By extracting the underlying groups of the filtration (i.e., the fibers over \(1\) in \([\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\)) we find a map

\[\mathsf{Map}^{\mathrm{inv}}_{\mathsf{FFGr},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big([(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\times\mathbb{A}^{1}_{\mathsf{k}})/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\,,\,\mathrm{Def}\big)\to\mathsf{Map}^{\mathrm{inv}}_{\mathsf{FFGr}}\big(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}},\widehat{\mathbb{G}_{\mathrm{m}\,\mathsf{k}}}\big) \tag{4}\]

By height reasons, since \(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\) is of height \(0\) and \(\widehat{\mathbb{G}_{\mathrm{m}\,\mathsf{k}}}\) is of height \(1\), the target of (4) is empty when \(\mathsf{k}\) is of characteristic \(p>0\). Therefore, so is the source of (4).

Finally, when \(\mathrm{char}(\mathsf{k})=0\), the relative exponential map (see for instance [Dem: Exposé VIIB, §3] or [GR17: Chapter 7, Cor. 3.2.2]) defines an isomorphism of filtered formal group schemes

\[[(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\times\mathbb{A}^{1}_{\mathsf{k}})/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\xrightarrow[\sim]{\mathsf{exp}_{\mathrm{rel}}}\mathrm{Def}\]

Composition with \(\mathsf{exp}_{\mathrm{rel}}\) defines a bijection

\[\mathsf{Map}^{\mathrm{inv}}_{\mathsf{FFGr},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big([(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\times\mathbb{A}^{1}_{\mathsf{k}})/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\,,\,\mathrm{Def}\big)\simeq\mathsf{Map}^{\mathrm{inv}}_{\mathsf{FFGr},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big([(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\times\mathbb{A}^{1}_{\mathsf{k}})/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\,,\,[(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\times\mathbb{A}^{1}_{\mathsf{k}})/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\big) \tag{5}\]
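The underlying characteristic-zero statement about \(\mathrm{Hom}(\widehat{\mathbb{G}_{\mathsf{a}}},\widehat{\mathbb{G}_{\mathrm{m}}})\) can be tested by machine. Here is a small sympy sanity check (mine, not part of the note): writing \(\widehat{\mathbb{G}_{\mathrm{m}}}\) in the coordinate \(t=g-1\), where the multiplicative formal group law is \(s+t+st\), the series \(f(x)=\mathsf{exp}(\lambda x)-1\) is verified to be a homomorphism from \(\widehat{\mathbb{G}_{\mathsf{a}}}\), modulo degree 6:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
N = 6  # truncation order

def trunc(expr):
    """Expand and drop all monomials of total degree >= N in x, y."""
    p = sp.Poly(sp.expand(expr), x, y)
    return sum(c * x**i * y**j
               for (i, j), c in zip(p.monoms(), p.coeffs()) if i + j < N)

# Candidate homomorphism f(x) = exp(lam*x) - 1, truncated to degree < N.
f = sum(lam**n * x**n / sp.factorial(n) for n in range(1, N))

lhs = trunc(f.subs(x, x + y))            # f applied to x +_{G_a} y
F = f + f.subs(x, y) + f * f.subs(x, y)  # G_m^ law s + t + s*t on (f(x), f(y))
rhs = trunc(F)
print(sp.simplify(lhs - rhs))            # 0: f is a homomorphism mod degree N
```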
Let us compute the last space: since \(\mathrm{char}(\mathsf{k})=0\), the category of formal groups relative to \([\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\) is equivalent to the category of Lie algebra objects in \(\mathsf{QCoh}([\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}])\) [GR17: Chapter 7]. The Lie algebra associated to \([(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\times\mathbb{A}^{1}_{\mathsf{k}})/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\) is the structure sheaf \(\mathscr{O}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}(1)\) with the weight-one action of \(\mathbb{G}_{\mathrm{m}\,\mathsf{k}}\) (see [10]), endowed with the abelian Lie bracket. Since \(\mathsf{QCoh}([\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}])\) is symmetric monoidally equivalent to filtered \(\mathsf{k}\)-modules \(\mathsf{Fil}(\mathsf{Mod}_{\mathsf{k}})\) [10, (11)], \(\mathscr{O}_{[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}(1)\) corresponds to the abelian Lie algebra given by \(\mathsf{k}(1)\). It follows that

\[\mathsf{Map}^{\mathrm{inv}}_{\mathsf{FFGr},[\mathbb{A}^{1}_{\mathsf{k}}/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]}\big([(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\times\mathbb{A}^{1}_{\mathsf{k}})/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\,,\,[(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}}\times\mathbb{A}^{1}_{\mathsf{k}})/\mathbb{G}_{\mathrm{m}\,\mathsf{k}}]\big)\simeq\pi_{0}\,\mathsf{Map}^{\mathrm{inv}}_{\mathrm{Lie},\mathsf{Fil}(\mathsf{Mod}_{\mathsf{k}})}(\mathsf{k}(1),\mathsf{k}(1))=\mathsf{k}^{*}\]

In particular, the map (5) sends \(\lambda\in\mathsf{k}^{*}\) to \(\mathsf{exp}(\lambda.(-))\). In summary:

**Corollary 3.2**.: _When \(\mathsf{k}\) is a field with \(\mathrm{char}(\mathsf{k})=0\), the set of functorial multiplicative HKR equivalences simultaneously defined for all derived \(\mathsf{k}\)-schemes, splitting the HKR-filtration and matching the circle action to the de Rham differential, is the set of exponential maps, \(\mathrm{Hom}_{\mathsf{FG}}(\widehat{\mathbb{G}_{\mathsf{a}\,\mathsf{k}}},\widehat{\mathbb{G}_{\mathrm{m}\,\mathsf{k}}})\simeq\mathsf{k}^{*}\)._

**Acknowledgements:** I thank Mauro Porta, Bertrand Toen, Tasos Moulinos and Nick Rozenblyum for discussions about this short note.
2307.07459
Infall Motions in the Hot Core Associated with Hypercompact HII Region G345.0061+01.794 B
We report high angular resolution observations, made with the Atacama Large Millimeter Array in band 6, of high excitation molecular lines of $\rm CH_3CN$ and $\rm SO_2$ and of the H29$\alpha$ radio recombination line towards the G345.0061+01.794 B HC H II region, in order to investigate the physical and kinematical characteristics of its surroundings. Emission was detected in all observed components of the J=14$\rightarrow$13 rotational ladder of $\rm CH_3CN$ and in the $30_{4,26}-30_{3,27}$ and $32_{4,28}-32_{3,29}$ lines of $\rm SO_2$. The peak of the velocity integrated molecular emission is located $\sim$0$\,.\!\!^{\prime\prime}$4 northwest of the peak of the continuum emission. The first-order moment images and channel maps show a velocity gradient, of 1.1 km s$^{-1}$ arcsec$^{-1}$, across the source, and a distinctive spot of blueshifted emission towards the peak of the zero-order moment. The rotational temperature is found to decrease from 252$\pm$24 Kelvin at the peak position to 166$\pm$16 Kelvin at its edge, indicating that our molecular observations are probing a hot molecular core that is internally excited. The emission in the H29$\alpha$ line arises from a region of 0$\,.\!\!^{\prime\prime}$65 in size, where its peak coincides with that of the dust continuum. We model the kinematical characteristics of the "central blue spot" feature as due to infalling motions, suggesting a central mass of 172.8$\pm$8.8 $M_{\odot}$. Our observations indicate that this HC H II region is surrounded by a compact structure of hot molecular gas, which is rotating and infalling toward a central mass, that is most likely confining the ionized region. The observed scenario is reminiscent of a "butterfly pattern" with an approximately edge-on torus and ionized gas roughly parallel to its rotation axis.
Toktarkhan Komesh, Guido Garay, Christian Henkel, Aruzhan Omar, Robert Estalella, Zhandos Assembay, Dalei Li, Andrés Guzmán, Jarken Esimbek, Jiasheng Huang, Yuxin He, Nazgul Alimgazinova, Meiramgul Kyzgarina, Shukirgaliyev Bekdaulet, Nurman Zhumabay, Arailym Manapbayeva
2023-07-14T16:38:33Z
http://arxiv.org/abs/2307.07459v4
# The environments of hyper-compact H II regions. I. G345.0061+01.794 B

###### Abstract

We report high angular resolution observations, made with the Atacama Large Millimeter Array in band 6, of high excitation molecular lines of CH\({}_{3}\)CN and SO\({}_{2}\) and of the H29\(\alpha\) radio recombination line towards the G345.0061+01.794 B HC H II region, in order to investigate the physical and kinematical characteristics of its surroundings. Emission was detected in all observed components of the J=14\(\rightarrow\)13 rotational ladder of CH\({}_{3}\)CN and in the 30\({}_{4,26}-30_{3,27}\) and 32\({}_{4,28}-32_{3,29}\) lines of SO\({}_{2}\). The peak of the velocity integrated molecular emission is located \(\sim\)0.4'' northwest of the peak of the continuum emission. The first-order moment images and channel maps show a velocity gradient, of 1.1 km s\({}^{-1}\) arcsec\({}^{-1}\), across the source, and a distinctive spot of blueshifted emission towards the peak of the zero-order moment. We derived that the rotational temperature decreases from 230 Kelvin at the peak position to 137 Kelvin at its edge, indicating that our molecular observations are probing a hot molecular core that is internally excited. The emission in the H29\(\alpha\) line arises from a region of 0.65'' in size, whose peak is coincident with that of the dust continuum, and which has a center velocity of -18.1\(\pm\)0.9 km s\({}^{-1}\) and a width (FWHM) of 33.7\(\pm\)2.3 km s\({}^{-1}\). We modeled the kinematical characteristics of the "central blue spot" feature as due to infalling motions, deriving a central mass of 126.0\(\pm\)8.7 \(M_{\odot}\). Our observations indicate that this HC H II region is surrounded by a compact structure of hot molecular gas, which is rotating and infalling toward a central mass, that is most likely confining the ionized region.

keywords: ISM: molecules --ISM: clouds --ISM: cores -- stars: formation --stars: massive --ISM: kinematics and dynamics

## 1 Introduction

The formation of high-mass stars begins inside dense and massive molecular cores where high-mass protostellar objects accrete at rates between 10\({}^{-5}\) and 10\({}^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\) (Tan et al., 2014). These objects finish their Kelvin-Helmholtz (K-H) contraction very rapidly and reach the main sequence (Norberg and Maeder, 2000; Keto and Wood, 2006). At this point, the star radiates extreme ultraviolet (UV) photons that ionize its surroundings, producing very small regions of ionized gas, observationally characterized by sizes \(\lesssim\) 0.03 pc, densities n\({}_{e}>10^{6}\) cm\({}^{-3}\), and emission measures \(>10^{8}\) pc cm\({}^{-6}\) (Kurtz, 2000). These hyper-compact (HC) regions are thought to signpost an early stage of the evolutionary path of High-Mass Young Stellar Objects (HMYSOs). Theoretical calculations show that almost half of the mass of O-type stars is accreted after the K-H contraction and the onset of ionizing radiation (Hosokawa and Omukai, 2009; Zhang et al., 2014). How high-mass stars keep accreting despite the onset of the ionizing radiation is not well established. Theoretical works have shown that under steady spherical accretion, radiation pressure inhibits the growth of protostars.
An effective way to circumvent the radiation and ionized gas pressure is accretion from a disk, allowing the accreting material to reach the young high-mass stars much more easily by flowing inward, mainly through the plane perpendicular to the angular momentum vector of the system (Nakano, 1989; Kuiper et al., 2011). Accretion through a disk may only choke the ionized region near the disk plane, allowing for H II region development in the polar regions (Keto, 2007). In this scenario, an HC H II region should consist of an ionized biconical cavity confined by a rotating and contracting hot molecular core. How does accretion proceed after the onset of the ionizing radiation? How does the envelope material avoid being ionized and blown away by its own pressure? To answer these questions we undertook ALMA Band 6 observations towards a set of luminous embedded HMYSOs associated with HC H II regions, in order to simultaneously observe molecular emission in highly excited transitions of SO\({}_{2}\) and CH\({}_{3}\)CN and emission from the ionized gas in the H29\(\alpha\) hydrogen recombination line. These two molecules have been used to trace velocity gradients, indicative of rotation, towards hot molecular cores around luminous young high-mass stars in several cases (e.g., Beltran et al. 2014; Guzman et al. 2014; Sanchez-Monge et al. 2013). Our molecular observations are intended to assess whether or not HC H II regions are associated with rotating hot molecular cores on scales of 3000 AU, as well as to detect inflow motions from the surrounding gas. Our goal is to find evidence of disk accretion and to settle the question as to whether or not accretion onto the HMYSO is maintained after stellar contraction and UV photon injection.

In this work, we present the observations towards the HC H II region G345.0061+01.794 B (hereafter G345.01 B, Guzman et al. 2012) associated with IRAS 16533-4009. The distance to the source is 1.7 kpc (Urquhart et al. 2007). The Spitzer-GLIMPSE survey shows that it is associated with a bright compact MIR source prominent in the 4.5 \(\mu\)m band. The paper is organized as follows: in §2 we describe the observations performed with the Atacama Large Millimeter Array (ALMA); in §3 we present the observational results; in §4 we discuss the analysis of the data, including the physical relationship between the hot molecular core and the HC H II region; and in §5 we present a summary of the main points addressed in this paper.

## 2 Observations

We observed, using ALMA in Band 6 (256.3-259.6 GHz), dust continuum and molecular line emission towards the HC H II region G345.01 B. The observations were carried out, as part of ALMA Cycle 3, on 21 May 2016, using the 12-m array. The ALMA field of view at this wavelength is \(\sim 22\arcsec\), defined as the FWHM of the primary beam. The phase center of the array was (RA, Dec) (J2000) = (\(16^{h}56^{m}47.5^{s}\), -\(40^{\circ}14^{\prime}25.8\arcsec\)). We observed 4 spectral windows in dual polarization mode. The first window was centered at the frequency of 256.302035 GHz, with a bandwidth of 1875.00 MHz and a resolution of 1.129 MHz. This setup was chosen to map the H29\(\alpha\) radio recombination line (RRL) emission from the HC H II region. The second and third windows were centered, respectively, at the frequencies of 259.599448 and 258.388716 GHz, each with 234.38 MHz bandwidths and 488.281 kHz (\(\sim 0.564\) km s\({}^{-1}\)) channels. These two setups were chosen to observe the emission from the purported hot core in two high-excitation transitions of SO\({}_{2}\). The fourth window, centered at the frequency of 257.325000 GHz, has a bandwidth of 468.75 MHz and a resolution of 488.281 kHz. This setup was chosen to observe the emission of CH\({}_{3}\)CN, a good temperature probe of both the large scale diffuse gas and the small scale dense gas, in the J=14-13 ladder. Zapata et al. (2015) point out that, based on the analysis of IRAS 16547-4247, to detect emission from the inner regions of the rotating core it is important to use molecular transitions with high upper energy levels (\(>300\) Kelvin). The selected SO\({}_{2}\) transitions, 30\({}_{4,26}-30_{3,27}\) and 32\({}_{4,28}-32_{3,29}\), have upper level temperatures of 471 and 531 Kelvin, respectively, and those of the lines in the CH\({}_{3}\)CN 14-13 ladder range between 92 and 670 Kelvin. J1427-4206 was used as bandpass calibrator, J1717-3342 was used as phase calibrator, and J1617-5848 as flux calibrator. Table 1 lists the parameters of each spectral window, the synthesized beams and the rms noise achieved. The integration time on source was 35 minutes. Calibration and reduction of these data were done using the Common Astronomy and Software Applications package (CASA, McMullin et al. 2007).
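The quoted channel velocity widths follow from the non-relativistic Doppler relation \(\Delta v=c\,\Delta\nu/\nu\); a quick numerical check (mine, not from the paper):

```python
c = 2.998e5          # speed of light, km/s
nu = 259.599448e9    # centre frequency of the second spectral window, Hz
dnu = 488.281e3      # channel width, Hz
print(f"dv = {c * dnu / nu:.3f} km/s")  # 0.564 km/s, as quoted in the text
```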
## 3 Results

### Molecular emission

#### 3.1.1 CH\({}_{3}\)CN

Figure 1 presents the spectrum of the emission in the J=14\(\rightarrow\)13 rotational transition of CH\({}_{3}\)CN integrated over a region of 0.5\(\arcsec\) in size, centered on the G345.01 B HC H II region. This rotational transition consists of 14 \(K\) components (\(K\)=0, 1,...13; \(K\) being the projection of the total angular momentum of the molecule about its principal rotation axis), of which ten lie within the observed spectral window (red dotted lines). Their line frequencies, upper state energy levels and line strengths are given in Table 2. Figure 2 displays images of the zero-order moment (upper panels) and first-order moment (lower panels) of the emission in the K=2, 3, 4, 6, 7 and 8 components of the 14-13 ladder of CH\({}_{3}\)CN. Moments of the K=0, 1, 5, 9 components are not shown since they are blended with each other or with other molecular lines (see Fig. 1). Superimposed are contours of the continuum emission. The peak of the velocity integrated CH\({}_{3}\)CN emission is located \(\sim 0.4\arcsec\) northwest of the peak of the continuum emission. The first-order moment images show a velocity gradient from roughly east to west, with average velocities preferentially blueshifted on the West side and redshifted on the East side, and a spot of blueshifted emission towards the peak of the zero-order moment. The blue spot feature is present in all K components shown in Figure 2, confirming that its detection is a robust result. Fig. 3 presents channel maps of the emission in the K=3 component, which clearly exhibit the shift in velocity, from blueshifted velocities in the West to redshifted velocities in the East. In addition to the blue spot feature, the moment 1 images show a change in velocity across the source roughly along an east-west direction, with blueshifted velocities in the west and redshifted in the east.
This is illustrated in Fig. 4, which plots position-velocity diagrams of the emission in the K=0, 1, 2 and 3 transitions along a line with P.A. = 255\({}^{\circ}\) passing through (RA, Dec) (J2000) = (\(16^{h}56^{m}47.59^{s}\), -\(40^{\circ}14^{\prime}26.0\arcsec\)). There is a clear change in velocity across the source of 4.3 km s\({}^{-1}\) over a region of 3.8\(\arcsec\) (equivalent to 135 km s\({}^{-1}\) pc\({}^{-1}\) at the distance of 1.7 kpc). If this velocity gradient is due to gravitationally bound rotation, it implies a dynamical mass within a 0.016 pc radius of 66 \(M_{\odot}\), two times smaller than the mass of the central object as derived in Sec. 4.2. We conclude that the hot molecular gas is bound and rotating, and infalling towards the central object.
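The dynamical-mass figure is easy to reproduce; a minimal sketch assuming the simple bound-rotation convention \(M\approx\Delta V^{2}R/G\), with \(\Delta V\) the full velocity change (the authors' exact prescription is not spelled out in the text):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
Msun = 1.989e30     # solar mass, kg
pc = 3.0857e16      # parsec, m

dV = 4.3e3          # velocity change across the source, m/s
R = 0.016 * pc      # radius quoted in the text, m

M_dyn = dV**2 * R / G
print(f"M_dyn ~ {M_dyn / Msun:.0f} Msun")  # ~69 Msun, close to the quoted 66
```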
#### 3.1.2 \(\mathrm{SO_{2}}\)

In addition to the high excitation lines of SO\({}_{2}\) purposely targeted in this work, the spectral window of the RRL encompasses four other transitions of SO\({}_{2}\), all of which have low excitation temperatures. The transitions and their parameters are listed in Table 3; col. (2) gives the frequency, col. (3) the energy of the upper state, col. (4) the Einstein A coefficient, and col. (5) the statistical weight of the upper state. Figure 5 shows images of the velocity integrated intensity (upper panels) and intensity-weighted velocity (moment 1; lower panels) in all six observed SO\({}_{2}\) lines, in order of increasing excitation temperature. The peak position of the integrated intensity in SO\({}_{2}\) is similar to that in the CH\({}_{3}\)CN lines. The blue spot signature is also present in the SO\({}_{2}\) moment one maps.

Figure 1: Spectra of the methyl cyanide emission, integrated over a region of 0.5\({}^{\prime\prime}\), centered on the G345.01 B HC H II region. K components of the CH\({}_{3}\)CN 14-13 transition are marked with red lines (\(V_{\rm LSR}\)=-14 km s\({}^{-1}\)). The K=5 component of CH\({}_{3}\)CN is blended with a CH\({}_{3}\)OH line.

Figure 2: Images of the velocity integrated intensity (upper panels) and moment-one (lower panels) in the K=2, 3, 4, 6, 7 and 8 transitions toward the G345.01 B HC H II region. Superimposed are contours of the continuum emission. Contour levels are 5\(\sigma\), 10\(\sigma\), 20\(\sigma\), 50\(\sigma\), 100\(\sigma\), 200\(\sigma\), 400\(\sigma\) and 800\(\sigma\), where \(\sigma\) is 0.6 mJy/beam. The white dashed line shown in the lower left panel indicates the position of the PV cut mentioned in section 3.1.1. The white ellipse shown at the bottom right corner indicates the beam size.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline SPW & Center Freq. & Bandwidth & Vel. res. & Synthesized beam & rms noise \\ & (GHz) & (GHz) & (km s\({}^{-1}\)) & \({}^{\prime\prime}\) & mJy/beam \\ \hline H29\(\alpha\) & 256.302035 & 1.875 & 1.320 & 0.40\(\times\)0.52 & 2.7 \\ SO\({}_{2}\) v=0 30(4,26)-30(3,27) & 259.599448 & 0.23438 & 0.564 & 0.40\(\times\)0.51 & 4.4 \\ SO\({}_{2}\) v=0 32(4,28)-32(3,29) & 258.388716 & 0.23438 & 0.566 & 0.40\(\times\)0.51 & 4.2 \\ CH\({}_{3}\)CN v=0 14-13 ladder & 257.325000 & 0.46875 & 0.569 & 0.40\(\times\)0.52 & 5.5 \\ \hline \end{tabular} \end{table} Table 1: Observational parameters.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Transition & Frequency & \(E_{\rm u}\) & \(A_{\rm ul}\) & \(g_{\rm u}\) \\ & (GHz) & (Kelvin) & (10\({}^{-4}\) s\({}^{-1}\)) & \\ \hline \multicolumn{5}{c}{_High excitation lines_} \\ 30\({}_{4,26}-30_{3,27}\) & 259.5994 & 471.50 & 2.07 & 61 \\ 32\({}_{4,28}-32_{3,29}\) & 258.3887 & 531.10 & 2.1 & 65 \\ \multicolumn{5}{c}{_Low excitation lines_} \\ 3\({}_{3,1}-3_{2,2}\) & 255.95804 & 27.62 & 0.66 & 7 \\ 4\({}_{3,1}-4_{2,2}\) & 255.5533 & 31.29 & 0.93 & 9 \\ 5\({}_{3,3}-5_{2,4}\) & 256.24695 & 35.89 & 1.07 & 11 \\ 7\({}_{3,5}-7_{2,6}\) & 257.09997 & 47.84 & 1.22 & 15 \\ \hline \end{tabular} \end{table} Table 3: Parameters of the observed SO\({}_{2}\) transitions.

Table 2: Observed CH\({}_{3}\)CN J = 14 \(\rightarrow\) 13 rotational lines.

Figure 3: Channel maps for the K=3 component of the CH\({}_{3}\)CN 14-13 transition. The white ellipse shown in the lower right corner of the bottom left panel indicates the beam size.

### Ionized gas: H29\(\alpha\) RRL emission

Since at Band 6 frequencies the continuum is likely to be dominated by dust emission, RRLs become the most direct way to trace the ionized gas. Figure 6 left panel shows an image of the velocity integrated H29\(\alpha\) emission along with the dust continuum contours. The velocity range of integration is from -44 to 8 km s\({}^{-1}\). The position of the peak in the velocity integrated line emission, of 18.7 Jy/beam km s\({}^{-1}\), is coincident with that of the dust continuum. A Gaussian fit to the observed H29\(\alpha\) brightness distribution indicates that the HC H II region has a deconvolved angular size (FWHM) \(\theta_{\rm s}=\sqrt{0.75^{\prime\prime}\times 0.56^{\prime\prime}}\approx 0.65^{\prime\prime}\), corresponding to the geometrical mean of the deconvolved major and minor axes. At the distance of 1.7 kpc this implies a diameter of 0.0054 pc. Figure 6 right panel shows a spectrum of the H29\(\alpha\) RRL emission integrated over the source. A Gaussian fit to the line profile gives a linewidth of 33.7\(\pm\)2.3 km s\({}^{-1}\) and a line center velocity of \(-18.1\pm 0.9\) km s\({}^{-1}\). For optically thin ionized gas under local thermodynamic equilibrium (LTE) conditions, the electron temperature \(T_{e}^{*}\) can be derived from the expression (Gordon & Sorochenko, 2002):

\[T_{e}^{*}=\left[\left(\frac{6985}{a(\nu,T_{e})}\right)\left(\frac{\Delta V_{\rm H29\alpha}}{\rm km\,s^{-1}}\right)^{-1}\left(\frac{S_{\rm ff}}{\rm S_{\rm H29\alpha}}\right)\left(\frac{\nu}{\rm GHz}\right)^{1.1}\left(1+\frac{N({\rm He}^{+})}{N({\rm H}^{+})}\right)^{-1}\right]^{0.87} \tag{1}\]

where \(S_{\rm ff}\) is the free-free continuum flux density, \(S_{\rm H29\alpha}\) is the H29\(\alpha\) peak flux density, \(a(\nu,T_{e})\sim 1\) is a slowly varying function (Mezger & Henderson, 1967), and \(N({\rm He}^{+})/N({\rm H}^{+})\) is the He\({}^{+}\) to H\({}^{+}\) abundance ratio.
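Equation (1) translates directly into a small function; a sketch (the value of \(S_{\rm ff}\) adopted by the authors is not quoted in this text, so it is left as an input):

```python
def electron_temperature(S_ff_mJy, S_line_mJy=883.0, dV_kms=33.7,
                         nu_GHz=256.302, he_ratio=0.096, a=1.0):
    """LTE electron temperature T_e* from equation (1), in Kelvin."""
    return ((6985.0 / a) / dV_kms * (S_ff_mJy / S_line_mJy)
            * nu_GHz**1.1 / (1.0 + he_ratio)) ** 0.87

# The paper quotes T_e* = 8100 +/- 485 K for its estimated S_ff.
```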
The free-free flux density, \(S_{\rm ff}\), cannot be derived from the continuum emission at 256 GHz because of the contribution of dust emission at this frequency. We estimate it using the parameters of the UC H II region (EM and size) derived by Guzman et al. (2012) from a fit to the observed radio continuum spectra at lower frequencies. Using the measured \(S_{\rm H29\alpha}=883\) mJy and \(\Delta V_{\rm H29\alpha}=33.7\pm 2.3\) km s\({}^{-1}\), and adopting a value of 0.096 for the He\({}^{+}\) to H\({}^{+}\) abundance ratio (Mehringer 1994), we get an electron temperature \(T_{e}^{*}=8100\pm 485\) Kelvin.

Further parameters of the region of ionized gas can be computed using the equations presented in Mezger & Henderson (1967). Assuming that the H II region is spherical and homogeneous, and using the values of the continuum flux density at 256 GHz (327 mJy), the angular size (0.65''), electron temperature (8100 Kelvin) and distance (1.7 kpc), we determined an electron density of 2.4\(\times\)10\({}^{5}\) cm\({}^{-3}\), an emission measure of 4.7\(\times\)10\({}^{8}\) pc cm\({}^{-6}\), a mass of ionized gas of 1.5\(\times\)10\({}^{-3}\) \(M_{\odot}\), and a number of ionizing photons required to excite the H II region of 1.4\(\times\)10\({}^{47}\) s\({}^{-1}\).

## 4 Discussion

### Rotational Temperature of CH\({}_{3}\)CN

We used the population-diagram method (see Araya et al., 2005) to obtain the CH\({}_{3}\)CN rotation temperature (\(T_{\rm rot}\)), assuming LTE and low optical depths. The column density in the (J, K) state, \(N_{\rm JK}\), is given by

\[\left(\frac{N_{\rm JK}}{\rm cm^{-2}}\right)=1.67\times 10^{14}\,\frac{g_{\rm JK}}{S(J,K)}\,\frac{J}{(J^{2}-K^{2})}\left(\frac{\nu_{0}}{\rm GHz}\right)^{-1}\left(\frac{\mu}{\rm debye}\right)^{-2}\left(\frac{\int T_{\rm B}\,dv}{\rm K\,km\,s^{-1}}\right) \tag{2}\]

where \(g_{\rm JK}\) is the statistical weight of the state, \(S(J,K)\) is the spin weight degeneracy factor (see Boucher et al., 1980, equation A4), \(\nu_{0}\) is the frequency of the \((J,K)\rightarrow(J-1,K)\) transition, \(\mu\) = 3.91 debye, and \(T_{\rm B}\) is the brightness temperature. \(N_{\rm JK}\) and the total column density of CH\({}_{3}\)CN, \(N_{\rm CH_{3}CN}\), are related, through the Boltzmann equation, by the expression

\[\ln\left(\frac{N_{\rm JK}}{g_{\rm JK}}\right)=\ln\left[\frac{N_{\rm CH_{3}CN}}{Q_{\rm int}(T_{\rm rot})}\right]-\frac{E_{\rm JK}}{kT_{\rm rot}} \tag{3}\]

where \(T_{\rm rot}\) is the rotational temperature, \(E_{\rm JK}\) is the energy level of the \((J,K)\) state, and \(Q_{\rm int}\) is the partition function. If more than two transitions are observed, the rotational temperature can be derived from a least-squares linear fit of the \(\ln(N_{\rm JK}/g_{\rm JK})\) versus \(E_{\rm JK}/k\) data. The total column density can also be derived once the partition function is known, which for CH\({}_{3}\)CN is given by (Araya et al., 2005):

\[Q_{\rm int}(T_{\rm rot})=\frac{3.89\,T_{\rm rot}^{1.5}}{(1-e^{-524.8/T_{\rm rot}})^{2}} \tag{4}\]

where \(T_{\rm rot}\) is in Kelvin.
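The fitting step behind equations (2)-(4) is a simple least-squares problem; a minimal sketch with synthetic inputs (not the authors' pipeline):

```python
import numpy as np

def fit_rotational_diagram(E_JK_over_k, N_JK, g_JK):
    """Fit ln(N_JK/g_JK) = ln(N_tot/Q) - E_JK/(k T_rot), equation (3)."""
    y = np.log(N_JK / g_JK)
    slope, intercept = np.polyfit(E_JK_over_k, y, 1)
    T_rot = -1.0 / slope
    Q = 3.89 * T_rot**1.5 / (1.0 - np.exp(-524.8 / T_rot))**2  # equation (4)
    N_tot = np.exp(intercept) * Q
    return T_rot, N_tot

# Synthetic check: data drawn from T_rot = 230 K are recovered exactly.
E = np.array([100.0, 200.0, 300.0, 400.0])   # E_JK/k in Kelvin
g = np.array([58.0, 58.0, 116.0, 58.0])      # illustrative statistical weights
N = 1.0e12 * g * np.exp(-E / 230.0)
print(fit_rotational_diagram(E, N, g))       # (230.0, N_tot)
```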
Figure 7 displays rotational diagrams of the emission at the peak position (blue spot) and from five half-rings (since the emission is not radially symmetric) at different radial distances from the peak. The rings are centered at the peak position and have widths of 0.15\({}^{\prime\prime}\). The inner ring (R1) has an inner radius of 0.33\({}^{\prime\prime}\). The rotational temperature decreases outwards with distance from the blue spot (exciting star), being 230 Kelvin at the peak position and 137 Kelvin at the edge of the molecular (CH\({}_{3}\)CN) structure (\(\sim\)0.01 pc). Figure 8 plots the computed rotational temperature versus the projected distance from the blue spot. A power law fit to the observed dependence gives \(T_{\rm rot}=126\,r^{-0.44}\). This result suggests that the molecular gas is heated via collisional excitation with hot dust, which in turn is heated by the absorption of radiation emitted by the central star (Scoville & Kwan, 1976). Using expression (11) in Garay & Lizano (1999), we infer that the power-law index of dust emissivity at far infrared wavelengths, \(\beta\), is 0.55 and that the luminosity of the central object is 1.1\(\times\)10\({}^{4}\) L\({}_{\odot}\). This luminosity is in good agreement with the luminosity of the star ionizing the HC H II region as determined by Guzman et al. (2012) from radio continuum observations.

Figure 4: Position-velocity diagram of the K=0, 1, 2 and 3 components of the CH\({}_{3}\)CN 14-13 transition along an east-west direction at declination \(-40^{\circ}14^{\prime}25.6^{\prime\prime}\) (see Fig. 2). Contour levels are 10\(\%\), 20\(\%\), 30\(\%\), 40\(\%\), 50\(\%\), 60\(\%\), 70\(\%\), 80\(\%\) and 90\(\%\) of the peak value of 0.7 Jy/beam.

### Rotational Temperature of SO\({}_{2}\)

From the emission in the four low excitation lines of SO\({}_{2}\) we estimate the rotational temperature of the envelope using the standard rotational diagram analysis of Goldsmith & Langer (1999). Figure 9 plots \(\ln(\gamma W/g_{\rm u})\) versus \(E_{\rm u}\), where \(W\) is the velocity integrated intensity, \(\gamma_{\rm u}\) is equal to \(\frac{8\pi k\nu^{2}}{hc^{3}A_{\rm ul}}\), \(k\) is the Boltzmann constant, \(\nu\) is the transition frequency, \(h\) is the Planck constant, \(c\) is the speed of light, and \(A_{\rm ul}\) is the Einstein A-coefficient. A linear fit to the observed trend implies a temperature of 38.4 Kelvin.

### Infall motions

The blue spot feature mentioned above (see §3.1) is a clear signature of infall (Mayen-Gijon et al., 2014; Estalella et al., 2019). The central region of the first-order map appears blueshifted because the blueshifted emission, coming from gas close to the central stellar object and behind it, is stronger than the redshifted emission from gas farther away and in front of the stellar object. This asymmetry is produced when the optical depth is high enough so that at a given line-of-sight velocity the gas facing the observer hides the emission from the gas behind it (Anglada et al., 1987, 1991). At larger distances from the center, the integrated intensity decreases, the blue and redshifted intensities become similar, and the intensity-weighted mean velocity approaches the systemic velocity of the cloud. Therefore, the first-order moment of an infalling envelope is characterized by a compact spot of blueshifted emission toward the position of the zeroth-order moment peak.
In order to determine the infall velocity, central mass and infall radius we use the hallmark model of Estalella et al. (2019). The value of the first-order moment as a function of the angular distance was obtained for the unblended K components of the 14-13 transition of CH\({}_{3}\)CN by averaging the first-order moment in concentric rings of width 0.05\({}^{\prime\prime}\) centered on the average position of the peak of the blue spot, \(\alpha\)(J2000)=\(16^{h}56^{m}47.54^{s}\), \(\delta\)(J2000)=\(-40^{\circ}14^{\prime}25.879^{\prime\prime}\). The first-order moment profiles of the different components are presented in Figure 10. They seem to fall into two separate groups, with the \(\mu\) values for the K=2, 3, 4 components being higher than those of the K=6, 7, 8 components, especially near the peak position. However, the K=7 component follows the K=2, 3, 4 components beyond 0.5\({}^{\prime\prime}\) from the peak. The difference is likely due to the higher K lines probing the hotter gas close to the HC H II region. The best fit is obtained for an infall radius much larger than the beam size, an ambient gas velocity of \(-12.46\pm 0.16\) km s\({}^{-1}\), and a central mass of 126.0\(\pm\)8.7 \(M_{\odot}\). As we discussed above, the position of the central blue spot is located on the SW side of the HC H II region. Regarding the first-order moment map, Mayen-Gijon et al. (2014) explore how the central-blue-spot hallmark of an infalling core is modified by the presence of rotation. They find that rotation makes the central blue spot even bluer and moves it off the center toward the half of the core where rotation tends to shift velocities to the blue. The clear detection of the "central blue spot" signature in the G345.0061+01.794 B HC H II region indicates that infall motions play a fundamental role in the gas kinematics of this source.

Figure 6: Left panel: an image of the velocity integrated H29\(\alpha\) RRL emission along with the dust continuum contours. Right panel: a spectrum of the H29\(\alpha\) RRL emission integrated over the source. A Gaussian fit to the line profile gives a linewidth of 33.7\(\pm\)2.3 km s\({}^{-1}\) and a line center velocity of \(-18.1\pm\)0.9 km s\({}^{-1}\).

## 5 Summary

We carried out high angular resolution observations, using ALMA, of emission in highly excited molecular lines of CH\({}_{3}\)CN and SO\({}_{2}\) and in the H29\(\alpha\) radio recombination line towards the G345.0061+01.794 B HC H II region. The main results and conclusions are summarized as follows:

(i) Emission was detected in all ten observed K components of the J=14\(\rightarrow\)13 rotational ladder of CH\({}_{3}\)CN and in the 30\({}_{4,26}-30_{3,27}\) and 32\({}_{4,28}-32_{3,29}\) lines of SO\({}_{2}\). The peak of the velocity integrated molecular line intensity is located slightly NW (about 0.4'') of the peak of the continuum emission.

Figure 8: Rotational temperature versus projected distance from the blue spot. A power law fit to the observed dependence gives \(T_{\rm rot}=126\,r^{-0.44}\).

Figure 7: Rotational diagrams derived from the peak position (blue spot), and in half-rings going from the peak position towards the South-East (diameters at angles of \(-45^{\circ}\)) with different radii (R\({}_{1}\) to R\({}_{3}\)) from the peak position. The rings are centered at the peak position and have widths of 0.15\({}^{\prime\prime}\). The inner ring (R1) has an inner radius of 0.33\({}^{\prime\prime}\). The rotational temperature decreases from the peak position to the edge of the source (\(\sim\)1\({}^{\prime\prime}\)) from 230 to 137 Kelvin.

Figure 9: Rotational diagram using the emission in the low excitation lines of SO\({}_{2}\) integrated over the whole source. A linear fit to the observed trend implies a temperature of 38.4 Kelvin.
(ii) The first-order moment images of the molecular emission show a central spot of blueshifted emission, with respect to the systemic velocity of the cloud, located at the peak of the zero-order moment, seen in all K components of CH\({}_{3}\)CN and in the SO\({}_{2}\) lines.

(iii) Rotational diagrams of the emission in the methyl cyanide lines show that the rotational temperature has a peak value of 230 Kelvin at the position of the blue spot and decreases outwards, reaching a value of 137 Kelvin at the edge (\(\sim\)1\({}^{\prime\prime}\)) of the molecular structure, indicating that our observations are probing a hot molecular core that is internally excited. In addition, from the emission in the four low excitation lines of SO\({}_{2}\) we estimate a rotational temperature of the envelope of 38.4 Kelvin.

(iv) The first-order moment images and channel maps of the molecular emission also show a velocity gradient from roughly east to west, with average velocities preferentially blueshifted on the West side and redshifted on the East side. The change in velocity amounts to 4.3 km s\({}^{-1}\) over a region of 3.8\({}^{\prime\prime}\) (equivalent to 135 km s\({}^{-1}\) pc\({}^{-1}\) at the distance of 1.7 kpc).

(v) Emission was detected in the H29\(\alpha\) line, having a line center velocity of \(-18.1\pm\)0.9 km s\({}^{-1}\) and a linewidth (FWHM) of 33.7\(\pm\)2.3 km s\({}^{-1}\). The position of the peak in the velocity integrated emission is coincident with that of the dust continuum. The radio recombination line observations indicate that the ionized gas emission arises from a region having a radius of 0.0027 pc, a mass of ionized gas of 1.5\(\times\)10\({}^{-3}\) \(M_{\odot}\), an electron temperature of 8100\(\pm\)485 Kelvin, an emission measure of 4.7\(\times\)10\({}^{8}\) pc cm\({}^{-6}\), and an electron density of 2.4\(\times\)10\({}^{5}\) cm\({}^{-3}\).

(vi) We modeled the kinematical characteristics of the "central blue spot" feature as due to infalling motions, deriving a central mass of 126.0\(\pm\)8.7 \(M_{\odot}\). We conclude that this HC H II region is surrounded by a compact structure of hot molecular gas, which is rotating and infalling toward a central mass of 126.0\(\pm\)8.7 \(M_{\odot}\), that is most likely confining the region of ionized gas.

## Acknowledgements

This research was funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant Nos. AP13067768 and AP14870504) and sponsored (in part) by the Chinese Academy of Sciences (CAS), through a grant to the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile. GG acknowledges support from ANID BASAL project FB210003. RE acknowledges partial financial support from the grants PID2020-117710GB-I00 and CEX2019-000918-M funded by MCIN/AEI/10.13039/501100011033. JE acknowledges support from the National Key R&D Program of China under grant No. 2022YFA1603103 and the Regional Collaborative Innovation Project of Xinjiang Uyghur Autonomous Region grant 2022E01050. DL acknowledges support from the National Natural Science Foundation of China (NSFC) through grant No. 12173075 and support from the Youth Innovation Promotion Association CAS. YH acknowledges support from the CAS "Light of West China" Program under grant No.
2020-XBQNXZ-017 and the Xinjiang Key Laboratory of Radio Astrophysics under grant No. 2023D04033. ## Data Availability The data underlying this article are available in the article.
2306.09985
Parametrisation of decorated Margulis spacetimes using strip deformations
Margulis spacetimes are complete affine 3-manifolds that were introduced to show that the cocompactness condition of Auslander's conjecture is necessary. These are Lorentzian manifolds that are obtained as a quotient of the three dimensional Minkowski space by a non-abelian free group acting properly discontinuously by affine isometries. Goldman-Labourie-Margulis showed that such a group is determined by a complete hyperbolic metric on a possibly non-orientable finite-type hyperbolic surface together with an infinitesimal deformation of this metric that uniformly lengthens all non-trivial closed curves on the surface. Furthermore, the set of all such infinitesimal deformations forms an open convex cone. Danciger-Gu\'eritaud-Kassel parametrised the moduli space of Margulis spacetimes, with a fixed convex cocompact linear part, using the pruned arc complex. The parametrisation is done by gluing infinitesimal hyperbolic strips along a family of embedded, pairwise disjoint arcs of the hyperbolic surface that decompose it into topological disks. We generalise this result to complete finite-area hyperbolic surfaces with spikes decorated with horoballs. These are closely related to Margulis spacetimes decorated with finitely many pairwise disjoint affine light-like lines, called photons.
Pallavi Panda
2023-06-16T17:28:54Z
http://arxiv.org/abs/2306.09985v2
# Parametrisation of decorated Margulis spacetimes using strip deformations

Pallavi Panda

**Abstract.** Margulis spacetimes are complete affine 3-manifolds that were introduced to show that the cocompactness condition of Auslander's conjecture is necessary. These are Lorentzian manifolds that are obtained as a quotient of the (2,1)-Minkowski space by a free group acting properly discontinuously by affine isometries. Goldman-Labourie-Margulis showed that such a group is determined by a complete hyperbolic metric on a possibly non-orientable finite-type hyperbolic surface together with an infinitesimal deformation of this metric that uniformly lengthens all non-trivial closed curves on the surface. Furthermore, the set of all such infinitesimal deformations forms an open convex cone. Danciger-Guéritaud-Kassel parametrised the moduli space of Margulis spacetimes, with a fixed convex cocompact linear part, using the pruned arc complex. The parametrisation is done by gluing infinitesimal hyperbolic strips along a family of embedded, pairwise disjoint geodesic arcs of the hyperbolic surface that decompose it into topological disks. We generalise this result to complete finite-area hyperbolic surfaces with spikes decorated with horoballs, which are closely related to Margulis spacetimes decorated with finitely many pairwise disjoint affine light-like lines.

## 1 Introduction

### Historical context

Bieberbach proved (1910-1912) that any group \(\Gamma\) of affine isometries of the \(n\)-dimensional Euclidean space \(\mathbb{R}^{n}\) that acts properly discontinuously on \(\mathbb{R}^{n}\) contains a finite-index subgroup isomorphic to \(\mathbb{Z}^{m}\) for some \(m\leq n\). Moreover, the quotient \(\mathbb{R}^{n}/\Gamma\) is compact if and only if \(m=n\). In 1964, Auslander proposed the following conjecture:

**Conjecture 1** (Auslander).: If \(\Gamma\subset\operatorname{GL}(n,\mathbb{R})\ltimes\mathbb{R}^{n}\) is a finitely generated group that acts on \(\mathbb{R}^{n}\) properly discontinuously and cocompactly, then \(\Gamma\) is virtually solvable.

The conjecture has been proven to be true up to \(n=6\). In 1977, Milnor asked the following question [14]:

**Q:** Is the conjecture true if the cocompactness condition is dropped?

Meanwhile, in 1972, Tits proved the following:

**Theorem** (Tits alternative).: _Let \(\Gamma\subset\operatorname{GL}(n,\mathbb{F})\) be a finitely generated group, where \(\mathbb{F}\) is a field. Then \(\Gamma\) is either virtually solvable or it contains a free group of rank \(>1\)._

Hence the Tits alternative implies that the answer to Milnor's question is negative if and only if there exists a properly discontinuous affine action of a free group (of rank \(>1\)).

Margulis spacetimes.In 1983, Margulis came up with examples for \(n=3\). These were complete non-compact Lorentzian manifolds, called _Margulis spacetimes_, obtained as a quotient of the (2,1)-Minkowski space \(\mathbb{R}^{2,1}\) by a free group \(\Gamma\), acting properly discontinuously by orientation-preserving affine isometries. The group of orientation-preserving affine isometries of \(\mathbb{R}^{2,1}\) is given by \(\mathrm{SO}(2,1)\ltimes\mathbb{R}^{3}\). The special orthogonal group \(\mathrm{SO}(2,1)\) is the isometry group of the two-dimensional hyperbolic space \(\mathbb{H}^{2}\), which lies in the Minkowski space. Its Lie algebra \(\mathfrak{so}_{2,1}\), equipped with its Killing form, is isomorphic to \(\mathbb{R}^{2,1}\).
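This identification can be made concrete: under the explicit matrix map \(\mathbf{v}=(x,y,z)\mapsto V=\begin{pmatrix}y&x+z\\ x-z&-y\end{pmatrix}\) recalled in Section 2.3 below, the Minkowski norm becomes \(\|\mathbf{v}\|^{2}=-\det V=\tfrac{1}{2}\mathrm{tr}(V^{2})\). The following minimal numerical check is ours, not the paper's; the function names are placeholders.

```python
import numpy as np

def to_matrix(v):
    # The map (x, y, z) -> [[y, x+z], [x-z, -y]] from Section 2.3,
    # identifying R^{2,1} with the traceless 2x2 real matrices.
    x, y, z = v
    return np.array([[y, x + z],
                     [x - z, -y]])

def minkowski_norm_sq(v):
    # Quadratic form of signature (2,1): x^2 + y^2 - z^2.
    x, y, z = v
    return x**2 + y**2 - z**2

# One space-like, one light-like and one time-like test vector.
for v in [(1.0, 2.0, 0.5), (1.0, 0.0, 1.0), (0.0, 0.0, 1.0)]:
    V = to_matrix(v)
    # ||v||^2 = -det(V) = tr(V^2)/2, so the Minkowski form agrees (up to
    # a positive factor) with the Killing form under the identification.
    assert np.isclose(minkowski_norm_sq(v), -np.linalg.det(V))
    assert np.isclose(minkowski_norm_sq(v), np.trace(V @ V) / 2)
print("norm identities verified")
```

In particular, the sign of \(\det V\) detects whether \(\mathbf{v}\) is space-like, light-like or time-like.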
The linear action of \(\mathrm{SO}(2,1)\) on \(\mathbb{R}^{2,1}\) coincides with its adjoint action on \(\mathfrak{so}_{2,1}\). Consequently, the tangent bundle \(\mathrm{T}(\mathrm{SO}(2,1))\) is isomorphic to \(\mathrm{SO}(2,1)\ltimes\mathbb{R}^{3}\). We shall denote by \(G\) the isomorphic groups \(\mathrm{SO}(2,1)\), \(\mathrm{PGL}(2,\mathbb{R})\) and by \(\mathfrak{g}\) their Lie algebra.

Abels-Margulis-Soifer [1], [2] generalised these examples to higher dimensions by showing the existence of properly discontinuous actions of non-abelian free subgroups of \(\mathrm{SO}^{0}(n-1,n)\), with \(n\) even. They also showed that such actions cannot exist when \(n\) is odd. Margulis introduced an invariant (see Section 2.6), which was later named after him, to detect whether a given action on \(\mathbb{R}^{2,1}\) is proper. This invariant can be interpreted as the first length derivative of a closed geodesic. He proves (_Opposite Sign Lemma_) that the sign of the Margulis invariant must remain constant for a proper action.

### Recent developments

Affine deformations.Consider the representation \(\rho_{0}:\Gamma\hookrightarrow G\ltimes\mathfrak{g}\simeq\mathrm{T}(G)\) of a discrete, not virtually solvable group \(\Gamma\) acting properly discontinuously on \(\mathbb{R}^{2,1}\), like in the examples of Margulis. Fried and Goldman [9] proved that by projecting \(\Gamma\) onto its first coordinate, we virtually get the holonomy representation \(\rho:\pi_{1}(S)\to G\) of a finite-type complete hyperbolic surface \(S\). The projection onto the second coordinate \(u:\Gamma\to\mathfrak{g}\) is a \(\rho\)-cocycle: for every \(\gamma,\gamma^{\prime}\in\Gamma\), \(u\) satisfies \(u(\gamma\gamma^{\prime})=u(\gamma)+\rho(\gamma)\cdot u(\gamma^{\prime})\). It gives an infinitesimal deformation of \(\rho\). The group \(\Gamma\) can thus be written as \(\Gamma^{(\rho,u)}:=\{(\rho(\gamma),u(\gamma))\mid\gamma\in\pi_{1}(S)\}\), which gives an _affine deformation_ of \(\rho\).

In the paper [10], the authors Goldman, Labourie and Margulis have studied affine deformations of free, discrete subgroups of \(G\). An infinitesimal deformation \(u\) of \(\rho\) is said to be _proper_ if \(\Gamma^{(\rho,u)}\) acts properly discontinuously on \(\mathbb{R}^{2,1}\). It has been proved in the paper [10] that for \(\rho\) convex cocompact, the corresponding \(u\) is proper if and only if \(u\) or \(-u\) uniformly lengthens all closed geodesics:

\[\inf_{\gamma\in\Gamma\smallsetminus\{Id\}}\frac{\mathrm{d}l_{\gamma}(\rho)(u)}{l_{\gamma}(\rho)}>0 \tag{1}\]

where \(l_{\gamma}\) is the length function. In both cases, the set of all such infinitesimal deformations forms an open, convex cone; the cone corresponding to the first case is called the _admissible cone_. This was proved by using a diffused version of the Margulis invariant, which now measures the variations of geodesic currents.

Strip deformations of compact surfaces.In the paper [5], the authors Danciger-Guéritaud-Kassel study admissible deformations of finite-type hyperbolic surfaces with non-empty boundary and without punctures, using _strip deformations_, first introduced by Thurston in [21]. A strip is the region in \(\mathbb{H}^{2}\) bounded by two geodesics whose closures are disjoint. An arc on such a surface \(S\) is an embedding of \([0,1]\) into \(S\) with its endpoints on its boundary \(\partial S\) such that it is not isotopic to a part of the boundary.
Using the isotopy classes of these arcs, one can construct a simplicial complex called the _arc complex_, which depends only on the topology of the surface. A \(k\)-simplex of this complex is generated by the isotopy classes of a family of \(k+1\) pairwise disjoint and distinct arcs. The pruned arc complex is a subspace formed by taking the union of the interiors of all those simplices \(\sigma\) such that the arcs corresponding to the \(0\)-skeleton of \(\sigma\) decompose the surface into topological disks. A strip deformation is the process of cutting the surface along an embedded arc and gluing in a strip, without shearing. The authors uniquely realised an admissible deformation of the surface by performing strip deformations along positively weighted arcs, corresponding to a point in the pruned arc complex.

Drumm [6] constructed fundamental domains of some Margulis spacetimes with a convex cocompact linear part using specially crafted piecewise linear surfaces called _crooked planes_. The Crooked Plane Conjecture says that every Margulis spacetime is amenable to such a treatment. Charette, Drumm and Goldman proved this conjecture for rank two free groups in [4]. The general case (with convex cocompact linear part) follows from [5], which provides a dictionary between strip deformations and crooked planes. More generally, Smilga [20] gave a construction of fundamental domains for the action of the Abels-Margulis-Soifer subgroups mentioned above.

### Main results of the paper

Surfaces with decorated spikes.Crowned surfaces are complete non-compact surfaces that arise as limits of compact surfaces with convex polygonal boundary where the vertices (called _spikes_) become ideal. McShane [13] has determined the orthospectrum of a one-holed polygon, and Parlier and Pournin [16] have studied the diameters of their flip graphs. In [8], Fomin-Shapiro-Thurston study the arc complexes of these surfaces.

Our main aim is to generalise the parametrisation result of Danciger-Guéritaud-Kassel to crowned hyperbolic surfaces whose spikes are decorated with pairwise disjoint horoballs. These decorated surfaces were first introduced by Penner in his study of Decorated Teichmüller Theory [18], [17]. A _horoball connection_ is a geodesic arc on the surface that joins two decorated spikes. Its length is given by the geodesic segment intercepted by the two horoballs decorating its endpoints. Penner defined the _lambda length_ of a horoball connection and used these lengths to obtain coordinates on the decorated Teichmüller space. The lambda lengths were one of the primary motivations for the development of the field of cluster algebras. Furthermore, using the arc complex, he gave a cell-decomposition of the decorated Teichmüller space of the surface.

We define the _admissible cone_ of a hyperbolic surface with decorated spikes to be the set of all infinitesimal deformations that uniformly lengthen every horoball connection and every closed geodesic. More precisely, every element \((m,v)\) in the tangent space over a decorated metric \(m\) satisfies:

\[\inf_{\beta\in\mathcal{H}}\frac{\mathrm{d}l_{\beta}(m)(v)}{l_{\beta}(m)}>0,\]

where \(\mathcal{H}\) is the set of all horoball connections and closed geodesics. On the decorated surface, we consider more arcs than in the compact case.
In addition to the arcs already mentioned, we allow two new types: finite arcs that are isotopic to a horoball neighbourhood of a spike, and infinite arcs that are embeddings of \([0,\infty)\) such that the finite end is on the boundary and the infinite end converges to a spike. This time the pruned arc complex is defined to be the subspace of the arc complex formed by taking the union of all those simplices \(\sigma\) such that the arcs corresponding to the \(0\)-skeleton of \(\sigma\) decompose the surface into topological disks with at most one decorated spike. We prove that the pruned arc complex is an open ball for the crowned surfaces and for the decorated surfaces with spikes, using Harer's result [11].

The strip added along an infinite arc is the region in \(\mathbb{H}^{2}\) bounded by two geodesics with the spike as the common endpoint. A strip deformation along a finite arc is defined as in the previous case. The main result of this paper gives a parametrisation of the admissible cone of a surface with decorated spikes using the strip deformations:

**Theorem**.: _Let \(S^{\,\odot}\) be a hyperbolic surface with decorated spikes equipped with a decorated metric \(m\in\mathfrak{D}(S^{\,\odot})\). Let \(\widehat{\mathcal{A}}(S^{\,\odot})\) be its pruned arc complex. Choose \(m\)-geodesic representatives from the isotopy classes of arcs._

_Then, the projectivised infinitesimal strip map \(\mathbb{P}f:\widehat{\mathcal{A}}(S^{\,\odot})\longrightarrow\mathbb{P}^{+}(T_{m}\mathfrak{D}(S^{\,\odot}))\) is a homeomorphism onto its image \(\mathbb{P}^{+}(\Lambda(m))\), where \(\Lambda(m)\) denotes the admissible cone over \(m\)._

In [15], we proved the above parametrisation result for the decorated ideal polygons and decorated once-punctured polygons. Their arc complexes are finite, contrary to the bigger surfaces, so some of the methods used in the proofs there are different from those in this paper.

Decorated Margulis Spacetimes.We interpret admissible deformations of surfaces with decorated spikes as Margulis spacetimes with a certain type of decoration by pairwise disjoint lightlike lines (photons), one photon per decorated spike. In this context, the above theorem provides fundamental domains of the Margulis spacetimes, adapted to the photons. We have the following theorem:

**Theorem**.: _Let \(S^{\,\odot}\) be a hyperbolic surface with decorated spikes and let \(\rho:\pi_{1}(S^{\,\odot})\rightarrow\mathrm{PGL}(2,\mathbb{R})\) be a holonomy representation. Let \(\mathcal{M}^{\,\odot}\) be the space of all decorated Margulis spacetimes with convex cocompact linear part \(\rho\). Then there is a bijection \(\Psi:\widehat{\mathcal{A}}(S^{\,\odot})\rightarrow\mathcal{M}^{\,\odot}\)._

In the proof we see that the photons in the decoration must all have the same _handedness_ (see Section 7.2.2), which translates to a certain quantity having the same sign for every pair of photons. This can be viewed as analogous to the Opposite Sign Lemma of Margulis.

The paper is structured into sections in the following way: Sections 2-3 recapitulate the necessary vocabulary from hyperbolic, Lorentzian and projective geometry, and introduce every type of surface mentioned above along with their deformation spaces and admissible cones. In Section 4, we discuss the arcs and the arc complexes of the different types of surfaces and study their topology. Section 5 gives the definitions of the various strip deformations along the different types of arcs and some estimations that will be required in the proofs.
We also give a recap of the main steps of the proof of their main result in [5]. Section 6 contains the proof of our main parametrisation theorem for surfaces with decorated spikes. Finally, in Section 7 we introduce decorated Margulis spacetimes and give their parametrisation as a corollary.

Acknowledgements.This work was done during my PhD at Université de Lille from 2017-2020, funded by the AMX scholarship of École Polytechnique. I would like to thank my thesis advisor François Guéritaud for his valuable guidance and extraordinary patience. I am grateful to my thesis referees Hugo Parlier and Virginie Charette for their helpful remarks. I am also grateful to Université du Luxembourg for funding my postdoctoral research (Luxembourg National Research Fund OPEN grant O19/13865598).

## 2 Preliminaries

In this section we recall the necessary vocabulary and notions and also prove some results in hyperbolic geometry that will be used in the rest of the paper.

### Minkowski space \(\mathbb{R}^{2,1}\)

**Definition 2.1**.: The _Minkowski space_ \(\mathbb{R}^{2,1}\) is the affine space \(\mathbb{R}^{3}\) endowed with the quadratic form \(\|\cdot\|^{2}\) of signature \((2,1)\):

\[\text{for }v=(x_{1},x_{2},x_{3})\in\mathbb{R}^{3},\quad\|v\|^{2}=x_{1}^{2}+x_{2}^{2}-x_{3}^{2}.\]

There is the following classification of points in the Minkowski space: a non-zero vector \(\mathbf{v}\in\mathbb{R}^{2,1}\) is said to be

* _space-like_ if and only if \(\|\mathbf{v}\|^{2}>0\),
* _light-like_ if and only if \(\|\mathbf{v}\|^{2}=0\),
* _time-like_ if and only if \(\|\mathbf{v}\|^{2}<0\).

A vector \(\mathbf{v}\) is said to be _causal_ if it is time-like or light-like. A causal vector \(\mathbf{v}=(x,y,z)\) is called _positive_ (resp. _negative_) if \(z>0\) (resp. \(z<0\)). Note that by definition of the norm, every causal vector is either positive or negative. The set of all light-like points forms the _light-cone_, denoted by

\[C:=\{\mathbf{v}=(x,y,z)\in\mathbb{R}^{2,1}\mid x^{2}+y^{2}-z^{2}=0\}.\]

The _positive_ (resp. _negative_) cone is defined as the set of all positive (resp. negative) light-like vectors.

Subspaces.A vector subspace \(W\) of \(\mathbb{R}^{2,1}\) is said to be

* _space-like_ if \(W\cap C=\{(0,0,0)\}\),
* _light-like_ if \(W\cap C=\operatorname{span}\{\mathbf{v}\}\) where \(\mathbf{v}\) is light-like,
* _time-like_ if \(W\) contains at least one time-like vector.

A subspace of dimension one is going to be called a line and a subspace of dimension two a plane. The adjective "affine" will be added before the words "line" and "plane" when we are referring to some affine subspace of the corresponding dimension.

Duals.Given a vector \(\mathbf{v}\in\mathbb{R}^{2,1}\), its dual with respect to the bilinear form of \(\mathbb{R}^{2,1}\) is denoted \(\mathbf{v}^{\perp}\). For a light-like vector \(\mathbf{v}\), the dual is given by the light-like hyperplane tangent to \(C\) along \(\operatorname{span}\{\mathbf{v}\}\). For a space-like vector \(\mathbf{v}\), the dual is given by the time-like plane that intersects \(C\) along two light-like lines, respectively generated by two light-like vectors \(\mathbf{v_{1}}\) and \(\mathbf{v_{2}}\) such that \(\operatorname{span}\{\mathbf{v}\}=\mathbf{v_{1}}^{\perp}\cap\mathbf{v_{2}}^{\perp}\). Finally, the dual of a time-like vector \(\mathbf{v}\) is given by a space-like plane. One way to construct it is to take two time-like planes \(W_{1},W_{2}\) passing through \(\mathbf{v}\).
Then the space \(\mathbf{v}^{\perp}\) is the vectorial plane containing the space-like lines \(W_{1}^{\perp}\) and \(W_{2}^{\perp}\).

### The different models of the hyperbolic 2-space

In this section we recall some vocabulary and introduce notations related to the different models of the hyperbolic plane that will be used in the calculations and proofs later.

Hyperboloid model.The classical hyperbolic space of dimension two \(\mathbb{H}^{2}\) can be identified with the upper sheet of the two-sheeted hyperboloid \(\{\mathbf{v}=(x,y,z)\in\mathbb{R}^{2,1}\mid\|\mathbf{v}\|^{2}=-1\}\), along with the restriction of the bilinear form. It is the unique (up to isometry) complete simply-connected Riemannian 2-manifold of constant curvature equal to -1. Its isometry group is isomorphic to \(\operatorname{SO}(2,1)\), and the identity component \(\operatorname{SO}^{0}(2,1)\) of this group forms the group of its orientation-preserving isometries; they preserve each of the two sheets of the hyperboloid individually. If the hyperbolic distance between two points \(\mathbf{u},\mathbf{v}\in\mathbb{H}^{2}\) is denoted by \(d_{\mathbb{H}^{2}}(\mathbf{u},\mathbf{v})\), then \(\cosh d_{\mathbb{H}^{2}}(\mathbf{u},\mathbf{v})=-\langle\mathbf{u},\mathbf{v}\rangle\). The geodesics of this model are given by the intersections of time-like hyperplanes with \(\mathbb{H}^{2}\).

Klein's disk model.This model is the projectivisation of the hyperboloid model. Let \(\mathbb{P}:\mathbb{R}^{2,1}\smallsetminus\{\mathbf{0}\}\longrightarrow\mathbb{RP}^{2}\) be the projectivisation of the Minkowski space. The projective plane \(\mathbb{RP}^{2}\) can be considered as the set \(A\cup\mathbb{RP}^{1}\), where \(A:=\{(x,y,1)\mid x,y\in\mathbb{R}\}\) is an affine chart and the one-dimensional projective space represents the line at infinity, denoted by \(\overleftrightarrow{l_{\infty}}\). The \(\mathbb{P}\)-image of a point \(\mathbf{v}\in\mathbb{R}^{2,1}\) is denoted by \([\mathbf{v}]\). A line in \(A\), denoted by \(\overleftrightarrow{l}\), is defined as \(A\cap V\) where \(V\) is a two-dimensional vector subspace of \(\mathbb{R}^{2,1}\), not parallel to \(A\).

In the affine chart \(A\), the light cone is mapped to the unit circle and the hyperboloid is embedded onto its interior. This is the Klein model of the hyperbolic plane; its boundary \(\partial_{\infty}\mathbb{H}^{2}\) is the unit circle. This model is non-conformal. The geodesics are given by open finite Euclidean straight line segments, denoted by \(l\), lying inside \(\mathbb{H}^{2}\), such that the endpoints of the closed segment \(\tilde{l}\) lie on \(\partial_{\infty}\mathbb{H}^{2}\). The distance metric is given by the Hilbert metric \(d_{\mathbb{H}^{2}}(w_{1},w_{2})=\frac{1}{2}\log[p,w_{1};w_{2},q]\), where \(p\) and \(q\) are the endpoints of \(\tilde{l}\), \(l\) being the unique hyperbolic geodesic passing through \(w_{1},w_{2}\in\mathbb{H}^{2}\), and the cross-ratio \([a,b;c,d]\) is defined as \(\frac{(c-a)(d-b)}{(b-a)(d-c)}\). The group of orientation-preserving isometries is identified with \(\mathrm{PSU}(1,1)\).

A point \(p\) is called _real_ (resp. _ideal_, _hyperideal_) if \(p\in\mathbb{H}^{2}\) (resp. \(p\in\partial_{\infty}\mathbb{H}^{2}\), \(p\in\overleftrightarrow{l_{\infty}}\cup(A\smallsetminus\overline{\mathbb{H}^{2}})\)). The dual of \(\overleftrightarrow{l_{\infty}}\) is the point \((0,0,1)\) in \(A\). The dual of any other projective line \(\overleftrightarrow{l}=A\cap V\) is given by the point \(A\cap V^{\perp}\).
The dual \(p^{\perp}\) of a point \(p\in\mathbb{RP}^{2}\) is the projective line \(A\cap\mathrm{span}\,\{p\}^{\perp}\). If \(l\) is a hyperbolic geodesic, then \(l^{\perp}\) is defined to be \(\overleftrightarrow{l}^{\perp}\); it is given by the intersection point in \(\mathbb{RP}^{2}\) of the two tangents to \(\partial_{\infty}\mathbb{H}^{2}\) at the endpoints of \(\tilde{l}\).

_Notation:_ We shall use the symbol \(\cdot^{\perp}\) for referring to the duals of both linear subspaces as well as their projectivisations.

Upper Half-plane Model.The subset \(\{z=x+iy\in\mathbb{C}\mid y>0\}\) of the complex plane is the upper half-plane model of the hyperbolic space of dimension 2. The geodesics are given by semi-circles whose centres lie on \(\mathbb{R}\) or straight lines that are perpendicular to \(\mathbb{R}\). We shall call the former _horizontal_ and the latter _vertical_ geodesics. The boundary at infinity \(\partial_{\infty}\mathbb{H}^{2}\) is given by \(\mathbb{R}\cup\{\infty\}\). The orientation-preserving isometry group is given by \(\mathrm{PSL}(2,\mathbb{R})\), which acts by Möbius transformations on \(\mathbb{H}^{2}\).

_Notation:_ We shall denote by \(G\) the isomorphic groups \(\mathrm{Isom}(\mathbb{H}^{2}),\mathrm{SO}(2,1),\mathrm{PGL}(2,\mathbb{R})\) and by \(\mathfrak{g}\) the Lie algebra of \(G\).

An _open horoball_ \(h\) based at \(p\in\partial_{\infty}\mathbb{H}^{2}\) is the projective image of \(H(\mathbf{v})=\{\mathbf{w}\in\mathbb{H}^{2}\mid\langle\mathbf{w},\mathbf{v}\rangle>-1\}\) where \(\mathbf{v}\) is a future-pointing light-like point in \(\mathbb{P}^{-1}\{p\}\). If \(k\geq k^{\prime}>0\), then \(H(k\mathbf{v}_{0})\subset H(k^{\prime}\mathbf{v}_{0})\). See Fig. 1. The boundary of an open horoball \(h(p)\subset\mathbb{H}^{2}\) based at \(p\in\partial_{\infty}\mathbb{H}^{2}\) is called a _horocycle_. It is the projective image of the set

\[h(\mathbf{v}):=\{\mathbf{w}\in\mathbb{H}^{2}\mid\langle\mathbf{w},\mathbf{v}\rangle=-1\}.\]

Figure 1: Concentric horoballs

In the projective disk model, it is a Euclidean ellipse inside \(\mathbb{H}^{2}\), tangent to \(\partial_{\infty}\mathbb{H}^{2}\) at \(p\). In the upper half-plane model, horocycles are either Euclidean circles tangent to a point on the real line or horizontal lines, which are horocycles based at \(\infty\). In the Poincaré disk model, a horocycle is a Euclidean circle tangent to \(\partial_{\infty}\mathbb{H}^{2}\) at \([p]\). A geodesic, one of whose endpoints is the centre of a horocycle, intersects the horocycle perpendicularly. Note that any horoball is completely determined by a future-pointing light-like vector in \(\mathbb{R}^{2,1}\) and vice-versa. From now onwards, we shall use either of the notations introduced above to denote a horoball. Finally, the set of all horoballs of \(\mathbb{H}^{2}\) forms an open cone (the positive light cone).

Given an ideal point \(p\in\partial_{\infty}\mathbb{H}^{2}\), a _decoration_ of \(p\) is the specification of an open horoball centred at \(p\). A geodesic, whose endpoints are decorated, is called a _horoball connection_. The following definition is due to Penner [19].

**Definition 2.2**.: The length of a horoball connection joining two horoballs \(\mathbf{v}_{1},\mathbf{v}_{2}\) is given by

\[l:=\ln\Big(-\frac{\langle\mathbf{v}_{1},\mathbf{v}_{2}\rangle}{2}\Big).\]

It is the signed length of the geodesic segment intercepted by the corresponding horocycles.
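Definition 2.2 is easy to test numerically. The following sketch is ours, with arbitrarily chosen light-like vectors: it evaluates \(l=\ln(-\langle\mathbf{v}_{1},\mathbf{v}_{2}\rangle/2)\) and illustrates that rescaling a vector by \(k>1\), which shrinks the corresponding horoball since \(H(k\mathbf{v})\subset H(\mathbf{v})\), adds \(\ln k\) to the length.

```python
import numpy as np

def minkowski(u, v):
    # Bilinear form of signature (2,1): <u,v> = u1 v1 + u2 v2 - u3 v3.
    return u[0] * v[0] + u[1] * v[1] - u[2] * v[2]

def horoball_connection_length(v1, v2):
    # Penner length l = ln(-<v1,v2>/2) for horoballs encoded by
    # future-pointing light-like vectors v1, v2 (Definition 2.2).
    return np.log(-minkowski(v1, v2) / 2)

v1 = np.array([1.0, 0.0, 1.0])    # light-like: 1 + 0 - 1 = 0
v2 = np.array([-1.0, 0.0, 1.0])   # light-like, on a distinct ray

print(horoball_connection_length(v1, v2))       # 0.0: tangent horocycles
print(horoball_connection_length(2 * v1, v2))   # log(2): shrinking one
                                                # horoball lengthens the
                                                # connection by log 2
```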
In particular, if the horoballs are not disjoint, then the length of the horoball connection is negative.

Figure 2: Length of horoball connections

### Killing Vector Fields of \(\mathbb{H}^{2}\)

The Minkowski space \(\mathbb{R}^{2,1}\) is isomorphic to \((\mathfrak{g},\kappa)\), where \(\mathfrak{g}\) is the Lie algebra of \(G:=\mathrm{PGL}(2,\mathbb{R})\) and \(\kappa\) is its Killing form, via the following map:

\[\mathbf{v}=(x,y,z)\mapsto V=\begin{pmatrix}y&x+z\\ x-z&-y\end{pmatrix}.\]

The Lie algebra \(\mathfrak{g}\) is also isomorphic to the set \(\mathcal{X}\) of all Killing vector fields of \(\mathbb{H}^{2}\):

\[V\mapsto\Big(X_{V}:\mathbb{H}^{2}\longrightarrow\mathrm{T}\mathbb{H}^{2},\quad\mathbf{p}\mapsto\frac{\mathrm{d}}{\mathrm{d}t}\big(e^{tV}\cdot\mathbf{p}\big)\Big{|}_{t=0}\Big).\]

Next, one can identify \(\mathbb{R}^{2,1}\) with \(\mathcal{X}\) via the map:

\[\mathbf{v}\mapsto\Big(X_{\mathbf{v}}:\mathbb{H}^{2}\longrightarrow\mathrm{T}\mathbb{H}^{2},\quad\mathbf{p}\mapsto\mathbf{v}\wedge\mathbf{p}\Big),\]

where \(\wedge\) is the Minkowski cross product:

\[(x_{1},y_{1},z_{1})\wedge(x_{2},y_{2},z_{2}):=(-y_{1}z_{2}+z_{1}y_{2},-z_{1}x_{2}+x_{1}z_{2},x_{1}y_{2}-y_{1}x_{2}).\]

Finally, in the upper half-plane model of \(\mathbb{H}^{2}\), one can identify \(\mathcal{X}\) with the real vector space \(\mathbb{R}_{2}[z]\) of polynomials of degree at most 2:

\[P(\cdot)\mapsto\left[z\mapsto P(z)\frac{\partial}{\partial z}\right].\]

The discriminant of a polynomial in \(\mathbb{R}_{2}[z]\) corresponds to the quadratic form \(\|\cdot\|^{2}\) in \(\mathbb{R}^{2,1}\). So the nature of the roots of a polynomial determines the type of the Killing vector field. In particular, when

* \(P(z)=1\), the corresponding Killing vector field is parabolic, fixing \(\infty\);
* \(P(z)=z\), the corresponding Killing vector field is hyperbolic, fixing \(0,\infty\);
* \(P(z)=z^{2}\), the corresponding Killing vector field is parabolic, fixing \(0\).

**Properties 2.3**.: Using these isomorphisms, we have that

* A space-like vector \(\mathbf{v}\) corresponds, in \(\mathcal{X}\), to an infinitesimal hyperbolic translation whose axis is given by \(\mathbf{v}^{\perp}\cap\mathbb{H}^{2}\). If \(\mathbf{v}^{+}\) and \(\mathbf{v}^{-}\) are respectively its attracting and repelling fixed points in \(C^{+}\), then \((\mathbf{v}^{-},\mathbf{v},\mathbf{v}^{+})\) is positively oriented in \(\mathbb{R}^{2,1}\).
* A light-like vector \(\mathbf{v}\) corresponds, in \(\mathcal{X}\), to an infinitesimal parabolic element that fixes the light-like line \(\operatorname{span}\{\mathbf{v}\}\).
* A time-like vector \(\mathbf{v}\) corresponds, in \(\mathcal{X}\), to an infinitesimal rotation of \(\mathbb{H}^{2}\) that fixes the point \(\frac{\mathbf{v}}{\sqrt{-\|\mathbf{v}\|^{2}}}\) in \(\mathbb{H}^{2}\).

**Properties 2.4**.:

1. Given a light-like vector \(\mathbf{v}\in\mathbb{R}^{2,1}\), the set of all Killing vector fields that fix \(\operatorname{span}\{\mathbf{v}\}\) is given by its dual \(\mathbf{v}^{\perp}\). In \(\mathbb{RP}^{2}\), the set of projectivised Killing vector fields that fix \([\mathbf{v}]\in\partial_{\infty}\mathbb{H}^{2}\) is given by the tangent line at \([\mathbf{v}]\).
2. The set of all Killing vector fields that fix a given ideal point \(p\in\partial_{\infty}\mathbb{H}^{2}\) and a horocycle in \(\mathbb{H}^{2}\) with centre at \(p\) is given by \(\operatorname{span}\{\mathbf{v}\}\), where \(\mathbf{v}\in\mathbb{P}^{-1}(p)\) in \(\mathbb{R}^{2,1}\).
3. The set of all Killing vector fields that fix a given hyperbolic geodesic \(l\) in \(\mathbb{H}^{2}\) is given by \(\mathbb{P}^{-1}(l^{\perp})\).

### Convex cocompact surfaces

Any orientable compact surface is of the form \(S_{g,n}:=\mathbb{S}^{2}\#(\mathbb{T}^{2})^{\#g}\#(\mathbb{D}^{2})^{\#n}\) where

* \(\mathbb{S}^{2}\) is a sphere of dimension 2,
* \(\mathbb{T}^{2}\) is the torus \(\mathbb{R}^{2}/\mathbb{Z}^{2}\),
* \(\mathbb{D}\) is a closed 2-disk,
* the variable \(g\in\mathbb{N}\) is called the genus of the surface and is additive under the connected sum, i.e., \(S_{g}\#S_{g^{\prime}}=S_{g+g^{\prime}}\),
* the variable \(n\in\mathbb{N}\) denotes the number of boundary components.

Next, we shall look at some examples and their common names.

**Example 2.5**.: When \(n=0\), the surface is called _closed_.

**Example 2.6**.: Suppose that \(g=0\).

1. When \(n=1\), we get back the disk \(\mathbb{D}\).
2. When \(n=2\), we get an _annulus_.
3. When \(n=3\), the surface is called a _pair of pants_.

**Example 2.7**.: When \(g=1,n=1\), we shall call the surface a _one-holed_ torus.

The Euler characteristic of such a surface is given by \(\chi(S_{g,n})=2-2g-n\).

### Non-orientable surfaces

Any compact non-orientable surface is of the form \(T_{h,n}=(\mathbb{RP}^{2})^{\#h}\#(\mathbb{D}^{2})^{\#n}\) where

* \(\mathbb{RP}^{2}\) is the projective plane,
* the variable \(h\in\mathbb{N}\) here is again additive under the connected sum; the surface corresponding to \(h=0\) is the 2-sphere, which is orientable; so when we write \(T_{h,n}\), we implicitly assume that \(h>0\). Also, we have the equality \(T_{h}\#S_{g}=T_{h+2g}\), for any \(h>0\).

**Example 2.8**.: When \(h=1,n=1\), we get the _Möbius strip_.

**Example 2.9**.: When \(h=2,n=0\), we get the _Klein bottle_.

We are primarily interested in those compact surfaces \(S\) which are hyperbolic and have non-empty boundary. From the Uniformisation Theorem, we know that the Euler characteristic of such a surface, denoted by \(\chi(S)\), is negative. The following is the list of all the connected orientable and non-orientable surfaces that aren't hyperbolic, and hence excluded from the discussion:

\[\begin{array}{llll}S_{0,0}:&\text{a sphere $\mathbb{S}^{2}$,}&S_{0,2}:&\text{an annulus,}\\ S_{1,0}:&\text{a torus $\mathbb{T}^{2}$,}&T_{1,1}:&\text{a closed M\"obius strip,}\\ T_{1,0}:&\text{a projective plane $\mathbb{RP}^{2}$,}&T_{2,0}:&\text{a Klein bottle.}\end{array}\]

A complete finite-area hyperbolic metric with totally geodesic boundary on a compact hyperbolic surface \(S_{c}:=S_{g,n}\) or \(T_{h,n}\) (\(n>0\)) is given by the following information:

* A discrete faithful representation, called a holonomy representation,

\[\rho:\pi_{1}(S_{c})\longrightarrow\mathrm{PGL}(2,\mathbb{R}),\]

that maps each boundary component \(b_{i}\) to a hyperbolic element. When \(S_{c}=S_{g,n}\), the image \(\rho(\pi_{1}(S_{c}))\) is a Fuchsian subgroup of \(\mathrm{PSL}(2,\mathbb{R})\).

* A developing map \(\mathrm{dev}:\widetilde{S_{c}}\longrightarrow\mathbb{H}^{2}\) that is \(\rho\)-equivariant, i.e., \(\mathrm{dev}(\gamma\cdot x)=\rho(\gamma)\cdot\mathrm{dev}(x)\) for all \(\gamma\in\pi_{1}(S_{c})\) and all \(x\in\widetilde{S_{c}}\).
Here, \(\widetilde{S_{c}}\) is the universal cover of \(S_{c}\), on which an element \(\gamma\in\pi_{1}(S_{c})\) acts by deck transformations. It follows from these conditions that the group \(\Gamma:=\rho(\pi_{1}(S_{c}))\) is a discrete finitely generated free subgroup of \(\mathrm{PGL}(2,\mathbb{R})\) containing only hyperbolic elements. The \(\mathrm{dev}\)-image is a simply-connected region in \(\mathbb{H}^{2}\) bounded by infinite geodesics corresponding to the lifts of its boundary components \(\partial_{i}S\), for every \(i=1,\ldots,n\). These geodesics are pairwise disjoint in \(\overline{\mathbb{H}^{2}}\).

The _deformation space_ \(\mathfrak{D}(S_{c})\) of the surface is the set of conjugacy classes of all possible holonomy representations. It is a connected component of the set

\[\{[\rho]:\rho\text{ is discrete, faithful};\ \forall i,\ \rho(\partial_{i}S_{c})\text{ is hyperbolic}\}\subset\mathrm{Hom}(\pi_{1}(S_{c}),\mathrm{PGL}(2,\mathbb{R}))/\mathrm{PGL}(2,\mathbb{R}),\]

where the action of \(\mathrm{PGL}(2,\mathbb{R})\) is by conjugation.

Let \(S_{c}\) be a compact hyperbolic surface endowed with a metric \(m=[\rho]\in\mathfrak{D}(S_{c})\). Given an element \([\gamma]\in\pi_{1}(S_{c})\smallsetminus\{Id\}\), there exists a unique closed \(m\)-geodesic in this homotopy class, denoted by \(\gamma_{g}\).

**Definition 2.10**.: The _length function_ is defined in the following way:

\[\begin{array}{rcl}l_{\gamma}:&\mathfrak{D}(S_{c})&\to&\mathbb{R}_{>0}\\ &[\rho]&\mapsto&2\,\mathrm{arccosh}\left(\frac{|\mathrm{tr}(\rho(\gamma))|}{2}\right).\end{array}\]

The following is a well-known result (see e.g. [7]) which is usually proved using Fenchel-Nielsen coordinates:

**Theorem 2.11**.: _Let \(S_{c}\) be a compact hyperbolic surface with geodesic boundary._

1. _If_ \(S_{c}=S_{g,n}\)_, then its deformation space_ \(\mathfrak{D}(S_{g,n})\) _is homeomorphic to an open ball of dimension_ \(6g-6+3n\)_._
2. _If_ \(S_{c}=T_{h,n}\)_, then its deformation space_ \(\mathfrak{D}(T_{h,n})\) _is homeomorphic to an open ball of dimension_ \(3h-6+3n\)_._

Next we recall infinitesimal deformations and cocycles.

**Definition 2.12**.: An infinitesimal deformation of a metric \(m\in\mathfrak{D}(S_{c})\) is a vector of the tangent space \(T_{m}\mathfrak{D}(S_{c})\).

Let \(S_{c}=S_{g,n}\) or \(T_{h,n}\) be a compact surface with non-empty boundary equipped with a hyperbolic structure as above. Let \(G=\mathrm{PGL}(2,\mathbb{R})\cong\mathrm{SO}(2,1)\). An _infinitesimal deformation_ of its holonomy representation \(\rho:\pi_{1}(S_{c})\longrightarrow G\) is a vector of \(T_{\rho}\mathrm{Hom}(\pi_{1}(S_{c}),G)\). It can be seen as an equivalence class of smooth paths \(\{\rho_{t}\}_{t\in\mathbb{R}}\) with \(\rho_{0}=\rho\). Given such a path, we have that

\[\frac{\mathrm{d}}{\mathrm{d}t}\rho_{t}\Big{|}_{t=0}(\pi_{1}(S_{c}))\subset TG\simeq G\ltimes\mathfrak{g};\]

in other words, for every \(\gamma\in\pi_{1}(S_{c})\), \(\frac{\mathrm{d}}{\mathrm{d}t}\rho_{t}\Big{|}_{t=0}(\gamma)=(\rho(\gamma),u(\gamma))\). The map \(u:\pi_{1}(S_{c})\longrightarrow\mathfrak{g}\) defined above satisfies the _cocycle_ condition:

\[\text{for every }\gamma_{1},\gamma_{2}\in\pi_{1}(S_{c}),\ u(\gamma_{1}\gamma_{2})=u(\gamma_{1})+\mathrm{Ad}(\rho(\gamma_{1}))\cdot u(\gamma_{2}). \tag{2}\]

A map \(u:\pi_{1}(S_{c})\longrightarrow\mathfrak{so}_{2,1}\) satisfying (2) is called a \(\rho\)-cocycle.
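Concretely, the derivative of the length function along such a path can be checked numerically. The sketch below is ours, not the paper's (the matrices are arbitrary test data): it deforms a single hyperbolic element \(A\) in a direction \(X\in\mathfrak{g}\), playing the role of a cocycle value \(u(\gamma)\), and compares a finite difference of \(l=2\,\mathrm{arccosh}(\mathrm{tr}/2)\) with the closed form obtained by differentiating that expression.

```python
import numpy as np
from scipy.linalg import expm

def hyperbolic_length(A):
    # Translation length of a hyperbolic A in SL(2,R): 2 arccosh(|tr A|/2).
    t = abs(np.trace(A))
    assert t > 2, "not a hyperbolic element"
    return 2 * np.arccosh(t / 2)

A = np.diag([2.0, 0.5])            # hyperbolic, tr = 2.5, length = 2 log 2
X = np.array([[0.3, 0.1],          # a traceless direction in sl(2,R),
              [0.2, -0.3]])        # standing in for a cocycle value

def deformed_length(t):
    # A path of deformed holonomies rho_t(gamma) = exp(tX) A.
    return hyperbolic_length(expm(t * X) @ A)

eps = 1e-6
dl_numeric = (deformed_length(eps) - deformed_length(-eps)) / (2 * eps)

# Differentiating l = 2 arccosh(tau/2) with tau(t) = tr(exp(tX) A) gives
# dl = tr(X A) / sqrt((tr A / 2)^2 - 1) at t = 0.
dl_closed = np.trace(X @ A) / np.sqrt((np.trace(A) / 2) ** 2 - 1)

print(dl_numeric, dl_closed)       # both approximately 0.6
```

Criterion (1) and condition (4) below ask that such derivatives be uniformly positive relative to the lengths themselves, over all non-trivial \(\gamma\) at once.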
**Definition 2.13**.: A \(\rho\)-_coboundary_ is a \(\rho\)-cocycle \(u\) such that for some \(v_{0}\in\mathfrak{g}\),

\[u(\gamma)=\mathrm{Ad}(\rho(\gamma))v_{0}-v_{0},\ \text{for every }\gamma\in\pi_{1}(S_{c}). \tag{3}\]

Two \(\rho\)-cocycles are equivalent if they differ by a coboundary. The set of equivalence classes of all \(\rho\)-cocycles forms the first cohomology group \(\mathrm{H}^{1}_{\rho}(\pi_{1}(S_{c}),\mathfrak{g})\). An element \([u]\) of this group is an infinitesimal deformation of the metric \([\rho]\), i.e., \([u]\in T_{m}\mathfrak{D}(S_{c})\).

Next, we will define a specific type of infinitesimal deformation of a compact surface, known as an admissible deformation.

**Definition 2.14**.: Let \(S_{c}\) be a compact (possibly non-orientable) hyperbolic surface with non-empty boundary. Let \(m\in\mathfrak{D}(S_{c})\) and \(v\in T_{m}\mathfrak{D}(S_{c})\). Then \(v\) is said to be an _admissible_ deformation of \(m\) if it satisfies:

\[\inf_{\gamma\in\Gamma\smallsetminus\{Id\}}\frac{\mathrm{d}l_{\gamma}(m)(v)}{l_{\gamma}(m)}>0, \tag{4}\]

where \(l_{\gamma}\) is the length function as introduced in Definition 2.10. In other words, an infinitesimal deformation is admissible if and only if the length of every non-trivial closed loop of \(S_{c}\) is uniformly lengthened.

The following theorem was proved by Goldman-Labourie-Margulis in [10]:

**Theorem 2.15**.: _The set of all admissible deformations of a compact hyperbolic surface \(S_{c}\) with non-empty totally geodesic boundary forms an open convex cone of \(T_{m}\mathfrak{D}(S_{c})\)._

### Margulis spacetimes

**Definition 2.16**.: Let \(\rho_{0}:\Gamma\hookrightarrow G\ltimes\mathfrak{g}\) be the representation of a discrete, not virtually solvable group \(\Gamma\) acting properly discontinuously and freely on \(\mathbb{R}^{2,1}\). Then the quotient manifold \(M:=\mathbb{R}^{2,1}/\rho_{0}(\Gamma)\) is called a _Margulis spacetime_.

As mentioned in the introduction, Fried and Goldman [9] proved that by projecting \(\Gamma\) onto its first coordinate, we get the holonomy representation \(\rho:\pi_{1}(S)\to G\) of a finite-type complete hyperbolic surface \(S\). The projection onto the second coordinate \(u:\Gamma\rightarrow\mathfrak{g}\) is a \(\rho\)-cocycle, which is also an infinitesimal deformation of \(\rho\). The group \(\Gamma\) can thus be written as \(\Gamma^{(\rho,u)}:=\{(\rho(\gamma),u(\gamma))\mid\gamma\in\pi_{1}(S)\}\), which gives an _affine deformation_ of \(\rho\). Goldman-Labourie-Margulis proved in [10] that for \(\rho\) convex cocompact, the group \(\Gamma^{(\rho,u)}\) acts properly if the \(\rho\)-cocycle \(u\) or \(-u\) uniformly lengthens all closed geodesics of the hyperbolic surface, i.e., the equivalence class \([u]\) of \(u\), modulo coboundaries, lies in the admissible cone \(\Lambda([\rho])\) of the hyperbolic surface \(\mathbb{H}^{2}/\rho(\pi_{1}(S))\).

Let \(\rho\) be a convex cocompact representation as before and \(u\) be a \(\rho\)-cocycle. Margulis defined an invariant that is used to detect the properness of such a cocycle. For every non-trivial \(\gamma\in\pi_{1}(S)\), its image \(\rho(\gamma)\) is a hyperbolic element of \(\mathrm{SO}(2,1)\) with eigenvalues of the form \(\lambda,1,\lambda^{-1}\). Let \(v_{1},v_{2}\) be two future-pointing light-like eigenvectors corresponding to the eigenvalues \(\lambda,\lambda^{-1}\), and let \(v_{0}(\gamma)\) be the eigenvector of unit norm with eigenvalue 1 such that \((v_{1},v_{0},v_{2})\) is positively oriented.
Then the _Margulis invariant_ is defined as the map:

\[\begin{array}{rcl}\alpha_{u}:&\pi_{1}(S)&\longrightarrow&\mathbb{R}\\ &\gamma&\mapsto&\langle u(\gamma),v_{0}(\gamma)\rangle.\end{array}\]

The map \(\alpha_{u}\) depends only on the cohomology class of \(u\). Margulis showed the following lemma about the properness of a cocycle and the sign of the invariant:

**Lemma 2.17** (Opposite sign lemma, Margulis [12]).: _Suppose that \(\Gamma^{(\rho,u)}\subset\mathrm{Isom}^{+}(\mathbb{R}^{2,1})\) acts properly on \(\mathbb{R}^{2,1}\). Then either for every \(\gamma\in\pi_{1}(S)\), \(\alpha_{u}(\gamma)>0\), or for every \(\gamma\in\pi_{1}(S)\), \(\alpha_{u}(\gamma)<0\)._

Next, we recall _crooked planes_. Take any space-like vector \(\mathbf{v}\in\mathbb{R}^{2,1}\). Then the associated Killing field is hyperbolic and has an attracting and a repelling fixed point at \(p_{+}\), \(p_{-}\in\partial_{\infty}\mathbb{H}^{2}\), respectively. Their preimages are light-like lines: \(\mathbb{P}^{-1}p_{+}=\mathbb{R}\mathbf{v}_{+}\), \(\mathbb{P}^{-1}p_{-}=\mathbb{R}\mathbf{v}_{-}\), where \(\mathbf{v}_{+},\mathbf{v}_{-}\) are future-pointing light-like vectors. Denote by \(l_{\mathbf{v}}\) the oriented hyperbolic geodesic from \(p_{-}\) to \(p_{+}\). It divides \(\mathbb{H}^{2}\) into two half-spaces: the one lying to the right of \(l_{\mathbf{v}}\) is denoted by \(H_{+}(\mathbf{v})\), the one to the left is denoted by \(H_{-}(\mathbf{v})\). The geodesic \(l_{\mathbf{v}}\) is transversely oriented in the following way: a directed geodesic \(l\) in \(\mathbb{H}^{2}\) transverse to \(l_{\mathbf{v}}\) is said to be pointing in the positive direction if the point \(p_{+}\) lies to its left. When we refer to the geodesic \(l_{\mathbf{v}}\) along with its transverse orientation, we shall denote it by \(\vec{l_{\mathbf{v}}}\).

**Definition 2.18**.: A _left crooked plane \(\mathcal{P}(\mathbf{v})\) centered at 0, directed by a space-like vector \(\mathbf{v}\)_ is a subset of \(\mathbb{R}^{2,1}\) that is the union of the following sets:

* A _stem_, defined as \(\mathrm{St}(\mathcal{P}):=\{\mathbf{w}\in\mathbb{R}^{2,1}\mid\|\mathbf{w}\|^{2}\leq 0\}\cap\mathbf{v}^{\perp}\). It meets the light-cone along the two light-like lines \(\mathbb{R}\mathbf{v}_{+}\), \(\mathbb{R}\mathbf{v}_{-}\).
* Two _wings_: the connected component of \(\mathbf{v}_{+}^{\perp}\setminus\mathbb{R}\mathbf{v}_{+}\) (resp. of \(\mathbf{v}_{-}^{\perp}\setminus\mathbb{R}\mathbf{v}_{-}\)) that contains all the hyperbolic Killing fields whose attracting fixed point is given by \(p_{+}\) (resp. \(p_{-}\)) is called a _positive wing_ (resp. a _negative wing_). They are denoted by \(\mathcal{W}^{+}(\mathbf{v})\) and \(\mathcal{W}^{-}(\mathbf{v})\), respectively.

For any vector \(\mathbf{v_{0}}\in\mathbb{R}^{2,1}\), the subset \(\mathcal{P}(\mathbf{v_{0}},\mathbf{v}):=\mathbf{v_{0}}+\mathcal{P}(\mathbf{v})\) is an _affine left crooked plane centered at \(\mathbf{v_{0}}\)_ and directed by the space-like vector \(\mathbf{v}\). Then, \(\mathcal{P}(\mathbf{0},\mathbf{v})=\mathcal{P}(\mathbf{v})\).

Crooked Halfspaces:The connected component of \(\mathbb{R}^{2,1}\backslash\mathcal{P}(\mathbf{v_{0}},\mathbf{v})\) containing the Killing fields whose non-repelling fixed points (space-like, time-like, light-like) lie in the half-plane \(H_{+}(\mathbf{v})\subset\mathbb{H}^{2}\) (resp. \(H_{-}(\mathbf{v})\)) is called the _positive crooked half-space_ (resp.
_negative crooked half-space_), denoted by \(\mathcal{H}^{+}(\mathbf{v})\) (resp. \(\mathcal{H}^{-}(\mathbf{v})\)).

Next we recall the definition of the stem quadrant of a transversely oriented hyperbolic geodesic, as defined in [3]. Let \(\mathbf{v}\in\mathbb{R}^{2,1}\) be a space-like vector, \(\mathbf{v}_{+},\mathbf{v}_{-}\) be future-pointing light-like vectors in \(\mathbf{v}^{\perp}\) and \(\overset{\rightarrow}{l_{\mathbf{v}}}\) be the hyperbolic geodesic in \(\mathbb{H}^{2}\) with endpoints at \([\mathbf{v}_{+}],[\mathbf{v}_{-}]\), oriented towards \([\mathbf{v}_{+}]\).

**Definition 2.19**.: The set \(\mathrm{SQ}(\overset{\rightarrow}{l_{\mathbf{v}}}):=\mathbb{R}_{>0}\mathbf{v}_{+}-\mathbb{R}_{>0}\mathbf{v}_{-}\) is called the _stem quadrant_ of the transversely oriented geodesic \(l_{\mathbf{v}}\), associated to the positively oriented triplet \((\mathbf{v}_{+},\mathbf{v},\mathbf{v}_{-})\).

## 3 Surfaces with decorated spikes

In this section we give a construction of hyperbolic surfaces with decorated spikes. For that we first need to define crowned hyperbolic surfaces. We start with the description of the simplest surface of this type and then gradually increase the topological complexity to obtain more generic examples.

Ideal Polygons.An ideal \(n(\geq 3)\)-gon, denoted by \(\Pi_{n}^{\bigcirc}\), is the topological surface of a disk \(\mathbb{B}^{2}\) with \(n\) points removed from its boundary. When \(n\geq 3\), we can put a hyperbolic metric on it by taking the convex hull in \(\mathbb{H}^{2}\) of \(n\) distinct points on \(\partial_{\infty}\mathbb{H}^{2}\). The \(n\) ideal points are called _vertices_ and they are marked as \(x_{1},\ldots,x_{n}\). The _edges_ are the infinite geodesics of \(\mathbb{H}^{2}\) joining two consecutive vertices. The restriction of the hyperbolic metric to an ideal polygon gives it a complete finite-area (equal to \(\pi(n-2)\)) hyperbolic metric with geodesic boundary. Its fundamental group is trivial. It is our first example of a hyperbolic surface with spikes. Fig. 3 shows an ideal pentagon in the projective model of \(\mathbb{H}^{2}\).

**Definition 3.1**.: Let \(S_{c}\) be \(S_{g,m}\) or \(T_{h,m}\), with \(m\geq 0\). Consider \(k(>0)\) ideal polygons \(\Pi_{n_{1}}^{\bigcirc},\ldots,\Pi_{n_{k}}^{\bigcirc}\), with \(n_{j}\geq 1\). Then, the surface \(S^{\curlywedge}\) obtained by taking the connected sum \(S_{c}\#\Pi_{n_{1}}^{\bigcirc}\#\ldots\#\Pi_{n_{k}}^{\bigcirc}\) is called a _crowned surface_. The vertices of the ideal polygons used in the construction are called _spikes_.

The total number of spikes of such a surface is given by \(Q:=\sum_{i=1}^{k}n_{i}\). The connected components of the boundary of \(S^{\curlywedge}\) are either homeomorphic to the circle \(\mathbb{S}^{1}\) (boundary components of \(S_{c}\)) or to open intervals (boundaries of ideal polygons). The total number of connected components is \(m+Q\). Given an orientable (resp. non-orientable) surface with spikes such that \(6g-6+3n+Q>0\) (resp. \(3h-6+3n+Q>0\)), we can put a complete finite-area hyperbolic metric on it. The following are some examples of "small" hyperbolic surfaces:

**Example 3.2**.: The orientable surface \(\mathbb{D}\#\Pi_{n}^{\bigcirc}\), for \(n>0\), is called an _ideal one-holed \(n\)-gon_ and denoted by \(\Pi_{n}^{\otimes}\). Its boundary consists of one simple closed curve, denoted by \(\gamma\), and \(n\) open intervals.
Figure 3: An ideal pentagon

Its fundamental group \(\pi_{1}(\Pi_{n}^{\otimes})\) is generated by the homotopy class of \(\gamma\) and is isomorphic to \(\mathbb{Z}\). Next, we put a hyperbolic structure on it in the following way. Let \(g\in\mathrm{PSL}(2,\mathbb{R})\) be a hyperbolic element whose axis is a bi-infinite geodesic, denoted by \(l\). See Fig. 4. It divides the boundary circle \(\partial_{\infty}\mathbb{H}^{2}\) into two open intervals. Choose a point \(x_{1}\) in any one of them and take \((n-1)\) distinct points \(x_{2},\ldots,x_{n}\) on the same interval between \(x_{1}\) and its image \(g\cdot x_{1}\). Mark all the points of their \(\langle g\rangle\)-orbit. All of them lie on the same side of \(l\) as the initial points. Join consecutive pairs using infinite geodesics. Drop two perpendiculars to \(l\) from \(x_{1}\) and \(g\cdot x_{1}\) and identify them using \(g\). The quotient is a complete finite-area hyperbolic surface with geodesic boundary, and the underlying topological surface is homeomorphic to that of an ideal one-holed \(n\)-gon. If \(\rho:\pi_{1}(\Pi_{n}^{\otimes})\longrightarrow\mathrm{PSL}(2,\mathbb{R})\) is the holonomy representation, then \(\rho(\gamma)=g\). The images of the ideal points \(x_{1},\ldots,x_{n}\) in the quotient are called vertices, and those of the bi-infinite geodesics as well as the closed boundary geodesic are called edges.

Figure 4: A fundamental domain for an ideal one-holed square

**Example 3.3**.: The orientable surface \(S:=\mathbb{S}^{2}\#\Pi_{n_{1}}^{\bigcirc}\#\Pi_{n_{2}}^{\bigcirc}\) for \(n_{1},n_{2}>0\) is called a \((n_{1},n_{2})\)-_spiked annulus_. Any connected component of its boundary is homeomorphic to an open interval. It contains exactly one isotopy class \([\gamma]\) of non-trivial simple closed curves. Its fundamental group is again isomorphic to \(\mathbb{Z}\). Take an ideal \(n\)-gon in \(\mathbb{H}^{2}\), where \(n=n_{1}+n_{2}+2\). Its vertices are denoted by \(x_{1},\ldots,x_{n}\) in the anti-clockwise direction. Let \(l_{1},l_{2}\) be the edges joining the pairs of vertices \((x_{n_{1}+1},x_{n_{1}+2})\) and \((x_{n},x_{1})\), respectively. Let \(g\in\mathrm{PSL}(2,\mathbb{R})\) be a hyperbolic isometry whose axis intersects both \(l_{1}\) and \(l_{2}\) at the same angle, and whose translation length is given by the distance between the two points of intersection. Then the quotient surface \(S=\Pi_{n}^{\bigcirc}/\sim\), where for every \(z\in l_{1}\), \(z\sim g\cdot z\), is a complete finite-area hyperbolic surface with geodesic boundary, homeomorphic to a \((n_{1},n_{2})\)-spiked annulus. Its holonomy representation \(\rho:\pi_{1}(S)\longrightarrow\mathrm{PSL}(2,\mathbb{R})\) maps the generator \([\gamma]\) to \(g\).

**Example 3.4**.: The non-orientable surface \(\mathbb{RP}^{2}\#\Pi_{n}^{\bigcirc}\), for \(n>0\), is called a spiked Möbius strip. Its orientation double cover is a \((n,n)\)-spiked annulus. Similar to the previous example, we consider an ideal \((n+2)\)-gon with marked vertices \(x_{1},\ldots,x_{n+2}\). See Fig. 6. Take any two edges \(l_{1},l_{2}\) of the polygon that don't have any common endpoint. Let \(r\in\mathrm{PGL}(2,\mathbb{R})\) be the hyperbolic reflection along the common perpendicular to \(l_{1},l_{2}\) and \(g\in\mathrm{PSL}(2,\mathbb{R})\) be a hyperbolic isometry whose axis intersects \(l_{1}\) and \(l_{2}\) such that the angles of intersection are complementary. Let \(h:=rg\in\mathrm{PGL}(2,\mathbb{R})\).
The quotient \(\Pi_{n+2}^{\bigcirc}/\sim\), where for every \(z\in l_{1}\), \(z\sim h\cdot z\), is a complete finite-area hyperbolic surface with geodesic boundary, homeomorphic to a spiked Möbius strip with \(n\) marked spikes.

**Definition 3.5**.: The smallest closed convex subset of a crowned surface \(S^{\curlywedge}\) which contains all of its closed geodesics is called the _convex core_ of the surface.

If \(S^{\curlywedge}\) is a surface with spikes obtained from a compact surface \(S_{c}\) with \(m\) boundary components and \(k\) ideal polygons, then its convex core is usually a compact surface of the same genus with \((k+m)\) boundary components, each of which is homeomorphic to \(\mathbb{S}^{1}\). The list of exceptions is given below.

Figure 5: A fundamental domain for a (1,2)-spiked annulus

Figure 6: A fundamental domain for a 3-spiked Möbius strip

**Example 3.6**.: The following is a list of all crowned hyperbolic surfaces whose convex cores are not hyperbolic surfaces:

* Ideal polygons \(\Pi_{n}^{\bigcirc}\) and ideal punctured polygons have empty convex cores.
* The convex cores of a spiked Möbius strip, of a one-holed polygon and of a spiked annulus are homeomorphic to a circle.

Label the boundary components of the convex core as \(\partial_{1},\ldots,\partial_{m+k}\). These are called the _peripheral loops_. Each peripheral loop is either isotopic to a boundary component of the compact surface \(S_{c}\), or it separates a one-holed \(m\)-gon (called a _crown_) from \(S\), where \(m\in\{n_{1},\ldots,n_{k}\}\). There are \(k\) such crowns, which are labelled as \(C_{1},\ldots,C_{k}\).

_Notation 3.1_.: For every \(i=1,\ldots,n\), define \(q_{i}=0\) if the \(i\)-th peripheral loop is isotopic to a boundary component of \(S_{c}\) and \(q_{i}=n_{j}\) if the \(i\)-th peripheral loop separates a one-holed \(n_{j}\)-gon, for some \(j\in\{1,\ldots,k\}\). Define the spike vector \(\vec{q}:=(q_{1},\ldots,q_{n})\). Finally, an orientable (resp. non-orientable) surface with genus \(g\) (resp. \(h\)), \(n\) peripheral loops and spike vector \(\vec{q}\) is denoted by \(S_{g,n}^{\vec{q}}\) (resp. \(T_{h,n}^{\vec{q}}\)).

_Notation 3.2_.: Now we define the above notation for the exceptional cases that do not have hyperbolic convex cores. For ideal \(q\)-gons (\(q\geq 4\)), we shall use the notation \(S_{0,1}^{(q)}\). For a \((q_{1},q_{2})\)-spiked annulus, we shall use \(S_{0,2}^{(q_{1},q_{2})}\). Finally, for a Möbius strip with \(q(\geq 1)\) spikes, we shall use \(T_{1,1}^{(q)}\).

Next, we shall describe the hyperbolic metric on a generic crowned hyperbolic surface. We shall assume that the convex core of the surface is hyperbolic, since we have already treated the cases where it is not. The surface with spikes and its convex core have the same homotopy type. In particular, \(\pi_{1}(S_{c})=\pi_{1}(S^{\curlywedge})\). Let \(([\rho],\mathrm{dev})\) be a hyperbolic structure on \(S_{c}\). The holonomy representation \(\rho\) of \(S_{c}\) gives a holonomy representation of \(S^{\curlywedge}\). Next, we will construct the embedding \(R^{\prime}\) of the universal cover of \(S^{\curlywedge}\) in \(\mathbb{H}^{2}\). Start with the simply connected region \(R:=\mathrm{dev}(\widetilde{S_{c}})\) in \(\mathbb{H}^{2}\) bounded by pairwise disjoint lifts of \(\partial_{i}\), \(i=1,\ldots,n\).
We choose \(Q\) distinct points on \(\partial_{\infty}\mathbb{H}^{2}\) in the following way: whenever \(q_{i}>0\), take \(q_{i}\) ideal points \(\underline{x}^{i}=(x_{1}^{i},\ldots,x_{q_{i}}^{i})\) on the same side of a lift of the peripheral loop \(\partial_{i}\). Denote by \(\mathbf{x}=(\underline{x}^{1},\ldots,\underline{x}^{n})\in(\partial_{\infty}\mathbb{H}^{2})^{Q}\) the \(n\)-tuple of vectors. Join consecutive pairs \(x_{j}^{i},x_{j+1}^{i}\), \(j=1,\ldots,q_{i}-1\), by infinite geodesics. Then, \(R^{\prime}\) is the region bounded by the infinite geodesics corresponding to boundary components. It contains \(\mathrm{dev}(\widetilde{S_{c}})\).

A metric on a surface with spikes \(S^{\curlywedge}\) can be seen as an ordered pair \((\rho,\mathbf{x})\). Two pairs \((\rho,\mathbf{x})\), \((\rho^{\prime},\mathbf{x}^{\prime})\) are said to be equivalent if there exists an element \(g\in\mathrm{PGL}(2,\mathbb{R})\) such that for all \(\gamma\in\pi_{1}(S)\), \(\rho^{\prime}(\gamma)=g\rho(\gamma)g^{-1}\) and \(\mathbf{x}^{\prime}=g\cdot\mathbf{x}\). Hence, elements in the deformation space are equivalence classes of such pairs. The following theorem about the dimension of the deformation space of a surface with non-decorated spikes is analogous to Theorem 2.11.

**Theorem 3.7**.: _Let \(S^{\curlywedge}\) be a crowned hyperbolic surface with \(Q\) spikes._

1. _If_ \(S^{\curlywedge}=S_{g,n}^{\vec{q}}\)_, then its deformation space_ \(\mathfrak{D}(S_{g,n}^{\vec{q}})\) _is homeomorphic to an open ball of dimension_ \(6g-6+3n+Q\)_._
2. _If_ \(S^{\curlywedge}=T_{h,n}^{\vec{q}}\)_, then its deformation space_ \(\mathfrak{D}(T_{h,n}^{\vec{q}})\) _is homeomorphic to an open ball of dimension_ \(3h-6+3n+Q\)_._

**Definition 3.8**.: A hyperbolic surface with decorated spikes is obtained from a crowned hyperbolic surface of type \(S_{g,n}^{\vec{q}}\) or \(T_{h,n}^{\vec{q}}\) by decorating each spike with a horoball. Such a surface is denoted by \(S_{g,n}^{\vec{q},\vec{h}}\) (when orientable) and \(T_{h,n}^{\vec{q},\vec{h}}\) (when non-orientable). We shall use the symbol \(S^{\,\odot}\) for referring to both cases at once.

The deformation space \(\mathfrak{D}(S^{\,\odot})\) of such a surface is the trivial \(\mathbb{R}^{Q}_{>0}\) bundle over \(\mathfrak{D}(S^{\curlywedge})\), where the fibre over a point in the base space is given by the Busemann functions of the horoballs based at the \(Q\) spikes of the surface \(S^{\curlywedge}\). A _horoball connection_ on a surface \(S^{\,\odot}\) is a geodesic path joining two not necessarily distinct decorated spikes. It is the image in the quotient of a horoball connection (see Definition 2.2) in \(\mathbb{H}^{2}\) joining a pair of decorated ideal vertices in \(\partial_{\infty}\mathbb{H}^{2}\). Recall that a decorated ideal vertex corresponds to a unique light-like vector. Let \((p_{1},h_{1})\) and \((p_{2},h_{2})\) be the lifts of the two decorated vertices that are joined by the horoball connection \(\beta\). Let \(\mathbf{v}_{1},\mathbf{v}_{2}\) be the corresponding light-like vectors. Then we have the following length function for horoball connections:

\[\begin{array}{rcl}l_{\beta}:&\mathfrak{D}(S^{\,\odot})&\longrightarrow&\mathbb{R}\\ &m&\mapsto&\ln\Big(-\frac{\langle\mathbf{v}_{1},\mathbf{v}_{2}\rangle}{2}\Big).\end{array}\]

The set of all horoball connections is denoted by \(\mathcal{H}\). Using Theorem 3.7, we have that

**Theorem 3.9**.: _Let \(S^{\,\odot}\) be a hyperbolic surface with \(Q\) decorated spikes._
1. _If_ \(S^{\,\odot}=S^{\vec{q},\vec{h}}_{g,n}\)_, then its deformation space_ \(\mathfrak{D}(S^{\vec{q},\vec{h}}_{g,n})\) _is homeomorphic to an open ball of dimension_ \(6g-6+3n+2Q\)_._
2. _If_ \(S^{\,\odot}=T^{\vec{q},\vec{h}}_{h,n}\)_, then its deformation space_ \(\mathfrak{D}(T^{\vec{q},\vec{h}}_{h,n})\) _is homeomorphic to an open ball of dimension_ \(3h-6+3n+2Q\)_._

Next we define admissible deformations for hyperbolic surfaces with decorated spikes.

**Definition 3.10**.: Let \(S^{\,\odot}\) be a hyperbolic surface with decorated spikes. The _admissible cone_ for a given metric \(m\in\mathfrak{D}(S^{\,\odot})\), denoted by \(\Lambda(m)\), is the set of all infinitesimal deformations of \(m\) that uniformly lengthen every horoball connection and every closed curve.

_Remark 3.3_.: Lengthening every horoball connection lengthens every two-sided curve in the interior of the surface, and every peripheral loop bounding a crown. But one-sided curves and spike-less boundary components are not lengthened.

If the decoration on \(S^{\,\odot}\) is such that the closures of the decorating horoballs are all pairwise disjoint, then an element \(v\in T_{m}\mathfrak{D}(S^{\,\odot})\) is admissible if and only if it satisfies the following condition:

\[\inf_{\beta\in\mathcal{H}}\frac{\mathrm{d}l_{\beta}(m)(v)}{l_{\beta}(m)}>0, \tag{5}\]

where \(l_{\beta}\) is the length function as in Definition 2.2 and \(\mathcal{H}\) is the set of all horoball connections.

_Remark 3.4_.: Let \(S^{\curlywedge}\) be the underlying hyperbolic surface with undecorated spikes of \(S^{\,\odot}\). Let \(v\) be an admissible deformation of \((S^{\,\odot},m)\). Then \(v\) is an admissible deformation for any decoration of \(S^{\curlywedge}\) such that the closures of the decorating horoballs are pairwise disjoint. If some of the decorating horoballs of the spikes of \(S^{\,\odot}\) overlap, then an admissible \(v\) satisfies

\[\inf_{\beta\in\mathcal{H}^{-}}\mathrm{d}l_{\beta}(m)(v)>0,\]

where \(\mathcal{H}^{-}\) is the set of horoball connections with non-positive length, and (5) for horoball connections with positive length.

**Lemma 3.11**.: _The subspace \(\Lambda(m)\) is an open convex cone of \(T_{m}\mathfrak{D}(S^{\,\odot})\)._

## 4 Arcs and the arc complex

### Definitions

In this section we define arcs and the arc complexes of crowned surfaces as well as surfaces with decorated spikes. An _arc_ on a hyperbolic surface \(S\) with non-empty boundary, possibly with spikes, is an embedding \(\alpha\) of a closed interval \(I\subset\mathbb{R}\) into \(S\). There are three possibilities depending on the nature of the interval:

1. \(I=[a,b]\): In this case, the arc \(\alpha\) is finite. We consider those finite arcs that verify: \(\alpha(a),\alpha(b)\in\partial S\) and \(\alpha(I)\cap\partial S=\{\alpha(a),\alpha(b)\}\).
2. \(I=[a,\infty)\): These are embeddings of hyperbolic geodesic rays in the interior of the surface such that \(\alpha(a)\in\partial S\). There are two types:
   * The infinite end converges to a spike, i.e., \(\alpha(t)\overset{t\to\infty}{\longrightarrow}x\), where \(x\) is a spike.
   * The infinite end spirals around a totally geodesic boundary component of the surface. They are called _spiraling_ arcs.
3. \(I=\mathbb{R}\): The infinite ends can either converge to a spike or spiral along a simple closed curve.
**Definition 4.1**.: An arc \(\alpha\) of a hyperbolic surface \(S\) with non-empty boundary is called _non-trivial_ if each connected component of \(S\smallsetminus\{\alpha\}\) has at least one spike or generalised vertex.

Let \(\mathcal{A}\) be the set of all non-trivial arcs of the three types above. Two arcs \(\alpha,\alpha^{\prime}:I\longrightarrow S\) in \(\mathcal{A}\) are said to be _isotopic_ if there exist a homeomorphism \(f:S\longrightarrow S\) that preserves the boundary and fixes all decorated spikes, and a continuous function \(H:S\times[0,1]\longrightarrow S\) such that

1. \(H(\cdot,0)=\mathrm{Id}\) and \(H(\cdot,1)=f\),
2. for every \(t\in[0,1]\), the map \(H(\cdot,t):S\longrightarrow S\) is a homeomorphism,
3. for every \(t\in I\), \(f(\alpha(t))=\alpha^{\prime}(t)\).

We shall now give a formal definition of the arc complex.

**Definition 4.2**.: The _arc complex_ of a surface \(S\), generated by a subset \(\mathcal{K}\subset\mathcal{A}\), is a simplicial complex \(\mathcal{A}(S)\) whose vertex set \(\mathcal{A}(S)^{(0)}\) consists of the isotopy classes of arcs in \(\mathcal{K}\), and which has a \(k\)-simplex for every \((k+1)\)-tuple of pairwise disjoint and distinct isotopy classes. The elements of \(\mathcal{K}\) are called _permitted_ arcs and the elements of \(\mathcal{A}\smallsetminus\mathcal{K}\) are called _rejected_ arcs.

The permitted arcs are the building blocks of the different arc complexes. They are used to perform strip deformations of the surface. The way the surface is deformed depends on the nature of the arc used for the strip deformation. Next we specify the elements of \(\mathcal{K}\) for the different types of surfaces:

* In the case of a hyperbolic surface, possibly with undecorated spikes, the set \(\mathcal{K}\) of permitted arcs consists of the non-trivial finite arcs that separate at least two spikes from the surface.
* In the case of a hyperbolic surface with decorated spikes, the set \(\mathcal{K}\) of permitted arcs consists of the non-trivial finite arcs and the infinite arcs of type 2 whose infinite ends converge to spikes and whose finite ends lie on the boundary of the surface.

_Remark 4.1_.: Two isotopy classes of arcs of \(S\) are said to be disjoint if it is possible to find a representative arc from each of the classes such that they are disjoint in \(S\). Such a configuration can be realised by geodesic segments in the context of hyperbolic surfaces. In our discussion, we shall always choose such arcs as representatives of the isotopy classes.

The \(0\)-skeleton \(\sigma^{(0)}\) of a top-dimensional simplex \(\sigma\) of the arc complex \(\mathcal{A}(S)\) is called a _triangulation_ of the surface \(S\).

**Definition 4.3**.: We define a _filling_ simplex of the arc complex for the different types of surfaces:

* For a hyperbolic surface, possibly with non-decorated spikes, a simplex \(\sigma\) is said to be filling if the arcs corresponding to \(\sigma^{(0)}\) decompose the surface into topological disks.
* For a surface with decorated spikes, a simplex \(\sigma\) is said to be filling if the arcs corresponding to \(\sigma^{(0)}\) decompose the surface into topological disks with at most one spike.

From the definition it follows that any simplex containing a filling simplex is also filling.

**Definition 4.4**.: The _pruned arc complex_ of a surface \(S\), denoted by \(\widehat{\mathcal{A}}(S)\), is the union of the interiors of the filling simplices of the arc complex \(\mathcal{A}(S)\).
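Computationally, a point of the pruned arc complex is conveniently stored as a barycentric weight vector over arcs. The toy sketch below (string labels stand in for isotopy classes; none of the names come from the paper) mirrors the description of supports given in the next paragraph.

```python
def support(x, tol=1e-12):
    # x: dict mapping arc labels to barycentric weights t_i > 0 summing to 1
    assert all(t > 0 for t in x.values()), "interior point of a simplex"
    assert abs(sum(x.values()) - 1.0) < tol, "weights are barycentric"
    return set(x)   # the 0-skeleton of the unique simplex containing x

x = {"alpha_1": 0.5, "alpha_2": 0.25, "alpha_3": 0.25}
print(support(x))   # {'alpha_1', 'alpha_2', 'alpha_3'} (set order may vary)
```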
Every point \(x\in\widehat{\mathcal{A}}(S)\) is contained in the interior of a unique simplex, denoted by \(\sigma_{x}\); i.e., there is a unique family of arcs \(\{\alpha_{1},\ldots,\alpha_{p}\}\), namely the \(0\)-skeleton of \(\sigma_{x}\), such that

\[x=\sum_{i=1}^{p}t_{i}\alpha_{i},\quad\sum_{i=1}^{p}t_{i}=1,\quad\text{and}\quad\forall i,\ t_{i}>0.\]

Define the _support_ of a point \(x\in\widehat{\mathcal{A}}(S)\) as \(\operatorname{supp}(x):=\sigma_{x}^{(0)}\). In this section we shall study the topology of the pruned arc complexes of hyperbolic surfaces with undecorated spikes, followed by hyperbolic surfaces with decorated spikes.

### The arc complex of a crowned or compact surface

Firstly, we shall discuss a theorem by Harer which proves that the pruned arc complex of an orientable surface is an open ball. In his paper, the terminology used is different from what we have seen up until now, so we shall give a quick introduction to the objects involved in his result. Then, by interpreting his result in the appropriate manner, we shall prove that the pruned arc complex in the case of orientable surfaces with spikes is an open ball. Finally, we shall derive the same result for non-orientable surfaces.

Harer's terminology: Let \(S_{g,r,s}\) be an orientable surface of genus \(g\) with \(r\) boundary components and \(s\) punctures:

\[S_{g,r,s}:=S_{g,r}\setminus\{y_{1},\ldots,y_{s}\},\]

where \(y_{1},\ldots,y_{s}\) are points in the interior of \(S_{g,r}\) that play the role of spikeless boundary components in our case. Mark \(Q\) distinct points \(x_{1},\ldots,x_{Q}\) on the boundary \(\partial S_{g,r,s}\) such that each boundary component, denoted by \(\partial_{i}S_{g,r,s}\) for \(i=1,\ldots,r\), contains at least one such point. These points shall play the role of spikes. Let \(\Omega=\{x_{1},\ldots,x_{Q},y_{1},\ldots,y_{s}\}\). The deformation space \(\mathfrak{D}(S_{g,r,s})\) is an open ball of dimension \(N_{0}:=6g-6+3r+2s\). Define \(\mathcal{T}(\Omega):=\{(m,\lambda)\}\), where \(m\in\mathfrak{D}(S_{g,r,s})\) and \(\lambda\) is a positive projective weight on the points of \(\Omega\). Then,

\[\mathcal{T}(\Omega)=\mathfrak{D}(S_{g,r,s})\times\mathbb{B}^{s+Q-1}\simeq\mathbb{B}^{6g-7+3r+3s+Q}.\]

Consider \(\mathcal{K}\) to be the set of embedded arcs in \(S_{g,r,s}\) whose endpoints belong to \(\Omega\). The arc complex spanned by the arcs in \(\mathcal{K}\) is denoted by \(\mathcal{A}\big{(}S_{g,r,s}\big{)}\). A simplex \(\sigma\) of the arc complex is said to be "big" if the arcs corresponding to its \(0\)-skeleton divide the surface \(S_{g,r,s}\) into topological disks. The pruned arc complex \(\widehat{\mathcal{A}}(S_{g,r,s})\) is defined to be the union of the interiors of the big simplices. Finally, let \(\mathrm{MCG}(S_{g,r,s})\) be the mapping class group of the surface whose elements fix the points in \(\Omega\). Then Harer proves the following theorem:

**Theorem 4.5**.: _[_11_]_ _There is a natural homeomorphism \(\Phi:\mathcal{T}(\Omega)\longrightarrow\widehat{\mathcal{A}}(S_{g,r,s})\) that commutes with the action of the mapping class group \(\mathrm{MCG}(S_{g,r,s})\)._

Interpretation: Let \(S_{g,n}^{\vec{q}}\) be an orientable hyperbolic surface with undecorated spikes and \(k\) spikeless boundary components, each homeomorphic to a circle. Recall that in the case of hyperbolic surfaces with undecorated spikes, the permitted arcs are finite and have both their endpoints on two (not necessarily distinct) connected components of the boundary of the surface.
By comparing these arcs with the permitted arcs used by Harer, we see that punctures and spikeless boundaries play the same role in the two settings; similarly, the points \(x_{1},\ldots,x_{Q}\) can be interpreted as spikes in our case. Then, \(k=s\) and \(n=r+s\). So the arc complex \(\mathcal{A}\big{(}S_{g,n}^{\vec{q}}\big{)}\) is isomorphic to the arc complex \(\mathcal{A}\big{(}S_{g,r,s}\big{)}\). Also, the definition of a big simplex is the same in the two approaches. Hence, from Theorem 4.5 we have that

**Corollary 4.6**.: _The pruned arc complex \(\widehat{\mathcal{A}}(S_{g,n}^{\vec{q}})\) of a connected orientable surface \(S_{g,n}^{\vec{q}}\) with non-decorated spikes is homeomorphic to an open ball of dimension \(6g-7+3n+Q\)._

Next, we prove a similar result for non-orientable surfaces, possibly with spikes, using Harer's theorem. The _orientation covering_ of a surface \(S\) is defined as the pair \((\overline{S},\pi)\), where

\[\overline{S}:=\{(p,o(p))\mid p\in S,\ o(p)\text{ is an orientation of }T_{p}S\}\]

and

\[\pi:\quad\begin{array}{ccc}\overline{T_{h,n}^{\vec{q}}}&\longrightarrow&T_{h,n}^{\vec{q}}\\ (p,o(p))&\mapsto&p\end{array}.\]

Since the tangent space \(T_{p}S\) has exactly two orientations, \(\pi\) is a two-sheeted covering map. Since \(T_{h,n}^{\vec{q}}\) is non-orientable, one has that \(\overline{T_{h,n}^{\vec{q}}}\) is an orientable surface with

\[\chi(\overline{T_{h,n}^{\vec{q}}})=2\chi(T_{h,n}^{\vec{q}})<0.\]

So we get that \(\overline{T_{h,n}^{\vec{q}}}\) is hyperbolic and that it is of the form \(S_{h-1,2n}^{\vec{q}\sqcup\vec{q}}\), where \(\vec{q}\sqcup\vec{q}:=(q_{1},\ldots,q_{n},q_{1},\ldots,q_{n})\). Let \(\Upsilon:S_{h-1,2n}^{\vec{q}\sqcup\vec{q}}\longrightarrow S_{h-1,2n}^{\vec{q}\sqcup\vec{q}}\) be the covering automorphism that exchanges the two points in every fibre of \(\pi\).

We shall revert to Harer's notation: suppose that \(T_{h,n}^{\vec{q}}\) has \(k\) spikeless boundary components. Then \(S_{h-1,2n}^{\vec{q}\sqcup\vec{q}}\) has \(s:=2k\) spikeless boundary components. Let \(r:=2(n-k)\). We shall be working with \(S_{h-1,r,s}\), which is an orientable surface of genus \(h-1\) with \(r\) boundary components and \(s\) punctures. From Theorem 4.5, we know that \(\widehat{\mathcal{A}}(S_{h-1,r,s})\) is an open ball of dimension

\[6(h-1)-7+3(r+s)+2Q=6h-13+6n+2Q.\]

Let \(\mathcal{I}(\mathcal{A}\big{(}S_{h-1,r,s}\big{)})\) be the subset of \(\mathcal{A}\big{(}S_{h-1,r,s}\big{)}\) that is invariant under the action of \(\Upsilon\). Since the homeomorphism \(\Phi:\mathcal{T}(\Omega)\longrightarrow\widehat{\mathcal{A}}(S_{h-1,r,s})\) commutes with the action of \(\Upsilon\), we get that \(\Phi^{-1}(\mathcal{I}(\widehat{\mathcal{A}}(S_{h-1,r,s})))\) is the set of points \((m,\lambda)\in\mathcal{T}(\Omega)\) such that \(m\) and \(\lambda\) are invariant under \(\Upsilon\). Every permitted arc \(\alpha\) of \(T_{h,n}^{\vec{q}}\) lifts to two disjoint arcs \(\alpha^{1},\alpha^{2}\) in \(S_{h-1,r,s}\), because \(\pi\) is a double cover and an arc is simply connected. These two arcs are interchanged by the action of \(\Upsilon\). So the isotopy classes \([\alpha^{1}],[\alpha^{2}]\), as well as the \(1\)-simplex generated by them, belong to \(\mathcal{I}(\mathcal{A}\big{(}S_{h-1,r,s}\big{)})\).
Consequently, we get the following map between the arc complexes:

\[\begin{array}{cccc}h:&\mathcal{A}\Big{(}T_{h,n}^{\vec{q}}\Big{)}&\longrightarrow&\mathcal{I}(\mathcal{A}\big{(}S_{h-1,r,s}\big{)})\\ (0\text{-skeleton})&[\alpha]&\mapsto&\frac{[\alpha^{1}]+[\alpha^{2}]}{2},\\ (k\text{-skeleton})&\sum_{i=1}^{k+1}t_{i}[\alpha_{i}]&\mapsto&\sum_{i=1}^{k+1}t_{i}\frac{[\alpha_{i}^{1}]+[\alpha_{i}^{2}]}{2},\end{array}\]

where \(k\leq N_{0}\) and \(t_{i}\geq 0\) for \(i=1,\ldots,k+1\), with \(\sum_{i=1}^{k+1}t_{i}=1\).

**Lemma 4.7**.: _The map \(h:\widehat{\mathcal{A}}(T_{h,n}^{\vec{q}})\longrightarrow\mathcal{I}(\widehat{\mathcal{A}}(S_{h-1,r,s}))\) is an isomorphism._

Proof.: Firstly, we show that this map is well-defined. A point \(x\in\widehat{\mathcal{A}}(T_{h,n}^{\vec{q}})\) belongs to the interior of a unique big simplex \(\sigma_{x}\):

\[x=\sum_{i=1}^{N_{0}}t_{i}\,[\alpha_{i}]\text{, with }t_{i}\in(0,1),\ [\alpha_{i}]\in\sigma_{x}^{(0)}\text{ for every }i=1,\ldots,N_{0},\text{ and }\sum_{i}t_{i}=1.\]

The union \(\bigcup\limits_{i}\alpha_{i}\) of arcs decomposes the surface into topological disks with at most two spikes. Being simply connected, they lift to twice as many disks partitioning the double cover. So the simplex formed by \(\{[\alpha_{i}^{1}],[\alpha_{i}^{2}]\}_{i}\) is big. Hence we get that \(h(x)\in\widehat{\mathcal{A}}(S_{h-1,r,s})\). Since \(\Upsilon\) exchanges the two arcs \([\alpha_{i}^{1}],[\alpha_{i}^{2}]\) for every \(i=1,\ldots,N_{0}\), we get that \(h(x)\in\mathcal{I}(\widehat{\mathcal{A}}(S_{h-1,r,s}))\).

Now we construct the inverse of \(h\). Start with \(y\in\mathcal{I}(\widehat{\mathcal{A}}(S_{h-1,r,s}))\). Since \(y\in\widehat{\mathcal{A}}(S_{h-1,r,s})\), there exists a unique big simplex \(\sigma_{y}\) such that \(y\in\text{int}\,\big{(}\sigma_{y}\big{)}\), i.e.,

\[y=\sum_{j=1}^{q}s_{j}\,\alpha_{j}\text{, with }s_{j}\in(0,1),\ \alpha_{j}\in\sigma_{y}^{(0)}\text{ for every }j=1,\ldots,q,\text{ and }\sum_{j}s_{j}=1.\]

Since \(y\in\mathcal{I}\), it is invariant under the action of \(\Upsilon\). The family of arcs in \(\sigma_{y}^{(0)}\) projects to equal or disjoint arcs in the quotient surface. Similarly, since \(\sigma_{y}\) is big, the connected components of the complement of this family of arcs are disks, and they project to equal or disjoint regions. If \(\alpha,\alpha^{\prime}\) are two arcs in \(\sigma_{y}^{(0)}\) that have equal weight \(t\) and project to the same arc \(\beta\), then we set \(h^{-1}(t([\alpha]+[\alpha^{\prime}])):=t\beta\). This concludes the proof of the lemma.

**Corollary 4.8**.: _The pruned arc complex \(\widehat{\mathcal{A}}(T_{h,n}^{\vec{q}})\) of a non-orientable surface \(T_{h,n}^{\vec{q}}\) with non-decorated spikes is homeomorphic to an open ball of dimension \(3h-7+3n+Q\)._

Proof.: The subset \(\Phi^{-1}(\mathcal{I}(\widehat{\mathcal{A}}(S_{h-1,r,s})))\) is an open ball of dimension \(3h-7+3n+Q\): it can be parametrised by the lengths of geodesic arcs of an \(\Upsilon\)-invariant triangulation of \(S_{h-1,r,s}\) and by \(\Upsilon\)-invariant projective weights on the set \(\Omega\). Using the isomorphism \(h\), we get that

\[\widehat{\mathcal{A}}(T_{h,n}^{\vec{q}})=h^{-1}(\mathcal{I}(\widehat{\mathcal{A}}(S_{h-1,r,s})))\simeq\mathbb{B}^{3h-7+3n+Q}.\]

Next, we shall prove that the pruned arc complex of a surface with decorated spikes is an open ball.
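The dimension counts of this subsection are elementary bookkeeping and can be sanity-checked mechanically. A small sketch, assuming only the two closed-form formulas quoted above (the sample values of \(g,r,s,h,n,k,Q\) are arbitrary):

```python
def dim_T_Omega(g, r, s, Q):
    # dim D(S_{g,r,s}) = 6g - 6 + 3r + 2s, and T(Omega) = D x B^{s+Q-1}
    return (6*g - 6 + 3*r + 2*s) + (s + Q - 1)

# closed form appearing in Harer's theorem
for (g, r, s, Q) in [(0, 3, 0, 3), (1, 1, 2, 4), (2, 2, 1, 5)]:
    assert dim_T_Omega(g, r, s, Q) == 6*g - 7 + 3*r + 3*s + Q

def dim_pruned_cover(h, n, k, Q):
    # orientation double cover of T_{h,n}^q with k spikeless boundaries:
    # genus h-1, r = 2(n-k) boundaries, s = 2k punctures, 2Q marked points
    r, s = 2*(n - k), 2*k
    return dim_T_Omega(h - 1, r, s, 2*Q)

for (h, n, k, Q) in [(1, 2, 1, 3), (2, 1, 0, 2), (3, 3, 2, 4)]:
    assert dim_pruned_cover(h, n, k, Q) == 6*h - 13 + 6*n + 2*Q
```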
As mentioned previously, the arcs considered for spanning the arc complex of such a surface are either finite, separating at least one spike, or infinite with one endpoint exiting the surface through a spike. The former is referred to as an _edge-to-edge_ arc, while the latter is called a _spike-to-edge_ arc. Firstly, we will prove the following theorem for orientable surfaces \(S_{g,n}^{\vec{q},\vec{h}}\):

**Theorem 4.9**.: _The pruned arc complex \(\widehat{\mathcal{A}}(S_{g,n}^{\vec{q},\vec{h}})\) of an orientable surface \(S_{g,n}^{\vec{q},\vec{h}}\) with decorated spikes is an open ball of dimension \(6g-7+3n+2Q\), where \(Q\) is the total number of spikes._

We shall denote by \(S_{0}\) the topological surface with genus \(g\), \(n\) boundary components and \(q_{i}\geq 0\) marked points on \(\partial_{i}S_{0}\), \(i=1,\ldots,n\), such that \(\chi(S_{0})<0\). Let \(\vec{\xi}=(\xi_{1},\ldots,\xi_{Q})\) be the set of marked points on \(\partial S_{0}\), and let \(Q(i)=\sum_{j=1}^{i}q_{j}\). Then we see that \(S_{0}\setminus\bigcup_{l}\{\xi_{l}\}\) is an orientable crowned hyperbolic surface. The marked points are called _vertices_ and the connected components of \(\partial_{i}S_{0}\setminus\{\xi_{Q(i-1)+1},\ldots,\xi_{Q(i)}\}\) are called _edges_.

Firstly, we perform a topological operation on \(S_{0}\), called _doubling_, to obtain a "bigger" hyperbolic surface with boundary and without any marked points. This is done in two steps:

1. We truncate small neighbourhoods of every marked point along embedded arcs, denoted by \(V:=\{r_{l}\}_{l=1}^{Q}\), that join the edges adjacent to the spikes. The elements of \(V\) are called _\(V\)-edges_. Let \(S\) be the resulting surface. For \(i=1,\ldots,n\), when \(q_{i}>0\), the \(i\)-th boundary of \(S\), \(\partial_{i}S\), is the union of \(2q_{i}\) segments, alternately partitioned into \(V\)-edges and the truncated boundary edges of \(S_{0}\). When \(q_{i}=0\), \(\partial_{i}S=\partial_{i}S_{0}\). The truncated boundary edges, along with any closed loop in \(\partial S_{0}\), are called _\(E\)-edges_.
2. Then we take a copy \(S^{\prime}\) of \(S\) and glue it to \(S\) along the \(V\)-edges.

The final surface, denoted by \(\Sigma:=S\sqcup S^{\prime}/\sim\), has genus \(2g\), with \(2n+Q\) boundary components. If \(\partial_{i}S_{0}\) had \(q_{i}>0\) \(E\)-edges, then after gluing we get \(q_{i}\) boundary components made out of two copies of every \(E\)-edge. For the Euler characteristic of the surface \(\Sigma\) we get \(\chi(\Sigma)=2-4g-2n<0\), so it is hyperbolic. Since there are no spikes, we can consider all complete hyperbolic metrics with totally geodesic boundary. Its deformation space \(\mathfrak{D}(\Sigma)\) is an open ball of dimension \(12g-6+6n\).

The surface \(\Sigma\) has a degree-two symmetry \(\iota\in\mathrm{MCG}(\Sigma)\) that exchanges the two surfaces \(S\) and \(S^{\prime}\). Keeping this in mind, we construct an isomorphism, denoted by \(h\), between the subcomplex \(\mathrm{Fix}_{\iota}(\widehat{\mathcal{A}}(\Sigma))\) of the pruned arc complex of \(\Sigma\) invariant under the involution \(\iota\), and the pruned arc complex \(\widehat{\mathcal{A}}(S_{0})\) of \(S_{0}\), in the following way: at first we define it on the \(0\)-skeleton of the arc complex, and then we extend it linearly to a generic point of the pruned arc complex.

* Let \(e\) be an arc joining a spike \(\xi\) and an edge \(l\) of \(\partial S_{0}\).
Then Step 1 above truncates \(e\); in \(S\) it becomes an arc, again denoted by \(e\), joining the corresponding \(V\)-edge and the initial \(E\)-edge \(l\). Let \(e^{\prime}\subset S^{\prime}\) be the twin arc of \(e\). Finally, after Step 2, \(e^{\prime\prime}:=e\sqcup e^{\prime}/\sim\) becomes the arc that joins the two copies of \(l\) forming a totally geodesic boundary component of \(\Sigma\), and is transverse to the \(V\)-edge. It is preserved as a set by the involution. Define \(h(e):=e^{\prime\prime}\).

* Let \(e\) be an edge-to-edge arc not in \(V\). So \(e\) joins two distinct boundary edges of \(\partial S_{0}\). Step 1 does not change the arc \(e\), and it remains disjoint from its twin \(e^{\prime}\subset S^{\prime}\) inside \(\Sigma\). Define \(h(e):=\frac{e+e^{\prime}}{2}\).

The map is then extended linearly over any point \(x\in\widehat{\mathcal{A}}(S_{0})\).

**Lemma 4.10**.: _The map \(h:\widehat{\mathcal{A}}(S_{0})\longrightarrow\mathrm{Fix}_{\iota}(\widehat{\mathcal{A}}(\Sigma))\) is an isomorphism._

Proof.: Firstly, we show that this map is well-defined. As discussed before, a point \(x\in\widehat{\mathcal{A}}(S_{0})\) belongs to the interior of a unique simplex \(\sigma_{x}\) of \(\mathcal{A}(S_{0})\). In other words,

\[x=\sum_{i=1}^{p}t_{i}\,e_{i},\ \text{with}\ t_{i}\in(0,1),\ e_{i}\in\sigma_{x}^{(0)}\ \text{for every}\ i=1,\ldots,p,\ \text{and}\ \sum_{i}t_{i}=1.\]

The union \(\bigcup\limits_{i}e_{i}\) of arcs decomposes the surface \(S_{0}\) into topological disks with at most one vertex. Let \(y:=h(x)=\sum_{j=1}^{q}s_{j}\,\alpha_{j}\). Then from the definition of \(h\) it follows that \(s_{j}\in(0,1)\) and \(\alpha_{j}\in\mathcal{A}_{\mathcal{K}}(\Sigma)^{(0)}\) for every \(j=1,\ldots,q\). The family of arcs \(\left\{\alpha_{j}\right\}_{j}\) decomposes the surface \(\Sigma\) into topological disks. Otherwise, there is a connected component \(K\) of the complement in \(\Sigma\) such that \(\pi_{1}(K)\neq\{1\}\), so it is possible to find a non-trivial simple closed curve \(\gamma\) in \(K\). Then either there was a curve in the complement of \(\{e_{i}\}_{i}\) in \(S_{0}\) such that \(\gamma\) is one of its copies, or the curve \(\gamma\) was created from a vertex-to-vertex arc by the doubling operation. Neither of these two cases is possible, because \(x\in\widehat{\mathcal{A}}(S_{0})\) and, by definition of the pruned arc complex of a surface with decorated spikes, the family of arcs \(\{e_{i}\}_{i}\) decomposes the initial surface \(S_{0}\) into disks with at most one vertex. So we have that \(y\) is a point of \(\widehat{\mathcal{A}}(\Sigma)\).

Finally, we verify that the point \(h(x)\) is \(\iota\)-invariant:

\[\iota(h(e))=\left\{\begin{array}{ll}\frac{\iota(e)+\iota(e^{\prime})}{2},&\text{ if }e\text{ is edge-to-edge}\\ \iota(e^{\prime\prime}),&\text{ if }e\text{ is edge-to-vertex}\end{array}\right.=\left\{\begin{array}{ll}\frac{e^{\prime}+e}{2},\\ e^{\prime\prime},\end{array}\right.=h(e).\]

Figure 7: Doubling operation

The inverse: Start with \(y\in\operatorname{Fix}_{\iota}(\widehat{\mathcal{A}}(\Sigma))\). Then there exists a unique simplex \(\sigma_{y}\) such that \(y\in\operatorname{int}\left(\sigma_{y}\right)\), i.e.,

\[y=\sum_{j=1}^{q}s_{j}\,\alpha_{j},\ \text{with}\ s_{j}\in(0,1),\ \alpha_{j}\in\sigma_{y}^{(0)}\ \text{for every}\ j=1,\ldots,q,\ \text{and}\ \sum_{j}s_{j}=1.\]

Since \(\iota(y)=y\), for every \(j\in\{1,\ldots,q\}\), either \(\iota(\alpha_{j})=\alpha_{j}\) or \(\iota(\alpha_{j})=\alpha_{k}\) for some \(k\in\{1,\ldots,q\}\smallsetminus\{j\}\).
In the former case, there exists an edge-to-vertex arc \(e_{j}\) in \(S\) with twin \(e_{j}^{\prime}\) in \(S^{\prime}\) such that \(\alpha_{j}=e_{j}\sqcup e_{j}^{\prime}/\sim\), and we define \(h^{-1}(s_{j}\alpha_{j}):=s_{j}e_{j}\). In the latter case, we must also have \(s_{j}=s_{k}=:s_{jk}\). Suppose that \(\alpha_{j}\subset S\) and \(\alpha_{k}\subset S^{\prime}\). Then define \(h^{-1}(s_{jk}(\alpha_{j}+\alpha_{k})):=2s_{jk}\alpha_{j}\).

Now we shall prove Theorem 4.9.

Proof of Theorem 4.9.: From Theorem 4.5, we get that there is a \(\operatorname{MCG}(\Sigma)\)-invariant homeomorphism:

\[\operatorname{Fix}_{\iota}(\widehat{\mathcal{A}}(\Sigma))\cong\operatorname{Fix}_{\iota}(\mathcal{T}(\Omega)). \tag{6}\]

The subspace \(\operatorname{Fix}_{\iota}(\mathcal{T}(\Omega))\) is an open ball: it can be parametrised by the lengths of the geodesic arcs of an \(\iota\)-invariant triangulation of \(\Sigma\). Finally, using the isomorphism \(h\) from above, we get that \(\widehat{\mathcal{A}}(S_{0})\) is an open ball of dimension \(6g-7+3n+2Q\).

Using the same method as in the previous section, we can show that

**Theorem 4.11**.: _The pruned arc complex \(\widehat{\mathcal{A}}(T_{h,n}^{\vec{q},\vec{h}})\) of a non-orientable surface \(T_{h,n}^{\vec{q},\vec{h}}\) with decorated spikes is an open ball of dimension \(3h-7+3n+2Q\)._

### Tiles

Let \(S\) be a hyperbolic surface endowed with a hyperbolic metric \(m\in\mathfrak{D}(S)\), and let \(\mathcal{K}\) be the set of permitted arcs for an arc complex \(\mathcal{A}(S)\) of the surface. Given a simplex \(\sigma\subset\mathcal{A}(S)\), the _edge set_ is defined to be the set

\[\mathcal{E}_{\sigma}:=\left\{\alpha_{g}(m)\mid\alpha\in\sigma^{(0)}\right\},\]

where \(\alpha_{g}(m)\in\alpha\) is a geodesic representative of the isotopy class \(\alpha\). The set of all lifts of the arcs of the edge set in the universal cover \(\widetilde{S}\subset\mathbb{H}^{2}\) is denoted by \(\widetilde{\mathcal{E}_{\sigma}}\). The set of connected components of the surface \(S\) in the complement of the arcs of the edge set is denoted by \(\mathcal{T}_{\sigma}\). The lifts of the elements of \(\mathcal{T}_{\sigma}\) in \(\mathbb{H}^{2}\) are called _tiles_; their collection is denoted by \(\widetilde{\mathcal{T}_{\sigma}}\). They are topological disks. The sides of a tile are either contained in the boundary of the original surface or are arcs of \(\mathcal{E}_{\sigma}\). The former are called _boundary sides_ and the latter _internal sides_. Two tiles \(d,d^{\prime}\) are called _neighbours_ if they have a common internal side. Tiles having finitely many edges are called _finite_. If \(\sigma\) has maximal dimension in \(\mathcal{A}(S)\), then the finite tiles can be of three types:

1. The tile has only one internal side, i.e., it has only one neighbour.
2. The tile has two internal sides, i.e., two neighbours.
3. The tile has three internal sides, i.e., three neighbours.

_Remark 4.2_.: Any tile obtained from a triangulation using a simplex \(\sigma\) must have at least one and at most three internal sides. Indeed, the only time a tile has no internal side is when the surface is an ideal triangle. Also, if a tile has four internal sides, then it must also have at least four distinct boundary sides to accommodate at least four endpoints of the arcs. The finite arc that joins one pair of non-consecutive boundary sides lies inside \(\mathcal{K}\). This arc was not inside the original simplex, which implies that \(\sigma\) is not maximal. Hence a tile can have at most three internal sides.
The following are the different types of tiles possible:

* When the tile has only one internal side, that side is an edge-to-edge arc of the original surface. The tile contains exactly one decorated spike \(\nu\) and two boundary sides.
* When there are two internal sides, one of them is of edge-to-vertex type and the other one is of edge-to-edge type. So the tile contains a decorated spike.
* When there are three internal sides, there are two possibilities: either all three internal sides are of edge-to-edge type, or two of them are edge-to-vertex arcs and one is an edge-to-edge arc. In the former case, the tile does not contain any vertex, whereas in the latter case it contains exactly one.

## 5 Strip deformations

In this section, we recapitulate strip deformations, strip templates, tiles and tile maps. Informally, a strip deformation of a hyperbolic surface is performed by cutting it along a geodesic arc \(\alpha_{g}\) in \(\alpha\) and gluing in a strip of the hyperbolic plane \(\mathbb{H}^{2}\), without any shearing. The type of strip used depends on the type of arc and the surface being considered. Firstly, we define the different types of strips. Let \(l_{1}\) and \(l_{2}\) be any two geodesics in \(\mathbb{H}^{2}\). Then we consider two types of strips, depending on the nature of their intersection:

* Suppose that \(l_{1}\) and \(l_{2}\) are disjoint in \(\overline{\mathbb{H}^{2}}\). Then the region bounded by them in \(\mathbb{H}^{2}\) is called a _hyperbolic strip_. The _width_ of the strip is the length of the segment of the unique common perpendicular \(l\) to \(l_{1}\) and \(l_{2}\) contained in the strip. The _waist_ of the strip is defined to be the set of points of intersection \(l\cap l_{1}\) and \(l\cap l_{2}\).
* Suppose that \(l_{1}\) and \(l_{2}\) intersect in \(\partial_{\infty}\mathbb{H}^{2}\) at a point \(p\). Let \(h\) be a horocycle based at \(p\). Then the region bounded by them inside \(\mathbb{H}^{2}\) is called a _parabolic strip_. The waist in this case is defined to be the ideal point \(p\), and the width (w.r.t. \(h\)) is defined to be the length of the horocyclic arc of \(h\) subtended by \(l_{1}\) and \(l_{2}\).

Let \(S\) be a hyperbolic surface endowed with a metric \(m\in\mathfrak{D}(S)\), and let \(\mathcal{K}\) be the set of permitted arcs (Definition 4.2). A _strip template_ is the following data:

* an \(m\)-geodesic representative \(\alpha_{g}\) from every isotopy class \(\alpha\) of arcs in \(\mathcal{K}\), along which the strip deformation is performed,
* a point \(p_{\alpha}\in\alpha_{g}\) where the waist of the strip being glued must lie,
* a width \(w_{\alpha}>0\) for the strip glued along \(\alpha_{g}\).

A choice of strip template is the specification of this data. However, we shall see in the following section that even though we are allowed to choose the geodesic arcs in every case, the waists are sometimes fixed beforehand by the nature of the arc being considered. Recall that finite arcs are embeddings of a closed and bounded interval into the surface, with both endpoints lying on the boundary of the surface. These arcs are present in the construction of every arc complex that we discuss. The strip glued along these arcs is of hyperbolic type, except when the arc joins an elliptic vertex \(v\in\mathbb{H}^{2}\) with an edge in a decorated polygon. In this case, the strip glued is of elliptic type, with its waist at \(v\). The representative \(\alpha_{g}\) from the isotopy class of such an arc can be any geodesic segment from \(v\) to that edge.
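In the hyperboloid model, a geodesic corresponds to a unit space-like normal vector, and the width of a hyperbolic strip reduces to a standard one-line formula: for disjoint boundary geodesics with unit normals \(u_{1},u_{2}\), one has \(\cosh(\text{width})=|\langle u_{1},u_{2}\rangle|\). A minimal numerical sketch under these standard conventions (the encoding is an assumption of the example, not notation from the paper):

```python
import math

def mink(u, v):
    # Minkowski pairing of signature (2,1)
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

def hyperbolic_strip_width(u1, u2):
    # u1, u2: unit space-like normals of the two boundary geodesics;
    # the geodesics are disjoint in the closed disk iff |<u1,u2>| > 1,
    # in which case cosh(width) = |<u1,u2>|
    c = abs(mink(u1, u2))
    assert c > 1.0, "the boundary geodesics must be disjoint"
    return math.acosh(c)

s = 0.7
u1 = (1.0, 0.0, 0.0)                      # the geodesic {x = 0}
u2 = (math.cosh(s), 0.0, math.sinh(s))    # a geodesic at distance s from it
print(hyperbolic_strip_width(u1, u2))     # 0.7
```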
In the case of a finite arc joining a truncated hyperideal vertex to an edge, the representative is chosen to be perpendicular to the truncated vertex, and the waist is at the point of intersection of the arc with the truncated vertex. In every other case, including edge-to-edge arcs in decorated polygons, we are free to choose the geodesic representative and the waist of the hyperbolic strip.

Let \([\alpha]\) be the isotopy class of a permitted infinite arc \(\alpha\) of a hyperbolic surface \(S\). Then \(\alpha\) has one finite end lying on \(\partial S\) and one infinite end that escapes the surface through a decorated spike. We can choose any geodesic arc \(\alpha_{g}\) from \([\alpha]\) that does the same without any self-intersection. Consider the universal cover of the surface inside the disk model of \(\mathbb{H}^{2}\). The lift of such an arc is a geodesic ray in \(\mathbb{H}^{2}\) whose infinite end converges to an ideal point which is a lift of the spike of the decorated surface. The strip added is of parabolic type, with its waist at this ideal point.

Now we recall the formal definition of a strip deformation and its infinitesimal version.

**Definition 5.1**.: Given an isotopy class \(\alpha\) of arcs and a choice of strip template \((\alpha_{g},p_{\alpha},w_{\alpha})\), define the _strip deformation_ along \(\alpha\) to be a map

\[F_{\alpha}:\mathfrak{D}(S)\longrightarrow\mathfrak{D}(S)\]

where the image \(F_{\alpha}(m)\) of a point \(m\in\mathfrak{D}(S)\) is a new metric on the surface obtained by cutting it along the \(m\)-geodesic arc \(\alpha_{g}\) in \(\alpha\) chosen by the strip template and gluing a strip of width \(w_{\alpha}\) whose waist coincides with \(p_{\alpha}\). The type of strip used depends on the type of arc and the surface being considered.

**Definition 5.2**.: Given an isotopy class of arcs \(\alpha\) of a hyperbolic surface \(S\) and a strip template \(\{(\alpha_{g},p_{\alpha},w_{\alpha})\}_{\alpha\in\mathcal{K}}\) adapted to the nature of \(\alpha\) for every \(m\in\mathfrak{D}(S)\), define the _infinitesimal strip deformation_

\[\begin{array}{rl}f_{\alpha}:&\mathfrak{D}(S)\longrightarrow T\mathfrak{D}(S)\\ &m\mapsto\frac{\mathrm{d}}{\mathrm{d}t}\Big{|}_{t=0}m(t),\end{array}\]

where \(m(\cdot)\) is a path in \(\mathfrak{D}(S)\) such that \(m(0)=m\) and \(m(t)\) is obtained from \(m\) by strip deforming along \(\alpha\) with the fixed waist \(p_{\alpha}\) and width \(tw_{\alpha}\).

Let \(m=[(\rho,\mathbf{x})]\in\mathfrak{D}(S)\) be a point in the deformation space of the surface, where \(\rho\) is the holonomy representation, and denote \(\Gamma=\rho(\pi_{1}(S))\). Fix a strip template \(\{(\alpha_{g},p_{\alpha},w_{\alpha})\}\) with respect to \(m\). Let \(\sigma\) be a simplex of \(\mathcal{A}(S)\). Given an arc \(\alpha\) in the edge set \(\mathcal{E}_{\sigma}\), there exist tiles \(\delta,\delta^{\prime}\in\mathcal{T}_{\sigma}\) such that every lift \(\widetilde{\alpha}\) of \(\alpha\) in \(\widetilde{S}\) is the common internal side of two lifts \(\widetilde{\delta},\widetilde{\delta^{\prime}}\) of the tiles. Also, \(p_{\gamma\cdot\widetilde{\alpha}}=\gamma\cdot p_{\widetilde{\alpha}}\) for every \(\gamma\in\Gamma\). The infinitesimal deformation \(f_{\alpha}(m)\) tends to pull the two tiles \(\delta\) and \(\delta^{\prime}\) away from each other due to the addition of the infinitesimal strip. Let \(u\) be the infinitesimal deformation of \(\rho\) caused by \(f_{\alpha}(m)\).
Then we have a \((\rho,u)\)-equivariant _tile map_ \(\phi:\widetilde{\mathcal{T}_{\sigma}}\rightarrow\mathfrak{g}\) such that for every \(\gamma\in\Gamma\),

\[\phi(\rho(\gamma)\cdot\widetilde{\delta})-\phi(\rho(\gamma)\cdot\widetilde{\delta^{\prime}})=\rho(\gamma)\cdot v_{\widetilde{\alpha}}, \tag{7}\]

where \(v_{\widetilde{\alpha}}\) is the Killing field in \(\mathfrak{g}\simeq\mathcal{X}\) corresponding to the strip deformation \(f_{\widetilde{\alpha}}(m)\) along a geodesic arc \(\widetilde{\alpha}_{g}\), isotopic to \(\widetilde{\alpha}\), adapted to the chosen strip template, and pointing towards \(\widetilde{\delta}\):

* If \(f_{\alpha}(m)\) is a hyperbolic strip deformation with strip template \((\alpha_{g},p_{\alpha},w_{\alpha})\), then \(v_{\widetilde{\alpha}}\) is defined to be the hyperbolic Killing vector field whose axis is perpendicular to \(\widetilde{\alpha}_{g}\) at the point \(\widetilde{p_{\alpha}}\) and whose velocity is \(w_{\alpha}\).
* If \(\alpha\) is an infinite arc joining a spike and a boundary component, then \(f_{\alpha}(m)\) is a parabolic strip deformation with strip template \((\alpha_{g},p_{\alpha},w_{\alpha})\), and \(v_{\widetilde{\alpha}}\) is defined to be the parabolic Killing vector field whose fixed point is the ideal point where the infinite end of \(\widetilde{\alpha}\) converges.

_Remark 5.1_.: Such a strip deformation \(f_{\alpha}:\mathfrak{D}(S)\longrightarrow T_{m}\mathfrak{D}(S)\) does not deform the holonomy of a crowned surface if \(\alpha\) is completely contained outside the convex core of the surface. However, it does impart infinitesimal motion to the spikes.

More generally, a linear combination of strip deformations \(\sum_{\alpha}c_{\alpha}f_{\alpha}(m)\) along pairwise disjoint arcs \(\{\alpha_{i}\}\subset\mathcal{E}_{\sigma}\) imparts motion to the tiles of the triangulation depending on the coefficient of each term in the linear combination. A tile map corresponding to it is a \((\rho,u)\)-equivariant map \(\phi:\widetilde{\mathcal{T}_{\sigma}}\rightarrow\mathfrak{g}\) such that for every pair \(\delta,\delta^{\prime}\in\mathcal{T}_{\sigma}\) which share an edge \(\alpha\in\mathcal{E}_{\sigma}\), equation (7) is satisfied by \(\phi\).

**Definition 5.3**.: The _infinitesimal strip map_ is defined as:

\[\begin{array}{ccccc}\mathbb{P}f&:&\widehat{\mathcal{A}}(S)&\longrightarrow&\mathbb{P}^{+}(T_{m}\mathfrak{D}(S))\\ &&\sum\limits_{i=1}^{\dim\mathfrak{D}(S)}c_{i}\alpha_{i}&\mapsto&\left[\sum\limits_{i=1}^{\dim\mathfrak{D}(S)}c_{i}f_{\alpha_{i}}(m)\right]\end{array}\]

where \(\widehat{\mathcal{A}}(S)\) is the pruned arc complex of the surface (Definition 4.4).

### Estimates

Let \(S^{\odot}\) be a surface with decorated spikes, endowed with a metric \(m\). Consider a strip deformation \(f_{\alpha}(m)\) along a finite arc \(\alpha\), with strip template \((\alpha_{g},p_{\alpha},w_{\alpha})\). Then the strip added along \(\alpha\) is hyperbolic. Let \(w_{\alpha}(p)\) be the width of the strip at the point \(p\in\alpha_{g}\). Let \(\widetilde{\alpha_{g}},\widetilde{p_{\alpha}},\widetilde{p}\) be lifts of \(\alpha_{g},p_{\alpha},p\) such that \(\widetilde{p},\widetilde{p_{\alpha}}\in\widetilde{\alpha_{g}}\). Suppose that \(v_{\widetilde{\alpha}}\) is the Killing field acting across \(\widetilde{\alpha_{g}}\) due to the strip deformation. Then, \(\|v_{\widetilde{\alpha}}\|=w_{\alpha}\).
In the hyperboloid model of \(\mathbb{H}^{2}\), suppose that \(v_{\widetilde{\alpha}}=(w_{\alpha},0,0)\) and let the plane containing \(\widetilde{\alpha_{g}}\) be \(\{(x,y,z)\in\mathbb{R}^{3}\mid y=0\}\). So, \(\widetilde{p_{\alpha}}=(0,0,1)\). A point \(p\) on the geodesic \(\widetilde{\alpha_{g}}\) is of the form \((x,0,\sqrt{x^{2}+1})\), with \(x\in\mathbb{R}\). Then we have

\[w_{\alpha}(p)=\|v_{\widetilde{\alpha}}\wedge p\|=w_{\alpha}\sqrt{x^{2}+1}=-w_{\alpha}\langle p,\widetilde{p_{\alpha}}\rangle=w_{\alpha}\cosh d_{\mathbb{H}^{2}}(p,\widetilde{p_{\alpha}}). \tag{8}\]

Now suppose that the arc \(\alpha\) joins a decorated spike and a boundary component. Then the infinitesimal strip added by \(f_{\alpha}(m)\) is parabolic. Let \(v_{\widetilde{\alpha}}=(w_{\alpha},0,w_{\alpha})\) be the corresponding parabolic Killing field. Then,

\[w_{\alpha}(p)=\|v_{\widetilde{\alpha}}\wedge p\|=w_{\alpha}(\sqrt{x^{2}+1}-x).\]

Let \(L\) be the linear coordinate along the arc \(\alpha\) such that \(L<0\) if \(p\) lies between the spike and \(p_{\alpha}\), and \(L>0\) if \(p_{\alpha}\) lies between the spike and \(p\). Taking \(x=-\sinh L\), we get \(w_{\alpha}(p)=w_{\alpha}\mathrm{e}^{L}\). The point \(p_{\alpha}\) is called the point of _minimum impact_ because \(w_{\alpha}(p_{\alpha})=w_{\alpha}\).

**Definition 5.4**.: Let \(S^{\odot}\) be a surface with decorated spikes, with a metric \(m\) and corresponding strip template \(\{(\alpha_{g},p_{\alpha},w_{\alpha})\}\). Let \(x=\sum_{i=1}^{N_{0}}c_{i}\alpha_{i}\) be a point in the pruned arc complex \(\widehat{\mathcal{A}}(S^{\odot})\). Then the _strip width function_ is defined as:

\[\begin{array}{cccc}w_{x}:&\mathrm{supp}\,(x)&\longrightarrow&\mathbb{R}_{>0}\\ &p&\mapsto&c_{i}w_{\alpha_{i}}(p),\end{array}\]

for \(p\in\alpha_{i}\).

Normalisation: Let \(S^{\odot}\) be a surface with decorated spikes and let \(\mathcal{K}\) be the set of permitted arcs. Then for every \(\alpha\in\mathcal{K}\), we choose \(w_{\alpha}>0\) such that the following equality holds for every \(x\in\widehat{\mathcal{A}}(S^{\odot})\):

\[\sum_{p\in\partial S^{\odot}\cap\operatorname{supp}(x)}w_{x}(p)=1. \tag{9}\]

The following lemma was proved in the case of decorated polygons in [15].

**Lemma 5.5**.: _Let \(S^{\odot}\) be a surface with decorated spikes endowed with a decorated metric \(m\) and a corresponding strip template \(\{(\alpha_{g},p_{\alpha},w_{\alpha})\}\). Let \(x\in\widehat{\mathcal{A}}(S^{\odot})\) and let \(\gamma\) be a closed curve or a horoball connection of \(S^{\odot}\) intersecting \(\operatorname{supp}\left(x\right)\). Then,_

\[\mathrm{d}l_{\gamma}(f(x))=\sum_{p\in\gamma\cap\operatorname{supp}(x)}w_{x}(p)\sin\angle_{p}(\gamma,\operatorname{supp}\left(x\right))>0. \tag{10}\]

**Lemma 5.6**.: _Let \(x\) be a point of a hyperbolic surface \(S\). Let \(B\) be a geodesic ball centered at \(x\) with radius \(r\), where \(r\) is the injectivity radius of the surface. Then for every pair of distinct lifts \(B_{1},B_{2}\) of \(B\) in the universal cover of \(S\), we have that \(B_{1}\cap B_{2}=\emptyset\)._

Proof.: Let \(x,B,B_{1},B_{2}\) be as in the hypothesis. Then, \(B=\exp_{x}(B(0,r))\). For \(i=1,2\), let \(x_{i}\in\mathbb{H}^{2}\) be the center of \(B_{i}\). Then \(x_{2}=\gamma\cdot x_{1}\) for some \(\gamma\in\pi_{1}(S)\). If possible, let \(\overline{z}\in B_{1}\cap B_{2}\). Since the action of \(\pi_{1}(S)\) is free, the point \(z^{\prime}:=\gamma\cdot\overline{z}\) is distinct from \(\overline{z}\) and lies inside \(B_{2}\).
Join \(x_{2}\) with \(\overline{z}\) and \(z^{\prime}\) by geodesic segments \(l_{1},l_{2}\), respectively. On the surface \(S\), the path \(l_{1}\cup l_{2}\) is mapped to a loop based at the image \(z\) of \(\overline{z}\). Since \(B\) is the embedding of a ball by the exponential map, this loop is trivial, so its lift \(l_{1}\cup l_{2}\) must be a loop in \(\widetilde{S}\) based at \(\overline{z}\). But \(l_{1}\cup l_{2}\) joins \(\overline{z}\) to the distinct point \(z^{\prime}\), which is a contradiction.

**Lemma 5.7**.: _Let \(S\) be a hyperbolic surface with decorated spikes and a metric \(m\in\mathfrak{D}(S)\). Then there exists \(M>0\) such that for every non-trivial closed geodesic \(\gamma\) and for every non-trivial geodesic arc \(\alpha\), the following inequality holds:_

\[\sum_{p\in\gamma\cap\alpha}w_{\alpha}(p)\leq Ml_{\gamma}(m). \tag{11}\]

Figure 8: Intersection of the arc and the curve

Proof.: Let \(\gamma\) and \(\alpha\) be as in the hypothesis. We further suppose that \(\alpha\) is an infinite arc joining a decorated spike and a boundary component of \(S\). We prove that there exists a positive constant \(M\) such that for every unit segment \(\eta\) of \(\gamma\) and for every arc \(\alpha\), the following inequality holds:

\[\sum_{p\in\eta\cap\alpha}w_{\alpha}(p)\leq M. \tag{12}\]

Given such a unit segment \(\eta\), let \(\eta\cap\alpha=\{p_{1},\ldots,p_{k}\}\). We order the points so that \(p_{i}\) lies closer to the horoball than \(p_{j}\) if and only if \(i>j\). Take a lift \(\widetilde{\alpha}\) of \(\alpha\) in the universal cover inside \(\mathbb{H}^{2}\). Since \(\alpha\) is embedded, for every \(i=1,\ldots,k\), there is exactly one lift \(\widetilde{p_{i}}\) of \(p_{i}\) that lies on \(\widetilde{\alpha}\). Let \(r_{0}\) be the injectivity radius of the surface. Cover the entire arc \(\widetilde{\alpha}\) with balls \(\{B_{j}\}_{j\in J}\) of radius \(r:=\frac{r_{0}}{2}\), such that two consecutive balls are tangent to each other. We claim that a lift \(\widetilde{\eta}\) of \(\eta\) meets at most \(M_{0}:=[\frac{1}{r_{0}}]+1\) balls in any given orbit. Indeed, consider one lift \(\widetilde{\eta}\) of \(\eta\) and a ball \(B_{r}\) in the above covering. Each ball in the orbit of \(B_{r}\) is contained in the bigger ball \(B_{2r}\) with the same centre and radius \(r_{0}\), and by Lemma 5.6 no two of the latter can intersect. Thus the maximum number of balls in the same orbit as \(B_{r}\) intersecting the unit segment \(\widetilde{\eta}\) is \(M_{0}\).

Next, we know that \(w_{\widetilde{\alpha}}(\widetilde{p_{i}})=\mathrm{e}^{L_{i}}\), where \(L_{i}\) is the negative arc coordinate of \(\widetilde{p_{i}}\) along \(\widetilde{\alpha}\). Then \(w_{\widetilde{\alpha}}(\widetilde{p_{i}})\) decreases exponentially as \(i\) increases. In every ball, the maximum value of \(w_{\widetilde{\alpha}}\) is attained at the rightmost point. Two such points in two consecutive balls are at most \(r_{0}\) apart, because the balls are of radius \(\frac{r_{0}}{2}\) and tangent to each other. Inside the first ball, the maximum value of \(w_{\widetilde{\alpha}}\) is at most \(1\), attained when the point of minimum impact is an intersection point. So we have

\[\sum_{p\in\eta\cap\alpha}w_{\alpha}(p)=\sum_{i=1}^{k}w_{\widetilde{\alpha}}(\widetilde{p_{i}})=\sum_{i=1}^{k}\mathrm{e}^{L_{i}}\leq M_{0}(1+\mathrm{e}^{-r_{0}}+\mathrm{e}^{-2r_{0}}+\ldots)=\frac{M_{0}}{1-\mathrm{e}^{-r_{0}}}=:M. \tag{13}\]

Finally, taking the sum over all the unit segments of \(\gamma\), we get that

\[\sum_{p\in\gamma\cap\alpha}w_{\alpha}(p)\leq Ml_{\gamma}(m).\]
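The counting in the proof above is a geometric-series estimate: at most \(M_{0}\) intersection points per orbit of balls, with widths decaying by a factor of at least \(\mathrm{e}^{-r_{0}}\) from one block to the next. The following sketch checks inequality (13) on a worst-case toy configuration (the numbers are made up for the example):

```python
import math

r0 = 0.3                         # injectivity radius
M0 = int(1 / r0) + 1             # max intersections per orbit, as in the proof
bound = M0 / (1.0 - math.exp(-r0))

# worst case: M0 intersection points at each coordinate L = 0, -r0, -2*r0, ...
widths = [math.exp(-k * r0) for k in range(10000) for _ in range(M0)]
assert sum(widths) <= bound + 1e-9
print(sum(widths), "<=", bound)
```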
**Lemma 5.8**.: _Let \(S_{c}\) be a compact hyperbolic surface equipped with a metric \(m\in\mathfrak{D}(S_{c})\). Then for every \(\epsilon>0\) there exists \(M>0\) such that whenever a geodesic arc \(\alpha\) has \(m\)-length \(l_{\alpha}(m)>M\), there exists a closed geodesic \(\gamma\) on the surface that intersects \(\alpha\), as well as every geodesic arc disjoint from \(\alpha\), at an angle less than \(\epsilon\)._

Proof.: Given \(\epsilon>0\), there exists \(N\in\mathbb{N}\) such that \(N\mathrm{diam}(S_{c})\epsilon>3\operatorname{area}(S_{c})\). Take \(M=N\mathrm{diam}(S_{c})\). Consider an arc \(\alpha\) of length \(M\) and its \(\epsilon\)-neighbourhood \(V_{\epsilon}(\alpha)\). The area of \(V_{\epsilon}(\alpha)\) is at least \(2M\epsilon\), so \(V_{\epsilon}(\alpha)\) cannot be embedded inside the surface: it self-overlaps threefold. It follows that there exists a segment \(\eta\) of the arc \(\alpha\) such that its length is \(N^{\prime}\mathrm{diam}(S_{c})\) for some \(1<N^{\prime}<N\), its endpoints lie at a distance at most \(2\epsilon\) from each other, and the velocities at those points are parallel. Join the two endpoints by a geodesic segment to get a closed loop, and choose the unique closed geodesic \(\gamma\) in its homotopy class. We claim that \(\gamma\) satisfies the conditions of the lemma.

Firstly, we show that

\[\max_{p\in\gamma\cap\alpha}\angle_{p}(\gamma,\alpha)<\epsilon. \tag{14}\]

Let \(\widetilde{\gamma}\) be an infinite geodesic lift of \(\gamma\). Let \(\widetilde{\eta}\) be a lift of the arc segment \(\eta\) and consider its \(\rho(\gamma)\)-orbit. For every \(i\in\mathbb{Z}\), the two endpoints of \(\rho(\gamma)^{i}\cdot\widetilde{\eta}\) are \(\epsilon\)-close to one endpoint of \(\rho(\gamma)^{i-1}\cdot\widetilde{\eta}\) and one endpoint of \(\rho(\gamma)^{i+1}\cdot\widetilde{\eta}\). Let \(p_{i}:=\widetilde{\gamma}\cap\rho(\gamma^{i})\cdot\widetilde{\eta}\). The entire geodesic \(\widetilde{\gamma}\) is contained in the union \(V:=\bigcup_{i\in\mathbb{Z}}V_{\epsilon}(\rho(\gamma^{i})\cdot\widetilde{\eta})\) of the \(\epsilon\)-neighbourhoods of the lifts of \(\eta\). So the angle of intersection at \(p_{i}\) satisfies

\[\angle_{p_{i}}(\widetilde{\gamma},\rho(\gamma^{i})\cdot\widetilde{\eta})<\epsilon.\]

Now let \(\alpha^{\prime}\) be an arc disjoint from \(\alpha\) that intersects \(\gamma\). Then the point of intersection, denoted by \(p\), lies inside \(V_{\epsilon}(\alpha)\), and from (14) we have that \(\alpha^{\prime}\) intersects \(\gamma\) at an angle less than \(\epsilon\).

The following lemma is an analogue of Proposition 2.3 in [5].

**Lemma 5.9**.: _Let \(S^{\odot}\) be a hyperbolic surface with decorated spikes endowed with a metric \(m\in\mathfrak{D}(S^{\odot})\). For any choice of minimally intersecting geodesic representatives \(\{\alpha\}\) whose finite endpoints lie outside the horoball decoration of the spikes, there exists \(\theta_{0}\in(0,\frac{\pi}{2}]\) such that all the arcs intersect the boundary of the surface at an angle greater than or equal to \(\theta_{0}\)._

## 6 Main theorem

The goal of this section is to prove our parametrisation theorem:

**Theorem 6.1**.: _Let \(S^{\odot}=S^{\vec{q},\vec{h}}_{g,n}\) or \(T^{\vec{q},\vec{h}}_{h,n}\) be a hyperbolic surface with decorated spikes. Let \(m\in\mathfrak{D}(S^{\odot})\) be a decorated metric._
_Fix a choice of strip template \(\{(\alpha_{g},p_{\alpha},w_{\alpha})\}_{\alpha\in\mathcal{K}}\) with respect to \(m\). Then the infinitesimal strip map \(\mathbb{P}f:\widehat{\mathcal{A}}(S^{\odot})\longrightarrow\mathbb{P}^{+}(T_{m}\mathfrak{D}(S^{\odot}))\) is a homeomorphism onto its image \(\mathbb{P}^{+}(\Lambda(m))\)._

Firstly, we show that the map \(\mathbb{P}f\) is a local homeomorphism, and then we show that it is proper. This proves that the map is a covering map between two open balls of the same dimension, which implies that it is a homeomorphism.

### Codimension 0

In this section, we show that the map \(\mathbb{P}f\) is a local homeomorphism around points \(x\in\widehat{\mathcal{A}}(S)\) such that \(\operatorname{codim}\left(\sigma_{x}\right)=0\). For that, we first define longitudinal motions. Let \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{2,1}\) be two linearly independent future-pointing light-like vectors whose projective images are denoted by \(A,B\). Let \(AB\) be the unique hyperbolic geodesic joining these two points.

**Definition 6.2**.: Given a Killing vector field \(X\in\mathfrak{g}\), the _longitudinal motion_ \(X_{l}\) imparted by \(X\) to the geodesic \(AB\) is defined as

\[X_{l}:=\langle\mathbf{v}_{X},\frac{\mathbf{a}\wedge\mathbf{b}}{\|\mathbf{a}\wedge\mathbf{b}\|}\rangle, \tag{15}\]

where \(\mathbf{v}_{X}\) is the vector in \(\mathbb{R}^{2,1}\) corresponding to \(X\). The motion is called "longitudinal" because \(X_{l}\) is equal to the component of \(X(p)\) along the direction of the line \(AB\), for every point \(p\in\mathbb{H}^{2}\) lying on \(AB\).

**Lemma 6.3**.: _Let \(A,B,C,D\) be four ideal points ordered in anti-clockwise manner. Let \(AB\) and \(CD\) be two disjoint geodesics in \(\mathbb{H}^{2}\). Let \(E,F,G\) be the intersection points \(\overleftrightarrow{AD}\cap\overleftrightarrow{BC}\), \(\overleftrightarrow{AB}\cap\overleftrightarrow{CD}\) and \(\overleftrightarrow{AC}\cap\overleftrightarrow{BD}\), respectively. Then the set of all Killing fields that impart at least the same amount (in absolute value) of longitudinal motion to \(AB\) as to \(CD\) is given by the bigon bounded by \(\overleftrightarrow{GF}\) and \(\overleftrightarrow{EF}\) that contains the segment \(CD\)._

Proof.: Firstly, we prove that the Killing vector fields whose projective images are one of \(E,F,G\) impart longitudinal motions of equal absolute value to \(AB\) and \(CD\). Let \(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}\) be future-pointing light-like vectors in the preimages of \(A,B,C,D\) such that

\[\begin{array}{ll}\mathbf{a}=(-\cos\theta,-\sin\theta,1),&\mathbf{b}=(\cos\theta,-\sin\theta,1),\\ \mathbf{c}=(\cos\theta,\sin\theta,1),&\mathbf{d}=(-\cos\theta,\sin\theta,1),\end{array}\]

where \(\theta\in[0,\frac{\pi}{2}]\). The existence of such a \(\theta\) can be assumed up to applying a hyperbolic isometry to the quadruple \((A,B,C,D)\), because any cross-ratio is realised by some rectangle. Then the points \(A,B,C,D\) form a rectangle in the projective plane. Also, we have that

\[E=\left[(\mathbf{a}\wedge\mathbf{d})\wedge(\mathbf{c}\wedge\mathbf{b})\right],\quad F=\left[(\mathbf{a}\wedge\mathbf{b})\wedge(\mathbf{c}\wedge\mathbf{d})\right], \tag{16}\]

\[G=\left[(\mathbf{a}\wedge\mathbf{c})\wedge(\mathbf{b}\wedge\mathbf{d})\right]. \tag{17}\]

It follows directly from the definition of the cross product that any Killing vector field that is a preimage of \(F\) imparts no longitudinal motion either to \(AB\) or to \(CD\).
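The coordinate computations carried out just below can also be verified mechanically. Here is a hedged numerical sketch, assuming the Minkowski cross product normalised by \(\langle a\wedge b,c\rangle=\det(a,b,c)\); it confirms that a Killing field over \(G\) imparts equal longitudinal motions to \(AB\) and \(CD\), while one over \(E\) imparts opposite ones.

```python
import math

def mink(u, v):
    # Minkowski pairing of signature (2,1)
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

def wedge(a, b):
    # Minkowski cross product: <wedge(a,b), c> = det(a, b, c)
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            -(a[0]*b[1] - a[1]*b[0]))

def longitudinal(X, a, b):
    # longitudinal motion (15) of the Killing field X along the geodesic AB
    n = wedge(a, b)
    return mink(X, n) / math.sqrt(mink(n, n))

t = 0.6
a = (-math.cos(t), -math.sin(t), 1.0)
b = ( math.cos(t), -math.sin(t), 1.0)
c = ( math.cos(t),  math.sin(t), 1.0)
d = (-math.cos(t),  math.sin(t), 1.0)

XG = wedge(wedge(a, c), wedge(b, d))   # a preimage of G
XE = wedge(wedge(a, d), wedge(c, b))   # a preimage of E

print(longitudinal(XG, a, b), longitudinal(XG, c, d))   # equal values
print(longitudinal(XE, a, b), longitudinal(XE, c, d))   # opposite values
```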
By inserting the coordinates of \(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}\) in the formulae (16),(17), we get that

\[\begin{array}{ll}\mathbf{a}\wedge\mathbf{c}=2(-\sin\theta,\cos\theta,0),&\mathbf{b}\wedge\mathbf{d}=2(-\sin\theta,-\cos\theta,0),\\ G=[(0,0,-4\sin(2\theta))].\end{array}\]

Furthermore, we have that

\[\begin{array}{l}\mathbf{a}\wedge\mathbf{b}=(0,2\cos\theta,-\sin(2\theta)),\\ \mathbf{c}\wedge\mathbf{d}=(0,-2\cos\theta,-\sin(2\theta)),\\ \|\mathbf{a}\wedge\mathbf{b}\|^{2}=\|\mathbf{c}\wedge\mathbf{d}\|^{2}=(1+\cos(2\theta))^{2}.\end{array}\]

If we take any Killing vector field \(X_{G}=k((\mathbf{a}\wedge\mathbf{c})\wedge(\mathbf{b}\wedge\mathbf{d}))\) in the preimage of \(G\), \(k\in\mathbb{R}^{*}\), we get that

\[\langle X_{G},\frac{\mathbf{a}\wedge\mathbf{b}}{\|\mathbf{a}\wedge\mathbf{b}\|}\rangle=\langle X_{G},\frac{\mathbf{c}\wedge\mathbf{d}}{\|\mathbf{c}\wedge\mathbf{d}\|}\rangle=k\,\frac{-4\sin^{2}(2\theta)}{(1+\cos(2\theta))}=-8k\sin^{2}\theta.\]

Finally, we calculate the coordinates of \(E\):

\[\begin{array}{c}\mathbf{a}\wedge\mathbf{d}=(-2\sin\theta,0,\sin(2\theta)),\quad\mathbf{c}\wedge\mathbf{b}=(2\sin\theta,0,\sin(2\theta)),\\ E=[(0,4\sin\theta\sin(2\theta),0)].\end{array}\]

If \(X_{E}=k^{\prime}(\mathbf{a}\wedge\mathbf{d})\wedge(\mathbf{c}\wedge\mathbf{b})\) for some \(k^{\prime}\in\mathbb{R}^{*}\), then we have

\[\langle X_{E},\frac{\mathbf{a}\wedge\mathbf{b}}{\|\mathbf{a}\wedge\mathbf{b}\|}\rangle=-\langle X_{E},\frac{\mathbf{c}\wedge\mathbf{d}}{\|\mathbf{c}\wedge\mathbf{d}\|}\rangle=8k^{\prime}\sin^{2}\theta.\]

By linearity, we get that the Killing vector fields whose projective images lie on the straight lines \(\overleftrightarrow{EF}\) and \(\overleftrightarrow{GF}\) impart equal or opposite longitudinal motions to \(AB\) and \(CD\). Now, any Killing field whose projective image lies on the line \(\overleftrightarrow{CD}\) imparts zero motion to \(CD\) and non-zero motion to \(AB\). Since the longitudinal motion on a given line is a linear function of the Killing field, the bigon containing the segment \(CD\) is the desired one.

Figure 9: The shaded region is the bigon of Lemma 6.3

**Theorem 6.4**.: _Given a triangulation \(\sigma\) of a hyperbolic surface with decorated spikes \(S^{\odot}=S^{\vec{q},\vec{h}}_{g,n}\) or \(T^{\vec{q},\vec{h}}_{h,n}\), with corresponding edge set \(\mathcal{E}_{\sigma}\), the set of infinitesimal strip deformations \(B=\{f_{e}(m)\mid e\in\mathcal{E}_{\sigma}\}\) forms a basis of \(T_{m}\mathfrak{D}(S^{\odot})\)._

Proof.: We start with a neutral tile map \(\phi_{0}:\widetilde{\mathcal{T}_{\sigma}}\longrightarrow\mathfrak{g}\) for the triangulation \(\sigma\), representing the linear combination

\[\sum_{e\in\mathcal{E}_{\sigma}}c_{e}f_{e}(m)=0,\]

and we show that the maximal longitudinal motion along any arc of the triangulation is zero. Let \(e\) be a common internal edge of two tiles \(d,d^{\prime}\in\widetilde{\mathcal{T}_{\sigma}}\). From the definition of tile maps, we know that when \(e\) is spike-to-edge, the difference \(\phi_{0}(d)-\phi_{0}(d^{\prime})\) is a light-like point in the plane \(L_{e}\), and when \(e\) is an edge-to-edge arc, the difference is a space-like point in \(L_{e}\). We claim that the longitudinal motions imparted to \(e\) by \(\phi_{0}(d)\) and \(\phi_{0}(d^{\prime})\) are equal.
Indeed, we can decompose the Minkowski space as

\[\mathbb{R}^{2,1}=L_{e}\oplus L_{e}{}^{\perp},\]

where \(L_{e}\) is the plane \(\mathbb{P}^{-1}(\overleftrightarrow{e})\) and \(L_{e}^{\perp}\) is the \(\langle\cdot,\cdot\rangle\)-dual of \(L_{e}\). Then \(\phi_{0}(d)=\mathbf{v}_{t}+\mathbf{v}_{l}\) and \(\phi_{0}(d^{\prime})=\mathbf{v}_{t}^{\prime}+\mathbf{v}_{l}^{\prime}\) with \(\mathbf{v}_{t},\mathbf{v}_{t}^{\prime}\in L_{e}\) and \(\mathbf{v}_{l},\mathbf{v}_{l}^{\prime}\in L_{e}{}^{\perp}\). Now from the definition of tile maps we have that the vector \(\phi_{0}(d)-\phi_{0}(d^{\prime})\) lies in \(L_{e}\). Hence, \(\mathbf{v}_{l}=\mathbf{v}_{l}^{\prime}\), proving our claim. Moreover, when \(e\) is spike-to-edge, the Killing fields \(\phi_{0}(d),\phi_{0}(d^{\prime})\) are parabolic, preserving the spike as well as the horoball decoration. So the longitudinal motion along \(e\) is zero in this case. It remains to show that the maximal longitudinal motion along any edge-to-edge arc is zero. We use the following lemma:

**Lemma 6.5**.: _Suppose that \(e\) is an edge-to-edge arc with maximal longitudinal motion. Let \(d\in\widetilde{\mathcal{T}_{\sigma}}\) be a tile with \(e\) as an internal edge. Then, the point \([\phi_{0}(d)]\) is contained in the interior of the projective triangle based at \(e\), containing \(d\)._

Proof.: Figure 10 shows the different tiles formed after the triangulation of the surface.

* Suppose that \(d\) is of type one, i.e., it is a triangle with one decorated ideal vertex and one internal edge, which is edge-to-edge (topmost tile in Fig. 10). The point \([\phi_{0}(d)]\) is given by the ideal vertex, which lies inside the desired triangle.
* Suppose that \(d\) is of type two, i.e., it is a quadrilateral with one decorated ideal vertex and two internal edges, one of which is edge-to-edge (third tile from the top in Fig. 10). Once again, the point \([\phi_{0}(d)]\) is given by the ideal vertex, which lies inside the desired triangle.
* Suppose that \(d\) is of type three. Firstly, we suppose that it is a pentagon with one decorated ideal vertex and three internal edges, one of which is edge-to-edge (fourth tile in Fig. 10). Once again, the point \([\phi_{0}(d)]\) is given by the ideal vertex, which lies inside the desired triangle. Finally, if \(d\) is a hexagon with three edge-to-edge arcs (second tile from the top in Fig. 10), the proof is identical to the proof of Claim 3.2(0) in [5].

This finishes the proof of Lemma 6.5.

Now let \(e\) be an internal edge of two neighbouring tiles \(d,d^{\prime}\in\widetilde{\mathcal{T}_{\sigma}}\) such that \(e\) has maximal (non-zero) longitudinal motion. So \(e\) is an edge-to-edge arc. By Lemma 6.5, the points \([\phi_{0}(d)]\) and \([\phi_{0}(d^{\prime})]\) belong to two projective triangles whose interiors are disjoint. If \(c_{e}\neq 0\), then \([\phi_{0}(d)-\phi_{0}(d^{\prime})]\) must be a point in \(\overleftrightarrow{e}\smallsetminus\overline{\mathbb{H}^{2}}\). But any line joining \([\phi_{0}(d)]\) and \([\phi_{0}(d^{\prime})]\) intersects \(\overleftrightarrow{e}\) inside \(\mathbb{H}^{2}\). So we arrive at a contradiction. Hence, the longitudinal motion along every edge-to-edge arc is zero.

Now we prove that \(\phi_{0}(d)=0\) for every \(d\in\widetilde{\mathcal{T}_{\sigma}}\). Every tile \(d\) of the triangulation has an internal edge-to-edge arc. Suppose that \(d\) has a decorated ideal vertex \(p\). Then, either \(\phi_{0}(d)=0\) or \(\phi_{0}(d)\in\mathbb{P}^{-1}\left\{p\right\}\).
Let \(e\) be an internal edge-to-edge arc of \(d\), with endpoints \(A,B\in\partial_{\infty}\mathbb{H}^{2}\). Let \(\mathbf{a}\in\mathbb{P}^{-1}\left\{A\right\}\) and \(\mathbf{b}\in\mathbb{P}^{-1}\left\{B\right\}\) be future-pointing light-like vectors. Since the longitudinal motion along \(e\) is zero, we have that

\[\langle\phi_{0}(d),\frac{\mathbf{a}\wedge\mathbf{b}}{\left\|\mathbf{a}\wedge\mathbf{b}\right\|}\rangle=0,\]

which is possible only if \(\phi_{0}(d)=0\). Finally, suppose that \(d\) is a hexagon. Then it has three internal edges, denoted by \(e_{1},e_{2},e_{3}\). Choose space-like vectors \(\mathbf{v}_{i}\in\mathbb{P}^{-1}\left[e_{i}{}^{\perp}\right]\) for \(i=1,2,3\). From the above, the longitudinal motions along its three internal (pairwise disjoint in \(\mathbb{H}^{2}\)) edges are zero; since the vectors \(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3}\) are linearly independent, \(\phi_{0}(d)=0\).

Figure 10: Tiles of a triangulation of \(S^{(1,1,0)}_{0,3}\)

### Codimension 1, 2

The proofs of the local homeomorphism property of \(\mathbb{P}f\) around points belonging to the interiors of simplices of codimension greater than \(0\) are identical to those in the cases of decorated (once-punctured) polygons, Sections 5.2-5.3 in [15].

### Properness

In this section we prove that the projectivised strip map \(\mathbb{P}f\) is proper.

**Theorem 6.6**.: _Let \(S^{\odot}\) be a hyperbolic surface with decorated spikes. Let \(m\in\mathfrak{D}(S^{\odot})\). Then the projectivised strip map \(\mathbb{P}f:\widehat{\mathcal{A}}(S^{\odot})\longrightarrow\mathbb{P}^{+}(\Lambda(m))\) is proper._

Proof.: Let \((x_{n})_{n}\) be a sequence in the pruned arc complex \(\widehat{\mathcal{A}}(S^{\odot})\) such that \(x_{n}\rightarrow\infty\): for every compact \(K\) in \(\widehat{\mathcal{A}}(S^{\odot})\), there exists an integer \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\), \(x_{n}\notin K\). We want to show that \(\mathbb{P}f(x_{n})\rightarrow\infty\) in the projectivised admissible cone \(\mathbb{P}^{+}\Lambda(m)\). Recall that the admissible cone \(\Lambda(m)\) is an open convex subset of \(T_{m}\mathfrak{D}(S)\). Its boundary \(\partial\Lambda(m)\) contains \(\vec{0}\in T_{m}\mathfrak{D}(S)\) and is supported by hyperplanes (and their limits) given by the kernels of the linear functionals \(\mathrm{d}l_{\beta}:T_{m}\mathfrak{D}(S)\longrightarrow\mathbb{R}\), where \(\beta\) is a horoball connection or a non-trivial closed geodesic of the surface. It suffices to show that \(f(x_{n})\) tends to infinity (in the sense of leaving every compact subset) inside \(\Lambda(m)\) but stays bounded away from \(\vec{0}\), so that \(\mathbb{P}f(x_{n})\) tends to infinity in \(\mathbb{P}^{+}\Lambda(m)\).

From Lemma 5.7, we get that there exists a constant \(M^{\prime}>0\), depending on the normalisation, such that for every closed geodesic \(\gamma\) and every point \(x\in\mathcal{A}\big{(}S^{\odot}\big{)}\) the following inequality holds:

\[\sum_{p\in\gamma\cap\mathrm{supp}(x)}w_{x}(p)\leq M^{\prime}l_{\gamma}(m), \tag{18}\]

where \(w_{x}:\mathrm{supp}\left(x\right)\rightarrow\mathbb{R}_{>0}\) is the strip width function. Let \(K(S_{c})\) be a compact neighbourhood of the convex core \(S_{c}\) of the surface. Then every arc has bounded length outside \(K(S_{c})\): there exists \(C>0\) such that for every geodesic arc \(\alpha\), \(l_{\alpha\smallsetminus K(S_{c})}(m)<C\). Given \(\epsilon>0\), we get a constant \(M>0\) from Lemma 5.8 applied to \(\frac{\epsilon}{M^{\prime}}\).
Define \[\mathcal{K}_{M}:=\{\alpha\in\mathcal{K}\mid l_{\alpha}(m)\leq M+C\}.\] Since there exist only finitely many geodesic arcs in \(S^{\,\heartsuit}\) (and hence finitely many permitted arcs) up to any given length, we have that \(\mathcal{K}_{M}\) is finite. Consequently, the set \(\Sigma_{M}\) of simplices in \(\mathcal{A}\big{(}S^{\,\heartsuit}\big{)}\) spanned by the arcs in \(\mathcal{K}_{M}\) is also finite. We will show that there exists \(n_{1}\in\mathbb{N}\) such that for every \(n\geq n_{1}\), there exists a closed geodesic \(\gamma(n)\) that satisfies: \[\frac{\mathrm{d}l_{\gamma(n)}(f(x_{n}))}{l_{\gamma(n)}(m)}<\epsilon. \tag{19}\] It is enough to prove the above inequality for two types of subsequences of \((x_{n})_{n}\): a subsequence whose terms live in one of the finitely many simplices in \(\Sigma_{M}\), and a subsequence whose every term lies in simplices outside \(\Sigma_{M}\). Finally, in both cases we show that \(f(x_{n})\) does not converge to \(\vec{0}\).

* Consider a subsequence \((y_{n})_{n}\) of \((x_{n})_{n}\) such that every \(y_{n}\) lies in a simplex \(\sigma\in\Sigma_{M}\) spanned by the arcs \(\{\alpha_{1},\dots,\alpha_{N}\}\subset\mathcal{K}_{M}\), where \(N\leq\dim\mathfrak{D}(S^{\,\heartsuit})\). Since \(y_{n}\to\infty\), it has a subsequence that converges to a point \(y\in\mathcal{A}\big{(}S^{\,\heartsuit}\big{)}\smallsetminus\widehat{\mathcal{A}}(S^{\,\heartsuit})\). So \(y_{n}\) is of the form: \[y_{n}=\sum_{i=1}^{N}t_{i}(n)[\alpha_{i}],\text{ with }t_{i}(n)\in(0,1]\text{ and }\sum_{i=1}^{N}t_{i}(n)=1,\] and the limit point \(y\) is then given by: \[y=\sum_{i=1}^{N}t_{i}^{\infty}[\alpha_{i}],\] where there exists \(\mathcal{I}\subsetneq\{1,\dots,N\}\) such that \[\text{for }i\in\mathcal{I},\;t_{i}(n)\to t_{i}^{\infty}\in(0,1],\text{ and }\sum_{i\in\mathcal{I}}t_{i}^{\infty}=1,\] \[\text{for }i\in\{1,\dots,N\}\smallsetminus\mathcal{I},\;t_{i}(n)\to t_{i}^{\infty}=0.\] Since \(y\in\mathcal{A}\big{(}S^{\,\heartsuit}\big{)}\smallsetminus\widehat{\mathcal{A}}(S^{\,\heartsuit})\), in the complement of \(\text{supp}\,(y)=\bigcup_{i\in\mathcal{I}}\alpha_{i}\) there is either a loop or a horoball connection, denoted by \(\beta\). By construction, \(\beta\) intersects only the arcs \(\{\alpha_{i}\}_{i\notin\mathcal{I}}\). By continuity of the infinitesimal strip map \(f\) on \(\sigma\), the sequence \((f(y_{n}))_{n}\) converges to \(f(y)\in\partial\Lambda(m)\) and \[\text{d}l_{\beta}(f(y))=\sum_{i\notin\mathcal{I}}t_{i}^{\infty}\text{d}l_{\beta}(f_{\alpha_{i}}(m))=0.\] Hence \(f(y)\) fails to lengthen \(\beta\). Next we show that \(f(y)\neq 0\). Let \(\gamma\) be the boundary component containing one endpoint of an arc \(\alpha_{i}\) for \(i\in\mathcal{I}\). Then we have \[\text{d}l_{\gamma}(f(y)) =\sum_{p\in\gamma\cap\text{supp}(y)}w_{y}(p)\sin\measuredangle_{p}(\gamma,\text{supp}\,(y))\] \[\geq t_{i}^{\infty}w_{\alpha_{i}}(p)\sin\measuredangle_{p}(\gamma,\text{supp}\,(y))\] \[>0.\]
* Consider a subsequence \((z_{n})_{n}\) such that for every \(n\in\mathbb{N}\) there exists an arc \(\alpha_{n}\subset\text{supp}\,(z_{n})\) with \(l_{\alpha_{n}}(m)>M+C\). So \(l_{\alpha_{n}\cap K(S_{c})}>M\).
From Lemma 5.8, there exists a geodesic, denoted by \(\gamma(n)\), which satisfies \[\theta_{0}:=\max_{p\in\gamma(n)\cap\text{supp}(z_{n})}\measuredangle_{p}(\text{supp}\,(z_{n}),\gamma(n))<\frac{\epsilon}{M^{\prime}}. \tag{20}\] Thus we have \[\text{d}l_{\gamma(n)}(f(z_{n})) =\sum_{p\in\gamma(n)\cap\text{supp}(z_{n})}w_{z_{n}}(p)\sin\measuredangle_{p}(\gamma(n),\text{supp}\,(z_{n}))\] \[\leq\theta_{0}\sum_{p\in\gamma(n)\cap\text{supp}(z_{n})}w_{z_{n}}(p)\] \[<\epsilon\,l_{\gamma(n)}(m).\] Hence the closed geodesics \(\{\gamma(n)\}_{n}\) do not get uniformly lengthened by the strip map, and \(f(z_{n})\) converges to a point in \(\partial\Lambda(m)\). Now we show that \(f(z_{n})\not\to\vec{0}\). Let \(\lambda:=\lim\limits_{n\to\infty}\operatorname{supp}\left(z_{n}\right)\) be the limit in the Hausdorff topology. The normalisation condition states that for every \(n\in\mathbb{N}\), we have \[\sum_{p\in\partial S^{\,\heartsuit}\cap\operatorname{supp}\left(z_{n}\right)}w_{z_{n}}(p)=1.\] So for every \(p\in\partial S^{\,\heartsuit}\cap\operatorname{supp}\left(z_{n}\right)\), we have \(w_{z_{n}}(p)\geq\frac{1}{2N}\). Let \(b\) be a boundary component of the surface such that for every \(n\in\mathbb{N}\) it contains an endpoint \(p(n)\) of an arc \(\alpha_{n}\) in \(z_{n}\). Then \[\operatorname{d}\!l_{b}(f(z_{n}))\geq\frac{\sin\angle_{p(n)}(b,\operatorname{supp}\left(z_{n}\right))}{N}\geq\frac{\sin\theta_{0}}{N}>0.\] Thus \(f(z_{n})\) is bounded away from \(\vec{0}\).

## 7 Parametrisation of Margulis spacetimes

In this section we first recall the parametrisation of Margulis spacetimes using the pruned arc complex, and the construction of the fundamental domain of a Margulis spacetime from an admissible deformation of a compact hyperbolic surface with boundary, as done in [5].

### Undecorated Margulis spacetimes

Drumm's construction of proper cocycles. Let \(\rho:\Gamma\longrightarrow G\) be a convex cocompact representation. A fundamental domain for the action of \(\rho(\Gamma)\) on the hyperbolic plane \(\mathbb{H}^{2}\) is bounded by finitely many pairwise disjoint geodesics. These geodesics are used to construct the stems of crooked planes in \(\mathbb{R}^{2,1}\), which are then made pairwise disjoint by adding points from their respective stem quadrants. The polyhedron bounded by these new crooked planes is a fundamental domain for the action of the group \(\Gamma\), and the resulting manifold \(\mathbb{R}^{2,1}/\Gamma\) is complete. Finally, Drumm determined the cocycle \(u\) from this fundamental domain.

From proper cocycles to Margulis spacetimes [5]. Let \(S_{c}\) be a compact hyperbolic surface with totally geodesic boundary. Let \(\rho:\pi_{1}(S_{c})\longrightarrow\operatorname{PGL}(2,\mathbb{R})\) be a holonomy representation and \(u:\pi_{1}(S_{c})\longrightarrow\mathfrak{g}\) be a \(\rho\)-cocycle such that \([u]\in\Lambda([\rho])\). From Theorem 1.7 in [5], we know that the projectivised strip map, when restricted to the pruned arc complex of the surface \(S_{c}\), is a homeomorphism onto its image \(\mathbb{P}^{+}\Lambda([\rho])\). So there exists a point \(x\in\widehat{\mathcal{A}}(S_{c})\) and a unique simplex \(\sigma\) such that \(\mathbb{P}f(x)=[u]\in\mathbb{P}^{+}\Lambda([\rho])\) and \(x\in\operatorname{int}\left(\sigma\right)\). So \(x=\sum_{i}t_{i}[\alpha_{i}]\) with \(\sum_{i}t_{i}=1\) and \(f(x)=\sum_{i}t_{i}f_{\alpha_{i}}(m)\).
Corresponding to this linear combination of strip maps, we get a class of tile maps \(\phi:\widetilde{\mathcal{T}_{\sigma}}\longrightarrow\mathfrak{g}\) that are \((\rho(\pi_{1}(S_{c})),u)\)-equivariant. Let \(\alpha\in\mathcal{E}_{\sigma}\) be any arc of \(\sigma\) and \(\widetilde{\alpha}\) be any lift. There exist tiles \(d_{1},d_{2}\in\widetilde{\mathcal{T}_{\sigma}}\) that have \(\widetilde{\alpha}\) as their common internal edge. Suppose that the geodesic arc \(\widetilde{\alpha}\) is positively transversely oriented from \(d_{1}\) to \(d_{2}\). Then the Killing field \(\phi(d_{2})-\phi(d_{1})\) is hyperbolic and represents the term \(t_{\alpha}f_{\alpha}(m)\) in \(f(x)\). Let \(\mathbf{v}_{\widetilde{\alpha}}\in\widetilde{\alpha}^{\perp}\) be a hyperbolic Killing field with attracting and repelling fixed points given by \([\mathbf{v}_{\widetilde{\alpha}}^{+}],[\mathbf{v}_{\widetilde{\alpha}}^{-}]\) such that the triplet \((\mathbf{v}_{\widetilde{\alpha}}^{+},\mathbf{v}_{\widetilde{\alpha}},\mathbf{v}_{\widetilde{\alpha}}^{-})\) is positively oriented and the tile \(d_{2}\) lies to the left of the axis when viewed from \([\mathbf{v}_{\widetilde{\alpha}}^{-}]\). Then the crooked plane associated to \(\widetilde{\alpha}\) is given by \(\mathcal{P}_{\widetilde{\alpha}}:=\mathcal{P}(\mathbf{w}_{\widetilde{\alpha}},\mathbf{v}_{\widetilde{\alpha}})\), where \(\mathbf{w}_{\widetilde{\alpha}}:=\frac{\phi(d_{1})+\phi(d_{2})}{2}\). For other arcs in the orbit of \(\widetilde{\alpha}\), the crooked plane is defined as: for every \(\gamma\in\pi_{1}(S_{c})\), \(\mathcal{P}_{\rho(\gamma)\cdot\widetilde{\alpha}}=\rho(\gamma)\cdot\mathcal{P}_{\widetilde{\alpha}}+u(\gamma)\). Firstly, it is shown that for every two disjoint arcs \(\widetilde{\alpha_{1}},\widetilde{\alpha_{2}}\in\widetilde{\mathcal{E}_{\sigma}}\), their associated crooked planes \(\mathcal{P}_{\widetilde{\alpha_{1}}},\mathcal{P}_{\widetilde{\alpha_{2}}}\) are disjoint, by using Drumm's sufficient condition. Then they consider a fundamental domain of the surface bounded by finitely many arcs in \(\widetilde{\mathcal{E}_{\sigma}}\) and show that the associated crooked planes form a fundamental domain for the Margulis spacetime. We shall adapt this method to our surfaces with decorated spikes.

### Decorating a Margulis spacetime

#### 7.2.1 Photons and Killing fields

Consider the projective disk model of \(\mathbb{H}^{2}\) and a point \(p\in\partial_{\infty}\mathbb{H}^{2}\). Recall that an open horoball \(h\) based at \(p\) is the projective image of the subset \(H(\mathbf{v})=\{\mathbf{w}\in\mathbb{H}^{2}\mid\langle\mathbf{w},\mathbf{v}\rangle>-1\}\) of the hyperboloid \(\mathbb{H}^{2}\), where \(\mathbf{v}\) is a future-pointing light-like vector in \(\mathbb{P}^{-1}\left[p\right]\). If \(k>k^{\prime}>0\), then the horoball \(h:=\mathbb{P}H(k\mathbf{v}_{0})\) is smaller than the horoball \(h^{\prime}:=\mathbb{P}H(k^{\prime}\mathbf{v}_{0})\).

**Definition 7.1**.: Let \(\mathbf{v_{0}}\in\mathbb{R}^{2,1}\) be a future-pointing light-like vector and let \(\mathbf{v}\in\mathbb{R}^{2,1}\) be any point. Then the affine line \(\mathcal{L}(\mathbf{v},\mathbf{v_{0}}):=\mathbf{v}+\mathbb{R}\mathbf{v_{0}}\) is called a _photon_.

A vector \(\mathbf{u}\in\mathcal{L}(\mathbf{v},\mathbf{v_{0}})\) corresponds to a Killing field that moves the vector \(\mathbf{v_{0}}\) in the direction \(\mathbf{u}\wedge\mathbf{v_{0}}\). A vector \(\mathbf{w}\in\mathbb{H}^{2}\) is moved in the direction \(\mathbf{u}\wedge\mathbf{w}\).
* When \(\mathbf{v}\in\mathbb{R}\mathbf{v_{0}}\), the photon \(\mathcal{L}(\mathbf{v},\mathbf{v_{0}})\) is the vectorial line \(\mathbb{R}\mathbf{v_{0}}\), coloured green in Fig. 11. Its non-zero points correspond to parabolic Killing fields that fix the ideal point \([\mathbf{v_{0}}]\) in the hyperbolic plane and preserve the horoballs based at this ideal point as sets: for \(k\in\mathbb{R}\smallsetminus\{0\}\), \(k\mathbf{v_{0}}\wedge\mathbf{v_{0}}=0\), so the vector \(\mathbf{v_{0}}\), and hence the set \(H(\mathbf{v_{0}})\), is preserved by the flow of the Killing field associated to \(k\mathbf{v_{0}}\).
* When \(\mathbf{v}\) is contained in the light-like plane \(\mathbf{v_{0}}^{\perp}\), the photon also lies inside \(\mathbf{v_{0}}^{\perp}\). Such a photon is coloured blue in Fig. 11. Any vector \(\mathbf{u}\) on such a photon that is not contained in \(\mathbb{R}\mathbf{v_{0}}\) is a hyperbolic Killing field with one of its fixed points at \([\mathbf{v_{0}}]\). We have that \(\mathbf{u}\wedge\mathbf{v_{0}}\in\mathbf{u}^{\perp}\cap\mathbf{v_{0}}^{\perp}=\mathbb{R}\mathbf{v_{0}}\). So the vector \(\mathbf{v_{0}}\) and the set \(H(\mathbf{v_{0}})\) get scaled by the flow of the Killing vector field \(\mathbf{u}\). The connected component of the set \(\mathbf{v_{0}}^{\perp}\smallsetminus\mathbb{R}\mathbf{v_{0}}\) that contains the hyperbolic Killing fields whose attracting (resp. repelling) fixed point is given by \([\mathbf{v_{0}}]\) shrinks (resp. enlarges) the horoballs centered at this point.
* When \(\mathbf{v}\in\mathbb{R}^{2,1}\smallsetminus\mathbf{v_{0}}^{\perp}\), any vector \(\mathbf{u}=\mathbf{v}+k\mathbf{v_{0}}\in\mathcal{L}(\mathbf{v},\mathbf{v_{0}})\) moves the light-like vector \(\mathbf{v_{0}}\) away from \(\mathbb{R}\mathbf{v_{0}}\), in the direction given by \(\mathbf{u}\wedge\mathbf{v_{0}}\). Such a photon is coloured pink in Fig. 11. When \(\mathbf{v}\) lies above (resp. below) the plane \(\mathbf{v_{0}}^{\perp}\), the point \([\mathbf{v_{0}}]\) is moved in the clockwise (resp. anticlockwise) direction on \(\partial_{\infty}\mathbb{H}^{2}\).

Figure 11: The different types of photons.

The space of photons can be identified with the tangent bundle over the space of horoballs, modulo simultaneous scaling of all horoballs. This identification is equivariant for the actions of \(G\ltimes\mathfrak{g}=T(G)\).

#### 7.2.2 Handedness

Let \(\mathbf{v}_{1},\mathbf{v}_{2}\in\mathbb{R}^{2,1}\) be two future-pointing light-like vectors. For \(i=1,2\), let \(\mathbf{w}_{i}\in\mathcal{W}^{+}(\mathbf{v}_{i})\), where \(\mathcal{W}^{+}(\mathbf{v}_{i})\) is the positive wing of \(\mathbf{v}_{i}\). Then the photon \(\mathcal{L}(\mathbf{w}_{i},\mathbf{v}_{i})\) consists of hyperbolic Killing fields that have \([\mathbf{v}_{i}]\) as attracting fixed point. So for every \(i=1,2\), the vector \(\mathbf{v}_{i}\) gets infinitesimally deformed towards \(k\mathbf{v}_{i}\) for \(k>1\), and the horoball \(h_{i}:=\mathbb{P}(H(\mathbf{v}_{i}))\) gets shrunk. Finally, consider the pair of photons \(\{\mathcal{L}(\mathbf{w}_{1},\mathbf{v}_{1}),\mathcal{L}(\mathbf{w}_{2},\mathbf{v}_{2})\}\). Any horoball connection joining the decorated spikes \(([\mathbf{v}_{1}],h_{1})\) and \(([\mathbf{v}_{2}],h_{2})\) gets lengthened. Now let \(\{\mathcal{L}(\mathbf{w}_{1},\mathbf{v}_{1}),\mathcal{L}(\mathbf{w}_{2},\mathbf{v}_{2})\}\) be any pair of disjoint photons. They are contained in the two affine light-like planes \(A_{1}:=\mathbf{w}_{1}+\mathbf{v}_{1}^{\perp},A_{2}:=\mathbf{w}_{2}+\mathbf{v}_{2}^{\perp}\), respectively.
Let \(\mathbf{v}\in\mathcal{L}(\mathbf{w}_{1},\mathbf{v}_{1})\cap A_{2}\) and \(\mathbf{v}^{\prime}\in\mathcal{L}(\mathbf{w}_{2},\mathbf{v}_{2})\cap A_{1}\). Then the relative motion of \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\) is given by \[\mathbf{v}-\mathbf{v}^{\prime} =\mathbf{w}_{1}-\mathbf{w}_{2}-\frac{\langle\mathbf{w}_{1},\mathbf{v}_{2}\rangle}{\langle\mathbf{v}_{1},\mathbf{v}_{2}\rangle}\mathbf{v}_{1}+\frac{\langle\mathbf{w}_{2},\mathbf{v}_{1}\rangle}{\langle\mathbf{v}_{1},\mathbf{v}_{2}\rangle}\mathbf{v}_{2}\] \[=\frac{\langle\mathbf{w}_{1}-\mathbf{w}_{2},\mathbf{v}_{1}\wedge\mathbf{v}_{2}\rangle}{\|\mathbf{v}_{1}\wedge\mathbf{v}_{2}\|^{2}}(\mathbf{v}_{1}\wedge\mathbf{v}_{2}).\] The sign of the real number \(\langle\mathbf{w}_{1}-\mathbf{w}_{2},\mathbf{v}_{1}\wedge\mathbf{v}_{2}\rangle\) gives the _handedness_ of the pair \(\{\mathcal{L}(\mathbf{w}_{1},\mathbf{v}_{1}),\mathcal{L}(\mathbf{w}_{2},\mathbf{v}_{2})\}\).

Figure 12: A pair of photons.

### From decorated surfaces to decorated Margulis spacetimes

In this section, we will adapt the parametrisation of Margulis spacetimes to our case of hyperbolic surfaces with decorated spikes. We start by defining decorated Margulis spacetimes. Let \(S^{\,\heartsuit}\) be a hyperbolic surface with \(Q\) decorated spikes, endowed with a decorated metric \(m=[\rho,\mathbf{x},\mathbf{h}]\in\mathfrak{D}(S^{\,\heartsuit})\). Then the metric on the convex core \(S_{c}\) is given by \([\rho]\). The admissible cone \(\Lambda(m)\) is an affine bundle over the admissible cone \(\Lambda([\rho])\) of the convex core; denote by \(\pi\) the bundle projection \(\pi:\Lambda(m)\longrightarrow\Lambda([\rho])\). The fibres are open subsets of \(\mathbb{R}^{2Q}\) that are stable under the scaling of horoballs. Let \([u]\in\Lambda(m)\) be an admissible deformation of the surface \(S^{\,\heartsuit}\). Let \([u_{0}]:=\pi([u])\). Then \(u_{0}\) is a proper \(\rho\)-cocycle and the group of isometries \(\Gamma^{(\rho,u_{0})}\) acts properly discontinuously on \(\mathbb{R}^{2,1}\). The quotient \(M:=\mathbb{R}^{2,1}/\Gamma^{(\rho,u_{0})}\) is a Margulis spacetime, which we decorate with photons in the following way: the infinitesimal deformation \([u]\) imparts motion to every lift of each decorated spike of the surface. From the previous section, we know that the set of Killing fields realising this particular variation of an ideal point decorated with a horoball is a photon. This collection of photons, denoted by \(\mathcal{L}\), is \(\Gamma^{(\rho,u_{0})}\)-equivariant and is the decoration of the underlying Margulis spacetime. The pair \((M,\mathcal{L})\) is called a _decorated_ Margulis spacetime. Next we will give another way of looking at this decoration using tile maps. We know that the projectivised strip map, when restricted to the pruned arc complex, is a homeomorphism onto its image \(\mathbb{P}^{+}\Lambda(m)\). So there exists a point \(x\in\widehat{\mathcal{A}}(S^{\,\heartsuit})\) and a unique big simplex \(\sigma\) such that \(\mathbb{P}f(x)=[u]\in\mathbb{P}^{+}\Lambda(m)\) and \(x\in\operatorname{int}\left(\sigma\right)\). So \(x=\sum_{i}t_{i}[\alpha_{i}]\) with \(\sum_{i}t_{i}=1\), \(t_{i}>0\) for every \(i\), and \(f(x)=\sum_{i}t_{i}f_{\alpha_{i}}(m)\). Corresponding to this linear combination of strip maps we get a class of tile maps \(\phi:\widetilde{\mathcal{T}_{\sigma}}\longrightarrow\mathfrak{g}\). Now suppose that the surface has \(Q\) spikes and write the spike vector \(\mathbf{x}\) as \((x_{1},\ldots,x_{Q})\).
Since the arcs of \(\sigma\) decompose the surface into tiles with at most one spike, there exist exactly \(Q\) tiles \(d_{1},\ldots,d_{Q}\) such that \(x_{i}\in d_{i}\) for every \(i=1,\ldots,Q\). Using the tile map, we get a collection of \(Q\) Killing fields \(\phi(d_{1}),\ldots,\phi(d_{Q})\), where \(\phi(d_{i})\) acts on the ideal point \(x_{i}\). Now suppose that \(\mathbf{h}=(h_{1},\ldots,h_{Q})\) is the horoball decoration given by the metric \(m\). Then for each \(i=1,\ldots,Q\), there exists a future pointing light-like vector \(\mathbf{v}_{i}\) and the set \(H(\mathbf{v}_{i})\) such that \(x_{i}=[\mathbf{v}_{i}]\) and \(h_{i}=[H(\mathbf{v}_{i})]\). Then consider the collection of photons of the form \(\phi(d_{i})+\mathbb{R}\mathbf{v}_{i}\) for \(i=1,\ldots,Q\) and take their \(\Gamma^{(\rho,u_{0})}\)-orbit.

_Remark 7.1_.: Note that these photons are pairwise disjoint: if two of them intersected, the intersection point would be a Killing field realising the motions of both corresponding horoballs, so the horoball connection joining them would have zero infinitesimal length variation, and the infinitesimal deformation \([u]\) would fail to be admissible.

_Remark 7.2_.: Every pair of photons has the same handedness, because every horoball connection is lengthened.

#### 7.3.1 From decorated Margulis space-times to admissible deformations.

Let \(\Gamma\) be a finitely generated free discrete group acting properly discontinuously on \(\mathbb{R}^{2,1}\) via a representation \(\rho:\Gamma\longrightarrow G\ltimes\mathfrak{g}\). Let \((M:=\mathbb{R}^{2,1}/\rho(\Gamma),\mathcal{L})\) be a decorated Margulis spacetime with convex cocompact linear part \(\rho_{0}:\Gamma\longrightarrow G\). Using Drumm's construction of proper cocycles, we have that \(\rho(\Gamma)=\Gamma^{(\rho_{0},u_{0})}\), where \(u_{0}\) is a proper \(\rho_{0}\)-cocycle. The convex core \(S_{c}\) of \(\mathbb{H}^{2}/\rho_{0}(\Gamma)\) is compact with totally geodesic boundary. Denote its boundary components by \(b_{1},\ldots,b_{n}\). The set \(\mathcal{L}\) is \(\Gamma^{(\rho_{0},u_{0})}\)-equivariant; there exist finitely many pairs \((\mathbf{w}_{1},\mathbf{v}_{1}),\ldots,(\mathbf{w}_{Q},\mathbf{v}_{Q})\) of points in \(\mathbb{R}^{2,1}\) such that for every \(i\), the vector \(\mathbf{v}_{i}\) is future-pointing and light-like, and \(\mathcal{L}\) is generated by the photons \(\mathcal{L}_{i}:=\mathcal{L}(\mathbf{w}_{i},\mathbf{v}_{i})\), \(i=1,\ldots,Q\). This gives us \(Q\) ideal points \(x_{i}=[\mathbf{v}_{i}]\in\partial_{\infty}\mathbb{H}^{2}\). Take the \(\rho_{0}(\Gamma)\)-orbit of this collection and join every consecutive pair that lies on the same side of a lift of a boundary loop \(b_{i}\) by a geodesic. Let \(R\) be the simply-connected region in \(\mathbb{H}^{2}\) bounded by these geodesics. Then we get a hyperbolic surface with decorated spikes \(S^{\,\heartsuit}:=R/\rho_{0}(\Gamma)\) with the metric \(m=[\rho_{0},\mathbf{x},\mathbf{h}]\), where \(\mathbf{x}=(x_{1},\ldots,x_{Q})\) and \(\mathbf{h}=(h_{1},\ldots,h_{Q})\), \(h_{i}=\mathbb{P}(H(\mathbf{v}_{i}))\). The surface \(S_{c}\) is the convex core of \(S^{\,\heartsuit}\). The admissible deformation of \(S^{\,\heartsuit}\) is determined in the following way: for every \(i=1,\ldots,Q\), the photon \(\mathcal{L}_{i}\) imparts infinitesimal motion to the spike \(x_{i}\) as well as the horoball \(h_{i}\), in the sense that \(\mathcal{L}_{i}\) is exactly the set of Killing fields all of which cause \(h_{i}\) to vary in a certain infinitesimal way.
Since no two photons intersect, every horoball connection is deformed, and since every pair of photons has the same handedness, every horoball connection gets lengthened. Thus we get an admissible deformation \([u]\in\Lambda(m)\) with \(\pi([u])=[u_{0}]\).

#### 7.3.2 From admissible deformations to decorated Margulis space-times.

Let \(S^{\,\heartsuit}\) be a hyperbolic surface with \(Q\) decorated spikes, endowed with a decorated metric \(m\), which is of the form \(m=[\rho,\mathbf{x},\mathbf{h}]\in\mathfrak{D}(S^{\,\heartsuit})\). Let \([u]\in\Lambda(m)\) be an admissible deformation of the surface \(S^{\,\heartsuit}\). Let \([u_{0}]:=\pi([u])\). Then \(u_{0}\) is a proper \(\rho\)-cocycle and the group of isometries \(\Gamma^{(\rho,u_{0})}\) acts properly discontinuously on \(\mathbb{R}^{2,1}\). By Theorem 6.1 there exist a unique point \(x\in\widehat{\mathcal{A}}(S^{\,\heartsuit})\) and a unique big simplex \(\sigma\) such that \(\mathbb{P}f(x)=[u]\in\mathbb{P}^{+}\Lambda(m)\) and \(x\in\mathrm{int}\left(\sigma\right)\). So \(x=\sum_{i}t_{i}[\alpha_{i}]\) with \(\sum_{i}t_{i}=1\), \(t_{i}>0\) for every \(i\), and \(f(x)=\sum_{i}t_{i}f_{\alpha_{i}}(m)\). Corresponding to this linear combination of strip maps, we get a class of tile maps \(\phi:\widetilde{\mathcal{T}_{\sigma}}\longrightarrow\mathfrak{g}\) that are \((\rho(\pi_{1}(S_{c})),u)\)-equivariant. Let \(\alpha\in\mathcal{E}_{\sigma}\) be any arc of \(\sigma\) and \(\widetilde{\alpha}\) be any lift. There exist tiles \(d_{1},d_{2}\in\widetilde{\mathcal{T}_{\sigma}}\) that have \(\widetilde{\alpha}\) as their common internal edge. The arc \(\alpha\) is either finite or joins a decorated spike to a boundary component. Then \(\phi(d_{2})-\phi(d_{1})\) is the Killing field that represents the term \(t_{\alpha}f_{\alpha}(m)\) in \(f(x)\). When \(\alpha\) is finite, this difference is a hyperbolic Killing field belonging to the stem quadrant \(\mathrm{SQ}(\overset{\rightarrow}{l_{\mathbf{v}}})\). Otherwise, it is a parabolic Killing field fixing the ideal endpoint of \(\widetilde{\alpha}\). Define the associated crooked plane as before: \(\mathcal{P}_{\widetilde{\alpha}}:=\mathcal{P}(\mathbf{w}_{\widetilde{\alpha}},\mathbf{v}_{\widetilde{\alpha}})\), with \(\mathbf{w}_{\widetilde{\alpha}}:=\frac{\phi(d_{1})+\phi(d_{2})}{2}\). Let \(R\) be a fundamental domain of the surface \(S^{\,\heartsuit}\) bounded by some arcs \(e_{1},f_{1},\ldots,e_{k},f_{k}\) in \(\widetilde{\mathcal{E}_{\sigma}}\), for which there exist \(\gamma_{1},\ldots,\gamma_{k}\in\pi_{1}(S^{\,\heartsuit})\) such that \(f_{i}=\rho(\gamma_{i})\cdot e_{i}\) for \(i=1,\ldots,k\). Since there are no parabolic elements in \(\rho(\pi_{1}(S^{\,\heartsuit}))\), nor any spiralling arcs, for every pair \((e_{i},f_{i})\) of spike-to-edge arcs the spikes are distinct. So there exists an edge-to-edge arc \(\alpha\in\mathcal{E}_{\sigma}\) whose lift \(\widetilde{\alpha}\) separates \(e_{i}\) from \(f_{i}\). Since the arc \(\widetilde{\alpha}\) is disjoint from both \(e_{i}\) and \(f_{i}\) in \(\overline{\mathbb{H}^{2}}\), its associated crooked plane \(\mathcal{P}(\mathbf{w}_{\widetilde{\alpha}},\mathbf{v}_{\widetilde{\alpha}})\) separates the crooked planes \(\mathcal{P}_{e_{i}},\mathcal{P}_{f_{i}}\). Hence we have that for every \(i=1,\ldots,k\), the crooked planes \(\mathcal{P}_{e_{i}},\mathcal{P}_{f_{i}}\) are disjoint and \((\rho(\gamma_{i}),u_{0}(\gamma_{i}))\mathcal{P}_{e_{i}}=\mathcal{P}_{f_{i}}\).
The region \(\mathcal{D}\) bounded by these crooked planes is a fundamental domain for the action of \(\Gamma^{(\rho,u_{0})}\) on \(\mathbb{R}^{2,1}\). Thus we have proved the following theorem:

**Theorem 7.2**.: _Let \(S^{\,\heartsuit}\) be a hyperbolic surface with decorated spikes and let \(\rho:\pi_{1}(S^{\,\heartsuit})\rightarrow\mathrm{PGL}(2,\mathbb{R})\) be a holonomy representation. Let \(\mathcal{M}^{\,\heartsuit}\) be the space of all decorated Margulis spacetimes with convex cocompact linear part \(\rho\). Then there is a bijection \(\Psi:\widehat{\mathcal{A}}(S^{\,\heartsuit})\rightarrow\mathcal{M}^{\,\heartsuit}\)._
2310.05681
Electronic matrix elements for parity doubling in YbOH molecule
The YbOH molecule is one of the most sensitive systems for electron electric dipole moment ($e$EDM) searches. The $e$EDM-induced energy shift is proportional to the polarization ($P$) of the molecule. In Ref. [A. Petrov and A. Zakharova, Phys. Rev. A 105, L050801 (2022)] it was shown that the values of the $l$-doubling and spin-rotation splittings directly influence the maximum value of $P$. Recently, in Ref. [A. Jadbabaie, Y. Takahashi, N. H. Pilgram, C. J. Conn, Y. Zeng, C. Zhang, and N. R. Hutzler, New Journal of Physics 25, 073014 (2023)], the corresponding energy levels were determined experimentally. We introduce electronic matrix elements in the Hund's case $c$ coupling scheme to reproduce the experimental energy levels and calculate $P$ as a function of the external electric field.
Alexander Petrov
2023-10-09T12:51:46Z
http://arxiv.org/abs/2310.05681v1
# Electronic matrix elements for parity doubling in YbOH molecule

###### Abstract

The YbOH molecule is one of the most sensitive systems for electron electric dipole moment (\(e\)EDM) searches. The \(e\)EDM-induced energy shift is proportional to the polarization (\(P\)) of the molecule. In Ref. [A. Petrov and A. Zakharova, Phys. Rev. A 105, L050801 (2022)] it was shown that the values of the \(l\)-doubling and spin-rotation splittings directly influence the maximum value of \(P\). Recently, in Ref. [A. Jadbabaie, Y. Takahashi, N. H. Pilgram, C. J. Conn, Y. Zeng, C. Zhang, and N. R. Hutzler, New Journal of Physics 25, 073014 (2023)], the corresponding energy levels were determined experimentally. We introduce electronic matrix elements in the Hund's case \(c\) coupling scheme to reproduce the experimental energy levels and calculate \(P\) as a function of the external electric field.

## I Introduction

Measuring the electron electric dipole moment (\(e\)EDM) is now considered one of the most promising tests for the existence of physics beyond the Standard Model [1; 2; 3; 4]. The current constraint on the \(e\)EDM, \(|d_{\rm e}|<4.1\times 10^{-30}\) \(e\)-cm (90% confidence), was obtained using trapped \({}^{180}\)Hf\({}^{19}\)F\({}^{+}\) ions [5] with the spinless \({}^{180}\)Hf isotope. Cold polar triatomic molecules provide opportunities for further progress in the search for effects of symmetry violation [6]. In such molecules the sensitivity of the experiments can be strongly enhanced due to laser cooling [7], and the existence of \(l\)-doublets of the excited \(v=1\) bending vibrational mode helps to suppress many systematics [8; 9]. Great progress has recently been achieved in both theoretical and experimental studies of triatomics. In Ref. [10] quantum control of trapped triatomic molecules for \(e\)EDM searches was demonstrated. In Ref. [11] a detailed spectroscopy of the \(e\)EDM-sensitive \(l\)-doublets of the ground rotational \(N=1\) level of the excited \(v=1\) bending vibrational mode of \({}^{174}\)YbOH was performed. An unusually large (compared to other metal hydroxides) asymmetry of the parity doubling of the \(J=1/2\) and \(J=3/2\) manifolds was revealed. As noted in Ref. [11], further study is required to determine the nature of this asymmetry in detail. In Ref. [12] a method for the computation of energy levels and different properties of triatomic molecules was developed. The method was applied to the calculation of the sensitivity of the \({}^{174}\)YbOH molecule to the \(e\)EDM in the ground rotational \(N=1\) level of the first excited \(v=1\) bending mode in an external electric field. In those calculations (see below for details) the matrix elements for \(v=1\) were assumed to be equal to the ones for \(v=0\) and were taken from Ref. [9]. In this approximation there is no asymmetry in the parity doubling of the \(J=1/2\) and \(J=3/2\) manifolds. In Ref. [12] we have shown that the values of the \(l\)-doubling and spin-rotation splitting directly influence the maximum degree of \(T,P\)-odd polarization and thus the sensitivity of linear triatomic molecules to \(T,P\)-odd effects. Therefore, in the current work we modified and introduced new _electronic_ matrix elements in the Hund's case \(c\) coupling scheme, which allow us to reproduce the experimental energy levels and, in particular, the asymmetry of the \(J=1/2\) and \(J=3/2\) manifolds. In contrast to Hund's case \(b\), using the Hund's case \(c\) coupling scheme will help in the future to calculate this effect _ab initio_, as many modern quantum-chemistry packages allow one to include the spin-orbit interaction to all orders.
Using the newly obtained electronic matrix elements, we recalculated the sensitivity of the \({}^{174}\)YbOH molecule to the \(e\)EDM in an external electric field.

## II Method

Following Ref. [12] we present our Hamiltonian in the molecular reference frame as \[\mathbf{\hat{H}}=\mathbf{\hat{H}}_{\rm mol}+\mathbf{\hat{H}}_{\rm hfs}+\mathbf{\hat{H}}_{\rm ext}, \tag{1}\] where \[\mathbf{\hat{H}}_{\rm mol}=\frac{(\mathbf{\hat{J}}-\mathbf{\hat{J}}^{e-v})^{2}}{2\mu R^{2}}+\frac{(\mathbf{\hat{J}}^{v})^{2}}{2\mu_{\rm OH}r^{2}}+V(\theta) \tag{2}\] is the molecular Hamiltonian, \(\mu\) is the reduced mass of the Yb-OH system, \(\mu_{\rm OH}\) is the reduced mass of OH, \(\mathbf{\hat{J}}\) is the total electronic, vibrational, and rotational angular momentum, \(\mathbf{\hat{J}}^{e-v}=\mathbf{\hat{J}}^{e}+\mathbf{\hat{J}}^{v}\) is the electronic-vibrational momentum, \(\mathbf{\hat{J}}^{e}\) is the electronic momentum, \(\mathbf{\hat{J}}^{v}\) is the vibrational momentum, \(R\) is the distance between Yb and the center of mass of OH, \(r\) is the OH bond length, and \(\theta\) is the angle between OH and the axis (\(z\) axis of the molecular frame) directed from Yb to the OH center of mass. The condition \(\theta=0\) corresponds to the linear configuration where the O atom is between the Yb and H atoms. \(R\), \(r\) and \(\theta\) are the so-called Jacobi coordinates. In the current work we have fixed \(R\) and \(r\) in such a way that \(\frac{\hbar^{2}}{2\mu R^{2}}=7329\) MHz, to reproduce the experimental value of the rotational constant [11], and \(\frac{\hbar^{2}}{2\mu_{\rm OH}r^{2}}=19.6\) cm\({}^{-1}\), to fit the experimental value of 24 MHz for the \(l\)-doubling [11]. Recently the same value was also obtained in _ab initio_ calculations [13]. In this approximation we neglect the influence of the stretching (associated with \(R\)) and OH-ligand (associated with \(r\)) modes but nevertheless take into account the bending one (associated with \(\theta\)) with fixed \(R,r\). \(V(\theta)\) is the potential energy curve obtained in the electronic structure calculations [14]. \(\hat{\bf H}_{\rm hfs}\) and \(\hat{\bf H}_{\rm ext}\) are the hyperfine interaction with the H nucleus and the Stark interaction with the external electric field, respectively, as described in Ref. [12]. Wavefunctions, rovibrational energies and hyperfine structure were obtained by numerical diagonalization of the Hamiltonian (1) over the basis set of electronic-rotational-vibrational-nuclear-spin wavefunctions \[\Psi_{\Omega}P_{lm}(\theta)\Theta^{J}_{M_{J},\omega}(\alpha,\beta)U^{\rm H}_{M_{I}^{\rm H}}. \tag{3}\] Here \(\Theta^{J}_{M_{J},\omega}(\alpha,\beta)=\sqrt{(2J+1)/4\pi}D^{J}_{M_{J},\omega}(\alpha,\beta,\gamma=0)\) is the rotational wavefunction, \(\alpha,\beta\) correspond to the azimuthal and polar angles of the \(z\) axis, \(U^{\rm H}_{M_{I}^{\rm H}}\) is the hydrogen nuclear spin wavefunction, \(M_{J}\) is the projection of the molecular (electronic-rotational-vibrational) angular momentum \(\hat{\bf J}\) on the lab axis, \(\omega\) is the projection of the same momentum on the \(z\) axis of the molecular frame, \(M_{I}^{\rm H}\) is the projection of the nuclear angular momentum of hydrogen on the lab axis, \(P_{lm}(\theta)\) is the associated Legendre polynomial, \(l\) is the vibration angular momentum, \(m\) is its projection on the molecular axis, and \(\Psi_{\Omega}\) is the electronic wavefunction (see Ref. [12] for details).
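To make the diagonalization step concrete, the following is a minimal, schematic Python sketch of the workflow just described (truncated basis, Hermitian matrix, eigenvalues). It is only an illustration, not the production code: the truncation mirrors the one quoted in the next paragraph, and the matrix-element routine `h_element` is a placeholder for the angular-momentum algebra of Refs. [12; 15; 16].

```python
import numpy as np
from itertools import product

# Schematic stand-in for the basis (3): |J, omega, l, m, M_I^H> with
# Omega = omega - m restricted to +/-1/2 (half-integers stored as floats).
# The lab projection M_J is suppressed here; it matters once the field is on.
J_list = [0.5, 1.5, 2.5]          # J = 1/2, 3/2, 5/2
l_max = 30                        # l = 0..30
m_list = [-2, -1, 0, 1, 2]        # vibrational projection m
MI_list = [-0.5, 0.5]             # hydrogen nuclear spin projection

basis = []
for J, l, m, MI in product(J_list, range(l_max + 1), m_list, MI_list):
    for omega in np.arange(-J, J + 1.0):   # projection of J on molecular z
        if abs(omega - m) == 0.5:          # keep Omega = +/-1/2 only
            basis.append((J, omega, l, m, MI))
print(len(basis), "basis states")

def h_element(bra, ket):
    """Placeholder for <bra|H|ket> of H_mol + H_hfs + H_ext, which in the
    real calculation is evaluated via angular-momentum algebra."""
    raise NotImplementedError

# With h_element in place, the spectrum follows from one dense diagonalization:
# H = np.array([[h_element(b, k) for k in basis] for b in basis])
# energies, states = np.linalg.eigh(H)
```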
In this calculation, functions with \(\omega-m=\Omega=\pm 1/2\), \(l=0-30\), \(m=0,\pm 1,\pm 2\) and \(J=1/2,3/2,5/2\) were included in the basis set (3). The ground vibrational state \(v=0\) corresponds to \(m=0\), the first excited bending mode \(v=1\) (the focus of this paper) to \(m=\pm 1\), the second excited bending mode \(v=2\) has states with \(m=0,\pm 2\), etc. Provided that the _electronic-vibrational_ matrix elements are known, the matrix elements of \(\hat{\bf H}\) between states in the basis set (3) can be calculated with the help of angular momentum algebra [12; 15], largely in the same way as for diatomic molecules [16]. The required matrix elements associated with the H-nucleus magnetic hyperfine interaction were taken from Ref. [12]. The dipole moment matrix element \[\langle\Psi_{\Omega}|D_{z}|\Psi_{\Omega}\rangle=-0.850\ {\rm a.u.} \tag{4}\] determining the interaction with the external electric field was taken from Ref. [11]. Special attention is given to the matrix elements of the \(J_{+}^{e}=J_{x}^{e}+iJ_{y}^{e}\) operator which, in particular, ensure the asymmetry of the \(l\)-doubling of the \(J=1/2\) and \(J=3/2\) manifolds. We put \[\frac{1}{\mu R^{2}}\langle\Psi_{\Omega=1/2}|J_{+}^{e}|\Psi_{\Omega=-1/2}\rangle=p_{0}+p_{1}P_{l=1,m=0}(\theta), \tag{5}\] \[\frac{1}{\mu R^{2}}\langle\Psi_{\Omega=-1/2}|J_{+}^{e}|\Psi_{\Omega=+1/2}\rangle = p_{2}P_{l=2,m=2}(\theta). \tag{6}\] Here we take into account that pure electronic matrix elements, in general, depend on \(\theta\), and that the selection rules for the \(\Omega\) quantum number can be violated [12]. It is assumed that \(\Psi_{\Omega}\) are chosen in such a way that \(\langle\Psi_{\Omega=1/2}|\partial/\partial\theta|\Psi_{\Omega=-1/2}\rangle=0\). Note that here \(\Omega\) is the projection of the total electronic angular momentum on the molecular axis \(z\) for the _linear_ configuration. The equation \(J_{z}^{e}\Psi_{\Omega}(\theta)=\hbar\Omega\Psi_{\Omega}(\theta)\) is not satisfied for the bent configuration \(\theta\neq 0\). The selection rules for the \(\omega\) quantum number are rigorous and coincide with those for the \(\Omega\) quantum number in the linear configuration.

## III Results

### Energy levels for the field-free case

The rotational levels of the excited \(v=1\) bending vibrational mode of YbOH are well described by the Hund's case \(b\) coupling scheme [11]. The electron spin \(S=1/2\) is, to a good approximation, an integral of motion. Its (spin-rotation) interaction with the rovibrational momentum \({\bf N}={\bf J}-{\bf S}\) gives rise to the splitting between the energy levels with total momenta \(J=N\pm 1/2\). Each level has two parity eigenstates, the \(l\)-doublets. The \(l\)-doubling is, in general, different for the \(J=N+1/2\) and \(J=N-1/2\) levels. The experimental energy levels obtained in Ref. [11] for the ground rotational level \(N=1\) are depicted in Fig. 1.

Figure 1: Experimental energies of the ground rotational \(N=1\) level of the excited \(v=1\) bending vibrational mode of \({}^{174}\)YbOH [11]. Parity of the states is shown as a superscript. An unusually large asymmetry of the parity doubling of the \(J=1/2\) (\(\Delta E_{3}\)) and \(J=3/2\) (\(\Delta E_{1}\)) manifolds is observed. Hyperfine structure is not shown.

The \(p_{0}\) value used in the Hund's case \(c\) coupling scheme
can be obtained from the equation [17] \[p_{0}=\frac{\hbar^{2}}{\mu R^{2}}-\gamma=0.492\ \mathrm{cm}^{-1}, \tag{7}\] where \(\gamma=-88.7\) MHz determines the spin-rotation interaction \(\gamma(\mathbf{\hat{S}}\cdot\mathbf{\hat{N}}-S_{z}N_{z})\) in the Hund's case \(b\) coupling scheme [11]. Using \(p_{0}\) and \(\gamma\) correlated by Eq. (7) gives the same energy levels in both Hund's case coupling schemes. In this approximation (corresponding to the calculations in Ref. [12]) the \(l\)-doubling is the same for the \(J=N\pm 1/2\) levels (\(\Delta E_{1}\approx\Delta E_{3}\); see Table 1 and Fig. 1 for the definition of \(\Delta E_{i}\)). To reproduce the experimental energy levels we also put \[p_{1}=-37.0\ \mathrm{MHz}, \tag{8}\] \[p_{2}=125.9\ \mathrm{MHz}. \tag{9}\] In Table 1 the calculated values of \(\Delta E_{1}\), \(\Delta E_{2}\) and \(\Delta E_{3}\) are given for the cases (A) \(p_{0}=0.492\ \mathrm{cm}^{-1}\), \(p_{1}=0.0\) MHz, \(p_{2}=0.0\) MHz; (B) \(p_{0}=0.492\ \mathrm{cm}^{-1}\), \(p_{1}=0.0\) MHz, \(p_{2}=125.9\) MHz; and (C) \(p_{0}=0.492\ \mathrm{cm}^{-1}\), \(p_{1}=-37.0\) MHz, \(p_{2}=125.9\) MHz (corresponding to the optimal matrix elements (7)-(9)). Accounting for the \(p_{2}\) constant leads to the asymmetry of the \(l\)-doubling of the \(J=1/2\) (\(\Delta E_{3}\)) and \(J=3/2\) (\(\Delta E_{1}\)) manifolds. One can see that the increments (\(\delta\Delta E_{i}\)) of the \(\Delta E_{i}\) energy splittings when the \(p_{2}\) constant is taken into account have the ratios \(\delta\Delta E_{1}/\delta\Delta E_{2}=2\) and \(\delta\Delta E_{3}/\delta\Delta E_{2}=-4\). Exactly the same ratios hold for the Hamiltonian \((p_{G}/2)(N_{+}S_{+}e^{-i2\phi}+N_{-}S_{-}e^{+i2\phi})\) used in Ref. [11] in the Hund's case \(b\) coupling scheme. Therefore one should associate the \(p_{G}\) and \(p_{2}\) constants. From Eq. (6) one can see that the \(p_{2}\) constant is nonzero when the \(\Omega\) quantum number is violated for the bent configuration. Accounting for the \(p_{1}\) constant leads only to an increment of \(\Delta E_{2}\). The same effect holds for the Hamiltonian \(\gamma_{G}N_{z}S_{z}\) used in Ref. [11]. Therefore one should associate the \(\gamma_{G}\) and \(p_{1}\) constants. From Eq. (5) one can see that the \(p_{1}\) constant can be nonzero without violation of the \(\Omega\) quantum number. To the best of our knowledge, the calculation of the constants \(p_{1}\) and \(p_{2}\) is not currently available in public quantum-chemical codes and should be a goal for further development.

### Sensitivity to the \(e\)EDM

Any \(e\)EDM experiment searches for an \(e\)EDM-induced Stark shift \[\delta E=PE_{\mathrm{eff}}d_{e}, \tag{10}\] where \(d_{e}\) is the value of the electron electric dipole moment, \(E_{\mathrm{eff}}\) is the _effective electric field_ acting on the electron in the molecule, and \(P\) is the polarization of the molecule by the external electric field. (We note that \(P\) is not equal to the mean value of the projection of the unit vector \(\hat{z}\) along the molecular axis on the direction of the external electric field.) To extract \(d_{e}=\delta E/(E_{\mathrm{eff}}P)\) from the measured shift \(\delta E\), one needs to know \(PE_{\mathrm{eff}}\). \(E_{\mathrm{eff}}\) has been the subject of molecular calculations [14; 18; 19; 20]. In this work, to calculate \(P\), we also include the hyperfine interaction with the hydrogen nucleus.
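As an aside, the unit conversion in Eq. (7) can be verified directly; a minimal sketch, using only the constants quoted above:

```python
# Check of Eq. (7): p0 = hbar^2/(mu R^2) - gamma, expressed in cm^-1.
B = 7329.0                   # hbar^2/(2 mu R^2) in MHz (fitted above)
gamma = -88.7                # spin-rotation constant in MHz
MHZ_PER_CM = 29979.2458      # 1 cm^-1 in MHz

p0_mhz = 2.0 * B - gamma     # = 14746.7 MHz
print(round(p0_mhz / MHZ_PER_CM, 3))  # -> 0.492 cm^-1, as in Eq. (7)
```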
The hydrogen nucleus has a nonzero nuclear spin \(I=1/2\), which gives rise to the hyperfine energy splitting between the levels with total (electronic-vibrational-rotational-nuclear-spin) angular momentum \(F=J\pm 1/2\) (not shown in Fig. 1). In Fig. 2 the calculated polarizations \(P\) for the six \(M_{F}=M_{J}+M_{I}=1\) hyperfine sublevels of the lowest \(N=1\) rotational level of the first excited \(v=1\) bending vibrational mode of \({}^{174}\)YbOH are given as functions of the external electric field. The maximum polarization \(P=0.80\) is reached for the sixth level at an electric field of 100 V/cm, in agreement with the data of Ref. [11]. In Fig. 3 the corresponding calculated energy levels are presented.

\begin{table} \begin{tabular}{c c c c} & (A) & (B) & (C) \\ \hline \(\Delta E_{1}\) & 23.9 & 18.5 (-5.4) & 18.5 (0.0) \\ \(\Delta E_{2}\) & 42.4 & 39.7 (-2.7) & 27.8 (-11.9) \\ \(\Delta E_{3}\) & 24.1 & 35.0 (10.9) & 35.0 (0.0) \\ \end{tabular} \end{table} Table 1: The calculated \(\Delta E_{1}\), \(\Delta E_{2}\) and \(\Delta E_{3}\) energy splittings (MHz) for the \(p_{0},p_{1},p_{2}\) parameters corresponding to the (A), (B) and (C) cases. In case (A) only the \(p_{0}\) constant is taken into account. In case (B) the \(p_{2}\) constant is added. In case (C) all three constants are taken into account. See text for details. Case (C) reproduces the experimental values of \(\Delta E_{1}\), \(\Delta E_{2}\) and \(\Delta E_{3}\). In brackets, the increment from case (A) is given for case (B), and the increment from case (B) for case (C).

Figure 2: (Color online) Calculated polarization \(P\) (see Eq. (10)) for the \(M_{F}=M_{J}+M_{I}=1\) hyperfine sublevels of the lowest \(N=1\) rotational level of the first excited \(v=1\) bending vibrational mode of \({}^{174}\)YbOH as a function of the external electric field. Colors (numbering) of lines correspond to colors (numbering) of lines in Fig. 3.

## IV Conclusion

We determined electronic matrix elements in the Hund's case \(c\) coupling scheme that reproduce the experimental energy levels and, in particular, the asymmetry in the \(l\)-doubling structure of the ground rotational level of the first excited bending vibrational mode of \({}^{174}\)YbOH. The matrix elements can be associated with the parameters of the effective Hamiltonian in the Hund's case \(b\) coupling scheme. The \(T,P\)-odd polarization determining the sensitivity to the \(e\)EDM was calculated as a function of the external electric field. The maximum value \(P=0.8\) is found at an electric field \(E=100\) V/cm.
2301.02877
Deep Learning for Mean Field Games with non-separable Hamiltonians
This paper introduces a new method based on Deep Galerkin Methods (DGMs) for solving high-dimensional stochastic Mean Field Games (MFGs). We achieve this by using two neural networks to approximate the unknown solutions of the MFG system and forward-backward conditions. Our method is efficient, even with a small number of iterations, and is capable of handling up to 300 dimensions with a single layer, which makes it faster than other approaches. In contrast, methods based on Generative Adversarial Networks (GANs) cannot solve MFGs with non-separable Hamiltonians. We demonstrate the effectiveness of our approach by applying it to a traffic flow problem, which was previously solved using the Newton iteration method only in the deterministic case. We compare the results of our method to analytical solutions and previous approaches, showing its efficiency. We also prove the convergence of our neural network approximation with a single hidden layer using the universal approximation theorem.
Mouhcine Assouli, Badr Missaoui
2023-01-07T15:39:48Z
http://arxiv.org/abs/2301.02877v2
# Deep Learning for Mean Field Games with non-separable Hamiltonians

###### Abstract

This paper introduces a new method based on Deep Galerkin Methods (DGMs) for solving high-dimensional stochastic Mean Field Games (MFGs). We achieve this by using two neural networks to approximate the unknown solutions of the MFG system and the forward-backward conditions. Our method is efficient, even with a small number of iterations, and is capable of handling up to 300 dimensions with a single layer, which makes it faster than other approaches. In contrast, methods based on Generative Adversarial Networks (GANs) cannot solve MFGs with non-separable Hamiltonians. We demonstrate the effectiveness of our approach by applying it to a traffic flow problem, which was previously solved using the Newton iteration method only in the deterministic case. We compare the results of our method to analytical solutions and previous approaches, showing its efficiency. We also prove the convergence of our neural network approximation with a single hidden layer using the universal approximation theorem.

keywords: Mean Field Games, Deep Learning, Deep Galerkin Method, Traffic Flow, Non-Separable Hamiltonian

## 1 Introduction

Mean Field Games (MFGs) are a widely studied topic that can model a variety of phenomena, including autonomous vehicles [1; 2], finance [3; 4], economics [5; 6; 7], industrial engineering [8; 9; 10], and data science [11; 12]. MFGs are dynamic, symmetric games where the agents are indistinguishable but rational, meaning that their actions can affect the mean of the population. In the optimal case, the MFG system reaches a Nash equilibrium (NE), in which no agent can further improve their objective. MFGs are described by a system of coupled partial differential equations (PDEs), \[\left\{\begin{array}{rl}-\partial_{t}\phi-\nu\Delta\phi+H(x,\rho,\nabla\phi)=0,\ in&E_{1},\\ \partial_{t}\rho-\nu\Delta\rho-\mbox{div}\left(\rho\nabla_{p}H(x,\rho,\nabla\phi)\right)=0,\ in&E_{2},\\ \rho(0,x)=\rho_{0}(x),\ \ \phi(T,x)=g(x,\rho(T,x)),\ in&\Omega,\end{array}\right. \tag{1}\] consisting of a forward-time Fokker-Planck equation (FP) and a backward-time Hamilton-Jacobi-Bellman equation (HJB), which describe the evolution of the population density (\(\rho\)) and the cost value (\(\phi\)), respectively. Here \(E_{1}=(0,T]\times\Omega\), \(E_{2}=[0,T)\times\Omega\), \(\Omega\subset\mathbb{R}^{d}\), and \(g\) denotes the terminal cost. The boundary conditions in time prescribe the initial density \(\rho(0,x)=\rho_{0}(x)\) and the terminal cost \(\phi(T,x)=g(x,\rho(T,x))\). A Hamiltonian \(H\) with separable structure is defined as \[H(x,\rho,p)=\inf_{v}\{-p\cdot v+L_{0}(x,v)\}-f_{0}(x,\rho)=H_{0}(x,p)-f_{0}(x,\rho), \tag{2}\] i.e., the infimum of the Lagrangian function \(L_{0}\) (the Legendre transform of \(H_{0}\)) minus the interaction function \(f_{0}\) between the population of agents. One of the main challenges of MFGs, in addition to the complexity of the PDEs and the forward-backward conditions, is the viscosity problem. Many methods for solving MFGs are limited to the deterministic setting (\(\nu=0\)).
For example, the Newton iteration method has been applied to the problem of traffic flow in [1], where a flexible machine learning framework was provided for the numerical solution of potential MFGs. While numerical methods do exist for solving the system of PDEs (1) [13; 14; 15; 16], they are not always effective due to computational complexity, especially in high-dimensional problems. Deep learning methods, such as Generative Adversarial Networks (GANs) [17; 18], have been used to address this issue by reformulating MFGs as a primal-dual problem [19; 20; 14]. This approach uses the Hopf formula in density space [21] to establish a connection between MFGs and GANs. However, this method requires the Hamiltonian \(H\) to be separable in \(\rho\) and \(p\). In cases where the Hamiltonian is non-separable, such as in traffic flow [1], it is not possible to reformulate MFGs as a primal-dual problem. Recently, [22] proposed a policy iteration algorithm for MFGs with non-separable Hamiltonians using the contraction fixed point method.

_Contributions._ In this work, we present a new method based on DGM for solving stochastic MFGs with non-separable Hamiltonians. Inspired by the works [23; 24; 25], we approximate the unknown solutions of the system (1) by two neural networks trained simultaneously to satisfy each equation of the MFG system and the forward-backward conditions. While GAN-based techniques are limited to problems with separable Hamiltonians, our algorithm, called New-Method, can solve any MFG system. Moreover, we prove the convergence of the neural network approximation with a single hidden layer using a fundamental result, the universal approximation theorem. We then test the effectiveness of New-Method through several numerical experiments, where we compare its results with previous approaches to assess their reliability. Finally, our approach is applied to solve the MFG system of traffic flow, accounting for the stochastic case.

_Contents._ The structure of the rest of the paper is as follows: in Section 2, we introduce the main description of our approach. Section 3 examines the convergence of our neural network approximation with a single hidden layer. In Section 4, we present a review of prior methods. Section 5 investigates the numerical performance of our proposed algorithms. We evaluate our method using a simple analytical solution in Section 5.1 and compare it to the previous approach in Section 5.2. We also apply our method to the traffic flow problem in Section 5.3. Finally, we conclude the paper and discuss potential future work in Section 6.

## 2 Methodology

Our method involves using two neural networks, \(N_{\theta}\) and \(N_{\omega}\), to approximate the unknown variables \(\rho\) and \(\phi\), respectively. The weights of these networks are \(\theta\) and \(\omega\). Each iteration of our method involves updating \(\rho\) and \(\phi\) with the approximations from \(N_{\theta}\) and \(N_{\omega}\). To optimize the accuracy of these approximations, we use a loss function based on the residual of the first equation (HJB) to update the parameters of the neural networks. We repeat this process using the second equation (FP) and the new parameters; see Figure 1. Both neural networks are thus trained simultaneously, first on the Hamilton-Jacobi-Bellman equation and then on the Fokker-Planck equation.
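To make this alternating scheme concrete, here is a minimal PyTorch-style sketch of one training loop. It is an illustration under simplifying assumptions rather than the authors' implementation: one spatial dimension, a toy separable Hamiltonian \(H=\frac{1}{2}|\nabla\phi|^{2}-\rho\) as a stand-in, \(g=0\) and \(\rho_{0}=1\) as boundary data, and derivatives taken by automatic differentiation; the two losses correspond to (4) and (5) defined below.

```python
import torch
import torch.nn as nn

# Illustrative networks N_omega ~ phi(t,x) and N_theta ~ rho(t,x), d = 1.
net_phi = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
net_rho = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(net_phi.parameters()) + list(net_rho.parameters()), lr=1e-3)
nu, T = 0.5, 1.0

def d(u, v):
    """First derivative of the field u w.r.t. the sampled coordinates v."""
    return torch.autograd.grad(u.sum(), v, create_graph=True)[0]

def sample(n):
    t = (T * torch.rand(n, 1)).requires_grad_(True)
    x = torch.rand(n, 1).requires_grad_(True)
    return t, x

for it in range(2000):
    # HJB pass: residual of the first equation of (1) plus terminal condition.
    t, x = sample(256)
    phi = net_phi(torch.cat([t, x], dim=1))
    rho = net_rho(torch.cat([t, x], dim=1))
    phi_x = d(phi, x)
    res = -d(phi, t) - nu * d(phi_x, x) + 0.5 * phi_x**2 - rho  # toy H
    xT = torch.rand(256, 1)
    phi_T = net_phi(torch.cat([torch.full_like(xT, T), xT], dim=1))  # g = 0
    loss_hjb = (res**2).mean() + (phi_T**2).mean()
    opt.zero_grad(); loss_hjb.backward(); opt.step()

    # FP pass, with the updated weights: residual of the second equation of (1).
    t, x = sample(256)
    phi = net_phi(torch.cat([t, x], dim=1))
    rho = net_rho(torch.cat([t, x], dim=1))
    flux = rho * d(phi, x)                       # rho * grad_p H = rho * phi_x
    res = d(rho, t) - nu * d(d(rho, x), x) - d(flux, x)
    x0 = torch.rand(256, 1)
    rho_0 = net_rho(torch.cat([torch.zeros_like(x0), x0], dim=1))
    loss_fp = (res**2).mean() + ((rho_0 - 1.0)**2).mean()        # rho_0(x) = 1
    opt.zero_grad(); loss_fp.backward(); opt.step()
```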
We have developed a solution for the MFG system (1) that does not rely on the structure of the Hamiltonian. Our approach uses a combination of physics-informed deep learning [24] and deep hidden physics models [25] to train our model to solve high-dimensional PDEs that adhere to specified differential operators, initial conditions, and boundary conditions. Our model is also designed to adhere to general nonlinear partial differential equations that describe physical laws. To train our model, we define a loss function that minimizes the residual of the equation at randomly chosen points in time and space within the domain \(\Omega\). We initialize the neural networks as a solution to our system. We let: \[\phi_{\omega}(t,x)=N_{\omega}(t,x),\quad\rho_{\theta}(t,x)=N_{\theta}(t,x). \tag{3}\]

Figure 1: The learning mechanism of our method.

Our training strategy starts by solving (HJB). We compute the loss (4) at randomly sampled points \(\{(t_{b_{1}},x_{b_{1}})\}_{b_{1}=1}^{B_{1}}\) from \(E_{1}\), and \(\{x_{s_{1}}\}_{s_{1}=1}^{S_{1}}\) from \(\Omega\): \[\text{Loss}^{(HJB)}_{total}=\text{Loss}^{(HJB)}+\text{Loss}^{(HJB)}_{cond}, \tag{4}\] where \[\text{Loss}^{(HJB)} =\frac{1}{B_{1}}\sum_{b_{1}=1}^{B_{1}}\Big{|}\partial_{t}\phi_{\omega}(t_{b_{1}},x_{b_{1}})+\nu\Delta\phi_{\omega}(t_{b_{1}},x_{b_{1}})-H(x_{b_{1}},\rho_{\theta}(t_{b_{1}},x_{b_{1}}),\nabla\phi_{\omega}(t_{b_{1}},x_{b_{1}}))\Big{|}^{2},\] and \[\text{Loss}^{(HJB)}_{cond}=\frac{1}{S_{1}}\sum_{s_{1}=1}^{S_{1}}\Big{|}\phi_{\omega}(T,x_{s_{1}})-g(x_{s_{1}},\rho_{\theta}(T,x_{s_{1}}))\Big{|}^{2}.\] We then update the weights of \(\phi_{\omega}\) and \(\rho_{\theta}\) by back-propagating the loss (4). We do the same for (FP) with the updated weights. We compute (5) at randomly sampled points \(\{(t_{b_{2}},x_{b_{2}})\}_{b_{2}=1}^{B_{2}}\) from \(E_{2}\), and \(\{x_{s_{2}}\}_{s_{2}=1}^{S_{2}}\) from \(\Omega\): \[\text{Loss}^{(FP)}_{total}=\text{Loss}^{(FP)}+\text{Loss}^{(FP)}_{cond}, \tag{5}\] where \[\text{Loss}^{(FP)} =\frac{1}{B_{2}}\sum_{b_{2}=1}^{B_{2}}\Big{|}\partial_{t}\rho_{\theta}(t_{b_{2}},x_{b_{2}})-\nu\Delta\rho_{\theta}(t_{b_{2}},x_{b_{2}})-\text{div}\left(\rho_{\theta}(t_{b_{2}},x_{b_{2}})\nabla_{p}H(x_{b_{2}},\rho_{\theta}(t_{b_{2}},x_{b_{2}}),\nabla\phi_{\omega}(t_{b_{2}},x_{b_{2}}))\right)\Big{|}^{2},\] and \[\text{Loss}^{(FP)}_{cond}=\frac{1}{S_{2}}\sum_{s_{2}=1}^{S_{2}}\Big{|}\rho_{\theta}(0,x_{s_{2}})-\rho_{0}(x_{s_{2}})\Big{|}^{2}.\] Finally, we update the weights of \(\phi_{\omega}\) and \(\rho_{\theta}\) by back-propagating the loss (5); see Algorithm 1.

## 3 Convergence

Following the steps of [23], this section presents theoretical results that guarantee the existence of single-hidden-layer feedforward neural networks \(\rho_{\theta}\) and \(\phi_{\omega}\) which can universally approximate the solutions of (1).
Denote \[L_{1}(\rho_{\theta},\phi_{\omega})=\Big{\|}\mathcal{H}_{1}(\rho_{\theta},\phi_{\omega})\Big{\|}_{L^{2}(E_{1})}^{2}+\Big{\|}\phi_{\omega}(T,x)-\phi(T,x)\Big{\|}_{L^{2}(\Omega)}^{2}, \tag{6}\] where \[\mathcal{H}_{1}(\rho_{\theta},\phi_{\omega})=\partial_{t}\phi_{\omega}(t,x)+\nu\Delta\phi_{\omega}(t,x)-H(x,\rho_{\theta}(t,x),\nabla\phi_{\omega}(t,x)),\] and \[L_{2}(\rho_{\theta},\phi_{\omega})=\left\|\mathcal{H}_{2}(\rho_{\theta},\phi_{\omega})\right\|_{L^{2}(E_{2})}^{2}+\left\|\rho_{\theta}(0,x)-\rho_{0}(x)\right\|_{L^{2}(\Omega)}^{2}, \tag{7}\] with \[\mathcal{H}_{2}(\rho_{\theta},\phi_{\omega})=\partial_{t}\rho_{\theta}(t,x)-\nu\Delta\rho_{\theta}(t,x)-\text{div}\left(\rho_{\theta}(t,x)\nabla_{p}H(x,\rho_{\theta}(t,x),\nabla\phi_{\omega}(t,x))\right).\] Denote by \(\|f\|_{L^{2}(E)}=\left(\int_{E}|f(x)|^{2}\,d\mu(x)\right)^{\frac{1}{2}}\) the norm on \(L^{2}(E)\), where \(\mu\) is a positive probability density on \(E\). The aim of our approach is to identify a set of parameters \(\theta\) and \(\omega\) such that the functions \(\rho_{\theta}(t,x)\) and \(\phi_{\omega}(t,x)\) minimize the errors \(L_{1}(\rho_{\theta},\phi_{\omega})\) and \(L_{2}(\rho_{\theta},\phi_{\omega})\). If \(L_{1}(\rho_{\theta},\phi_{\omega})=0\) and \(L_{2}(\rho_{\theta},\phi_{\omega})=0\), then \(\rho_{\theta}(t,x)\) and \(\phi_{\omega}(t,x)\) are solutions to (1). To prove the convergence of the neural networks, we use the results of [26] on the universal approximation of functions and their derivatives. Define the class of neural networks with a single hidden layer and \(n\) hidden units, \[\mathcal{N}^{n}(\sigma)=\Big{\{}\Phi(t,x):\mathbb{R}^{1+d}\mapsto\mathbb{R}:\Phi(t,x)=\sum_{i=1}^{n}\beta_{i}\sigma\left(\alpha_{0,i}t+\sum_{j=1}^{d}\alpha_{j,i}x_{j}+c_{i}\right)\Big{\}},\] where \[\theta=(\beta_{1},\cdots,\beta_{n},\alpha_{0,1},\cdots,\alpha_{d,n},c_{1},\cdots,c_{n})\in\mathbb{R}^{2n+n(1+d)}\] is the vector of the parameters to be learned. The set of all functions implemented by such networks with a single hidden layer is \[\mathcal{N}(\sigma)=\bigcup_{n\geq 1}\mathcal{N}^{n}(\sigma). \tag{8}\] Let \(E\) be a compact subset of \(\mathbb{R}^{d+1}\). From [26, Th. 3] we know that if \(\sigma\in\mathcal{C}^{2}\left(\mathbb{R}\right)\) is nonconstant and bounded, then \(\mathcal{N}(\sigma)\) is uniformly 2-dense on \(E\). This means, by [26, Th. 2], that for all \(u\in\mathcal{C}^{1,2}\left([0,T]\times\mathbb{R}^{d}\right)\) and \(\epsilon>0\), there is \(f_{\theta}\in\mathcal{N}(\sigma)\) such that: \[\sup_{(t,x)\in E}\left|\partial_{t}u(t,x)-\partial_{t}f_{\theta}(t,x)\right|+\max_{|a|\leq 2}\sup_{(t,x)\in E}\left|\partial_{x}^{(a)}u(t,x)-\partial_{x}^{(a)}f_{\theta}(t,x)\right|<\epsilon. \tag{9}\] To prove the convergence of our algorithm, we make the following assumptions:

* **(H1):**\(E_{1},E_{2}\) are compact, and we consider measures \(\mu_{1},\mu_{2},\mu_{3}\), and \(\mu_{4}\) whose supports are contained in \(E_{1},\Omega,E_{2}\), and \(\Omega\), respectively.
* **(H2):** System (1) has a unique solution \((\phi,\rho)\in\mathcal{X}\times\mathcal{X}\), where \[\mathcal{X}=\Big{\{}u(t,x)\in\mathcal{C}\left([0,T]\times\bar{\Omega}\right)\bigcap\mathcal{C}^{1+\eta/2,2+\eta}\left([0,T]\times\Omega\right)\text{ with }\eta\in(0,1),\text{ such that }\sup_{(t,x)\in[0,T]\times\Omega}\sum_{k=1}^{2}\left|\nabla_{x}^{(k)}u(t,x)\right|<\infty\Big{\}}.\]
* **(H3):**\(H,\ \nabla_{p}H,\ \nabla_{pp}H,\ \nabla_{\rho p}H\) are locally Lipschitz continuous in \((\rho,p)\), with Lipschitz constants that can have at most polynomial growth in \(\rho\) and \(p\), uniformly with respect to \(t,x\).

**Remark 3.1**.: _It is important to note that the nonlinear term of \(L_{2}\) can be expanded as follows:_ \[\operatorname{div}(\rho\nabla_{p}H(x,\rho,\nabla\phi))=\nabla_{p}H(x,\rho,\nabla\phi)\cdot\nabla\rho+\rho\,\nabla_{p\rho}H(x,\rho,\nabla\phi)\cdot\nabla\rho+\rho\sum_{i,j}\nabla_{p_{i}p_{j}}H(x,\rho,\nabla\phi)\,\partial_{x_{j}x_{i}}\phi.\]

**Theorem 3.1**.: _Consider \(\mathcal{N}(\sigma)\), where \(\sigma\in\mathcal{C}^{2}\left(\mathbb{R}\right)\) is nonconstant and bounded. Suppose_ **(H1)**_,_ **(H2)**_,_ **(H3)** _hold. Then for every \(\epsilon_{1},\epsilon_{2}>0\) there exist two positive constants \(C_{1},C_{2}>0\) and two functions \((\rho_{\theta},\phi_{\omega})\in\mathcal{N}(\sigma)\times\mathcal{N}(\sigma)\) such that_ \[L_{i}(\rho_{\theta},\phi_{\omega})\leq C_{i}(\epsilon_{1}+\epsilon_{2}),\qquad\text{for}\quad i=\{1,2\}.\]

The proof of this theorem is in Appendix A. We now have \(L_{1}(\rho_{\theta}^{n},\phi_{\omega}^{n})\to 0\) and \(L_{2}(\rho_{\theta}^{n},\phi_{\omega}^{n})\to 0\) as \(n\to\infty\), but this does not necessarily imply that \((\rho_{\theta}^{n},\phi_{\omega}^{n})\to(\rho,\phi)\), the unique solution. We now prove, under stronger conditions, the convergence of the neural networks \((\rho_{\theta}^{n},\phi_{\omega}^{n})\) to the solution \((\rho,\phi)\) of the system (1) as \(n\to\infty\). To avoid some difficulties, we add homogeneous boundary conditions, assuming the solution vanishes on the boundary. The MFG system (1) then reads \[\left\{\begin{array}{rll}-\partial_{t}\phi-\nu\operatorname{div}\left(a_{1}(\nabla\phi)\right)+\gamma(\rho,\nabla\phi)=0,\ in&\Omega_{T},\\ \partial_{t}\rho-\nu\operatorname{div}\left(a_{2}(\nabla\rho)\right)-\operatorname{div}\left(a_{3}(\rho,\nabla\phi)\right)=0,\ in&\Omega_{T},\\ \rho(0,x)=\rho_{0}(x),&\phi(T,x)=g(x,\rho(T,x)),\ in&\Omega,\\ \rho(t,x)=\phi(t,x)=0,&in&\Gamma,\end{array}\right. \tag{10}\] where \(\Omega_{T}=(0,T)\times\Omega\), \(\Gamma=(0,T)\times\partial\Omega\) and \[\begin{array}{l}a_{1}(t,x,\nabla\phi)=\nabla\phi,\\ a_{2}(t,x,\nabla\rho)=\nabla\rho,\\ a_{3}(t,x,\rho,\nabla\phi)=\rho\nabla_{p}H(x,\rho,\nabla\phi),\\ \gamma(t,x,\rho,\nabla\phi)=H(x,\rho,\nabla\phi).\end{array}\] Here \(a_{1}:\Omega_{T}\times\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\), \(a_{2}:\Omega_{T}\times\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\), \(a_{3}:\Omega_{T}\times\mathbb{R}\times\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\) and \(\gamma:\Omega_{T}\times\mathbb{R}\times\mathbb{R}^{N}\rightarrow\mathbb{R}\) are Carathéodory functions.
Then we introduce the approximate problem of the system (10) as \[\left\{\begin{array}{ccc}-\partial_{t}\phi_{\omega}^{n}-\nu\operatorname{div}\left(a_{1}(\nabla\phi_{\omega}^{n})\right)+\gamma(\rho_{\theta}^{n},\nabla\phi_{\omega}^{n})=0,&in&\Omega_{T},\\ \partial_{t}\rho_{\theta}^{n}-\nu\operatorname{div}\left(a_{2}(\nabla\rho_{\theta}^{n})\right)-\operatorname{div}\left(a_{3}(\rho_{\theta}^{n},\nabla\phi_{\omega}^{n})\right)=0,&in&\Omega_{T},\\ \rho_{\theta}^{n}(0,x)=\rho_{0}(x),&\phi_{\omega}^{n}(T,x)=g(x,\rho_{\theta}^{n}(T,x)),&in&\Omega,\\ \rho_{\theta}^{n}(t,x)=\phi_{\omega}^{n}(t,x)=0,&in&\Gamma.\end{array}\right. \tag{11}\] Let us first introduce some definitions. Let \(r\geq 1\). In the sequel we denote by \(L^{r}\left(0,T;W_{0}^{1,r}(\Omega)\right)\) the set of functions \(u\) such that \(u\in L^{r}\left(\Omega_{T}\right)\) and \(u(t,\cdot)\in W_{0}^{1,r}(\Omega)\). The space \(L^{r}\left(0,T;W_{0}^{1,r}(\Omega)\right)\), equipped with the norm \[\|u\|_{L^{r}\left(0,T;W_{0}^{1,r}(\Omega)\right)}:=\left(\int_{0}^{T}\int_{\Omega}|\nabla u(x,t)|^{r}dxdt\right)^{\frac{1}{r}},\] is a Banach space. For \(s,r\geq 1\), the space \(V_{0}^{s,r}\left(\Omega_{T}\right):=L^{\infty}\left(0,T;L^{s}(\Omega)\right)\cap L^{r}\left(0,T;W_{0}^{1,r}(\Omega)\right)\), endowed with the norm \[\|\varphi\|_{V_{0}^{s,r}(\Omega_{T})}:=\operatorname{ess}\sup_{0\leq t\leq T}\|\varphi(\cdot,t)\|_{L^{s}(\Omega)}+\|\varphi\|_{L^{r}\left(0,T;W_{0}^{1,r}(\Omega)\right)},\] is also a Banach space. For this convergence, we make the following set of assumptions: * **(H4):** There are a constant \(\mu>0\) and positive functions \(\kappa(t,x),\lambda(t,x)\) such that for all \((t,x)\in\Omega_{T}\), we have \[\|a_{3}(t,x,\rho,p)\|\leq\mu(\kappa(t,x)+\|p\|),\text{ and }|\gamma(t,x,\rho,p)|\leq\lambda(t,x)\|p\|,\] with \(\kappa\in L^{2}\left(\Omega_{T}\right),\lambda\in L^{d+2}\left(\Omega_{T}\right).\) * **(H5):** \(a_{3}(t,x,\rho,p)\) and \(\gamma(t,x,\rho,p)\) are Lipschitz continuous in \((t,x,\rho,p)\in\Omega_{T}\times\mathbb{R}\times\mathbb{R}^{d}\) uniformly on compacts of the form \(\left\{(t,x)\in\bar{\Omega}_{T},|\rho|\leq C,|p|\leq C\right\}\). * **(H6):** There is a positive constant \(\alpha>0\) such that \[a_{3}(t,x,\rho,p)p\geq\alpha|p|^{2}.\] * **(H7):** For every \(n\in\mathbb{N}\), \(\rho_{\theta}^{n},\phi_{\omega}^{n}\in\mathcal{C}^{1,2}\left(\bar{\Omega}_{T}\right)\). In addition, \((\rho_{\theta}^{n})_{n\in\mathbb{N}}\,,(\phi_{\omega}^{n})_{n\in\mathbb{N}}\in L^{2}\left(\Omega_{T}\right).\) **Theorem 3.2**.: _Under the previous assumptions (H4)-(H7), if we assume that (10) has a unique bounded solution \((\phi,\rho)\in V_{0}^{2,2}\times V_{0}^{2,2}\), then \((\phi_{\omega}^{n},\rho_{\theta}^{n})\) converges to \((\phi,\rho)\) strongly in \(L^{p}\left(\Omega_{T}\right)\times L^{p}\left(\Omega_{T}\right)\) for every \(p<2\)._ The proof of this theorem is given in Appendix B. ## 4 Related Works **GANs:** Generative adversarial networks, or GANs, are a class of machine learning models introduced in 2014 [27] that have been successful in generating images and processing data [28; 29; 30]. In recent years, there has been increasing interest in using GANs for financial modeling as well [31]. GANs consist of two neural networks, a generator network and a discriminator network, that work against each other in order to generate samples from a specific distribution.
As described in various sources [27; 32; 33], the goal is to reach equilibrium for the following problem, \[\min_{G}\max_{D}\Big{\{}\mathbb{E}_{x\sim P_{data}(x)}[\log(D(x))]+\mathbb{E}_{z\sim P_{g}(z)}[\log(1-D(G(z)))]\Big{\}}, \tag{12}\] where \(P_{data}(x)\) is the distribution of the original data and \(P_{g}(z)\) is the noise distribution. In (12), the objective is minimized over the generator \(G\) and maximized over the discriminator \(D\). This is achieved by comparing the probability of the original data \(P_{data}(x)\) being correctly identified by the discriminator \(D\) with the probability of the generated data \(G(z)\), produced by the generator from the noise \(P_{g}(z)\), being incorrectly identified as real by the discriminator, \(1-D(G(z))\). Essentially, the discriminator is trying to accurately distinguish between real and fake data, while the generator is attempting to create fake data that can deceive the discriminator. **APAC-Net:** In [17], the authors present a method (APAC-Net) based on GANs for solving high-dimensional MFGs in the stochastic case. They make use of the Hopf formula in density space to reformulate the MFGs as a saddle-point problem given by, \[\begin{split}\inf_{\rho(x,t)}\sup_{\phi(x,t)}&\Big{\{}\mathbb{E}_{z\sim P(z),t\sim Unif[0,T]}[\partial_{t}\phi(\rho(t,z),t)+\nu\Delta\phi(\rho(t,z),t)\\ &-H(\rho(t,z),\nabla\phi)]+\mathbb{E}_{z\sim P(z)}\phi(0,\rho(0,z))-\mathbb{E}_{x\sim\rho_{T}}\phi(T,x)\Big{\}},\end{split} \tag{13}\] where \[H(x,p)=\inf_{v}\{-p\cdot v+L(x,v)\}.\] In this case, we have a connection between GANs and MFGs, since (13) allows them to reach the Kantorovich-Rubinstein dual formulation of Wasserstein GANs [33] given by, \[\begin{array}{c}\min_{G}\max_{D}\{\mathbb{E}_{x\sim P_{data}(x)}[D(x)]-\mathbb{E}_{z\sim P_{g}(z)}[D(G(z))]\},\\ s.t.\ \ ||\nabla D||\leq 1.\end{array} \tag{14}\] Finally, an algorithm similar to that of GANs can be used to solve MFG problems. Unfortunately, this reformulation requires the Hamiltonian to have a separable structure. Because of this, APAC-Net cannot solve the MFG-LWR system (to be detailed in Section 5.3). In general, MFG problems whose Hamiltonian is non-separable cannot be solved this way, since they cannot be reformulated as (13). **MFGANs:** In [17, 18], the connection between GANs and MFGs is demonstrated by the fact that equation (13) allows both to reach the Kantorovich-Rubinstein dual formulation of Wasserstein GANs, as described in reference [33] and shown in equation (14), which can be solved using an algorithm similar to those used for GANs. However, MFG problems with non-separable Hamiltonians cannot be solved in this way, as they cannot be reformulated as in equation (13); this again prevents the solution of the MFG-LWR system (to be discussed in Section 5.3). **DGM-MFG:** In [34], section 4 discusses the adaptation of the DGM algorithm to solve mean field games, referred to as DGM-MFG. This method is highly versatile and can effectively solve a wide range of partial differential equations due to its lack of reliance on the specific structure of the problem. Our own work is similar to DGM-MFG in that we also utilize neural networks to approximate unknown functions and adjust parameters to minimize a loss function based on the PDE residual, as seen in [34] and [18]. However, our approach, referred to as New-Method, differs in the way it is trained.
Instead of using the sum of PDE residuals as the loss function and SGD for optimization, we define a separate loss function for each equation and use ADAM for training, following the approach in [18]. This modification allows for faster and more accurate convergence. **Policy iteration Method:** To the best of our knowledge, [22] was the first to successfully solve systems of mean field game partial differential equations with non-separable Hamiltonians. They proposed two algorithms based on policy iteration, which involve iteratively updating the population distribution, value function, and control. These algorithms only require the solution of two decoupled, linear PDEs at each iteration because the control is fixed. This approach reduces the complexity of the equations, but it is limited to low-dimensional problems due to the computationally intensive nature of the method. In contrast, our method utilizes neural networks to solve the HJB and FP equations at each iteration, allowing for updates to the population distribution and value function in each equation without the limitations of [22]. ## 5 Numerical Experiments To evaluate the effectiveness of the proposed algorithm (Algorithm 1), we use the example provided in [17], as it has an explicitly defined solution structure that allows for easy numerical comparison. We compare the performance of New-Method, APAC-Net's MFGAN, and DGM-MFG on the same data to assess their reliability. Additionally, we apply New-Method to the traffic flow problem [19], which is characterized by its non-separable Hamiltonian [20], to determine its ability to solve this type of problem in the stochastic case. ### Analytic Comparison We test our method by comparing it to the simple example with an analytic solution that was used to test the effectiveness of APAC-Net [17]. For the sake of simplicity, we take the spatial domain \(\Omega=[-2,2]^{d}\), the final time \(T=1\), and no congestion (\(\gamma=0\)). We take \[\begin{array}{c}H_{0}(x,p)=\frac{||p||^{2}}{2}-\beta\frac{||x||^{2}}{2},\quad f_{0}(x,\rho)=\gamma\ln(\rho),\\ g(x)=\alpha\frac{||x||^{2}}{2}-(\nu d\alpha+\gamma\frac{d}{2}\ln\frac{\alpha}{2\pi\nu}),\end{array} \tag{15}\] and \(\nu=\beta=1\), where \[\alpha=\frac{-\gamma+\sqrt{\gamma^{2}+4\nu^{2}\beta}}{2\nu}=1.\] The corresponding MFG system is: \[\left\{\begin{array}{c}-\partial_{t}\phi-\Delta\phi+\frac{||\nabla\phi||^{2}}{2}-\frac{||x||^{2}}{2}=0,\\ \partial_{t}\rho-\Delta\rho-\mbox{div}\,(\rho\nabla\phi)=0,\\ \rho(0,x)=(\frac{1}{2\pi})^{\frac{d}{2}}e^{-\frac{||x||^{2}}{2}},\\ \phi(T,x)=\frac{||x||^{2}}{2}-d,\end{array}\right. \tag{16}\] and the explicit solution is given by \[\begin{array}{c}\phi(t,x)=\frac{||x||^{2}}{2}-d\,t,\\ \rho(t,x)=(\frac{1}{2\pi})^{\frac{d}{2}}e^{-\frac{||x||^{2}}{2}}.\end{array} \tag{17}\] **Test 1:** We consider the system of PDEs (16) in one dimension (\(d=1\)). To obtain results, we run Algorithm 1 for \(5\times 10^{3}\) iterations, using a minibatch of 50 samples at each iteration. The neural networks employed have three hidden layers with 100 neurons each, and utilize the Softplus activation function for \(N_{\omega}\) and the Tanh activation function for \(N_{\theta}\). Both networks use ADAM with a learning rate of \(10^{-4}\) and a weight decay of \(10^{-3}\). We employ ResNet as the architecture of the neural networks, with a skip connection weight of 0.5. The numerical results are shown in Figure 2, which compares the approximate solutions obtained by New-Method to the exact solutions at different time states.
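The alternating training just described can be sketched in a few lines. The snippet below reuses the networks and the `residuals` helper from the earlier sketch (so it inherits those assumptions); the hyperparameters mirror Test 1, and the terminal/initial penalty terms are again omitted for brevity.

```python
# Sketch of the New-Method training loop: each network has its own loss and
# its own ADAM optimizer, and the two are updated in turn (cf. Algorithm 1).
import torch

opt_phi = torch.optim.Adam(phi_net.parameters(), lr=1e-4, weight_decay=1e-3)
opt_rho = torch.optim.Adam(rho_net.parameters(), lr=1e-4, weight_decay=1e-3)

for it in range(5000):
    t = torch.rand(50, 1) * T                  # minibatch of 50 time samples
    x = 4.0 * torch.rand(50, d) - 2.0          # uniform on Omega = [-2, 2]^d
    L1, _ = residuals(t, x)                    # HJB residual -> update phi_omega
    opt_phi.zero_grad(); L1.backward(); opt_phi.step()
    _, L2 = residuals(t, x)                    # FP residual -> update rho_theta
    opt_rho.zero_grad(); L2.backward(); opt_rho.step()
```

Keeping the two losses separate is what distinguishes this loop from DGM-MFG's single summed residual loss.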
To evaluate the performance of New-Method, we compute the relative error between the model predictions and the exact solutions on a \(100\times 100\) grid within the domain \([0,1]\times[-2,2]\). Additionally, we plot the HJB and FP residual loss, as defined in Algorithm 1, to monitor the convergence of our method (see Figure 3).

Figure 2: The exact solution and the prediction computed by New-Method in dimension one at t=(0.25, 0.5, 0.75).

Figure 3: Left: the relative error for \(\rho\) and \(\phi\). Right: the HJB and FP losses.

**Test 2:** In this experiment, we use a single hidden layer with varying numbers of hidden units (nU) for both neural networks. As previously shown in section 2, the number of hidden units can affect the convergence of the model. To verify this, we repeat the previous test using the same hyper-parameters and a single hidden layer but with different numbers of hidden units. The relative error between the model predictions and the exact solutions is then calculated on a \(100\times 100\) grid within the domain \([0,1]\times[-2,2]\), as shown in Figure 4.

Figure 4: The relative error for \(\rho\) and \(\phi\) in one dimension for nU=(2, 5, 10, 20, 50).

**Test 3:** We solve the MFG system (16) for dimensions 2, 50, and 100. Figure 5 shows the residuals of the HJB and FP equations over \(5\times 10^{4}\) iterations. A minibatch of 1024, 512, and 128 samples was used for d=100, d=50, and d=2, respectively. The neural networks have three hidden layers with 100 neurons each and utilize the Softplus activation function for \(N_{\omega}\) and the Tanh activation function for \(N_{\theta}\). Both networks used ADAM with a learning rate of \(10^{-4}\), weight decay of \(10^{-3}\), and employed ResNet as their architecture with a skip connection weight of 0.5. The results were obtained by recording the residuals every 100 iterations and using a rolling average over 5 points to smooth out the curves.

Figure 5: The HJB and FP losses for d=(2, 50, 100).

**Test 4:** In this test, we use the same setup as before, but with a single layer of 100 neurons instead of multiple layers. We keep all other neural network hyperparameters unchanged. This test is meant to demonstrate that a single layer can perform better than multiple layers, even when the dimension increases, as seen in section 2. Figure 6 shows improved results compared to the previous test, even with few iterations, which allows for faster computation times.

Figure 6: The HJB and FP losses with a minibatch of 128, 512, and 1024 samples for d=2, d=50, and d=(100,200,300), respectively.

### Comparison In previous sections, we introduced and discussed four methods for solving MFGs: APAC-Net, MFGAN, DGM-MFG, and New-Method. Here, we compare these approaches to assess their performance. For APAC-Net, it is only possible to compare the cost values \(\phi\) due to the unavailability of the density function. In APAC-Net, the generator neural network represents \(\rho\), which generates the distribution. In order to compare the results, we need to use kernel density estimation to transform the distribution into a density, which is only an estimate. We use the simple example from the analytic solution with \(d=1\) and \(T=1\) for this comparison. The two neural networks in this comparison have three hidden layers with 100 neurons each, and utilize ResNet as their architecture with a skip connection weight of 0.5. They also use the Softplus activation function for \(N_{\omega}\) and the Tanh activation function for \(N_{\theta}\).
For training APAC-Net, MFGAN, and New-Method, we use ADAM with a learning rate of \(10^{-4}\) and a weight decay of \(10^{-3}\) for both networks. For training DGM-MFG, we use SGD with a learning rate initialized at \(10^{-3}\) and a weight decay of \(10^{-3}\) for both networks. We run the four algorithms for \(5\times 10^{3}\) iterations, using a minibatch of 50 samples at each iteration. The relative error between the model predictions and the exact solutions is then calculated on a \(100\times 100\) grid within the domain \([0,1]\times[-2,2]\), as shown in Figure 7.

Figure 7: Comparison between APAC-Net, MFGAN, DGM-MFG, and New-Method.

### Application (Traffic Flow): In a study published in [1], the authors focused on the longitudinal speed control of autonomous vehicles. They developed a Mean Field Game (MFG) model to solve a traffic flow problem for autonomous vehicles and demonstrated that the traditional Lighthill-Whitham-Richards (LWR) model can be used as a solution to the MFG-LWR model described by the following system of equations: \[MFG-LWR\left\{\begin{array}{c}V_{t}+U(\rho)V_{x}-\frac{1}{2}V_{x}^{2}=0,\\ \rho_{t}+(\rho u)_{x}=0,\\ u=U(\rho)-V_{x},\\ V_{T}=g(\cdot,\rho_{T}),\ \ \ \ \rho(\cdot,0)=\rho_{0}.\end{array}\right. \tag{18}\] Here, \(\rho\), \(V\), and \(u\) represent the density, optimal cost, and speed function, respectively, and the Greenshields density-speed relation is given by \(U(\rho)=u_{max}(1-\rho/\rho_{jam})\), where \(\rho_{jam}\) is the jam density and \(u_{max}\) is the maximum speed. By setting \(\rho_{jam}=1\) and \(u_{max}=1\), the authors generalized the MFG-LWR model to include a viscosity term \(\nu>0\), resulting in the following system: \[MFG-LWR\left\{\begin{array}{c}V_{t}+\nu\Delta V-H(x,p,\rho)=0,\\ \rho_{t}-\nu\Delta\rho-\mbox{div}(\nabla_{p}H(x,p,\rho)\rho)=0,\\ V_{T}=g(\cdot,\rho_{T}),\ \ \ \ \rho(\cdot,0)=\rho_{0}.\end{array}\right. \tag{19}\] In this model, \(\rho\) and \(V\) represent the density and optimal cost function, respectively, and \(H\) is the Hamiltonian with a non-separable structure given by \[H(x,p,\rho)=\frac{1}{2}||p||^{2}-(1-\rho)p,\ \ \ \text{with}\ \ p=V_{x}. \tag{20}\] The authors solved the system in (19) using the Newton iteration method for the deterministic case (\(\nu=0\)) with a numerical method that considers only a finite number of discretization points to reduce computational complexity. In this work, we propose a new method using neural networks to approximate the unknowns and solve the problem in the stochastic case, while also avoiding the computational complexity of the previous method. To evaluate the performance of the new method, we consider the traffic flow problem defined by the MFG-LWR model in (19) with the non-separable Hamiltonian in (20) on the spatial domain \(\Omega=[0,1]\) with dimension \(d=1\) and final time \(T=1\). The terminal cost \(g\) is set to zero and the initial density \(\rho_{0}\) is given by the Gaussian profile \(\rho_{0}(x)=0.2-0.6\,\exp\left(\frac{-1}{2}\left(\frac{x-0.5}{0.1}\right)^{2}\right)\). The aim is to investigate the performance of the new method, called "New-Method," in solving this traffic flow problem.
The corresponding MFG system is \[\left\{\begin{array}{c}V_{t}+\nu\Delta V-\frac{1}{2}||V_{x}||^{2}+(1-\rho)V_{x}=0,\\ \rho_{t}-\nu\Delta\rho-\mbox{div}\,((V_{x}-(1-\rho))\rho)=0,\\ \rho(x,0)=0.2-0.6\,\exp(\frac{-1}{2}(\frac{x-0.5}{0.1})^{2}),\\ V(x,T)=0.\end{array}\right. \tag{21}\] We study the deterministic case (\(\nu=0\)) and the stochastic case (\(\nu=0.5\)). We represent the unknown solutions by two neural networks \(N_{\omega}\) and \(N_{\theta}\), each with a single hidden layer of 50 neurons. We use the ResNet architecture with a skip connection weight of 0.5. We employ ADAM with a learning rate of \(4\times 10^{-4}\) for \(N_{\omega}\) and \(5\times 10^{-4}\) for \(N_{\theta}\), a weight decay of \(10^{-4}\) for both networks, and a batch size of 100. In both cases, \(\nu=0\) and \(\nu=0.5\), we use the Softmax and ReLU activation functions for \(N_{\omega}\) and \(N_{\theta}\), respectively. In Figure 8 we plot, over different times, the density function, the optimal cost, and the speed, which is calculated from the density and the optimal cost [1] by the formula \[u=u_{max}(1-\rho/\rho_{jam})-V_{x},\] where we take the jam density \(\rho_{jam}=1\) and the maximum speed \(u_{max}=1\); we train for \(10^{4}\) iterations.

Figure 8: The solution of the MFG-LWR problem computed by New-Method for \(\nu=0\) and \(\nu=0.5\) at t=(0, 0.5, 1).

In Figure 9, we plot the HJB and FP residual losses for \(\nu=0\) and \(\nu=0.5\), which helps us monitor the convergence of our method. Unfortunately, we do not have an exact solution with which to compute the error.

Figure 9: The HJB and FP losses for \(\nu=0\) and \(\nu=0.5\).

To validate the results of Figure 8, we use the fundamental traffic flow diagram, an essential tool for understanding classic traffic flow models. Precisely, this is a graphic that displays the relation between road traffic flux (vehicles/hour) and traffic density (vehicles/km) [35; 36; 37]. This diagram can be computed numerically [1]; its flux function \(q\) is given by \[q(t,x)=\rho(t,x)u(t,x).\] Figure 10 shows the fundamental diagram of our results.

Figure 10: Fundamental diagram for \(\nu=(0,0.5)\) at \(t=(0,0.5,1)\).

## 6 Conclusion * We present a new method based on the Deep Galerkin Method (DGM) for solving high-dimensional stochastic mean field games (MFGs). The key idea of our algorithm is to approximate the unknown solutions by two neural networks that are simultaneously trained to satisfy each equation of the MFG system and the forward-backward conditions. * Consequently, our method shows better results even with a small number of iterations because of its learning mechanism. Moreover, it shows the potential to scale up to 300 dimensions with a single layer, which makes our method faster. * We proved that as the number of hidden units increases, the neural networks converge to the MFG solution. * Comparison with previous methods shows the efficiency of our approach even with multilayer neural networks. * Tests on the traffic flow problem in the deterministic case give results similar to the Newton iteration method, showing that our method can also solve this problem in the stochastic case. To address the issue of high dimensionality, we used neural networks but found that training took a significant amount of time. While our approach has helped to reduce the time required, it is still not fast enough. Therefore, we are seeking an alternative to neural networks in future research to improve efficiency. ## Appendix A Proof of Theorem 3.1.
Denote by \(\mathcal{N}(\sigma)\) the space of all functions implemented by such a network with a single hidden layer and \(n\) hidden units, where \(\sigma\in\mathcal{C}^{2}\left(\mathbb{R}^{d+1}\right)\) is non-constant and bounded. By **(H1)** we have that for all \(\rho,\phi\in\mathcal{C}^{1,2}\left([0,T]\times\mathbb{R}^{d}\right)\) and \(\varepsilon_{1},\varepsilon_{2}>0\), there are \(\rho_{\theta},\phi_{\omega}\in\mathcal{N}(\sigma)\) such that \[\begin{split}\sup_{(t,x)\in E_{1}}&|\partial_{t}\phi(t,x)-\partial_{t}\phi_{\omega}(t,x)|\\ &+\max_{|a|\leq 2}\sup_{(t,x)\in E_{1}}\left|\partial_{x}^{(a)}\phi(t,x)-\partial_{x}^{(a)}\phi_{\omega}(t,x)\right|<\epsilon_{1},\end{split}\] (A.1) \[\begin{split}\sup_{(t,x)\in E_{2}}&|\partial_{t}\rho(t,x)-\partial_{t}\rho_{\theta}(t,x)|\\ &+\max_{|a|\leq 2}\sup_{(t,x)\in E_{2}}\left|\partial_{x}^{(a)}\rho(t,x)-\partial_{x}^{(a)}\rho_{\theta}(t,x)\right|<\epsilon_{2}.\end{split}\] (A.2) From **(H3)** we have that \((\rho,p)\mapsto H(x,\rho,p)\) is locally Lipschitz continuous in \((\rho,p)\), with a Lipschitz constant that has at most polynomial growth in \(\rho\) and \(p\), uniformly with respect to \(t,x\). This means that \[\begin{split}|H(x,\rho,p)-H(x,\gamma,s)|\leq&\Big{(}|\rho|^{q_{1}/2}+|p|^{q_{2}/2}+|\gamma|^{q_{3}/2}+|s|^{q_{4}/2}\Big{)}\\ &\times(|\rho-\gamma|+|p-s|),\end{split}\] with some constants \(0\leq q_{1},q_{2},q_{3},q_{4}<\infty\). As a result, using the Hölder inequality with exponents \(r_{1},r_{2}\), we get \[\int_{E_{1}}\left|H\left(x,\rho_{\theta},\nabla_{x}\phi_{\omega}\right)-H\left(x,\rho,\nabla\phi\right)\right|^{2}d\mu_{1}(t,x)\] \[\leq\int_{E_{1}}\left(\left|\rho_{\theta}(t,x)\right|^{q_{1}}+\left|\nabla\phi_{\omega}(t,x)\right|^{q_{2}}+\left|\rho(t,x)\right|^{q_{3}}+\left|\nabla\phi(t,x)\right|^{q_{4}}\right)\] \[\quad\times\left(\left|\rho_{\theta}(t,x)-\rho(t,x)\right|^{2}+\left|\nabla\phi_{\omega}(t,x)-\nabla\phi(t,x)\right|^{2}\right)d\mu_{1}(t,x)\] \[\leq\Big{(}\int_{E_{1}}(\left|\rho_{\theta}(t,x)\right|^{q_{1}}+\left|\nabla\phi_{\omega}(t,x)\right|^{q_{2}}+\left|\rho(t,x)\right|^{q_{3}}+\left|\nabla\phi(t,x)\right|^{q_{4}})^{r_{1}}d\mu_{1}(t,x)\Big{)}^{1/r_{1}}\] \[\quad\times\Big{(}\int_{E_{1}}(\left|\rho_{\theta}(t,x)-\rho(t,x)\right|^{2}+\left|\nabla\phi_{\omega}(t,x)-\nabla\phi(t,x)\right|^{2})^{r_{2}}d\mu_{1}(t,x)\Big{)}^{1/r_{2}}\] \[\leq C_{1}\Big{(}\int_{E_{1}}(\left|\rho_{\theta}(t,x)-\rho(t,x)\right|^{q_{1}}+\left|\nabla\phi_{\omega}(t,x)-\nabla\phi(t,x)\right|^{q_{2}}\] \[\quad\quad+\left|\rho(t,x)\right|^{q_{1}\lor q_{3}}+\left|\nabla\phi(t,x)\right|^{q_{2}\lor q_{4}})^{r_{1}}d\mu_{1}(t,x)\Big{)}^{1/r_{1}}\] \[\quad\times\Big{(}\int_{E_{1}}(\left|\rho_{\theta}(t,x)-\rho(t,x)\right|^{2}+\left|\nabla\phi_{\omega}(t,x)-\nabla\phi(t,x)\right|^{2})^{r_{2}}d\mu_{1}(t,x)\Big{)}^{1/r_{2}}\] \[\leq C_{1}\left(\epsilon_{1}^{q_{1}}+\epsilon_{2}^{q_{2}}+\sup_{E_{1}}\left|\rho\right|^{q_{1}\lor q_{3}}+\sup_{E_{1}}\left|\nabla\phi\right|^{q_{2}\lor q_{4}}\right)(\epsilon_{1}^{2}+\epsilon_{2}^{2})\] \[\leq C_{1}(\epsilon_{1}^{2}+\epsilon_{2}^{2}),\] where the constant \(C_{1}<\infty\) may change from line to line and \(q_{i}\lor q_{j}=\max\{q_{i},q_{j}\}\). In the last two steps we used (A.1), (A.2) and **(H2)**.
We recall that \[\mathcal{H}_{1}(\rho_{\theta},\phi_{\omega})=\partial_{t}\phi_{\omega}(t,x)+\nu\Delta\phi_{\omega}(t,x)-H(x,\rho_{\theta}(t,x),\nabla\phi_{\omega}(t,x)).\] Note that \(\mathcal{H}_{1}(\rho,\phi)=0\) when \((\rho,\phi)\) solves the system of PDEs. Hence, \[L_{1}(\rho_{\theta},\phi_{\omega})= \left\|\mathcal{H}_{1}(\rho_{\theta},\phi_{\omega})\right\|_{L^{2}(E_{1})}^{2}+\left\|\phi_{\omega}(T,x)-\phi(T,x)\right\|_{L^{2}(\Omega)}^{2}\] \[= \left\|\mathcal{H}_{1}(\rho_{\theta},\phi_{\omega})-\mathcal{H}_{1}(\rho,\phi)\right\|_{L^{2}(E_{1})}^{2}+\left\|\phi_{\omega}(T,x)-g(x,\rho(T,x))\right\|_{L^{2}(\Omega)}^{2}\] \[\leq \int_{E_{1}}\left|\partial_{t}\phi_{\omega}(t,x)-\partial_{t}\phi(t,x)\right|^{2}d\mu_{1}(t,x)\] \[+|\nu|\int_{E_{1}}\left|\Delta\phi_{\omega}(t,x)-\Delta\phi(t,x)\right|^{2}d\mu_{1}(t,x)\] \[+\int_{E_{1}}\left|H\left(x,\rho_{\theta},\nabla\phi_{\omega}\right)-H\left(x,\rho,\nabla\phi\right)\right|^{2}d\mu_{1}(t,x)\] \[+\int_{\Omega}|\phi_{\omega}(T,x)-\phi(T,x)|^{2}d\mu_{2}(t,x)\] \[\leq C_{1}(\epsilon_{1}^{2}+\epsilon_{2}^{2})\] for an appropriate constant \(C_{1}<\infty\). In the last step, we used (A.1), (A.2) and the previous result. For \(L_{2}\) we use Remark 3.1 to expand the nonlinear term, \[\operatorname{div}(\rho\nabla_{p}H(x,\rho,\nabla\phi))=\alpha_{1}(x,\rho,\nabla\phi)+\alpha_{2}(x,\rho,\nabla\phi)+\alpha_{3}(x,\rho,\nabla\phi),\] where \[\alpha_{1}(x,\rho,\nabla\phi) =\nabla_{p}H(x,\rho,\nabla\phi)\cdot\nabla\rho,\] \[\alpha_{2}(x,\rho,\nabla\phi) =\rho\,\nabla_{p\rho}H(x,\rho,\nabla\phi)\cdot\nabla\rho,\] \[\alpha_{3}(x,\rho,\nabla\phi) =\rho\sum_{i,j}\nabla_{p_{i}p_{j}}H(x,\rho,\nabla\phi)\,\partial_{x_{j}x_{i}}\phi.\] In addition, from **(H3)** we also have that \(\nabla_{p}H(x,\rho,p)\), \(\nabla_{p\rho}H(x,\rho,p)\), and \(\nabla_{pp}H(x,\rho,p)\) are locally Lipschitz continuous in \((\rho,p)\).
Then, after an application of the Hölder inequality, we have, for some constant \(C_{2}<\infty\) that may change from line to line, \[\int_{E_{2}}\left|\alpha_{1}\left(x,\rho_{\theta},\nabla\phi_{\omega}\right)-\alpha_{1}(x,\rho,\nabla\phi)\right|^{2}d\mu_{3}(t,x)\] \[=\int_{E_{2}}\left|\nabla_{p_{\omega}}H\left(x,\rho_{\theta},\nabla\phi_{\omega}\right)\nabla\rho_{\theta}-\nabla_{p}H(x,\rho,\nabla\phi)\nabla\rho\right|^{2}d\mu_{3}(t,x)\] \[\leq 2\int_{E_{2}}\left|\Big{(}\nabla_{p_{\omega}}H\left(x,\rho_{\theta},\nabla\phi_{\omega}\right)-\nabla_{p}H(x,\rho,\nabla\phi)\Big{)}\nabla\rho\right|^{2}d\mu_{3}(t,x)\] \[\qquad+2\int_{E_{2}}\left|\nabla_{p_{\omega}}H\left(x,\rho_{\theta},\nabla\phi_{\omega}\right)\left(\nabla\rho_{\theta}-\nabla\rho\right)\right|^{2}d\mu_{3}(t,x)\] \[\leq C_{2}\left(\int_{E_{2}}\left|\nabla_{p_{\omega}}H\left(x,\rho_{\theta},\nabla\phi_{\omega}\right)-\nabla_{p}H(x,\rho,\nabla\phi)\right|^{2r_{1}}d\mu_{3}\left(t,x\right)\right)^{1/r_{1}}\] \[\qquad\times\Big{(}\int_{E_{2}}\left|\nabla\rho\right|^{2r_{2}}d\mu_{3}(t,x)\Big{)}^{1/r_{2}}+C_{2}\left(\int_{E_{2}}\left|\nabla_{p_{\omega}}H\left(x,\rho_{\theta},\nabla\phi_{\omega}\right)\right|^{2s_{1}}d\mu_{3}(t,x)\right)^{1/s_{1}}\] \[\qquad\times\left(\int_{E_{2}}\left|\nabla\rho_{\theta}-\nabla\rho\right|^{2s_{2}}d\mu_{3}(t,x)\right)^{1/s_{2}}\] \[\leq C_{2}\Big{(}\int_{E_{2}}\left|\nabla\rho\right|^{2r_{2}}d\mu_{3}(t,x)\Big{)}^{1/r_{2}}\] \[\qquad\times\Big{(}\int_{E_{2}}(\left|\rho_{\theta}(t,x)-\rho(t,x)\right|^{q_{1}}+\left|\nabla\phi_{\omega}(t,x)-\nabla\phi(t,x)\right|^{q_{2}}\] \[\qquad+\left|\rho(t,x)\right|^{q_{1}\lor q_{3}}+\left|\nabla\phi(t,x)\right|^{q_{2}\lor q_{4}})^{v_{1}r_{1}}d\mu_{3}(t,x)\Big{)}^{1/v_{1}r_{1}}\] \[\times\Big{(}\int_{E_{2}}(\left|\rho_{\theta}(t,x)-\rho(t,x)\right|^{2}+\left|\nabla_{x}\phi_{\omega}(t,x)-\nabla_{x}\phi(t,x)\right|^{2})^{v_{2}r_{2}}d\mu_{3}(t,x)\Big{)}^{1/v_{2}r_{2}}\] \[\leq C_{2}(\epsilon_{1}^{2}+\epsilon_{2}^{2}),\] where in the last steps we followed the same computations as previously.
Proceeding in the same way for \(\alpha_{2}(x,\rho,\nabla\phi)\) and \(\alpha_{3}(x,\rho,\nabla\phi)\), we obtain for a constant \(C_{2}<\infty\), \[\int_{E_{2}}\Big{|}\operatorname{div}(\rho_{\theta}\nabla_{p_{\omega}}H(x,\rho_{\theta},\nabla\phi_{\omega}))-\operatorname{div}(\rho\nabla_{p}H(x,\rho,\nabla\phi))\Big{|}^{2}d\mu_{3}(t,x)\leq C_{2}(\epsilon_{1}^{2}+\epsilon_{2}^{2}).\] We recall that \[\mathcal{H}_{2}(\rho_{\theta},\phi_{\omega})=\partial_{t}\rho_{\theta}(t,x)-\nu\Delta\rho_{\theta}(t,x)-\operatorname{div}\left(\rho_{\theta}(t,x)\nabla_{p}H(x,\rho_{\theta}(t,x),\nabla\phi_{\omega}(t,x))\right).\] Note that \(\mathcal{H}_{2}(\rho,\phi)=0\) when \((\rho,\phi)\) solves the system of PDEs; then we have \[L_{2}(\rho_{\theta},\phi_{\omega})= \left\|\mathcal{H}_{2}(\rho_{\theta},\phi_{\omega})\right\|_{L^{2}(E_{2})}^{2}+\left\|\rho_{\theta}(0,x)-\rho_{0}(x)\right\|_{L^{2}(\Omega)}^{2}\] \[= \left\|\mathcal{H}_{2}(\rho_{\theta},\phi_{\omega})-\mathcal{H}_{2}(\rho,\phi)\right\|_{L^{2}(E_{2})}^{2}+\left\|\rho_{\theta}(0,x)-\rho_{0}(x)\right\|_{L^{2}(\Omega)}^{2}\] \[\leq \int_{E_{2}}\left|\partial_{t}\rho_{\theta}(t,x)-\partial_{t}\rho(t,x)\right|^{2}d\mu_{3}(t,x)\] \[+\left|\nu\right|\int_{E_{2}}\left|\Delta\rho_{\theta}(t,x)-\Delta\rho(t,x)\right|^{2}d\mu_{3}(t,x)\] \[+\int_{E_{2}}\Big{|}\operatorname{div}(\rho_{\theta}\nabla_{p_{\omega}}H(x,\rho_{\theta},\nabla\phi_{\omega}))-\operatorname{div}(\rho\nabla_{p}H(x,\rho,\nabla\phi))\Big{|}^{2}d\mu_{3}(t,x)\] \[+\int_{\Omega}|\rho_{\theta}(0,x)-\rho_{0}(x)|^{2}d\mu_{4}(t,x)\] \[\leq C_{2}(\epsilon_{1}^{2}+\epsilon_{2}^{2})\] for an appropriate constant \(C_{2}<\infty\). The proof of Theorem 3.1 is complete after rescaling \(\epsilon_{1}\) and \(\epsilon_{2}\). ## Appendix B Proof of Theorem 3.2. We follow the method used in [23] for a single PDE (see also Section 4 in [38] for a coupled system). Let us denote the solution of problem (11) by \(\left(\hat{\rho}_{\theta}^{n},\hat{\phi}_{\omega}^{n}\right)\in V=V_{0}^{2,2}\times V_{0}^{2,2}\). Due to conditions **(H4)**-**(H6)** and by using Lemma 1.4 of [39] on each equation, there exist constants \(C_{1}\), \(C_{2}\) such that \[\left\|\hat{\rho}_{\theta}^{n}\right\|_{V_{0}^{2,2}}\leq C_{1},\qquad\left\|\hat{\phi}_{\omega}^{n}\right\|_{V_{0}^{2,2}}\leq C_{2}.\] This shows that both sequences \(\{\hat{\rho}_{\theta}^{n}\}_{n\in\mathbf{N}}\), \(\{\hat{\phi}_{\omega}^{n}\}_{n\in\mathbf{N}}\) are uniformly bounded with respect to \(n\) in at least \(V\). These uniform energy bounds imply the existence of two subsequences (still denoted in the same way) \(\{\hat{\rho}_{\theta}^{n}\}_{n\in\mathbf{N}}\), \(\{\hat{\phi}_{\omega}^{n}\}_{n\in\mathbf{N}}\) and two functions \(\rho\), \(\phi\) in \(L^{2}\left(0,T;W_{0}^{1,2}(\Omega)\right)\) such that \[\hat{\rho}_{\theta}^{n}\rightarrow\rho\text{ weakly in }L^{2}\left(0,T;W_{0}^{1,2}(\Omega)\right),\qquad\hat{\phi}_{\omega}^{n}\rightarrow\phi\text{ weakly in }L^{2}\left(0,T;W_{0}^{1,2}(\Omega)\right).\] Next let us set \(q=1+\frac{d}{d+4}\in(1,2)\) and note that for conjugate exponents \(r_{1},r_{2}>1\) such that \(1/r_{1}+1/r_{2}=1\), \[\int_{\Omega_{T}}\left|\gamma\left(t,x,\hat{\rho}_{\theta}^{n},\nabla\hat{\phi}_{\omega}^{n}\right)\right|^{q}\leq\int_{\Omega_{T}}\left|\lambda\right|^{q}\left|\nabla\hat{\phi}_{\omega}^{n}\right|^{q}\leq\left(\int_{\Omega_{T}}\left|\lambda\right|^{r_{1}q}\right)^{1/r_{1}}\left(\int_{\Omega_{T}}\left|\nabla\hat{\phi}_{\omega}^{n}\right|^{r_{2}q}\right)^{1/r_{2}}.\] Let us choose \(r_{2}=2/q>1\).
Then we calculate \(r_{1}=\frac{r_{2}}{r_{2}-1}=\frac{2}{2-q}\). Hence, we have that \(r_{1}q=d+2\). Recalling the assumption \(\lambda\in L^{d+2}\left(\Omega_{T}\right)\) and the uniform bound on \(\nabla\hat{\phi}_{\omega}^{n}\), we subsequently obtain that for \(q=1+\frac{d}{d+4}\) there is a constant \(C<\infty\) such that \[\int_{\Omega_{T}}\left|\gamma\left(t,x,\hat{\rho}_{\theta}^{n},\nabla\hat{\phi}_{\omega}^{n}\right)\right|^{q}\leq C.\] On the other hand, it is clear that \(a_{1}\) is uniformly bounded; then, according to the HJB equation of (11), \(\left\{\partial_{t}\hat{\phi}_{\omega}^{n}\right\}_{n\in\mathbb{N}}\) is uniformly bounded with respect to \(n\) in \(L^{2}\left(0,T;W^{-1,2}(\Omega)\right)\). Then we can extract a subsequence (still denoted in the same way) \(\left\{\partial_{t}\hat{\phi}_{\omega}^{n}\right\}_{n\in\mathbb{N}}\) such that \[\partial_{t}\hat{\phi}_{\omega}^{n}\rightarrow\partial_{t}\phi\text{ weakly in }L^{2}\left(0,T;W^{-1,2}(\Omega)\right).\] Similarly, one shows that \[\partial_{t}\hat{\rho}_{\theta}^{n}\rightarrow\partial_{t}\rho\text{ weakly in }L^{2}\left(0,T;W^{-1,2}(\Omega)\right).\] Since the problem is nonlinear, the weak convergence of \(\hat{\phi}_{\omega}^{n}\) and \(\hat{\rho}_{\theta}^{n}\) in the space \(L^{2}\left(0,T;W_{0}^{1,2}(\Omega)\right)\) is not enough to prove that \(\phi\) and \(\rho\) are a solution of problem (10). To do this, we need the almost-everywhere convergence of the gradients for a subsequence of the approximating solutions \(\hat{\phi}_{\omega}^{n}\) and \(\hat{\rho}_{\theta}^{n}\). However, the uniform boundedness of \(\{\hat{\phi}_{\omega}^{n}\}_{n\in\mathbf{N}}\) and \(\{\hat{\rho}_{\theta}^{n}\}_{n\in\mathbf{N}}\) in \(L^{2}\left(0,T;W_{0}^{1,2}(\Omega)\right)\) and their weak convergence to \(\phi\) and \(\rho\), respectively, in that space allows us to conclude, by using Theorem 3.3 of [40] on each equation, that \[\nabla\hat{\phi}_{\omega}^{n}\rightarrow\nabla\phi\ \ \text{almost everywhere in}\ \ \Omega_{T},\] \[\nabla\hat{\rho}_{\theta}^{n}\rightarrow\nabla\rho\ \ \text{almost everywhere in}\ \ \Omega_{T}.\] Hence, we obtain that \(\{\hat{\phi}_{\omega}^{n}\}_{n\in\mathbf{N}}\) and \(\{\hat{\rho}_{\theta}^{n}\}_{n\in\mathbf{N}}\) converge respectively to \(\phi\) and \(\rho\) strongly in \(L^{p}\left(0,T;W_{0}^{1,p}(\Omega)\right)\) for every \(p<2\). It remains to discuss the convergence of \(\phi_{\omega}^{n}-\hat{\phi}_{\omega}^{n}\) and \(\rho_{\theta}^{n}-\hat{\rho}_{\theta}^{n}\) to zero. By the last step of the proof of Theorem 7.3 in [23], \(\left\{\phi_{\omega}^{n}-\hat{\phi}_{\omega}^{n}\right\}_{n\in\mathbb{N}}\) and \(\{\rho_{\theta}^{n}-\hat{\rho}_{\theta}^{n}\}_{n\in\mathbb{N}}\) go to zero strongly in \(L^{p}\left(\Omega_{T}\right)\) for every \(p<2\). Finally, we conclude the proof of the convergence in \(L^{p}\left(\Omega_{T}\right)\) for every \(p<2\).
2302.05490
Optimal Design and Cascading Failure Evaluation of Remedial Action Schemes
Remedial action schemes (RAS) are often seen as an alternative to building new transmission infrastructure to relieve congestion in the system. Consequently, there has been a rapid growth in the number of RAS in electric power systems across the world. However, most RAS rely on fixed parameters and hence cannot adapt to the rapidly evolving nature of the electric grid. In this paper, an optimization framework (RAS-SCOPF) to automate the RAS design procedure is proposed. The proposed framework is a mixed integer quadratic program (MIQP) that chooses a set of optimal RAS actions and minimizes load shed when a contingency occurs. The cost of operation of the RAS-SCOPF is compared against those of standard OPF and SCOPF formulations. Moreover, the risk of cascading failure for the different formulations are evaluated using a DC power flow based cascading failure simulator (CFS). The proposed framework is applied to the RTS-96 24-bus network. The inclusion of RAS allows the system to be operated at a lower cost while preventing any contingency from evolving into cascading blackouts.
Aditya Rangarajan, Line Roald
2023-02-10T20:02:24Z
http://arxiv.org/abs/2302.05490v1
# Optimal Design and Cascading Failure Evaluation of Remedial Action Schemes ###### Abstract Remedial action schemes (RAS) are often seen as an alternative to building new transmission infrastructure to relieve congestion in the system. Consequently, there has been a rapid growth in the number of RAS in electric power systems across the world. However, most RAS rely on fixed parameters and hence cannot adapt to the rapidly evolving nature of the electric grid. In this paper, an optimization framework (RAS-SCOPF) to automate the RAS design procedure is proposed. The proposed framework is a mixed integer quadratic program (MIQP) that chooses a set of optimal RAS actions and minimizes load shed when a contingency occurs. The cost of operation of the RAS-SCOPF is compared against those of standard OPF and SCOPF formulations. Moreover, the risk of cascading failure for the different formulations is evaluated using a DC power flow based cascading failure simulator (CFS). The proposed framework is applied to the RTS-96 24-bus network. The inclusion of RAS allows the system to be operated at a lower cost while preventing any contingency from evolving into cascading blackouts. System Integrity Protection Scheme, Remedial Action Schemes, Cascading Failure Simulator, MIQP ## I Introduction The introduction of competitive electricity markets, along with increasing electricity demand and renewable generation, has led to increased stress on the existing transmission infrastructure. As a result, the electric power system is forced to operate closer to its limits. Since post-contingency security constraints are often the source of congestion, there is an increasing reliance on post-contingency control to avoid post-contingency overloads and maintain secure system operation [1]. Post-contingency manual corrective actions of the operator (e.g., generator redispatch, adjusting transformer tap settings, etc.) can be too slow to arrest the propagation of disturbances. This has led to the global adoption of fast-acting system-wide protection systems, called Remedial Action Schemes (RAS), to maintain reliability [2]. According to the North American Electric Reliability Corporation (NERC), RAS are automatic protection systems that detect abnormal system conditions and take predetermined and fast control actions, including but not limited to generator rejection, load shedding, and line switching [3]. RAS are also commonly referred to as system integrity protection schemes (SIPS) or special protection systems (SPS). RAS can use measurements from and take remedial actions at remote locations of the system, and thus differ from local protection systems. Since RAS can reduce violations of post-contingency constraints, they are often viewed as an inexpensive alternative to building new transmission infrastructure [4]. However, increasing the number of RAS increases the operational complexity and poses several challenges as the power system continues to evolve rapidly. Existing procedures for designing RAS are often slow, requiring numerous offline simulations to ensure that the proposed action is sufficient and does not interact adversely with existing RAS and other protection devices [4, 5]. As a result, parameters of RAS, such as the conditions specified to trigger the RAS and the type of actions taken, typically do not change during real-time operations [6]. The slow design procedure, coupled with the use of fixed control parameters, may prevent RAS from adapting to rapidly evolving grid conditions.
Recent research has sought to address some of the issues and risks associated with corrective action identification. To improve the RAS design procedure, [7] proposes a sensitivity-based method to generate a set of triggering conditions and generator tripping actions to address post-contingency line overloads. However, the proposed method manually identifies suitable RAS actions, which limits the number of contingencies, operating conditions and control actions that can be studied. Further, recent research has shown the utility of considering the risk of corrective action failure (i.e., the risk of post-contingency corrective actions not being implemented correctly) in identifying an optimal system dispatch [8, 9, 10]. These methods consider corrective generator re-dispatch as the only corrective action, which simplifies modelling and solution of the optimization problem. However, generator re-dispatch is typically neither automatic nor quick enough to be considered as a RAS. In this paper, we extend the above studies by proposing an optimization framework to design RAS and develop a cascading failure simulation to assess the risk of RAS misoperation. Our first contribution is an extension of the traditional security constrained optimal power flow (SCOPF) to design and optimize RAS settings at an operational time frame, which we refer to as RAS-SCOPF. The RAS-SCOPF is a mixed-integer optimization problem which models system operations in three stages, namely (i) pre-contingency operation, (ii) intermediate (post-contingency, pre-RAS) operations, and (iii) post-contingency, post-RAS operations. An innovative aspect of this model is that the second stage includes a set of logical constraints that describes whether or not the RAS is triggered, with RAS actions as decision variables. Our second contribution is to develop a cascading failure simulation to assess the risk of the RAS not working as intended due to, e.g., unexpected operating conditions. Our cascading failure simulator is based on the DCSIMSEP simulator [11], but was reimplemented in Julia [12] and extended to include a model of the relevant RAS schemes. Our third contribution is to demonstrate our proposed method in a case study on the IEEE RTS-96 single area system. First, we show that the proposed RAS-SCOPF method reduces both operating cost and cascading risk compared to the traditional OPF and SCOPF formulations. Second, we assess the risk of RAS misoperation under loading conditions different from what it was designed for. The results highlight the benefits of reoptimizing the RAS settings in operations. The rest of the paper is structured as follows. Section II describes the mathematical formulation of the proposed optimization problem, while Section III discusses the set-up of the cascading simulations. Section IV presents the results of the case study, and Section V concludes the paper. ## II Optimal Design of Remedial Action Schemes RAS schemes are typically used to mitigate the impact of particular contingencies on system operations, thus allowing more effective use of existing transmission capacity in normal operations. While RAS may resolve different kinds of post-contingency problems and may involve different kinds of control actions, we focus on alleviating post-contingency line overloads using generation tripping. Our optimization aims to optimally choose the RAS actions, i.e. which generators are tripped once the RAS is triggered.
We assume that the RAS is triggered when the considered line is overloaded, regardless of what caused the line flow to exceed the limit. As a result, the RAS action is shared among all considered contingencies. ### _Formulation of the RAS-SCOPF_ We consider a power system where the sets \(\mathcal{G},\mathcal{B}\) and \(\mathcal{L}\) represent the generators, buses and lines in the system, and \(|\mathcal{G}|\) represents the number of elements in \(\mathcal{G}\). The parameters \(P_{g}^{max}\) and \(P_{g}^{min}\) represent the maximum and minimum generation limits, while \(P_{f}^{max}\) is the maximum transmission line capacity. We use **bold fonts** for decision variables, with the vectors \(\mathbf{P_{g}^{o}}\), \(\mathbf{\theta^{o}}\) and \(\mathbf{P_{f}^{o}}\) representing the pre-contingency generation, voltage angles and power flows, respectively. Similar decision variables with superscripts \(i\) and \(c\) are used to represent the power flows in the intermediate and the post-RAS stage, respectively. Generator and voltage variables related to individual generators or buses are denoted with subscript \(i\) or \(j\), e.g. \(\mathbf{P_{gi}^{o}}\), while lines have double subscripts \(ij\) representing either end of the line, e.g. \(\mathbf{P_{fij}^{o}}\). #### II-A1 Objective function The objective function is given by \[\min\ \sum_{i\in\mathcal{G}}f_{i}(\mathbf{P_{gi}^{o}})+\sum_{k\in\mathcal{C}}\left(\sum_{i\in\mathcal{B}}\gamma(P_{di}^{o}-\mathbf{P_{di}^{c,k}})+\sum_{i\in\mathcal{G}}\rho(1-\mathbf{z_{gi}^{j}})\right) \tag{1}\] The first term represents the generation cost of normal operation, with \[f_{i}(\mathbf{P_{gi}^{o}})=c_{2,i}(\mathbf{P_{gi}^{o}})^{2}+c_{1,i}\mathbf{P_{gi}^{o}},\] where \(c_{2,i}\) and \(c_{1,i}\) are the quadratic and linear cost coefficients of generator \(i\), respectively. The second term penalizes post-contingency load shedding, with \(P_{di}^{o}\) representing the normal load and \(\mathbf{P_{di}^{c,k}}\) representing the load served after a RAS is triggered. The parameter \(\gamma\) represents the load shedding cost. The third term minimizes the magnitude of the RAS action, represented as the number of generators tripped multiplied by a penalty factor \(\rho\). #### II-A2 Pre-contingency operating constraints We use a DC power flow model to model our system. The pre-contingency nodal power balance and power flows are given by \[\mathbf{P_{gi}^{o}}-P_{di}^{o}=\sum_{j\in\mathcal{B}}\mathbf{P_{fij}^{o}}\qquad\forall i\in\mathcal{B} \tag{2}\] \[\mathbf{P_{fij}^{o}}=-b_{ij}(\mathbf{\theta_{i}^{o}}-\mathbf{\theta_{j}^{o}})\qquad\forall ij\in\mathcal{L} \tag{3}\] where \(b_{ij}\) is the admittance of line \(ij\). Generator and line limits are enforced by \[P_{gi}^{min}\leq\mathbf{P_{gi}^{o}}\leq P_{gi}^{max}\qquad\forall i\in\mathcal{G} \tag{4}\] \[-P_{fij}^{max}\leq\mathbf{P_{fij}^{o}}\leq P_{fij}^{max}\qquad\forall ij\in\mathcal{L} \tag{5}\] #### II-A3 Intermediate operating constraints We next model the intermediate operating condition just after each contingency (prior to the implementation of any RAS). These constraints are included for all contingencies \(k\in\mathcal{C}\), where \(\mathcal{C}\) is the set of all contingencies. The subset of critical contingencies that the RAS is designed to protect against is denoted by \(\mathcal{C}_{M}\subset\mathcal{C}\). To make up for generation imbalances following a generation or load outage, we assume a distributed slack model and redispatch generators using pre-determined participation factors \(K_{i}\).
The new generation levels \(\mathbf{P_{g}^{i,k}}\) and associated generation limit constraints are given by \[\mathbf{P_{gi}^{i,k}}=\mathbf{P_{gi}^{o}}+K_{i}\left[\sum_{j\in\mathcal{B}}(P_{dj}^{i,k}-P_{dj}^{o})+\sum_{j\in\mathcal{G}}(\mathbf{P_{gj}^{o}}-\mathbf{P_{gj}^{i,k}})+\mathbf{\Delta}^{i}\right],\qquad\forall i\in\mathcal{G}_{k},k\in\mathcal{C} \tag{6}\] \[P_{gi}^{min}\leq\mathbf{P_{gi}^{i,k}}\leq P_{gi}^{max},\qquad\forall i\in\mathcal{G},k\in\mathcal{C} \tag{7}\] where \(\mathcal{G}_{k}\) is the set of online generators after contingency \(k\) and the variable \(\mathbf{\Delta}^{i}\) represents the power mismatch in the network that arises because the participation factors \(K_{i}\) of the online generators may not sum to 1. The intermediate power balance and power flow constraints are given by \[\mathbf{P_{gi}^{i,k}}-P_{di}^{i,k}=\sum_{j\in\mathcal{B}}\mathbf{P_{fij}^{i,k}}\qquad\forall i\in\mathcal{B},k\in\mathcal{C} \tag{8}\] \[\mathbf{P_{fij}^{i,k}}=-b_{ij}(\mathbf{\theta_{i,k}^{i}}-\mathbf{\theta_{j,k}^{i}})\qquad\forall ij\in\mathcal{L}_{k},k\in\mathcal{C} \tag{9}\] where \(\mathcal{L}_{k}\) is the set of non-outaged lines in contingency \(k\). For the contingencies the RAS is designed to protect against, denoted by \(\mathcal{C}_{M}\), we do not enforce power flow limits, as we assume there may be overloads. For all other contingencies \(k\in\mathcal{C}\backslash\mathcal{C}_{M}\), we enforce post-contingency line limits, \[-P_{fij}^{max}\leq\mathbf{P_{fij}^{i,k}}\leq P_{fij}^{max}\qquad\forall ij\in\mathcal{L},k\in\mathcal{C}\backslash\mathcal{C}_{M} \tag{10}\] #### II-A4 RAS design and triggering constraints In the intermediate stage, we also include constraints to evaluate whether the RAS is triggered by an overload. These constraints are included for all lines in the set of monitored lines, denoted by \(\mathcal{L}_{M}\), and for the contingencies \(\mathcal{C}_{M}\). Note that the set \(\mathcal{L}_{M}\) can include one or more lines. The following logic constraints assess whether the loading on a monitored line \(ij\in\mathcal{L}_{M}\) exceeds its maximum loading after a contingency \(k\): \[\mathbf{P^{i,k}_{fij}}-P^{max}_{fij}\geq m(1-\mathbf{z^{k}_{1,ij}})\qquad\forall ij\in\mathcal{L}_{M},k\in\mathcal{C}_{M} \tag{11}\] \[\mathbf{P^{i,k}_{fij}}-P^{max}_{fij}\leq M\mathbf{z^{k}_{1,ij}}\qquad\forall ij\in\mathcal{L}_{M},k\in\mathcal{C}_{M} \tag{12}\] \[-\mathbf{P^{i,k}_{fij}}-P^{max}_{fij}\geq m(1-\mathbf{z^{k}_{2,ij}})\qquad\forall ij\in\mathcal{L}_{M},k\in\mathcal{C}_{M} \tag{13}\] \[-\mathbf{P^{i,k}_{fij}}-P^{max}_{fij}\leq M\mathbf{z^{k}_{2,ij}}\qquad\forall ij\in\mathcal{L}_{M},k\in\mathcal{C}_{M} \tag{14}\] \[\mathbf{z^{k}_{1,ij}}+\mathbf{z^{k}_{2,ij}}\geq\mathbf{z^{k}_{3,ij}}\qquad\forall ij\in\mathcal{L}_{M},k\in\mathcal{C}_{M} \tag{15}\] \[\mathbf{z^{k}_{1,ij}}+\mathbf{z^{k}_{2,ij}}\leq\mathbf{z^{k}_{3,ij}}\qquad\forall ij\in\mathcal{L}_{M},k\in\mathcal{C}_{M} \tag{16}\] Eqs. (11) and (12) set the variable \(\mathbf{z^{k}_{1,ij}}=1\) if the line is overloaded in the positive flow direction, and \(\mathbf{z^{k}_{1,ij}}=0\) otherwise. Eqs. (13) and (14), with \(\mathbf{z^{k}_{2,ij}}\in\{0,1\}\), enforce the same condition for the negative flow direction. If the line is overloaded in either direction, (15) and (16) set \(\mathbf{z^{k}_{3,ij}}=1\) to indicate that contingency \(k\) causes a RAS-triggering condition on line \(ij\), and ensure that \(\mathbf{z^{k}_{3,ij}}=0\) otherwise.
The parameters \(M\) and \(m\) are big-M constants that represent valid upper and lower bounds on the left-hand side of the constraints. The binary variable \(\mathbf{y^{k}_{j}}\) indicates whether the \(j^{th}\) RAS scheme has been triggered. Specifically, \(\mathbf{y^{k}_{j}}=1\) if \(\mathbf{z^{k}_{3,ij}}=1\) for one or more lines in the monitored set \(\mathcal{L}^{j}_{M}\) and \(\mathbf{y^{k}_{j}}=0\) otherwise. For all \(k\in\mathcal{C}_{M}\), this condition is expressed by \[\mathbf{y^{k}_{j}}\leq\sum_{(i,j)\in\mathcal{L}^{j}_{M}}\mathbf{z^{k}_{3,ij}}\leq|\mathcal{L}^{j}_{M}|\,\mathbf{y^{k}_{j}}.\] ## III Cascading simulation Modelling cascading failures in power systems is challenging because there are several ways in which an initiating contingency can evolve into a series of cascading outages, e.g., cascading thermal overloads, voltage instability, transient instability, hidden failures in protection systems or human errors [13, 14]. Here, we focus on cascading events driven by thermal overloads, as our goal is to evaluate the effectiveness of a RAS scheme in preventing such cascading events. We base our cascading simulator on a DC power flow model, which is computationally very efficient and hence widely used to assess system behaviour when subjected to multiple contingencies and operating conditions. The drawback of using a DC power flow based model is that it does not capture reactive power and voltage variability, and it assumes that the system reaches a steady state after every contingency. Thus, DC models cannot capture the effects of dynamic phenomena like voltage and transient instability, and cannot handle situations where a steady-state solution does not exist. However, in the early stages of a cascade, before the loss of dynamic stability, DC power flow models can describe the evolution of the system with good accuracy. Thus, choosing an appropriate definition of system failure helps limit the difference between actual system behaviour and that predicted by models based on DC power flow approximations in the case of cascading overloads [15]. Figure 1 illustrates the cascading failure simulator designed to test the performance of the designed RAS. Our simulator builds on DCSIMSEP [11]. A major difference between DCSIMSEP and our simulator is RAS modelling, which is absent in DCSIMSEP. Another difference is the way in which generators and loads are redispatched, which we do by solving the optimization problem (31)-(39). Further, our simulator does not model overcurrent relays to track the time elapsed between successive outages. The steps involved in a cascading simulation are summarized in Fig. 1 and described below in more detail: _Step 0: Initialization_ At the beginning of the simulation, the system has to be initialized appropriately. When evaluating the risk of cascading failures of the different formulations, the system is initialized using the RAS-SCOPF, OPF or SCOPF, respectively. When analysing the effectiveness of the RAS for different loading conditions, the system is initialized with the solutions of the RAS-aware SCOPF. _Step 1: Apply a contingency_ We apply an \(n-1\) contingency to the system by modifying the status of the outaged line. _Step 2: Check for system failure_ We calculate the line admittance matrix \(B^{F}\) considering all outages in the system and identify all the resulting islands.
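To make the island identification in Step 2 concrete, it amounts to a connected-components computation on the bus adjacency graph. The sketch below (not the Julia implementation used in the paper) illustrates this in Python with SciPy; the branch list, in-service statuses, and 10% threshold are illustrative.

```python
# Illustrative island check: build the bus adjacency from in-service branches,
# find connected components, and flag system failure when at least 10% of the
# buses fall outside the largest island.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def find_islands(n_bus, branches, in_service):
    """branches: list of (from_bus, to_bus) pairs; in_service: 0/1 per branch."""
    fb = np.array([b[0] for b, s in zip(branches, in_service) if s], dtype=int)
    tb = np.array([b[1] for b, s in zip(branches, in_service) if s], dtype=int)
    adj = coo_matrix((np.ones(len(fb)), (fb, tb)), shape=(n_bus, n_bus))
    return connected_components(adj, directed=False)  # (n_islands, bus labels)

def system_failed(n_bus, labels):
    return n_bus - np.bincount(labels).max() >= 0.10 * n_bus

# Toy 4-bus example: tripping the middle branch splits the system in two.
n_isl, labels = find_islands(4, [(0, 1), (1, 2), (2, 3)], [1, 0, 1])
print(n_isl, system_failed(4, labels))  # 2 islands; half the buses lost -> True
```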
Here, we define system failure as the state when at least 10% of the buses are disconnected from the largest island. If the system satisfies the chosen definition of system failure, we terminate the simulation. If not, we continue to Step 3. _Step 3: Implement RAS action_ If the applied contingency causes overloads in any one of the lines monitored by a RAS, this RAS is triggered. In this case, we trip generators according to the pre-defined RAS action and shed the necessary amount of load. If there are no RAS present in the system or if the RAS has already been triggered, we do nothing. _Step 4: Redispatch generators and load_ If there is a mismatch between load and generation because the system is separated into several islands or a RAS scheme has led to generation tripping and load shed, we redispatch generators and loads in every island by solving the following optimization problem: \[\min\sum_{i\in\mathcal{B}_{isl}}(P_{di}^{o}-\mathbf{P_{di}^{n}})+\sum_{i\in\mathcal{G}_{isl}}(1-\mathbf{z_{gi}^{n}}) \tag{31}\] subject to \[\text{for all generators in the island }i\in\mathcal{G}_{isl} \tag{32}\] \[\mathbf{z_{gi}^{n}}\leq z_{gi}^{o} \tag{33}\] \[\mathbf{P_{gi}^{n}}-\Bigg{(}P_{gi}^{o}+K_{i}\sum_{j\in\mathcal{B}_{isl}}(\mathbf{P_{dj}^{n}}-P_{dj}^{o})+K_{i}\left[\sum_{j\in\mathcal{G}_{isl}}(\mathbf{P_{gj}^{n}}-P_{gj}^{o})+\Delta\right]\Bigg{)}\leq M(1-\mathbf{z_{gi}^{n}}) \tag{34}\] \[\mathbf{P_{gi}^{n}}-\Bigg{(}P_{gi}^{o}+K_{i}\sum_{j\in\mathcal{B}_{isl}}(\mathbf{P_{dj}^{n}}-P_{dj}^{o})+K_{i}\left[\sum_{j\in\mathcal{G}_{isl}}(\mathbf{P_{gj}^{n}}-P_{gj}^{o})+\Delta\right]\Bigg{)}\geq-M(1-\mathbf{z_{gi}^{n}}) \tag{35}\] \[\sum_{i\in\mathcal{G}_{isl}}\mathbf{P_{gi}^{n}}=\sum_{i\in\mathcal{B}_{isl}}\mathbf{P_{di}^{n}} \tag{36}\] \[\mathbf{z_{gi}^{n}}P_{gi}^{min}\leq\mathbf{P_{gi}^{n}}\leq\mathbf{z_{gi}^{n}}P_{gi}^{max} \tag{37}\] and for all buses in the island \(i\in\mathcal{B}_{isl}\) \[0\leq\mathbf{P_{di}^{n}}\leq P_{di}^{o} \tag{38}\] Fig. 1: Flowchart for the Cascading Failure Simulator Here, the superscripts \(o\) and \(n\) refer to the generation and load before and after the redispatch, respectively. Similar to the RAS-SCOPF, the generators are redispatched using a distributed slack bus model and participation factors \(K_{i}\). _Step 5: Trip overloaded lines_ We solve a DC power flow to compute the line flows and identify all lines in the island that are overloaded. Because the protective equipment for each line is assumed to have inverse time characteristics (i.e., it trips faster the larger the overload is), we trip only the line with the maximum overload, unless it is monitored by a RAS and the RAS has not been triggered yet. We then return to Step 2. ## IV Case Study The RAS-SCOPF is used to design a remedial action scheme for the 24-bus system shown in Fig. 2. The system is based on the single area IEEE RTS-96 system described in [16], with some modifications. To make the case study more interesting, we reduce the line rating of all lines to 80% of the original capacities listed in [16]. This makes the system more congested, with a subset of contingencies causing post-contingency line overloads. We only consider line outage constraints on non-radial lines, and the capacity of the radial line 11 (from bus 7 to 8) is increased to 150% of its original rating to ensure that it is never binding. While designing the RAS, we consider only the peak load scenario with a total load of 2850 MW.
We assume that only generators 1-16 are responsible for maintaining power balance in the system. For these generators, we define non-zero participation factors based on their maximum generation capacity, \[K_{i}=\frac{P_{gi}^{max}}{\sum_{k=1}^{16}P_{gk}^{max}}\qquad\text{ for }i=1,...,16. \tag{40}\] We assume a fixed penalty \(\gamma\) = $5000/MW for load shedding and a fixed cost \(\rho\) = $1000 for every generator that is shed from the network. We set \(M=-m=100\) p.u. ### _RAS design_ Table I shows all the contingencies that would result in post-contingency overloads on other lines, assuming the initial dispatch is obtained using the OPF formulation. We observe that there are several different contingencies that lead to an overload on line 23. We therefore demonstrate our proposed method by designing a RAS scheme that monitors overloads on this line. The set of contingencies \(\mathcal{C}_{M}\) which the RAS is designed to protect against thus corresponds to the outages of lines \(\{7,18,21,22,27,29\}\). Fig. 2 shows the location of the monitored line (in green) and the lines that correspond to the critical contingencies (in red). Fig. 2: IEEE RTS-96 24-bus system. Line 23 is shown in green and the lines whose outages cause overloads on line 23 are shown in red. Solving the RAS-SCOPF with a single RAS scheme on line 23 and the given set of contingencies, we obtain a solution with pre-contingency generation cost $62784.0. When detecting an overload on line 23, the RAS trips the generator \(\mathcal{G}_{RAS}=\{22\}\), which corresponds to 155 MW of generation capacity. The resulting power imbalance is balanced by generators 1-16, and does not cause any further overloads or load shed. As a result, we declare that our RAS scheme is effective. ### _Comparison with other OPF formulations_ We next compare the RAS-SCOPF to the OPF and SCOPF formulations described in Section II.B. We first solve each optimization problem and observe the corresponding pre-contingency generation costs, and then evaluate the risk of post-contingency cascading failure by running cascading simulations for all contingencies listed in Table I. Table II shows the pre-contingency generation cost for the three OPF formulations. Dispatching generators using the OPF results in the lowest pre-contingency operating cost of the system. The RAS-SCOPF results in 3% higher pre-contingency costs than the OPF, as it needs to ensure that the generator redispatch after the RAS action is feasible and that the initial dispatch is secure against critical contingencies that do not trigger the RAS. However, the cost of the RAS-SCOPF is significantly lower than that of the SCOPF, which results in a cost increase of nearly 12%. This is because the SCOPF needs to ensure that all contingency constraints are satisfied without post-contingency generation tripping actions. When considering the outcome of the cascading simulations for each solution, also listed in Table II, we observe that the low-cost OPF solution has a significantly higher risk of cascading failure than the other two solutions. When the generators are dispatched using the OPF, all the contingencies listed in Table I result in cascading failure with a total load shed of 7832.8 MW across all contingencies. With the RAS-SCOPF and the SCOPF, none of the contingencies result in a cascading failure. Based on these results, we conclude that the RAS we designed reduces operational costs relative to the SCOPF solution and lowers the risk of cascading failure relative to the OPF solution.
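The evaluation behind these numbers reduces to a simple driver loop over the initiating contingencies. In the sketch below, `run_cascade` is a hypothetical stand-in for the Section III simulator (not an actual interface from the paper), and the dummy functions merely mimic the aggregate outcomes reported above.

```python
# Risk-evaluation driver: apply each initiating contingency to a dispatch and
# tally system failures and total load shed (cf. the Table II comparison).
# run_cascade(dispatch, outage) is assumed to return (failed, load_shed_mw).
def evaluate_dispatch(dispatch, contingencies, run_cascade):
    failures, total_shed = 0, 0.0
    for outage in contingencies:
        failed, shed = run_cascade(dispatch, outage)
        failures += int(failed)
        total_shed += shed
    return failures, total_shed

# Dummy simulators reproducing the reported aggregates: every one of the six
# critical contingencies cascades under OPF, none do under the RAS-SCOPF.
dummy_opf = lambda dispatch, outage: (True, 7832.8 / 6.0)
dummy_ras = lambda dispatch, outage: (False, 0.0)
print(evaluate_dispatch("OPF", range(6), dummy_opf))  # -> (6, ~7832.8)
print(evaluate_dispatch("RAS", range(6), dummy_ras))  # -> (0, 0.0)
```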
### _Sensitivity to load distribution_ When designing the RAS, we only consider the peak load scenario. To assess how the RAS performs under different load conditions, cascade simulations are run for a range of loading conditions. To vary the load distribution, we first scale the load at every bus by a factor that is randomly drawn from a uniform distribution \(X\), and then rescale the load to ensure that the total load in the system remains at 2850 MW, \[P_{di}^{n}=X_{i}P_{di}^{o}\,\frac{\sum_{j\in\mathcal{B}}P_{dj}^{o}}{\sum_{j\in\mathcal{B}}X_{j}P_{dj}^{o}}\qquad\text{ for }i\in\mathcal{B}. \tag{41}\] When studying the performance of the RAS under small load disturbances, \(X\sim U(0.9,1.1)\), while \(X\sim U(0.5,1.5)\) for larger load disturbances. For each load scenario, the system is initialized by solving the RAS-SCOPF. Any scenario that renders the RAS-SCOPF infeasible is discarded. When load deviations are small (\(X\sim U(0.9,1.1)\)), around 70% (68 out of 100) of scenarios have a feasible solution to the RAS-SCOPF. For most of the feasible load scenarios, the RAS is capable of preventing any critical contingency listed in Table I from evolving into a cascading event. However, there are 3 load scenarios where, for at least one critical contingency, the RAS action is not sufficient to remove the overload on line 23, resulting in a cascading failure. The total load shed across these scenarios is 1399.2 MW. In the case of larger load deviations (\(X\sim U(0.5,1.5)\)), only 55% (55 out of 100) of the scenarios have a feasible solution to the RAS-SCOPF. Among those scenarios, there are many that lead to cascading events after an initial line outage. Since the outage of line 7 causes the largest overload in line 23, we present results pertaining only to the outage of line 7. The results are similar for all other contingencies in the set \(\mathcal{C}_{M}\), except that fewer scenarios result in cascading failure. Out of the 55 feasible scenarios, there are 16 scenarios where the RAS action is not sufficient to prevent cascading failure when line 7 is outaged. In all these cases, the RAS was not able to remove the overload in line 23, resulting in overloads on other lines and eventually leading to cascading failure. For example, in one of the scenarios, the outage of line 7 triggers the RAS, but the RAS action is insufficient to alleviate the overload on line 23. Thus, line 23 is tripped, followed by subsequent outages of lines 22 and 21, resulting in a total load shed of 1013.6 MW. The scenario described above was the worst case observed across all scenarios and contingencies. In 11 additional load scenarios, the outage of line 7 causes overloads in lines that are not monitored by the RAS. These overloads do not trigger the RAS, but cause cascading outages involving other lines. In these cases, the system separates into multiple islands and a significant amount of load shed (up to 14.5%) occurs. From these results, we conclude that RAS schemes should either be designed to be robust to large deviations in the loading condition, or that the RAS actions should be updated to reflect changing loading conditions using, e.g., the RAS-SCOPF. ## V Conclusions Remedial action schemes (RAS) are an important tool to reduce congestion in power systems operation. However, RAS pose several challenges to grid operators due to their inability to adapt to changing conditions and the added risk of implementation failure.
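A short sketch of the scenario generation in Eq. (41); the bus-load vector below is an illustrative assumption (it merely sums to the 2850 MW total used in the paper):

```python
# Load-scenario generation following Eq. (41): scale each bus load by a
# uniform random factor, then rescale so the system total is preserved.
import numpy as np

rng = np.random.default_rng(0)
Pd_o = np.array([97.0, 194.0, 265.0, 391.0, 671.0, 1232.0])  # illustrative loads, sum = 2850 MW

def draw_scenario(Pd_o, spread):
    # spread = 0.1 for X ~ U(0.9, 1.1), spread = 0.5 for X ~ U(0.5, 1.5)
    X = rng.uniform(1 - spread, 1 + spread, size=Pd_o.shape)
    scaled = X * Pd_o
    return scaled * Pd_o.sum() / scaled.sum()

Pd_n = draw_scenario(Pd_o, 0.1)
assert np.isclose(Pd_n.sum(), Pd_o.sum())  # total demand unchanged
```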
To address these challenges, we propose the RAS-SCOPF to choose a set of optimal RAS actions in response to current loading conditions. We further implement a DC power flow based cascading failure simulator to evaluate the risk of cascading failure when the RAS is present in the system and when it is not. The proposed method is applied to the RTS-96 24-bus network. Using the RAS-SCOPF lowered operational costs compared to the SCOPF, while ensuring the same level of security against all critical contingencies. We further observe that the RAS designed using RAS-SCOPF is robust against small deviations in load, with very few scenarios resulting in system failure. However, the RAS was designed for only a single load scenario and is not effective in preventing cascading failures for loading scenarios that differ too much from the design scenario. This demonstrates the need for considering several load scenarios when solving the RAS-SCOPF or updating the RAS actions in real time. In future work, we aim to develop algorithms that solve the RAS-SCOPF efficiently for larger systems and multiple scenarios, while accounting for the RAS failure probabilities.
2310.00921
Extensions realizing affine datum: the Wells derivation
We develop the Wells derivation for extensions realizing affine datum in arbitrary varieties; in particular, we show there is an exact sequence connecting the group of compatible automorphisms determined by the datum and the subgroup of automorphisms of an extension which preserves the extension's kernel. This implies a homomorphism between $2^{\mathrm{nd}}$-cohomology groups which realizes a group of kernel-preserving automorphisms of an extension as itself an extension of a subgroup of compatible automorphisms by the group of derivations of the datum. A refinement of this general Wells's-type theorem is given for a restricted class of varieties with a difference term which include any variety of groups with multiple operators in the sense of Higgins. The same results are obtained for nonabelian extensions in any variety of $R$-modules expanded by multilinear operations.
Alexander Wires
2023-10-02T06:25:31Z
http://arxiv.org/abs/2310.00921v3
# Extensions realizing affine datum: the Wells derivation ###### Abstract. We develop the Wells derivation for extensions realizing affine datum in arbitrary varieties; in particular, we show there is an exact sequence connecting the group of compatible automorphisms determined by the datum and the subgroup of automorphisms of an extension which preserves the extension's kernel. This implies a homomorphism between \(2^{\mathrm{nd}}\)-cohomology groups which realizes a group of kernel-preserving automorphisms of an extension as itself an extension of a subgroup of compatible automorphisms by the group of derivations of the datum. The same results are obtained for nonabelian extensions in any variety of \(R\)-modules expanded by multilinear operations. ## 1. Introduction We consider an application of the machinery developed in Wires [13] for extensions which realize affine datum in arbitrary varieties of universal algebras. Our reference is the following theorem of Charles Wells. **Theorem 1.1**.: ([12]) Let \(\pi:G\to Q\) be a surjective group homomorphism with \(K=\ker\pi\). There are homomorphisms and a set map \(C\to H^{2}_{\bar{\alpha}}(Q,ZK)\) such that \[1\longrightarrow Z^{1}_{\bar{\alpha}}(Q,ZK)\longrightarrow\operatorname{Aut}_{K}G\longrightarrow C(Q,K,\alpha)\longrightarrow H^{2}_{\bar{\alpha}}(Q,ZK)\] is exact. The extension \(\pi:G\to Q\) determines datum \((Q,K,\alpha)\) where the homomorphism \(\alpha:Q\to\operatorname{Out}K\) is induced by the conjugation action of \(Q\) on \(K\) afforded by a lifting of \(\pi\). The group \(\operatorname{Aut}_{K}G\) is the subgroup of automorphisms of \(G\) which preserve \(K\) set-wise. There is an action of \(\operatorname{Aut}K\times\operatorname{Aut}Q\) on the \(2\)-cochains of the datum given by simultaneously permuting their domains and codomains in the natural way. The subgroup \(C(Q,K,\alpha)\leq\operatorname{Aut}K\times\operatorname{Aut}Q\) consists of those pairs of automorphisms which satisfy a compatibility condition concerning the homomorphism \(\alpha\) which guarantees that the set of \(2\)-cocycles is closed under the action of the subgroup \(C(Q,K,\alpha)\). The map \(C(Q,K,\alpha)\to H^{2}_{\bar{\alpha}}(Q,ZK)\) is commonly referred to as the _Wells map_, or _Wells derivation_, and is indeed a principal derivation for the action of compatible automorphisms \(C(Q,K,\alpha)\) on \(H^{2}_{\bar{\alpha}}(Q,ZK)\). The proof of Wells's theorem is intimately related to how the equivalence on \(2\)-cocycles which defines second-cohomology classes is characterized by stabilizing isomorphisms between extensions. A more thorough explanation can be found in the monograph of Passi, Singh and Yadav [10] and in Wells [12], of course. It is important to note that Wells's theorem applies to all group extensions and not just those with abelian kernels; however, we can utilize the development in Wires [13] to consider Wells's argument in the restricted case of group extensions with abelian kernels and extend it to the general setting of extensions realizing affine datum in arbitrary varieties of universal algebras. This is the principal content of this manuscript. **Theorem 1.2**.: Suppose \(\mathcal{U}\) is a variety containing affine datum \((Q,A^{\alpha,\tau},*)\) which is realized by an extension \(\pi:A\to Q\) with associated \(2\)-cocycle \(T\).
Then we have the exact sequence \[1\longrightarrow\operatorname{Der}(Q,A^{\alpha,\tau},*)\longrightarrow\operatorname{Aut}_{\alpha}A\stackrel{{\psi}}{{\longrightarrow}}C(Q,A^{\alpha,\tau},*)\stackrel{{ W_{T}}}{{\longrightarrow}}H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*).\] By an abuse of notation, we define \(\ker W=\left\{[T]\in H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*):\ker W_{T}=C(Q,A^{\alpha,\tau},*)\right\}\) which will be a subgroup of \(2^{\text{nd}}\)-cohomology. For group datum \((Q,K,\phi)\) with \(K\) abelian, we write \(K\rtimes_{(\phi,f)}Q\) for the extension realizing the datum defined by modifying the operations of the semidirect product \(K\rtimes_{\phi}Q\) by the addition of the \(2\)-cocycle \(f\). **Theorem 1.3**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum contained in the variety \(\mathcal{U}\). For each \([T]\in H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) there is an action \(\phi_{T}:\ker W_{T}\to\operatorname{Aut}\operatorname{Der}(Q,A^{\alpha,\tau},*)\) and a map \[\Pi:H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\to H^{2}\left(\ker W_{T},\operatorname{Der}(Q,A^{\alpha,\tau},*),\phi_{T}\right)\] such that \(\operatorname{Aut}_{\hat{\alpha}}A_{T}(Q,A^{\alpha,\tau},*)\approx\operatorname{Der}(Q,A^{\alpha,\tau},*)\rtimes_{(\phi_{T},\Pi([T]))}\ker W_{T}\) and the restriction of \(\Pi\) to \(\ker W\) is a group homomorphism; in particular, \(\operatorname{Aut}_{\hat{\alpha}}A(\alpha)/\Delta_{\alpha\alpha}\approx\operatorname{Der}(Q,A^{\alpha,\tau},*)\rtimes_{\phi_{T}}C(Q,A^{\alpha,\tau},*)\). The paper Wires [13] examines the deconstruction/reconstruction of extensions realizing affine datum in arbitrary varieties of universal algebras. As a step toward general extensions (what is often called nonabelian cohomology), the paper Wires [14] also examines the parameters for characterizing extensions in varieties of \(R\)-modules expanded by multilinear operations. For abelian ideals with unary actions, this agrees with the development for general affine datum. Varieties of \(R\)-modules expanded by multilinear operations can be seen as a special case of Higgins' groups with multiple operators formalism [6], but are still general enough to include many examples of multilinear algebras such as rings (Everett [5]), associative algebras (Agore and Militaru [1], Hochschild [7]), Lie algebras (Inassaridze, Khmaladze and Ladra [9]), Leibniz algebras (Casas, Khmaladze and Ladra [3]), dendriform algebras and bilinear Rota-Baxter algebras (Das and Rathee [4]), Lie-Yamaguti algebras (Yamaguti [15]) or conformal algebras (Bakalov, Kac and Voronov [2], Hou and Zhao [8], Smith [11]) to name just a few well-studied classes. The constructions in [13] parametrizing nonabelian extensions recover in a uniform manner the cohomological classification of extensions previously developed for these different varieties. The version of Wells's theorem for these varieties proceeds in the same manner as the affine datum case with a modification; namely, the values of the Wells map reside in the free abelian group generated by \(2^{\operatorname{nd}}\)-cohomology for the variety. This is done to provide the Wells map with an appropriate codomain since cohomology classes in \(H^{2}_{\mathcal{V}}(Q,I)\) for an arbitrary subvariety \(\mathcal{V}\) may not be closed under the natural addition of \(2\)-cocycles induced by \(I\). **Theorem 1.4**.: Let \(\mathcal{V}\) be a variety of \(R\)-modules expanded by multilinear operations.
If \(A\in\mathcal{V}\) is an extension \(\pi:A\to Q\) with \(I=\ker\pi\) and has associated \(2\)-cocycle \(T\), then there exists an exact sequence \[1\longrightarrow\operatorname{Der}(Q,I)\longrightarrow\operatorname{Aut}_{I}A\stackrel{{\psi}}{{\longrightarrow}}\operatorname{Aut}I\times\operatorname{Aut}Q\xrightarrow{W_{T}}FA\left(H^{2}_{\mathcal{V}}(Q,I)\right).\] ## 2. Preliminaries In this section, we define the maps and the groups which appear in Theorem 1.2. For a tuple \(\vec{a}=(a_{1},\dots,a_{n})\in A^{n}\), the partial tuples determined by \(1\leq i<n\) are denoted by \(\vec{a}_{i}=(a_{1},\dots,a_{i})\) and \(\vec{a}^{\,i}=(a_{i+1},\dots,a_{n})\). For a map \(f:A\to B\) and \(\vec{a}\in A^{n}\), we write \(f(\vec{a})=(f(a_{1}),\dots,f(a_{n}))\in B^{n}\) for the tuple of coordinate-wise evaluations. Fix affine datum \((Q,A^{\alpha,\tau},*)\) in the signature \(\tau\). According to Wires [13], there is a unique semidirect product \(\rho:A(\alpha)/\Delta_{\alpha\alpha}\to Q\) realizing the datum which can be reconstructed from the operations \(F_{f}\) for each signature symbol \(f\in\tau\) which are given by \[F_{f}(\vec{a})=f^{\Delta}\left(a_{1},\delta(l\circ\rho(a_{2})),\dots,\delta(l\circ\rho(a_{n}))\right)+_{u}\sum_{i=2}^{n}a(f,i)\left(\rho(a_{1}),\dots,\rho(a_{i-1}),a_{i},\rho(a_{i+1}),\dots,\rho(a_{n})\right)\] for \(\vec{a}\in(A(\alpha)/\Delta_{\alpha\alpha})^{n}\) where \(n=\operatorname{ar}f\) and \(u=l\circ f^{Q}(\rho(\vec{a}))\) for any choice of lifting \(l:Q\to A\) associated to the datum. The operation \(+_{u}\) is given by \(x+_{u}y=m(x,u,y)\) for the ternary operation \(m\) prescribed by the datum. The map \(\delta:A\to A(\alpha)/\Delta_{\alpha\alpha}\) is the diagonal embedding \(\delta(a)=\begin{bmatrix}a\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\). Note for unary and nullary symbols \(f\in\tau\), the operations \(F_{f}\) are interpreted using only the partial operations \(f^{\Delta}\) of the partial structure \(A^{\alpha,\tau}\). According to Wires [13], the extensions in a variety \(\mathcal{U}\) which realize the datum can be uniquely reconstructed by the algebras \(A_{T}(Q,A^{\alpha,\tau},*)\) parametrized by the \(\mathcal{U}\)-compatible \(2\)-cocycles \(T\). The operations of the algebra are defined by adding the parameter \(T\) to the operations of the semidirect product according to \[F_{f}(\vec{a})=f^{\Delta}\left(a_{1},\delta(l\circ\rho(a_{2})),\ldots,\delta(l\circ\rho(a_{n}))\right)+_{u}\sum_{i=2}^{n}a(f,i)\left(\rho(a_{1}),\ldots,\rho(a_{i-1}),a_{i},\rho(a_{i+1}),\ldots,\rho(a_{n})\right)+_{u}T_{f}\left(\rho(\vec{a})\right).\] Taken together, the \(\mathcal{U}\)-compatible \(2\)-cocycles form the \(2^{\mathrm{nd}}\)-cohomology group \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) parametrizing as an abelian group the extensions in \(\mathcal{U}\) realizing the datum. Details can be found in Wires [13]. The _compatible automorphisms_ for the datum form the set \(C(Q,A^{\alpha,\tau},*)\) which consists of all pairs \((\sigma,\kappa)\in\operatorname{Aut}A(\alpha)/\Delta_{\alpha\alpha}\times\operatorname{Aut}Q\) such that for each \(f\in\tau\), (C1) \(\sigma\circ a(f,i)(\vec{p},a,\vec{q})=a(f,i)(\kappa(\vec{p}),\sigma(a),\kappa(\vec{q}))\) \(\left(\vec{p}\in Q^{i-1},\vec{q}\in Q^{n-i},a\in A(\alpha)/\Delta_{\alpha\alpha}\right)\) (C2) \(\rho\circ\sigma(x)=\kappa\circ\rho(x)\) \(\left(x\in A(\alpha)/\Delta_{\alpha\alpha}\right)\) (C3)
\(\sigma\left(\begin{bmatrix}x\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}y\\ y\end{bmatrix}/\Delta_{\alpha\alpha}\) for some \(y\in A\) \(\left(x\in A\right)\) We see that the compatible automorphisms form a subgroup of the direct product \(\operatorname{Aut}A(\alpha)/\Delta_{\alpha\alpha}\times\operatorname{Aut}Q\). For any algebra \(A\) and \(\alpha\in\operatorname{Con}A\), define the subgroup \(\operatorname{Aut}_{\alpha}A=\{\phi\in\operatorname{Aut}A:\phi(\alpha)\subseteq\alpha\}\) of the full automorphism group of \(A\). We now assume the extension \(\pi:A\to Q\) realizes the affine datum \((Q,A^{\alpha,\tau},*)\). According to Wires [13], we may assume \(\alpha=\ker\pi\in\operatorname{Con}A\). The goal is to define a homomorphism \(\psi:\operatorname{Aut}_{\alpha}A\to C(Q,A^{\alpha,\tau},*)\). For any lifting \(l:Q\to A\) of \(\pi\) we have \[\pi\circ l=\operatorname{id}_{Q}\qquad\quad\text{and}\qquad\quad(l\circ\pi(a),a)\in\alpha\qquad(a\in A). \tag{1}\] For any \(\phi\in\operatorname{Aut}_{\alpha}A\), define \(\phi_{l}:Q\to Q\) by \(\phi_{l}:=\pi\circ\phi\circ l\). If \(l^{\prime}:Q\to A\) is another lifting, then for \(q\in Q\) we have \((l(q),l^{\prime}(q))\in\alpha\Rightarrow(\phi(l(q)),\phi(l^{\prime}(q)))\in\alpha\) and so \(\phi_{l}=\phi_{l^{\prime}}\); thus, the map \(\phi_{l}\) does not depend on the choice of the lifting. If \(\phi_{l}(p)=\phi_{l}(q)\), then \((\phi(l(p)),\phi(l(q)))\in\alpha=\ker\pi\) and so \((l(p),l(q))\in\alpha\); therefore, \(p=\pi(l(p))=\pi(l(q))=q\). Also, for \(q\in Q\), we see that \(\left(l\circ\pi(\phi^{-1}(l(q))),\phi^{-1}(l(q))\right)\in\alpha\) and so \(\left((\phi\circ l\circ\pi)(\phi^{-1}(l(q))),l(q)\right)\in\alpha\). This implies \(q=\phi_{l}\circ\pi\circ\phi^{-1}\circ l(q)\) and so \(\phi_{l}\) is bijective; incidentally, we have also shown \(\phi_{l}^{-1}=\pi\circ\phi^{-1}\circ l\). To show \(\phi_{l}\) is a homomorphism, take \(f\in\tau\), \(\vec{q}\in Q^{\operatorname{ar}f}\) and substitute \(f^{A}(l(\vec{q}))\) for \(a\) in Eq (1) to get \(\left(l(f^{Q}(\vec{q})),f^{A}(l(\vec{q}))\right)=\left(l\circ\pi(f^{A}(l(\vec{q}))),f^{A}(l(\vec{q}))\right)\in\alpha\). Applying \(\phi\in\operatorname{Aut}_{\alpha}A\) to the relation yields \(\left(\phi\circ l(f^{Q}(\vec{q})),f^{A}(\phi\circ l(\vec{q}))\right)\in\alpha\) which then implies \(\phi_{l}(f^{Q}(\vec{q}))=\pi\circ\phi\circ l(f^{Q}(\vec{q}))=\pi(f^{A}(\phi\circ l(\vec{q})))=f^{Q}(\phi_{l}(\vec{q}))\); altogether, we have shown \(\phi_{l}\in\operatorname{Aut}Q\). For \(\phi\in\operatorname{Aut}_{\alpha}A\), define \(\widehat{\phi}:A(\alpha)/\Delta_{\alpha\alpha}\to A(\alpha)/\Delta_{\alpha\alpha}\) by \(\widehat{\phi}\left(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right):=\begin{bmatrix}\phi(a)\\ \phi(b)\end{bmatrix}/\Delta_{\alpha\alpha}\). If \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\alpha}\)\(\begin{bmatrix}c\\ d\end{bmatrix}\), then \(d=m(b,a,c)\) since \(A\) realizes affine datum. Applying the automorphism \(\phi\) we have \(\phi(d)=m(\phi(b),\phi(a),\phi(c))\) which implies \(\begin{bmatrix}\phi(a)\\ \phi(b)\end{bmatrix}\Delta_{\alpha\alpha}\)\(\begin{bmatrix}\phi(c)\\ \phi(d)\end{bmatrix}\); thus, \(\widehat{\phi}\) is well-defined. Reversing the preceding calculation and applying \(\phi^{-1}\) shows injectivity, and \(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha\alpha}=\widehat{\phi}\left(\begin{bmatrix}\phi^{-1}(a)\\ \phi^{-1}(b)\end{bmatrix}/\Delta_{\alpha\alpha}\right)\) establishes surjectivity.
Since \(\pi:A\to Q\) realizes the datum, in the semidirect product we have by Wires [13] for any \(\alpha\)-trace \(r:A\to A\) and \(f\in\tau\) with \(n=\operatorname{ar}f\), \[\widehat{\phi}\circ F_{f}\left(\begin{bmatrix}r(a_{1})\\ a_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\ldots,\begin{bmatrix}r(a_{n})\\ a_{n}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\widehat{\phi}\left(\begin{bmatrix}f(r(a_{1}),\ldots,r(a_{n}))\\ f(a_{1},\ldots,a_{n})\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}\phi(f(r(a_{1}),\ldots,r(a_{n})))\\ \phi(f(a_{1},\ldots,a_{n}))\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}f\left(\phi\circ r(a_{1}),\ldots,\phi\circ r(a_{n})\right)\\ f(\phi(a_{1}),\ldots,\phi(a_{n}))\end{bmatrix}/\Delta_{\alpha\alpha}=F_{f}\left(\begin{bmatrix}\phi\circ r(a_{1})\\ \phi(a_{1})\end{bmatrix}/\Delta_{\alpha\alpha},\ldots,\begin{bmatrix}\phi\circ r(a_{n})\\ \phi(a_{n})\end{bmatrix}/\Delta_{\alpha\alpha}\right)=F_{f}\left(\widehat{\phi}\left(\begin{bmatrix}r(a_{1})\\ a_{1}\end{bmatrix}/\Delta_{\alpha\alpha}\right),\ldots,\widehat{\phi}\left(\begin{bmatrix}r(a_{n})\\ a_{n}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right).\] This shows \(\widehat{\phi}\) is an endomorphism of the semidirect product; altogether, \(\widehat{\phi}\in\operatorname{Aut}A(\alpha)/\Delta_{\alpha\alpha}\). We now define \(\psi(\phi):=(\widehat{\phi},\phi_{l})\). For \(\gamma,\phi\in\operatorname{Aut}_{\alpha}A\), clearly \(\widehat{\phi\circ\gamma}=\widehat{\phi}\circ\widehat{\gamma}\). For the second coordinate, write \(a=\gamma\circ l(q)\) for \(q\in Q\). Then Eq (1) and \(\phi\in\operatorname{Aut}_{\alpha}A\) implies \((\phi\circ l\circ\pi(a),\phi(a))\in\alpha\) and so \(\pi\circ\phi\circ l\circ\pi(a)=\pi\circ\phi(a)\). We then see that \[\phi_{l}\circ\gamma_{l}(q)=\pi\circ\phi\circ l\circ\pi\circ\gamma\circ l(q)=\pi\circ\phi\circ l\circ\pi(a)=\pi\circ\phi\circ\gamma\circ l(q)=(\phi\circ\gamma)_{l}(q);\] therefore, \(\psi\) is a homomorphism. We can calculate how a pair of automorphisms in the image of \(\psi\) interacts with the action in the semidirect product. If we fix \(f\in\tau\) with \(\operatorname{ar}f=n>1\) and \(1\leq i<n\), \(\vec{q}=(q_{1},\ldots,q_{i})\in Q^{i}\) and \(\vec{a}=\left(\begin{bmatrix}a_{1}\\ b_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\ldots,\begin{bmatrix}a_{n-i}\\ b_{n-i}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\in(A(\alpha)/\Delta_{\alpha\alpha})^{n-i}\), then by realization we have \[\widehat{\phi}\left(a(f,i)(\vec{q},\vec{a})\right)=\widehat{\phi}\left(\begin{bmatrix}f\left(l(q_{1}),\ldots,l(q_{i}),a_{1},\ldots,a_{n-i}\right)\\ f\left(l(q_{1}),\ldots,l(q_{i}),b_{1},\ldots,b_{n-i}\right)\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}f\left(\phi(l(q_{1})),\ldots,\phi(l(q_{i})),\phi(a_{1}),\ldots,\phi(a_{n-i})\right)\\ f\left(\phi(l(q_{1})),\ldots,\phi(l(q_{i})),\phi(b_{1}),\ldots,\phi(b_{n-i})\right)\end{bmatrix}/\Delta_{\alpha\alpha}=a(f,i)\left(\phi_{l}(\vec{q}),\widehat{\phi}(\vec{a})\right).\] This shows \(\operatorname{im}\psi\leq C(Q,A^{\alpha,\tau},*)\). We now define the map \(W_{T}:C(Q,A^{\alpha,\tau},*)\to H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) where \(T\) is a \(2\)-cocycle compatible with the variety \(\mathcal{U}\).
For each pair \((\sigma,\kappa)\in\operatorname{Aut}A(\alpha)/\Delta_{\alpha\alpha}\times\operatorname{Aut}Q\), define \(T^{(\sigma,\kappa)}\) by the rule \[T^{(\sigma,\kappa)}_{f}(\vec{q}):=\sigma\circ T_{f}(\kappa^{-1}(\vec{q}))\qquad\qquad\left(f\in\tau,\vec{q}\in Q^{\operatorname{ar}f}\right).\] If \([T]=[S]\) as cohomology classes, then there exists \(h:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) such that \[S_{f}(\vec{q})-_{u}T_{f}(\vec{q})=f^{A(\alpha)/\Delta_{\alpha\alpha}}(h(\vec{q}))-_{u}h(f^{Q}(\vec{q}))\qquad\qquad\left(f\in\tau,\vec{q}\in Q^{\operatorname{ar}f}\right)\] where \(u=l(f^{Q}(\vec{q}))\). Applying the automorphism \(\sigma\) to the above equation and making the substitution \(\vec{q}\mapsto\kappa^{-1}(\vec{q})\) we have \[\sigma\circ S_{f}(\kappa^{-1}(\vec{q}))-_{v}\sigma\circ T_{f}(\kappa^{-1}(\vec{q}))=f^{A(\alpha)/\Delta_{\alpha\alpha}}(\sigma\circ h\circ\kappa^{-1}(\vec{q}))-_{v}\sigma\circ h\circ\kappa^{-1}(f^{Q}(\vec{q}))\] where \(v=\sigma\circ l\circ\kappa^{-1}(f^{Q}(\vec{q}))\). This shows \(S^{(\sigma,\kappa)}\sim T^{(\sigma,\kappa)}\) and so \([T]^{(\sigma,\kappa)}=[T^{(\sigma,\kappa)}]\) is well-defined. If we restrict to the subgroup \(C(Q,A^{\alpha,\tau},*)\), then the action preserves \(\mathcal{U}\)-compatibility and so defines a group action of the compatible automorphism pairs on \(2^{\operatorname{nd}}\)-cohomology. To see this, take \([T]\in H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) which means \(T\) is a \(2\)-cocycle of the datum which is compatible with the identities of \(\mathcal{U}\). According to Wires [13], this means that for \(t=s\in\operatorname{Id}\mathcal{U}\), the \(2\)-cocycle \(T\) satisfies for every appropriate assignment \(\epsilon:\operatorname{var}t\cup\operatorname{var}s\to Q\) an equation \[t^{\partial,T}(\epsilon(\operatorname{var}t))=s^{\partial,T}(\epsilon(\operatorname{var}s)) \tag{2}\] For our purposes, the important thing to note is that \(t^{\partial,T}(\epsilon(\operatorname{var}t))\) is a sum over \(+_{u}\) where \(u=l\left(t^{Q}(\rho(\vec{a}))\right)\). The summands are recursively defined from basic expressions of the form \[a(f,i)\left(\vec{p},\omega,\vec{q}\right)\enspace,\qquad\quad g^{\Delta}(\omega,\delta\circ l(\vec{q}))\qquad\text{ and }\qquad T_{h}(\vec{p})\qquad\qquad\qquad(f,g,h\in\tau) \tag{3}\] by substituting each other at \(\omega\) in a certain specified manner according to the composition tree of the term \(t\) with the result that \(\omega\) does not appear. This means that in the tree describing the resulting composition, \(T_{h}(\vec{p})\) for some fundamental symbol \(h\in\tau\) would appear at each leaf. Similarly for \(s^{\partial,T}(\epsilon(\operatorname{var}s))\) and \(+_{v}\) with \(v=l\left(s^{Q}(\rho(\vec{b}))\right)\). In this way, we can think of Eq (2) as an equation satisfied by the 2-cocycle \(T\). The tuples \(\vec{p}\) and \(\vec{q}\) which appear in the expressions in Eq (3) are determined by starting with the assignment \(\epsilon(\operatorname{var}t)\) and propagating through the composition tree of \(t\) starting from the variables; consequently, \(\vec{p}\) and \(\vec{q}\) are the result of evaluations of different subterms calculated entirely in \(Q\) starting from the assignment \(\epsilon(\operatorname{var}t)\) of the variables. (A classical special case is recorded in the aside below.)
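As an aside, and only for orientation (this is the standard special case from group cohomology, not something needed later): for the variety of groups with abelian kernel, the identity scheme of Eq (2) applied to the associativity law recovers the familiar \(2\)-cocycle condition \[\alpha(x)\left(T(y,z)\right)-T(xy,z)+T(x,yz)-T(x,y)=0\qquad\qquad(x,y,z\in Q),\] and the action defined above specializes to \(T^{(\sigma,\kappa)}(x,y)=\sigma\left(T(\kappa^{-1}(x),\kappa^{-1}(y))\right)\), which is the action used in the classical group-theoretic setting of Wells [12].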
It then follows from the preceding discussion that any substitution \(\epsilon(\operatorname{var}t)\to Q\) induces in a consistent manner simultaneous substitutions in all expressions in Eq (3) which were composed to form the expression \(t^{\partial,T}(\epsilon(\operatorname{var}t))\). Given \((\sigma,\kappa)\in C(Q,A^{\alpha,\tau},*)\), applying \(\sigma\) to the expressions in Eq (3), using condition (C1) and then making the substitution \(\operatorname{var}t\mapsto\kappa^{-1}(\epsilon(\operatorname{var}t))\) we have the expressions \[a(f,i)\left(\vec{p},\sigma(\omega),\vec{q}\right)\enspace,\qquad\quad g^{\Delta}(\sigma(\omega),\delta\circ l(\vec{q}))\qquad\text{ and }\qquad\sigma\circ T_{h}(\kappa^{-1}(\vec{p}))\qquad\qquad\qquad(f,g,h\in\tau)\] which compose to give the same result as applying \(\sigma\) to the expression \(t^{\partial,T}(\epsilon(\operatorname{var}t))\) and making the substitution \(\epsilon(\operatorname{var}t)\mapsto\kappa^{-1}(\epsilon(\operatorname{var}t))\). A similar discussion applies to the right-hand side of Eq (2). Altogether, this shows that \(T^{(\sigma,\kappa)}\) satisfies the same 2-cocycle identity determined by \(t=s\in\operatorname{Id}\mathcal{U}\); therefore, \([T]^{(\sigma,\kappa)}\in H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\). We now define the map \[W:C(Q,A^{\alpha,\tau},*)\times H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\to H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\] by \(W\left((\sigma,\kappa),[T]\right):=[T-T^{(\sigma,\kappa)}]\). It is easy to see that it is a homomorphism in the second coordinate. Since there is a group action of \(C(Q,A^{\alpha,\tau},*)\) on \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\), the restriction \(W_{T}:=W(-,[T])\) is a principal derivation of the corresponding group datum, and is called the _Wells derivation_; indeed, writing \(g\cdot[T]:=[T]^{g}\) for the action, one checks directly that \(W_{T}(g^{\prime}g)=W_{T}(g^{\prime})+g^{\prime}\cdot W_{T}(g)\) for all \(g,g^{\prime}\in C(Q,A^{\alpha,\tau},*)\). As a derivation, we have \(\ker W_{T}\leq C(Q,A^{\alpha,\tau},*)\) but not generally as a normal subgroup. Observe that \(\ker W_{T}\cap\ker W_{T^{\prime}}\leq\ker W_{T+T^{\prime}}\); therefore, it follows that \(\ker W=\{[T]:\ker W_{T}=C(Q,A^{\alpha,\tau},*)\}\) is a subgroup of \(H^{2}_{\mathcal{U}}\left(Q,A^{\alpha,\tau},*\right)\). ## 3. Demonstrations for Theorem 1.2 and Theorem 1.3 In this section, we give the proofs of Theorem 1.2 and Theorem 1.3. Proof.: (of Theorem 1.2) Fix an extension \(\pi:A\to Q\) realizing affine datum \((Q,A^{\alpha,\tau},*)\) with \(A\in\mathcal{U}\) and let \([T]\in H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) be the 2-cocycle compatible with \(\mathcal{U}\) associated to the extension. Fix a lifting \(l:Q\to A\) for \(\pi\). We see that \(\phi\in\ker\psi\) if and only if the two conditions \[\widehat{\phi}=\operatorname{id}_{A(\alpha)/\Delta_{\alpha\alpha}}\qquad\quad\text{ and }\qquad\quad\pi\circ\phi\circ l=\phi_{l}=\operatorname{id}_{Q} \tag{4}\] on \(\phi\) hold. Assume \(\phi\in\ker\psi\). For any \(a\in A\), \((l\circ\pi(a),a)\in\alpha\) implies \((\phi\circ l\circ\pi(a),\phi(a))\in\alpha\) which yields \(\pi\circ\phi(a)=\pi\circ\phi\circ l\circ\pi(a)=\pi(a)\) by the second condition on \(\phi\); that is, \[\pi\circ\phi=\pi. \tag{5}\] Now according to Wires [13], every element in the universe \(A(\alpha)/\Delta_{\alpha\alpha}\) is uniquely represented in the form \(\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\) for any \(\alpha\)-trace \(r:A\to A\).
Since \((Q,A^{\alpha,\tau},*)\) is affine datum, the first condition in Eq (4) yields \(\begin{bmatrix}\phi(r(a))\\ \phi(a)\end{bmatrix}/\Delta_{\alpha\alpha}=\widehat{\phi}\left(\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\) which implies \[\phi(a)=m(\phi(r(a)),r(a),a)\qquad\quad\text{ for any }\alpha-\text{trace }r:A\to A. \tag{6}\] According to Wires [13], Eq (5) and Eq (6) are precisely the conditions for \(\phi\in\operatorname{Stab}(\pi:A\to Q)\). Working the above argument in reverse shows Eq (5) and Eq (6) imply Eq (4); altogether, \(\operatorname{Stab}(\pi:A\to Q)=\ker\psi\). Then by Wires [13], this yields an embedding \(\operatorname{Der}(Q,A^{\alpha,\tau},*)\approx\operatorname{Stab}(\pi:A\to Q)=\ker\psi\leq\operatorname{Aut}_{\alpha}A\). We now verify exactness at \(C(Q,A^{\alpha,\tau},*)\). Given \(\phi\in\operatorname{Aut}_{\alpha}A\) and \(q\in Q\), observe \(\pi\circ\phi\circ l\circ\phi_{l}^{-1}(q)=\phi_{l}\circ\phi_{l}^{-1}(q)=q\). This implies \(\phi\circ l\circ\phi_{l}^{-1}:Q\to A\) is another lifting for \(\pi\). Since \(\pi:A\to Q\) realizes the datum, we see that for \(f\in\tau\) and \(\vec{q}\in Q^{\operatorname{ar}f}\), \[T_{f}^{(\widehat{\phi},\phi_{l})}(\vec{q})=\widehat{\phi}\circ T_{f}(\phi_{l}^{-1}(\vec{q}))=\widehat{\phi}\left(\begin{bmatrix}l\left(f^{Q}(\phi_{l}^{-1}(\vec{q}))\right)\\ f^{A}\left(l\circ\phi_{l}^{-1}(\vec{q})\right)\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}\phi\circ l\circ\phi_{l}^{-1}\left(f^{Q}(\vec{q})\right)\\ f^{A}\left(\phi\circ l\circ\phi_{l}^{-1}(\vec{q})\right)\end{bmatrix}/\Delta_{\alpha\alpha}.\] This implies \(T^{(\widehat{\phi},\phi_{l})}\) is the \(2\)-cocycle defined by the lifting \(\phi\circ l\circ\phi_{l}^{-1}\). According to Wires [13], we see that \(T\sim T^{(\widehat{\phi},\phi_{l})}\) and so \(W_{T}\circ\psi(\phi)=[T-T^{(\widehat{\phi},\phi_{l})}]=0\). Now take \((\sigma,\kappa)\in C(Q,A^{\alpha,\tau},*)\) and suppose \(W_{T}(\sigma,\kappa)=0\) which implies \(T\sim T^{(\sigma,\kappa)}\). So there exists \(h:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) such that \[\sigma\circ T_{f}(\kappa^{-1}(\vec{q}))-_{w}T_{f}(\vec{q})=f^{A(\alpha)/\Delta_{\alpha\alpha}}(h(\vec{q}))-_{w}h(f^{Q}(\vec{q}))\qquad\qquad\qquad\qquad\left(f\in\tau,\vec{q}\in Q^{\operatorname{ar}f}\right) \tag{7}\] where \(w=l(f^{Q}(\vec{q}))\). We use this to define \(\phi\in\operatorname{Aut}_{\alpha}A\) in the following manner. Fix the \(\alpha\)-trace \(r=l\circ\pi\). By Wires [13], there is an isomorphism \(\gamma:A\to A_{T}(Q,A^{\alpha,\tau},*)\) given by \(\gamma(x)=\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\) where \(A_{T}(Q,A^{\alpha,\tau},*)\) is the extension directly reconstructed from the datum. It will be useful to record \[\rho\circ\gamma=\pi\qquad\qquad\qquad\rho\circ h=\operatorname{id}_{Q} \tag{8}\] for the following calculations. Define \(\bar{\phi}:A_{T}(Q,A^{\alpha,\tau},*)\to A_{T}(Q,A^{\alpha,\tau},*)\) by the rule \[\bar{\phi}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right):=\sigma\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{u}h\left(\kappa\circ\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right) \tag{9}\] where \(u=l\circ\rho\circ\sigma\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\).
Then \(\phi:=\gamma^{-1}\circ\bar{\phi}\circ\gamma\in\operatorname{Aut}_{\alpha}A\) will be the automorphism we seek such that \(\psi(\phi)=(\sigma,\kappa)\). We first show \(\phi_{l}=\kappa\). Define \(\sigma^{\prime}:A\to A\) by \(\sigma^{\prime}=\gamma^{-1}\circ\sigma\circ\gamma\). Then \(u=l\circ\rho\circ\sigma\circ\gamma=l\circ\rho\circ\gamma\circ\gamma^{-1}\circ\sigma\circ\gamma=r\circ\sigma^{\prime}\). This implies we can write \(\sigma\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\sigma\circ\gamma(x)=\begin{bmatrix}u\\ \sigma^{\prime}(x)\end{bmatrix}/\Delta_{\alpha\alpha}\). We note that \(r(u)=u\) and thus, \(\gamma(u)=\begin{bmatrix}r(u)\\ u\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha\alpha}\). We can also see that \(\sigma\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}u\\ \sigma^{\prime}(x)\end{bmatrix}/\Delta_{\alpha\alpha}\) is \(\hat{\alpha}/\Delta_{\alpha\alpha}\)-related to \(\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha\alpha}\) by definition of \(u\in A\); thus, \(\rho\circ\sigma\circ\gamma(x)=\rho\circ\gamma(u)\). Recall that since \((\sigma,\kappa)\) are compatible, \(\rho\circ\sigma\circ\gamma(x)=\kappa\circ\rho\circ\gamma(x)\). Then by using the idempotence of the term \(m\) in \(Q\), we have \[\pi\circ\phi(x)=\pi\circ\gamma^{-1}\circ\bar{\phi}\circ\gamma(x)=\pi\circ\gamma^{-1}\left(\sigma\circ\gamma(x)+_{u}h\left(\kappa\circ\rho\circ\gamma(x)\right)\right)=\rho\circ m^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\sigma\circ\gamma(x),\gamma(u),h\left(\kappa\circ\rho\circ\gamma(x)\right)\right)=m^{Q}\left(\rho\circ\sigma\circ\gamma(x),\rho\circ\gamma(u),\kappa\circ\rho\circ\gamma(x)\right)=m^{Q}\left(\rho\circ\sigma\circ\gamma(x),\rho\circ\gamma(u),\rho\circ\sigma\circ\gamma(x)\right)=\rho\circ\sigma\circ\gamma(x)=\kappa\circ\rho\circ\gamma(x)=\kappa\circ\pi(x);\] that is, \[\pi\circ\phi=\kappa\circ\pi. \tag{10}\] Then \(\phi_{l}=\pi\circ\phi\circ l=\kappa\circ\pi\circ l=\kappa\circ\mathrm{id}_{Q}=\kappa\) as desired. Let us now show \(\widehat{\phi}=\sigma\). If we define \(h^{\prime}:Q\to A\) by \(h^{\prime}=\gamma^{-1}\circ h\), then we can write \(h(\pi(x))=\begin{bmatrix}r\circ h^{\prime}(\pi(x))\\ h^{\prime}(\pi(x))\end{bmatrix}/\Delta_{\alpha\alpha}\). Since \(l\circ\rho\circ h(\kappa\circ\pi(x))=l\circ\rho\circ h(\kappa\circ\rho\circ\gamma(x))=l\circ\kappa\circ\rho\circ\gamma(x)=l\circ\rho\circ\sigma\circ\gamma(x)=u\), we see that \(h(\kappa\circ\pi(x))=\begin{bmatrix}u\\ h^{\prime}(\kappa\circ\pi(x))\end{bmatrix}/\Delta_{\alpha\alpha}\). At this point, we should note \(h^{\prime}(\kappa\circ\pi(x))=h^{\prime}(\kappa\circ\pi(r(x)))\). We can also give a representation of \(\bar{\phi}\) by \[\bar{\phi}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}u\\ \sigma^{\prime}(x)\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}\begin{bmatrix}u\\ h^{\prime}(\kappa\circ\pi(x))\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}u\\ m^{A}\left(\sigma^{\prime}(x),u,h^{\prime}(\kappa\circ\pi(x))\right)\end{bmatrix}/\Delta_{\alpha\alpha} \tag{11}\] which yields \(\phi(x)=m^{A}\left(\sigma^{\prime}(x),u,h^{\prime}(\kappa\circ\pi(x))\right)\). Since \((\sigma,\kappa)\) are compatible, we have \(\rho\circ\sigma\circ\gamma(x)=\kappa\circ\rho\circ\gamma(x)=\kappa\circ\rho\circ\gamma(r(x))=\rho\circ\sigma\circ\gamma(r(x))\).
This implies \(u=l\circ\rho\circ\sigma\circ\gamma(x)=l\circ\rho\circ\sigma\circ\gamma(r(x))=l\circ\pi\circ\gamma^{-1}\circ\sigma\circ\gamma(r(x))=r\circ\sigma^{\prime}(r(x))\); therefore, since \((\sigma,\kappa)\) are compatible, it must be that \(\sigma\left(\begin{bmatrix}r(x)\\ r(x)\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}r\circ\sigma^{\prime}(r(x))\\ \sigma^{\prime}(r(x))\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}u\\ \sigma^{\prime}(r(x))\end{bmatrix}/\Delta_{\alpha\alpha}\) is a \(\Delta_{\alpha\alpha}\)-class of a diagonal. Since this element is unique in any \(\hat{\alpha}/\Delta_{\alpha\alpha}\)-class because \(\alpha\) is abelian, it must be that \(\sigma^{\prime}(r(x))=u\). We can now calculate \[\widehat{\phi}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}\phi(r(x))\\ \phi(x)\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}m^{A}\left(\sigma^{\prime}(r(x)),u,h^{\prime}(\kappa\circ\pi(r(x)))\right)\\ m^{A}\left(\sigma^{\prime}(x),u,h^{\prime}(\kappa\circ\pi(x))\right)\end{bmatrix}/\Delta_{\alpha\alpha}=m^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\begin{bmatrix}\sigma^{\prime}(r(x))\\ \sigma^{\prime}(x)\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}\sigma^{\prime}(r(x))\\ \sigma^{\prime}(x)\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}u\\ \sigma^{\prime}(x)\end{bmatrix}/\Delta_{\alpha\alpha}=\sigma\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\] since \(m\) is Mal'cev on \(\hat{\alpha}/\Delta_{\alpha\alpha}\)-classes and so \(\widehat{\phi}=\sigma\); altogether, we have shown \(\psi(\phi)=(\sigma,\kappa)\). Let us now show \(\phi\) preserves \(\alpha\) and is bijective. If \((a,b)\in\alpha\), then by Eq (10) we have \(\pi\circ\phi(a)=\kappa\circ\pi(a)=\kappa\circ\pi(b)=\pi\circ\phi(b)\) and so \((\phi(a),\phi(b))\in\alpha\); thus, \(\phi(\alpha)\subseteq\alpha\). We show \(\phi\) is injective. If \(\phi(a)=\phi(b)\), then \(\kappa\circ\pi(a)=\pi\circ\phi(a)=\pi\circ\phi(b)=\kappa\circ\pi(b)\) which implies \(\pi(a)=\pi(b)\) since \(\kappa\in\mathrm{Aut}\,Q\); therefore, \(r(a)=r(b)\). We also see that \(h(\kappa\circ\rho\circ\gamma(a))=h(\kappa\circ\pi(a))=h(\kappa\circ\pi(b))=h(\kappa\circ\rho\circ\gamma(b))\) and \(l\circ\rho\circ\sigma\circ\gamma(a)=l\circ\kappa\circ\pi(a)=l\circ\kappa\circ\pi(b)=l\circ\rho\circ\sigma\circ\gamma(b)\). Using these facts in Eq (9) yields \(\sigma\left(\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\sigma\left(\begin{bmatrix}r(b)\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right)\) and so \(\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}r(b)\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\) since \(\sigma\) is an automorphism. Then \(r(a)=r(b)\) implies \(a=b\). To show \(\phi\) is surjective, it suffices to show \(\bar{\phi}\) is. Given \(a\in A\), set \(b=\sigma^{-1}\left(\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}-_{r(a)}h(\pi(a))\right)\). Observe that \[h\left(\kappa\circ\rho\circ\sigma^{-1}\left(\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}-_{r(a)}h(\pi(a))\right)\right)=h\left(\rho\left(\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}-_{r(a)}h(\pi(a))\right)\right)=h(\pi(a)) \tag{12}\] by idempotence of \(m^{Q}\); in particular, \(l\circ\rho\circ\sigma(b)=r(a)\).
Then using Eq (12) we have \[\bar{\phi}(b)=\sigma(b)+_{r(a)}h(\pi(a))=\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}-_{r(a)}h(\pi(a))+_{r(a)}h(\pi(a))=\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha};\] therefore, \(\bar{\phi}\) is surjective. The last task is to show \(\bar{\phi}\) is a homomorphism which will imply \(\phi\) is, as well. In Eq (7), we make the substitution \(\vec{q}\mapsto\kappa(\vec{q})\) and rewrite to conclude \[\sigma\circ T_{f}(\vec{q})+_{v}h\left(f^{Q}(\kappa(\vec{q}))\right)=f^{A(\alpha)/\Delta_{\alpha\alpha}}\left(h(\kappa(\vec{q}))\right)+_{v}T_{f}(\kappa(\vec{q})) \tag{13}\] where \(v=l\left(f^{Q}(\kappa(\vec{q}))\right)\). Let us observe that \(l\circ\rho\circ\sigma\left(\begin{bmatrix}r(f(\vec{x}))\\ f(\vec{x})\end{bmatrix}/\Delta_{\alpha\alpha}\right)=l\circ\kappa\circ\rho\left(\begin{bmatrix}r(f(\vec{x}))\\ f(\vec{x})\end{bmatrix}/\Delta_{\alpha\alpha}\right)=l\left(f^{Q}(\kappa(\vec{q}))\right)=v\) where we have written \(\pi(\vec{x})=\vec{q}\). Then by realization of the datum, we have \[\bar{\phi}\left(F_{f}\left(\begin{bmatrix}r(\vec{x})\\ \vec{x}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)=\bar{\phi}\left(\begin{bmatrix}r(f(\vec{x}))\\ f(\vec{x})\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\sigma\left(\begin{bmatrix}r(f(\vec{x}))\\ f(\vec{x})\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{v}h\left(\kappa\circ\rho\left(\begin{bmatrix}r(f(\vec{x}))\\ f(\vec{x})\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)=\sigma\circ f^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\begin{bmatrix}r(\vec{x})\\ \vec{x}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{v}\sigma\circ T_{f}(\vec{q})+_{v}h\left(\kappa(f^{Q}(\vec{q}))\right)=f^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\sigma\left(\begin{bmatrix}r(\vec{x})\\ \vec{x}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)+_{v}f^{A(\alpha)/\Delta_{\alpha\alpha}}\left(h(\kappa(\vec{q}))\right)+_{v}T_{f}\left(\kappa(\vec{q})\right)=f^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\bar{\phi}(\vec{x})\right)+_{v}T_{f}(\kappa(\vec{q}))=F_{f}\left(\bar{\phi}(\vec{x})\right).\] The demonstration is now complete. Proof.: (of Theorem 1.3) Given \([T]\in H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\), there is a realization \(\pi:A_{T}(Q,A^{\alpha,\tau},*)\to Q\) with \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}\). For each \((\sigma,\kappa)\in\ker W_{T}\), there is a function \(h^{T}_{(\sigma,\kappa)}:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) such that \[T^{(\sigma,\kappa)}_{f}(\vec{q})-_{v}T_{f}(\vec{q})=f^{A(\alpha)/\Delta_{\alpha\alpha}}\left(h^{T}_{(\sigma,\kappa)}(\vec{q})\right)-_{v}h^{T}_{(\sigma,\kappa)}\left(f^{Q}(\vec{q})\right)\qquad\qquad\left(f\in\tau,\vec{q}\in Q^{\mathrm{ar}\,f}\right)\] where \(v=l(f^{Q}(\vec{q}))\) for a lifting \(l\) of \(\pi\). We follow Eq (9) and define \[\hat{l}_{T}(\sigma,\kappa)\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right):=\sigma\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{u}h^{T}_{(\sigma,\kappa)}\left(\kappa\circ\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right) \tag{14}\] for \((\sigma,\kappa)\in\ker W_{T}\) where \(u=l\circ\pi\circ\sigma\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\) and \(\alpha\)-trace \(r=l\circ\pi\).
As in the last part of the proof of Theorem 1.2, we can see that \(\hat{l}_{T}:\ker W_{T}\rightarrow\operatorname{Aut}_{\hat{\alpha}}A_{T}(Q,A^{\alpha,\tau},*)\) and is a lifting for \(\psi:\operatorname{Aut}_{\hat{\alpha}}A_{T}(Q,A^{\alpha,\tau},*)\to C(Q,A^{\alpha,\tau},*)\). Now define \(\Pi\left([T]\right)(<x,y>,<u,v>):=\left[\hat{l}_{T}(x,y)\circ\hat{l}_{T}(u,v)\circ\hat{l}_{T}(xu,yv)^{-1}\right]\), the cohomology class of the 2-cocycle defined by the lifting \(\hat{l}\) for the extension \(\psi\). Theorem 1.2 and group cohomology yield the isomorphism \[\operatorname{Aut}_{\hat{\alpha}}A_{T}(Q,A^{\alpha,\tau},*)\approx\operatorname{Der}(Q,A^{\alpha,\tau},*)\rtimes_{(\phi_{T},\Pi([T]))}\ker W_{T}\] where the action \(\phi_{T}:\ker W_{T}\rightarrow\operatorname{Aut}\operatorname{Der}(Q,A^{\alpha,\tau},*)\) is induced by the lifting \(\hat{l}\) of \(\psi\) in the standard manner. For the second part of the theorem, we will evaluate the image of \(\Pi([T])\) on \(A_{T}(Q,A^{\alpha,\tau},*)\). Let us first note that \[\hat{l}_{T}(\gamma\sigma,\beta\kappa)^{-1}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\sigma^{-1}\circ\gamma^{-1}\left(\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)-_{r(x)}h^{T}_{(\gamma\sigma,\beta\kappa)}\left(\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)\right).\] Using compatibility of \((\gamma,\beta)\), we can simplify \[u=l\circ\pi\circ\sigma\left(\hat{l}_{T}(\gamma\sigma,\beta\kappa)^{-1}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)=l\circ\pi\circ\gamma^{-1}\left(\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)-_{r(x)}h_{(\gamma\sigma,\beta\kappa)}^{T}\left(\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)\right)=l\circ\beta^{-1}\circ\pi\left(\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)-_{r(x)}h_{(\gamma\sigma,\beta\kappa)}^{T}\left(\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)\right)=l\circ\beta^{-1}\circ\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\] and \[h_{(\sigma,\kappa)}^{T}\left(\kappa\circ\pi\left(\hat{l}_{T}(\gamma\sigma,\beta\kappa)^{-1}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)\right)=h_{(\sigma,\kappa)}^{T}\left(\pi\circ\sigma\left(\hat{l}_{T}(\gamma\sigma,\beta\kappa)^{-1}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)\right)=h_{(\sigma,\kappa)}^{T}\left(\beta^{-1}\circ\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right).\] Then \[\hat{l}_{T}(\sigma,\kappa)\circ\hat{l}_{T}(\gamma\sigma,\beta\kappa)^{-1}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\gamma^{-1}\left(\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)-_{r(x)}h_{(\gamma\sigma,\beta\kappa)}^{T}\left(\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)\right)+_{u}h_{(\sigma,\kappa)}^{T}\left(\beta^{-1}\circ\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right).\] Again, we can calculate in a similar manner \[w=l\circ\pi\circ\gamma\left(\hat{l}_{T}(\sigma,\kappa)\circ\hat{l}_{T}(\gamma\sigma,\beta\kappa)^{-1}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)=l\circ\beta\circ\pi\left(\hat{l}_{T}(\sigma,\kappa)\circ\hat{l}_{T}(\gamma\sigma,\beta\kappa)^{-1}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)=l\circ\beta\left(\beta^{-1}\circ\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)=r(x)\] and \[z=l\circ\pi\circ\gamma\left(\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha\alpha}\right)=l\circ\beta\circ\pi\left(\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha\alpha}\right)=l\circ\beta\circ\beta^{-1}\circ\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)=r(x).\] Altogether, \[\Pi\left([T]\right)(<\gamma,\beta>,<\sigma,\kappa>)\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\gamma\left(\hat{l}_{T}(\sigma,\kappa)\circ\hat{l}_{T}(\gamma\sigma,\beta\kappa)^{-1}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)+_{w}h_{(\gamma,\beta)}^{T}\left(\beta\circ\pi\left(\hat{l}_{T}(\sigma,\kappa)\circ\hat{l}_{T}(\gamma\sigma,\beta\kappa)^{-1}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)\right)=\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}-_{r(x)}h_{(\gamma\sigma,\beta\kappa)}^{T}\left(\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)+_{r(x)}\gamma\circ h_{(\sigma,\kappa)}^{T}\left(\beta^{-1}\circ\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)+_{r(x)}h_{(\gamma,\beta)}^{T}\left(\pi\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right).\] Since \(\Pi\left([T]\right)(<\gamma,\beta>,<\sigma,\kappa>)\) is a stabilizing automorphism, according to Wires [13] the expression on the right-hand side of the last equation built from the \(h^{T}\) functions is a derivation. Let us recall that if \(h^{T}\) and \(h^{T^{\prime}}\) respectively determine the 2-coboundaries that witness \([T]=0\) and \([T^{\prime}]=0\) in cohomology, then \([T+T^{\prime}]=0\) is witnessed by the 2-coboundary determined by \(h^{T}+h^{T^{\prime}}\). Together with the previous observation, this guarantees the restriction \[\Pi:\ker W\to H^{2}\left(C(Q,A^{\alpha,\tau},*),\operatorname{Der}(Q,A^{\alpha,\tau},*),\phi\right)\] is a homomorphism. Determining the compatible automorphisms of an extension appears quite difficult, but for central extensions in varieties with a difference term we can make a general observation. A ternary term \(m\) is a _difference term_ for a variety \(\mathcal{V}\) if for all algebras \(A\in\mathcal{V}\) and congruences \(\alpha\in\operatorname{Con}A\), the term satisfies \[m(x,x,y)=y\qquad\text{and}\qquad m(x,y,y)\left[\alpha,\alpha\right]x\qquad\qquad\qquad((x,y)\in\alpha).\] We rely on the characterization in [13] of central extensions in such varieties. **Proposition 3.1**.: Let \(\mathcal{V}\) be a variety with a difference term, \(A\in\mathcal{V}\), and \(\pi:A\to Q\) a central extension realizing affine datum \((Q,A^{\alpha,\tau},*)\). Assume \(Q\) has an idempotent element. Then the compatible automorphisms satisfy \(C(Q,A^{\alpha,\tau},*)\approx\operatorname{Aut}^{0}A(\alpha)/\Delta_{\alpha 1}\times\operatorname{Aut}Q\) where \(\operatorname{Aut}^{0}A(\alpha)/\Delta_{\alpha 1}\) is the group of automorphisms which fix the unique \(\Delta_{\alpha 1}\)-class of the diagonal elements. Proof.: Let us begin by noting what the assumptions provide us. We are given that \(Q=A/\alpha\). We write \(\rho:A(\alpha)/\Delta_{\alpha\alpha}\to Q\) for the extension induced by \(\pi\) of the associated semidirect product.
Since \(A\) realizes affine datum, if we fix an \(\alpha\)-trace \(r:A\to A\), then every element in \(A(\alpha)/\Delta_{\alpha\alpha}\) is uniquely represented in the form \(\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\) for \(a\in A\). Let \(l:Q\to A\) be the lifting such that \(r=l\circ\pi\). Let \(v\in Q\) be an idempotent element and choose \(u\in\pi^{-1}(v)\). Since \(A\in\mathcal{V}\) has a difference term and \(\alpha=\ker\pi\) is central, we have by [13, Lem 3.35] an isomorphism for the semidirect product \(A(\alpha)/\Delta_{\alpha\alpha}\approx A(\alpha)/\Delta_{\alpha 1}\times Q\) given by \(\eta:\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\mapsto\left\langle\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1}\,,\,\pi(a)\right\rangle\) where the first coordinate is given by the canonical epimorphism \(\phi:A(\alpha)/\Delta_{\alpha\alpha}\to A(\alpha)/\Delta_{\alpha 1}\) for the congruence \(\Delta_{\alpha 1}/\Delta_{\alpha\alpha}\). Since \(\alpha\) is central, the diagonal elements of \(A(\alpha)\) form a singleton \(\Delta_{\alpha 1}\)-class denoted by \(\hat{\delta}\). Then \(\operatorname{Aut}^{0}A(\alpha)/\Delta_{\alpha 1}=\{\sigma\in\operatorname{Aut}A(\alpha)/\Delta_{\alpha 1}:\sigma(\hat{\delta})=\hat{\delta}\}\). Let us note how condition (C1) on the action relates to the algebra \(A(\alpha)/\Delta_{\alpha 1}\). According to [13, Lem 3.6(1a)], \(A(\alpha)/\Delta_{\alpha 1}\) is an abelian algebra in which \(\hat{\delta}\) is an idempotent element; thus, \(A(\alpha)/\Delta_{\alpha 1}\) is term-equivalent to a module on the same universe for some unital ring \(R\) in which \(\hat{\delta}\) is the zero of the module. This means for operation symbol \(f\in\tau\) with \(n=\operatorname{ar}f\), there are ring elements \(r_{i}\in R\) such that \(f\) interprets as \(f^{A(\alpha)/\Delta_{\alpha 1}}(x_{1},\dots,x_{n})=r_{1}\cdot x_{1}+\dots+r_{n}\cdot x_{n}\). Then by realization of the datum, we can calculate the action terms by \[\phi\circ a(f,i)(q_{1},\dots,x,\dots,q_{n})=\phi\circ F_{f}\big{(}\delta\circ l(q_{1}),\dots,x,\dots,\delta\circ l(q_{n})\big{)}=\phi\circ f^{A(\alpha)/\Delta_{\alpha\alpha}}\big{(}\delta\circ l(q_{1}),\dots,x,\dots,\delta\circ l(q_{n})\big{)}=r_{1}\cdot\hat{\delta}+\dots+r_{i}\cdot\phi(x)+\dots+r_{n}\cdot\hat{\delta}=r_{i}\cdot\phi(x)\] since each \(\delta\circ l(q_{i})\) is a diagonal \(\Delta_{\alpha\alpha}\)-class; therefore, for compatible automorphisms \((\sigma,\kappa)\) we have \[\phi\circ a(f,i)(\kappa(q_{1}),\dots,\sigma(x),\dots,\kappa(q_{n}))=r_{i}\cdot\phi(\sigma(x)). \tag{15}\] Given a pair of compatible automorphisms \((\sigma,\kappa)\in C(Q,A^{\alpha,\tau},*)\), define \(\hat{\sigma}:A(\alpha)/\Delta_{\alpha 1}\to A(\alpha)/\Delta_{\alpha 1}\) by \(\hat{\sigma}\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha 1}\right):=\phi\circ\sigma\left(\begin{bmatrix}r(u)\\ m(a,b,r(u))\end{bmatrix}/\Delta_{\alpha\alpha}\right)\). Note by (C3) that \(\sigma\left(\begin{bmatrix}b\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right)\) and \(\sigma\left(\begin{bmatrix}r(u)\\ r(u)\end{bmatrix}/\Delta_{\alpha\alpha}\right)\) are both \(\Delta_{\alpha\alpha}\)-classes of a diagonal element.
We observe \[\hat{\sigma}\circ\phi\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\hat{\sigma}\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha 1}\right)=\phi\circ\sigma\left(\begin{bmatrix}r(u)\\ m(a,b,r(u))\end{bmatrix}/\Delta_{\alpha\alpha}\right)=m\left(\phi\circ\sigma\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right),\phi\circ\sigma\left(\begin{bmatrix}b\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right),\phi\circ\sigma\left(\begin{bmatrix}r(u)\\ r(u)\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)=m\left(\phi\circ\sigma\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right),\hat{\delta},\hat{\delta}\right)=\phi\circ\sigma\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right) \tag{16}\] since \(A(\alpha)/\Delta_{\alpha 1}\) is an abelian algebra. We can see from Eq (16) and condition (C3) that \(\hat{\sigma}(\hat{\delta})=\hat{\delta}\). By condition (C2) we see that \[\rho\circ\sigma\left(\begin{bmatrix}r(u)\\ m(a,b,r(u))\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\kappa\circ\rho\left(\begin{bmatrix}r(u)\\ m(a,b,r(u))\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\kappa\circ m\left(\rho\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right),\rho\left(\begin{bmatrix}b\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right),\rho\left(\begin{bmatrix}r(u)\\ r(u)\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)=\kappa\circ\rho\left(\begin{bmatrix}r(u)\\ r(u)\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\kappa(v)\] since \(\rho\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\rho\left(\begin{bmatrix}b\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right)\). Using the isomorphism \(\eta\) we can then represent \[\sigma\left(\begin{bmatrix}r(u)\\ m(a,b,r(u))\end{bmatrix}/\Delta_{\alpha\alpha}\right)\longmapsto\left\langle\hat{\sigma}\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha 1}\right),\kappa(v)\right\rangle. \tag{17}\] Since \(v\in Q\) is idempotent, it follows that \(\hat{\sigma}\) is a homomorphism. We also see from Eq (15) that condition (C1) collapses to the condition that \(\hat{\sigma}\) respects the module structure on \(A(\alpha)/\Delta_{\alpha 1}\), which is already guaranteed since it is a homomorphism which fixes the zero. Surjectivity of \(\sigma\) and \(\phi\) guarantee by Eq (16) that \(\hat{\sigma}\) is also surjective. To show injectivity, assume \(\hat{\sigma}\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha 1}\right)=\hat{\sigma}\left(\begin{bmatrix}d\\ c\end{bmatrix}/\Delta_{\alpha 1}\right)\). Then by the representation in Eq (17) and injectivity of \(\sigma\) as an automorphism we conclude that \(\begin{bmatrix}r(u)\\ m(a,b,r(u))\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}r(u)\\ m(c,d,r(u))\end{bmatrix}/\Delta_{\alpha\alpha}\).
Then passing to the quotient \[\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha 1}=m\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha 1},\hat{\delta},\hat{\delta}\right)=m\left(\phi\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right),\phi\left(\begin{bmatrix}b\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right),\phi\left(\begin{bmatrix}r(u)\\ r(u)\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right)=\phi\left(\begin{bmatrix}r(u)\\ m(a,b,r(u))\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\phi\left(\begin{bmatrix}r(u)\\ m(c,d,r(u))\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}d\\ c\end{bmatrix}/\Delta_{\alpha 1};\] altogether, \(\hat{\sigma}\) is an automorphism. For the last step, define \(\psi:C(Q,A^{\alpha,\tau},*)\to\operatorname{Aut}A(\alpha)/\Delta_{\alpha 1}\times\operatorname{Aut}Q\) by \(\psi(\sigma,\kappa):=(\hat{\sigma},\kappa)\). We can use Eq (16) to show \(\psi\) is a homomorphism by observing for all \(x\in A(\alpha)/\Delta_{\alpha\alpha}\), \[(\hat{\gamma}\circ\hat{\sigma})\left(\phi(x)\right)=\hat{\gamma}\circ\phi\circ\sigma(x)=\phi\circ\gamma\circ\sigma(x)=\widehat{\gamma\circ\sigma}\left(\phi(x)\right).\] We show surjectivity of \(\psi\). Given \((\sigma,\kappa)\in\operatorname{Aut}A(\alpha)/\Delta_{\alpha 1}\times\operatorname{Aut}Q\) define \(\lambda:A(\alpha)/\Delta_{\alpha\alpha}\to A(\alpha)/\Delta_{\alpha\alpha}\) by the rule \(\lambda\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\eta^{-1}\left(\left\langle\sigma\left(\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha 1}\right),\kappa\circ\pi(b)\right\rangle\right)\). It is straightforward to see that \(\lambda\) is an automorphism and (C2) holds. Since \(\sigma\) fixes \(\hat{\delta}\) we see that \[\lambda\left(\begin{bmatrix}a\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\eta^{-1}\left(\left\langle\sigma\left(\hat{\delta}\right),\kappa\circ\pi(a)\right\rangle\right)=\eta^{-1}\left(\left\langle\hat{\delta},\kappa\circ\pi(a)\right\rangle\right)=\begin{bmatrix}l\circ\kappa\circ\pi(a)\\ l\circ\kappa\circ\pi(a)\end{bmatrix}/\Delta_{\alpha\alpha}\] which shows \(\lambda\) satisfies condition (C3); therefore, \((\lambda,\kappa)\) is a compatible pair. This establishes surjectivity of \(\psi\). To show injectivity of \(\psi\), suppose \(\psi(\sigma,\kappa)=(\operatorname{id},\operatorname{id})\); thus, \(\hat{\sigma}=\operatorname{id}\) and \(\kappa=\operatorname{id}\). Then by (C2), we have \(\rho\circ\sigma(x)=\kappa\circ\rho(x)=\rho(x)\). This implies \((\sigma(x),x)\in\hat{\alpha}/\Delta_{\alpha\alpha}\) for all \(x\in A(\alpha)/\Delta_{\alpha\alpha}\). We also see using Eq (16) that \(\phi(x)=\hat{\sigma}\circ\phi\left(x\right)=\phi\circ\sigma\left(x\right)\) which implies \((\sigma(x),x)\in\Delta_{\alpha 1}/\Delta_{\alpha\alpha}\); thus, we see that \((\sigma(x),x)\in\hat{\alpha}/\Delta_{\alpha\alpha}\wedge\Delta_{\alpha 1}/\Delta_{\alpha\alpha}\). By [13, Lem 3.5(3)] we have \(\Delta_{\alpha\alpha}=\Delta_{\alpha 1}\wedge\hat{\alpha}\) since \(\alpha\) is central; therefore, \(\sigma(x)=x\) for all \(x\in A(\alpha)/\Delta_{\alpha\alpha}\). We have shown \(\psi\) is an isomorphism which finishes the demonstration. **Example 3.2**.: We illustrate with a simple and familiar example. Consider an \(R\)-module \(M\) with submodule \(I\leq M\) and set \(Q:=M/I\). Let \(\pi:M\to Q\) denote the canonical surjection. Since a module is an abelian algebra, the extension is central.
Having fixed a lifting \(l:Q\to M\) of \(\pi\), \(I\otimes^{T}Q\) is the module on the set \(I\times Q\) with operations * \(\left\langle a,x\right\rangle+\left\langle b,y\right\rangle:=\left\langle a+b +T_{+}(x,y),x+y\right\rangle\), * \(r\cdot\left\langle a,x\right\rangle:=\left\langle r\cdot a+T_{r}(x),x\right\rangle\), where \(T=\{T_{+},T_{r}:r\in R\}\) is defined by * \(T_{+}(x,y)=l(x)+l(y)-l(x+y)\) for \(x,y\in Q\), * \(T_{r}(x)=r\cdot l(x)-l(r\cdot x)\) for \(x\in Q,r\in R\). If \(\alpha_{I}=\{(a,b):a-b\in I\}\) is the congruence determined by \(I\), then \(M(\alpha_{I})/\Delta_{\alpha_{I}\alpha_{I}}\approx I\) witnessed by the isomorphism \(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha_{I}\alpha_{I}}\mapsto b-a\) and \(M\approx I\otimes^{T}Q\) given by \(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha_{I}\alpha_{I}}\mapsto\left\langle b-a,\pi(a)\right\rangle\); in particular, \(A(\alpha_{I})/\Delta_{\alpha_{I}1}\approx I\). We saw in Eq (15) that the action terms correspond to the module action in \(I\); therefore, \(\operatorname{Der}(Q,I)=\operatorname{Hom}_{R}(Q,I)\). Since automorphisms are linear and preserve the zero of the module, Proposition 3.1 yields the compatible automorphisms \(C(Q,I)=\operatorname{Aut}I\times\operatorname{Aut}Q\). We now consider the case of the direct sum \(M=I\oplus Q\). Since the \(2\)-cocycle \(T=0\), we have \(\ker W_{T}=C(Q,I)=\operatorname{Aut}I\times\operatorname{Aut}Q\) and so by Theorem 1.2 we recover the semidirect decomposition \[\operatorname{Aut}_{I}M\approx\operatorname{Hom}_{R}(Q,I)\rtimes_{\gamma}( \operatorname{Aut}I\times\operatorname{Aut}Q) \tag{18}\] of the group of nonsingular transformations which have \(I\) as an invariant subspace. The action \(\gamma\) is defined by the lifting \(\hat{l}:\operatorname{Aut}I\times\operatorname{Aut}Q\to\operatorname{Aut}_{I}M\) of \(\psi\) given by \(\hat{l}(\sigma,\kappa)\left\langle a,x\right\rangle=\left\langle\sigma(a), \kappa(x)\right\rangle\). The identification of stabilizing automorphisms and derivations \(\phi\mapsto d_{\phi}\) is determined by \(\phi\left\langle a,x\right\rangle=\left\langle a+d_{\phi}(x),x\right\rangle\). The action is then calculated by \((\sigma,\kappa)*\phi\left\langle a,x\right\rangle=\hat{l}(\sigma,\kappa)\circ \phi\circ\hat{l}(\sigma,\kappa)^{-1}\left\langle a,x\right\rangle=\left\langle a +d_{\phi}^{(\sigma,\kappa)}(x),x\right\rangle\) where \(d_{\phi}^{(\sigma,\kappa)}(x):=\sigma\circ d_{\phi}(\kappa^{-1}(x))\). We can also consider the matrix representation of automorphisms \(\phi\in\operatorname{Aut}_{I}M\) afforded by the direct sum \(M=I\oplus Q\) in the form \[[\phi]=\begin{bmatrix}\sigma&D\\ 0&\kappa\end{bmatrix}\] where \(\sigma\in\operatorname{Aut}I,\kappa\in\operatorname{Aut}Q,D\in\operatorname{Hom}_{R} (Q,I)\). 
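For concreteness, here is a small numerical sanity check of our own (not part of the source): take \(R=\mathbb{Q}\), \(M=\mathbb{Q}^{2}\), \(I=\mathbb{Q}\oplus 0\) and \(Q\cong\mathbb{Q}\), so that \(\operatorname{Aut}I\) and \(\operatorname{Aut}Q\) are nonzero scalars and \(\operatorname{Hom}_{R}(Q,I)\cong\mathbb{Q}\). Matrix composition then agrees with the semidirect-product coordinates of Eq (18), as worked out in the coset decomposition below:
\[
\begin{bmatrix}2&3\\ 0&5\end{bmatrix}\begin{bmatrix}7&11\\ 0&13\end{bmatrix}=\begin{bmatrix}14&61\\ 0&65\end{bmatrix},
\qquad
\left\langle\tfrac{3}{5},(2,5)\right\rangle\left\langle\tfrac{11}{13},(7,13)\right\rangle=\left\langle\tfrac{3}{5}+2\cdot\tfrac{11}{13}\cdot\tfrac{1}{5},(14,65)\right\rangle=\left\langle\tfrac{61}{65},(14,65)\right\rangle,
\]
and indeed \(61\cdot 65^{-1}=\tfrac{61}{65}\) is exactly the coordinate \(D\circ\kappa^{-1}\) read off from the product matrix.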
In terms of the standard coset-decomposition in groups, the isomorphism in Eq (18) then takes the form \[\operatorname{Aut}_{I}M\ni\begin{bmatrix}\sigma&D\\ 0&\kappa\end{bmatrix} =\begin{bmatrix}\sigma&D\\ 0&\kappa\end{bmatrix}\begin{bmatrix}\sigma^{-1}&0\\ 0&\kappa^{-1}\end{bmatrix}\begin{bmatrix}\sigma&0\\ 0&\kappa\end{bmatrix}\] \[=\begin{bmatrix}I&D\circ\kappa^{-1}\\ 0&I\end{bmatrix}\begin{bmatrix}\sigma&0\\ 0&\kappa\end{bmatrix}\longmapsto\left\langle D\circ\kappa^{-1},(\sigma, \kappa)\right\rangle\in\operatorname{Hom}_{R}(Q,I)\rtimes_{\gamma}\left( \operatorname{Aut}I\times\operatorname{Aut}Q\right),\] and the group product of automorphisms is represented in two ways by \[\begin{bmatrix}\alpha\sigma&\alpha D+E\kappa\\ 0&\beta\kappa\end{bmatrix} =\begin{bmatrix}\alpha&E\\ 0&\beta\end{bmatrix}\begin{bmatrix}\sigma&D\\ 0&\kappa\end{bmatrix}\] \[\longmapsto\left\langle E\beta^{-1},(\alpha,\beta)\right\rangle \left\langle D\kappa^{-1},(\sigma,\kappa)\right\rangle=\left\langle E\beta^{- 1}+\alpha D\kappa^{-1}\beta^{-1},(\alpha\sigma,\beta\kappa)\right\rangle.\] ## 4. Demonstration for Theorem 1.4 Fix a variety \(\mathcal{V}\) in the signature \(\tau\) of \(R\)-modules expanded by multilinear operations named by \(F\). Let \(A\in\mathcal{V}\) and \(\pi:A\to Q\) a surjective homomorphism with \(\ker\pi\) determined by the ideal \(I\triangleleft A\). Take \(\phi\in\operatorname{Aut}_{I}A\). For any lifting \(l:Q\to A\) of \(\pi\), we can verify that \(\phi_{l}:=\pi\circ\phi\circ l:Q\to Q\) is an automorphism of \(Q\). The restriction of \(\phi\) to \(I\) is denoted by \(\phi|_{I}:I\to I\). As in Section 2, it follows that \(\psi:\operatorname{Aut}_{I}A\to\operatorname{Aut}I\times\operatorname{Aut}Q\) given by \(\psi(\phi):=(\phi|_{I},\phi_{l})\) is a group homomorphism independent of the lifting \(l\). For nonabelian extensions, the action terms are usually folded into the notion of a 2-cocycle (or factor system) and are affected by the equivalence determined by 2-coboundaries. The compatible automorphisms of the datum \((Q,I)\) are then given by the full direct product \(\operatorname{Aut}I\times\operatorname{Aut}Q\), but the previous "compatibility" notion is incorporated into the action of the full product on cohomology classes \(H^{2}_{\mathcal{V}}(Q,I)\) by the addition of a new twisting of the action terms in a 2-cocycle. The action of \(\operatorname{Aut}I\times\operatorname{Aut}Q\) on the \(\mathcal{V}\)-compatible 2-cocycles \(Z^{2}_{\mathcal{V}}(Q,I)\) is given by the following: for \((\sigma,\kappa)\in\operatorname{Aut}I\times\operatorname{Aut}Q\) and 2-cocycle \(T\in Z^{2}_{\mathcal{V}}(Q,I)\), define the 2-cocycle \[T^{(\sigma,\kappa)}=\left\{T^{(\sigma,\kappa)}_{+},T^{(\sigma,\kappa)}_{r},T^{ (\sigma,\kappa)}_{f},a(f,s)^{(\sigma,\kappa)}:r\in R,f\in F,s\in[\operatorname {ar}f]^{*}\right\}\] by the rules \[T^{(\sigma,\kappa)}_{+}(x,y) :=\sigma\circ T_{+}(\kappa^{-1}(x),\kappa^{-1}(y))\] \[T^{(\sigma,\kappa)}_{r}(x) :=\sigma\circ T_{r}(\kappa^{-1}(x))\] \[T^{(\sigma,\kappa)}_{f}(\vec{x}) :=\sigma\circ T_{f}(\kappa^{-1}(\vec{x}))\] \[a(f,s)^{(\sigma,\kappa)}(\vec{x},\vec{a}) :=\sigma\circ a(f,s)(\kappa^{-1}(\vec{x}),\sigma^{-1}(\vec{a})).\] Given \(t=s\in\operatorname{Id}\mathcal{V}\), because \(T\) is a \(\mathcal{V}\)-compatible 2-cocycle we have the corresponding identity \(t^{\partial,T}=s^{\partial,T}\) in the signature of the 2-cocycle such that the multi-sorted structure \(\left\langle I\cup Q,\tau^{I},\tau^{Q},T\right\rangle\vDash t^{\partial,T}=s ^{\partial,T}\). 
By applying \(\sigma\) and \(\kappa^{-1}\) to the interpretations of the terms \(t^{\partial,T}\) and \(s^{\partial,T}\) in a manner similar to that outlined in Section 2, we can conclude that \(\left\langle I\cup Q,\tau^{I},\tau^{Q},T^{(\sigma,\kappa)}\right\rangle\vDash t ^{\partial,T^{(\sigma,\kappa)}}=s^{\partial,T^{(\sigma,\kappa)}}\); therefore, \(T^{(\sigma,\kappa)}\in Z^{2}_{\mathcal{V}}(Q,I)\). An argument similar to that given in Section 2 again shows \([T]^{(\sigma,\kappa)}:=[T^{(\sigma,\kappa)}]\) yields a well-defined action. Define \(W_{T}(\sigma,\kappa):=[T-T^{(\sigma,\kappa)}]\) for \((\sigma,\kappa)\in\operatorname{Aut}I\times\operatorname{Aut}Q\). For \([T]\in H^{2}_{\mathcal{V}}(Q,I)\) and \((\sigma,\kappa)\in\operatorname{Aut}I\times\operatorname{Aut}Q\), it may be that \([T-T^{(\sigma,\kappa)}]\not\in H^{2}_{\mathcal{V}}(Q,I)\) since the nonabelian cohomology classes may no longer be closed under the natural addition induced by \(I\). One way to remedy this defect occurs in the original group case, where the nonabelian cohomology classes realizing a fixed action can be identified with the cohomology classes realizing a related action on the center of the kernel. This identification is then incorporated into the definition of the Wells derivation [12, 10]; unfortunately, an analogue of this comparison for our general varieties is not established in [13], and so we resort to the convenient stopgap of taking the free abelian group generated by the cohomology classes as a formal codomain. Proof.: (proof of Theorem 1.4) As in the proof of Theorem 1.2, we see that the kernel of \(\psi\) is given by \(\ker\psi=\{\phi\in\operatorname{Aut}_{I}A:\phi|_{I}=\operatorname{id},\pi=\pi \circ\phi\}\approx\operatorname{Der}(Q,I)\). We now demonstrate exactness at \(\operatorname{Aut}I\times\operatorname{Aut}Q\). Having fixed a lifting \(l:Q\to A\), we again see that \(\phi\circ l\circ\phi_{l}^{-1}:Q\to A\) is another lifting of \(\pi:A\to Q\), and by the definition of the \(2\)-cocycle determined by a lifting we see that \(T^{(\phi|_{I},\phi_{l})}\) is the \(2\)-cocycle defined by the lifting \(\phi\circ l\circ\phi_{l}^{-1}\). Then by [13] we have \(T\sim T^{(\phi|_{I},\phi_{l})}\), which yields \(W_{T}\circ\psi(\phi)=0\); therefore, \(\operatorname{im}\psi\leq\ker W_{T}\). We now assume \((\sigma,\kappa)\in\ker W_{T}\); equivalently, \([T]=[T^{(\sigma,\kappa)}]\) with \((\sigma,\kappa)\in\operatorname{Aut}I\times\operatorname{Aut}Q\). Then there exists a map \(h:Q\to I\) such that \(h(0)=0\) and after pre-composing with \(\kappa^{-1}\) we have the following: 1. \(T_{+}(\kappa(x),\kappa(y))-\sigma\circ T_{+}(x,y)=h(\kappa(x))+h(\kappa(y))-h( \kappa(x)+\kappa(y))\); 2. \(T_{r}(\kappa(x))-\sigma\circ T_{r}(x)=r\cdot h(\kappa(x))-h(r\cdot\kappa(x))\) for each \(r\in R\); 3. for each \(f\in F\), \[T_{f}(\kappa(\vec{x}))-\sigma\circ T_{f}(\vec{x})=\sum_{s\in[\operatorname{ ar}f]^{*}}(-1)^{1+|s|}a(f,s)(\kappa(\vec{x}),h(\kappa(\vec{x})))+(-1)^{1+n}f^{I}(h( \kappa(\vec{x})))-h(f^{Q}(\kappa(\vec{x})));\] 4. 
for each \(f\in F\) and \(s\in[\operatorname{ar}f]^{*}\), \[a(f,s)(\kappa(\vec{x}),\sigma(\vec{a}))-\sigma\circ a(f,s)(\vec{x},\vec{a}) =\sum_{s\subseteq r\subseteq[\operatorname{ar}f]}(-1)^{1+|r|-|s|}a(f,r)\left(\kappa(\vec{x}),h(\kappa(\vec{x}))\right)_{s}[\sigma(\vec{a})]+(-1)^{1+|\operatorname{ar}f|-|s|}f(h(\kappa(\vec{x})))_{s}[\sigma(\vec{a})].\] At this point, we explicitly use the isomorphism \(A\approx I\rtimes_{T}Q\) from Wires [13] given by \(A\ni x\longmapsto\langle x-l\circ\pi(x),\pi(x)\rangle\). Define \(\hat{l}(\sigma,\kappa):I\rtimes_{T}Q\to I\rtimes_{T}Q\) by the rule \(\hat{l}(\sigma,\kappa)\left\langle a,x\right\rangle:=\langle\sigma(a)-h( \kappa(x)),\kappa(x)\rangle\). To show \(\hat{l}(\sigma,\kappa)\) is a homomorphism, take a multilinear operation \(f\in F\) with \(n=\operatorname{ar}f\) and \(\vec{a}\in I^{n},\vec{x}\in Q^{n}\). Using (E3) and (E4), we calculate \[\begin{split} F_{f}&\left(\hat{l}(\sigma,\kappa)\left\langle a_{1},x_{1}\right\rangle,\ldots,\hat{l}(\sigma,\kappa)\left\langle a_{n},x_{n}\right\rangle\right)\\ &=F_{f}\left(\left\langle\sigma(a_{1})-h(\kappa(x_{1})),\kappa(x_{1})\right\rangle,\ldots,\left\langle\sigma(a_{n})-h(\kappa(x_{n})),\kappa(x_{n})\right\rangle\right)\\ &=\left\langle f^{I}\left(\sigma(\vec{a})-h(\kappa(\vec{x}))\right)+\sum_{s\in[n]^{*}}a(f,s)\left(\kappa(\vec{x}),\sigma(\vec{a})-h(\kappa(\vec{x}))\right)+T_{f}(\kappa(\vec{x})),\,f^{Q}(\kappa(\vec{x}))\right\rangle\\ &=\left\langle f^{I}(\sigma(\vec{a}))+\sum_{\emptyset\neq t\subseteq[n]}(-1)^{|t|}f^{I}\left(\sigma(\vec{a})\right)_{t}[h(\kappa(\vec{x}))]+\sum_{s\in[n]^{*}}\sum_{r\subseteq s}(-1)^{|r|}a(f,s)\left(\kappa(\vec{x}),\sigma(\vec{a})\right)_{r}[h(\kappa(\vec{x}))]+T_{f}(\kappa(\vec{x})),\,\kappa\left(f^{Q}(\vec{x})\right)\right\rangle\\ &=\left\langle f^{I}(\sigma(\vec{a}))+\sum_{u\in[n]^{*}}(-1)^{n-|u|}f^{I}\left(h(\kappa(\vec{x}))\right)_{u}[\sigma(\vec{a})]+(-1)^{n}f^{I}\left(h(\kappa(\vec{x}))\right)\right.\\ &\qquad\left.+\sum_{u\in[n]^{*}}\sum_{u\subseteq v\in[n]^{*}}(-1)^{|v|-|u|}a(f,v)(\kappa(\vec{x}),h(\kappa(\vec{x})))_{u}[\sigma(\vec{a})]+T_{f}(\kappa(\vec{x})),\,\kappa\left(f^{Q}(\vec{x})\right)\right\rangle\\ &=\left\langle\sigma\circ f^{I}(\vec{a})+\sum_{s\in[n]^{*}}\sigma\circ a(f,s)(\vec{x},\vec{a})+\sigma\circ T_{f}(\vec{x})-h\left(\kappa(f^{Q}(\vec{x}))\right),\,\kappa\left(f^{Q}(\vec{x})\right)\right\rangle\\ &=\hat{l}(\sigma,\kappa)\left\langle f^{I}\left(\vec{a}\right)+\sum_{s\in[n]^{*}}a(f,s)(\vec{x},\vec{a})+T_{f}(\vec{x}),\,f^{Q}(\vec{x})\right\rangle\\ &=\hat{l}(\sigma,\kappa)\left(F_{f}\left(\left\langle a_{1},x_{1}\right\rangle,\ldots,\left\langle a_{n},x_{n}\right\rangle\right)\right).\end{split}\] For the module operations, using (E1) and (E2) we have \[\begin{split} r\cdot\hat{l}(\sigma,\kappa)\left\langle a,x\right\rangle&+\hat{l}(\sigma,\kappa)\left\langle b,y\right\rangle\\ &=\left\langle r\cdot\sigma(a)-r\cdot h(\kappa(x))+T_{r}(\kappa(x))+\sigma(b)-h(\kappa(y))+T_{+}(r\cdot\kappa(x),\kappa(y)),\,r\cdot\kappa(x)+\kappa(y)\right\rangle\\ &=\left\langle\sigma(r\cdot a)+\sigma\circ T_{r}(x)+\sigma(b)+\sigma\circ T_{+}(r\cdot x,y)-h(\kappa(r\cdot x+y)),\,\kappa(r\cdot x+y)\right\rangle\\ &=\hat{l}(\sigma,\kappa)\left\langle r\cdot a+T_{r}(x)+b+T_{+}(r\cdot x,y),\,r\cdot x+y\right\rangle\\ &=\hat{l}(\sigma,\kappa)\left(r\cdot\left\langle a,x\right\rangle+\left\langle b,y\right\rangle\right).\end{split}\] We have shown \(\hat{l}(\sigma,\kappa)\) is a homomorphism. 
It is straightforward to see that \(\hat{l}(\sigma,\kappa)\) is bijective such that \(\hat{l}(\sigma,\kappa)(I\times 0)=I\times 0\); therefore, \(\hat{l}(\sigma,\kappa)\in\operatorname{Aut}_{I\times 0}I\rtimes_{T}Q\). The extension \(p_{2}:I\rtimes_{T}Q\to Q\) given by second-projection has the lifting \(r:Q\to I\rtimes_{T}Q\) given by \(r(x)=\langle 0,x\rangle\). Then we easily see that \(\hat{l}(\sigma,\kappa)_{r}=p_{2}\circ\hat{l}(\sigma,\kappa)\circ r=\kappa\). If we allow for the identification of \(I\) with \(I\times 0\), then \(\hat{l}(\sigma,\kappa)|_{I\times 0}=\sigma\); altogether, we see that \(\psi\left(\hat{l}(\sigma,\kappa)\right)=(\sigma,\kappa)\), and so we have demonstrated exactness at \(\operatorname{Aut}I\times\operatorname{Aut}Q\) and finished the proof of the theorem. **Acknowledgments.** The research in this manuscript was supported by NSF China Grant #12071374.
2302.06199
COACH: Cooperative Robot Teaching
Knowledge and skills can transfer from human teachers to human students. However, such direct transfer is often not scalable for physical tasks, as they require one-to-one interaction, and human teachers are not available in sufficient numbers. Machine learning enables robots to become experts and play the role of teachers to help in this situation. In this work, we formalize cooperative robot teaching as a Markov game, consisting of four key elements: the target task, the student model, the teacher model, and the interactive teaching-learning process. Under a moderate assumption, the Markov game reduces to a partially observable Markov decision process, with an efficient approximate solution. We illustrate our approach on two cooperative tasks, one in a simulated video game and one with a real robot.
Cunjun Yu, Yiqing Xu, Linfeng Li, David Hsu
2023-02-13T09:15:45Z
http://arxiv.org/abs/2302.06199v1
# COACH: Cooperative Robot Teaching ###### Abstract Knowledge and skills can transfer from human teachers to human students. However, such direct transfer is often not scalable for _physical_ tasks, as they require one-to-one interaction, and human teachers are not available in sufficient numbers. Machine learning enables robots to become experts and play the role of teachers to help in this situation. In this work, we formalize _cooperative robot teaching_ as a Markov game, consisting of four key elements: the target task, the student model, the teacher model, and the interactive teaching-learning process. Under a moderate assumption, the Markov game reduces to a partially observable Markov decision process, with an efficient approximate solution. We illustrate our approach on two cooperative tasks, one in a simulated video game and one with a real robot. Robot Teaching, Human-Robot Interaction ## 1 Introduction How do we teach humans to re-orientate a table jointly or play tennis? Humans often learn by practicing the skills with teachers or partners [1; 2; 3]. This mode of learning is, however, difficult to scale up, as it requires one-to-one interaction and there are not sufficient human teachers [4]. With advances in machine learning, robots can not only master complex tasks [5; 6; 7] but also collaborate with humans and adapt to human behaviors [8; 9; 10]. In this work, we aim to create _robot teachers_ for physical tasks, thus scaling up teaching and providing learning opportunities to many even when human teachers are not available. Specifically, we propose _Cooperative rObot teACHing_ (COACH), a robot teaching framework to teach humans cooperative skills for two-player physical tasks through interaction. We assume the robot teacher has full knowledge of the task, specifically, a set of policies to execute the task. The objective is to teach the student a policy as fast as possible. See Fig. 1 for an illustration. COACH treats the teaching task as a two-player Markov game for a _target task_. One player is the robot teacher, and the other is the human student. Under a suitable student learning model, COACH transforms the game into a _partially observable Markov decision process_ (POMDP). The POMDP solution enables the robot teacher to adapt to the different behaviors, according to the history of interactions. One key challenge of COACH is to represent the student's knowledge of the target skills and learning behaviors. First, we leverage _item response theory_ (IRT), a well-established framework for educational assessment [11]. IRT provides simplified parametric models that capture the student's knowledge level with respect to the task difficulty in a small number of parameters. COACH treats these parameters as latent variables in the teaching POMDP and learns them from human-robot interaction data by solving the POMDP. Next, to teach complex skills, we draw insights from student-centered learning [12] and human-robot cross-training [13]. We decompose a complex target skill into a set of sub-skills, based on the student's potential roles in the target task. With this compact, decomposed skill representation, we naturally obtain a _partially assistive_ robot teaching curriculum to facilitate learning: the human student learns the sub-skills one at a time, and the robot teacher assists with the sub-skills not yet learned, to complete the target task. 
While the robot assists the human in the teaching task, its behavior differs from those in common collaborative human-robot interaction tasks [14; 15]. There, the primary objective is to complete the task, and the robot is fully assistive: if the human does not perform, the robot then tries to complete the task on its own, if possible. In the teaching task, the robot is partially assistive and usually avoids assisting with the specific sub-skill to be learned, in order to encourage student exploration and learning. As a first attempt, we conducted human-subject experiments on two challenging human-robot collaboration tasks, Overcooked-AI and Cooperative Ball Maze (Fig. 2). Our results show that COACH enables the robot teacher to model and reason over adaptive human students in cooperative teaching. Moreover, a fully-assistive teacher may impede student learning, whereas a partially assistive teacher indeed motivates the student to explore new strategies. ## 2 Related Work **Assistance in HRI.** One major aspect of HRI is how the robot can assist a human whose objective is hidden [14; 15]. The objective of the robot is to infer the human's intention and learn to assist accordingly. In its simplest form, the action selection and human intention inference are separated [16; 17; 18]. A decision-theoretic framework, the assistant POMDP, is developed to capture the general notion of assistance in HRI [19]. The robot integrates the reward learning and control modules to perform sophisticated reasoning over human feedback [20; 21]. However, these two approaches neglect human learning/adaptation and may hinder humans from improving their skills. Our work focuses on how to generate behaviors that facilitate human learning during interactions. **Collaboration in HRI.** Another important aspect of HRI is to model interactions as collaboration between the human and the robot [22], in which the human and the robot share the same objective. However, the jointly optimal policy, e.g., rotating the table counter-clockwise, is unknown to both agents in the first place. Their interaction is mutually adaptive [23; 24; 25]. In particular, as pointed out in [10], if one side is only aware of partial information about the task, the optimal policy pair naturally induces the behavior of active teaching, active learning, and efficient communication between the robot and the human. In this work, we focus on the setting in which the robot teacher knows the policy to teach and must decide how to carry out active teaching. **Teaching Algorithm for Algorithms.** Teaching for algorithms aims to facilitate an algorithm's learning by choosing or generating training samples. Various teaching techniques, including curriculum learning [26] and machine teaching [27; 28; 29; 30; 31], have been effectively applied to supervised and semi-supervised learning problems. Similar ideas have been extended to train reinforcement learning agents to learn complex skills, e.g., generating training environments for reinforcement learning [32; 33; 34], choosing demonstrations [35], or learning to decompose the skill [36; 37]. Teaching in cooperative multi-agent RL allows agents to simultaneously become teachers and students for each other [3; 38; 39]. However, such approaches generally require considerably more training data and, to some extent, control over the learner's learning behavior. Transfer of these approaches to human learning is promising but difficult. 
**Teaching Algorithm for Human.** Despite the aforementioned practical challenges, some algorithms have been successfully deployed for human learning. Attempts at teaching crowds classification tasks or concepts have proved successful [40; 41; 42; 43; 44]. While humans can learn concepts from visual or verbal examples, complex skills such as motor control can hardly be mastered through these signals alone. Recently, skill discovery techniques in reinforcement learning have been introduced to generate a curriculum based on skill decomposition and to help humans learn motor control skills [45]. That work focuses on how to adaptively decompose the skill into learnable sub-skills for humans to practice on their own, and achieves promising results. Here, we seek to automate the process of teaching humans to cooperate in a physical task, e.g., table co-reorientation, and provide a framework for this teaching mode.

Figure 1: Cooperative robot teaching. In the target task (left), two human players jointly reorient a table, for example. In the corresponding teaching task (right), the robot teacher interacts with the human student and teaches cooperative skills so that the student learns to cooperate with partners with varying capabilities or preferences in the target task.

## 3 Cooperative Robot Teaching

We identify four key elements in COACH: (1) target task, (2) student, (3) teacher, and (4) interactive teaching-learning. **Target task.** In this work, we focus on teaching in _a two-player cooperative task_, which we call the _target task_. **Definition 1.**_The target task is a two-player cooperative Markov game \(\mathcal{M}=(S,A^{1},A^{2},T,R,\gamma)\) between two agents, \(1\) and \(2\), where_ * \(S\) _is a set of target task states;_ * \(A^{1}\) _is a set of actions for agent_ \(1\)_;_ * \(A^{2}\) _is a set of actions for agent_ \(2\)_;_ * \(T(s^{\prime}|s,a^{1},a^{2})\) _is a conditional probability function on the next target task state_ \(s^{\prime}\in S\)_, given the current state_ \(s\in S\) _and both agents' actions_ \(a^{1}\in A^{1}\) _and_ \(a^{2}\in A^{2}\)_;_ * \(R(s,a^{1},a^{2})\) _is a target task reward function that maps the target task state and players' actions to a real number;_ * \(\gamma\) _is a discount factor._ At each step \(t\), agents \(1\) and \(2\) both observe the current task state \(s_{t}\) and select their respective actions \(a^{1}_{t}\sim\pi^{1}\) and \(a^{2}_{t}\sim\pi^{2}\), where \(\pi^{i}\) is the policy of agent \(i\), \(i=1,2\). They then receive a joint reward \(r_{t}=R(s_{t},a^{1}_{t},a^{2}_{t})\). The next state is updated as \(s_{t+1}\sim T(s_{t+1}\mid s_{t},a^{1}_{t},a^{2}_{t})\). Given the definition of the target task, we first address how to represent knowledge and skills. In this work, we choose to represent a _skill_ by a policy \(\phi^{*}\) for the target task. For example, in the table co-reorientation task, the agent needs to learn to deal with either stubborn or adaptive partners. We recognize that there are other ways to represent knowledge and skills, such as a set of demonstrations or the ground-truth reward function. However, such representations are only indirectly linked to the skill's performance; therefore, evaluating proficiency is more obscure. We choose a known policy to be taught as the representation since it can be directly optimized over and evaluated. **Student.** The student policy is non-stationary since it will improve along with teaching. 
We model this evolving behavior with a tuple \((\phi,U)\) of a student policy \(\phi\) and an updating function \(U\). The student policy represents the student's knowledge state. It takes the current target task state \(s\) as input and outputs the student's action. The updating function \(U\) models how the student changes its policy after each teaching step. **Teacher.** We define the teacher as a knowledgeable agent (expert) who knows a set of policies \(\Phi^{*}\) for a target task. The teacher aims to acquire a teaching policy \(\bar{\pi}\) that can teach any \(\phi^{*}_{i}\in\Phi^{*}\) to the student effectively. In this general setting, the choice of the student policy to teach \(\phi^{*}\) depends on the capability, preference, and current knowledge level of the student. A principled approach to selecting the policy to teach needs to consider the student's preference, his/her update model for the knowledge level, and an estimate of his/her current capability. In this paper, we assume that we have an oracle to choose the policy to teach \(\phi^{*}\in\Phi^{*}\), such that this policy \(\phi^{*}\) matches the preference of the student. The teacher can be described by a tuple of a target task policy and the corresponding teaching policy, \((\phi^{*},\bar{\pi})\). **Interactive teaching-learning.** In the target task, the teacher knows the target task policy \(\phi^{*}\) while the student does not. The teacher's goal is to act in the most informative way so that the student learns \(\phi^{*}\) fastest. The choice of \(\phi^{*}\) should account for the student's preferences. To embed the objective of teaching and distinguish it from the _target task_, we define the _teaching task_ as follows: **Definition 2.**_Given a target task \(\mathcal{M}=(S,A^{1},A^{2},T,R,\gamma)\), a student \((\phi,U)\), and a policy to teach \(\phi^{*}\) for the target task, the teaching task is a POMDP \(\bar{\mathcal{M}}=(\bar{S},\bar{A},\bar{T},\bar{O},\bar{Z},\bar{R},\bar{\gamma})\) for the teacher, where_ * \(\bar{S}\) _is a set of teaching states:_ \(\bar{s}=(s,\phi)\)_, for target task state_ \(s\in S\) _and student policy_ \(\phi\)_;_ * \(\bar{A}\) _is a set of actions:_ \(\bar{A}=A^{1}\cup A^{2}\)_;_ * \(\bar{T}(\bar{s}^{\prime}|\,\bar{s},\bar{a})\) _is a conditional probability function on the next state_ \(\bar{s}^{\prime}\in\bar{S}\)_, given the current state_ \(\bar{s}\in\bar{S}\) _and teacher's action_ \(\bar{a}\in\bar{A}\)_;_ * \(\bar{O}\) _is a set of observations:_ \(\bar{o}=(s,r)\)_, for target task state_ \(s\in S\) _and target task reward_ \(r\)_;_ * \(\bar{Z}(\bar{o}\,|\,\bar{a},\bar{s})\) _is a conditional probability function on the observation_ \(\bar{o}\in\bar{O}\)_, given teacher's action_ \(\bar{a}\in\bar{A}\) _and current state_ \(\bar{s}\in\bar{S}\)_;_ * \(\bar{R}(\bar{s},\bar{a},\bar{s}^{\prime})\) _is a teaching reward function that maps current state_ \(\bar{s}\in\bar{S}\)_, teacher's action_ \(\bar{a}\in\bar{A}\)_, and next state_ \(\bar{s}^{\prime}\in\bar{S}\) _to a real number measuring the effectiveness of teaching;_ * \(\bar{\gamma}\) _is a discount factor._ The objective of the teaching task is to derive a teaching policy \(\bar{\pi}\) that enables the student to learn \(\phi^{*}\) for the target task as fast as possible. More specifically, the teacher can influence the student through interactive actions \(\bar{a}\in\bar{A}\). First, we define the learning behavior of the student; we assume that humans take the interaction history into account. 
The observation history is \(h_{t}=[(s_{0},r_{0}),...,(s_{t},r_{t})]\). Thus, the student updates \(\phi\) with an arbitrary iterative function conditioned on the interaction history: \(\phi_{t+1}=U(\phi_{t},h_{t})\). Next, we give the definition of the reward function. To incentivize the teacher to speed up the teaching process, we introduce a step-wise teaching cost \(c_{t}=C(s_{t},\bar{a}_{t})\) that penalizes unnecessary teaching actions. To this end, we define the reward function as \[\bar{R}(\bar{s},\bar{a}_{t},\bar{s}^{\prime};D,C,\phi^{*},\omega)=D(\phi_{t}, \phi^{*})-D(\phi_{t+1},\phi^{*})-\omega C(s_{t},\bar{a}_{t}), \tag{1}\] where \(\bar{s}=(s_{t},\phi_{t}),\ \bar{s}^{\prime}=(s_{t+1},\phi_{t+1})\), \(\omega\) is a weighting factor that trades off teaching cost against teaching efficiency, and \(D\) can be any reasonable distance measure between two policies, e.g., one based on the initial-state value in the target task. Lastly, we introduce our choice of the teaching policy \(\bar{\pi}\). To devise a student-aware teaching strategy, apart from the current state \(s_{t}\) and the target policy \(\phi^{*}\), our \(\bar{\pi}\) also takes the observation history as input. The action of the teacher can be sampled from the policy, i.e., \(\bar{a}_{t}\sim\bar{\pi}(\bar{a}_{t}\ |\ h_{t-1},s_{t},\phi^{*})\). The solution to the POMDP \(\bar{\mathcal{M}}\) is a teaching policy \(\bar{\pi}\) that maximizes the expected sum of rewards \(\mathbb{E}_{\bar{a}_{t}\sim\bar{\pi}}[\sum_{t=0}^{\infty}\bar{\gamma}^{t}\bar {R}(\bar{s},\bar{a}_{t},\bar{s}^{\prime})]\). ## 4 Method In this section, we provide a solution that grounds all the elements in the conceptual framework of COACH. The main spirit of our solution is to parameterize the student's knowledge state with IRT and to decompose complex tasks into a set of role-based independent sub-skills. The action space in Definition 2 allows the teacher to take all possible actions in the two-player task; thus, the teacher can switch roles freely. For example, the teacher may serve as either follower or leader in the classic leader-follower model [46; 47]. This enables easier evaluation of the student's proficiencies and provides the basis for the partially assistive interaction mode. Our solution is summarized in Algorithm 1. To begin, we define the action space \(\bar{A}\). ```
Require: maximum interactions L, predefined warm-up interactions N
 1: for each k ∈ Ā do: randomly initialize λ, α_t, β, and set 𝕏 = {}
 2:   for i = 1, 2, ..., N do:
 3:     𝕏.add(v_i)        // warm-up observations for sub-skill k
 4:   end for
 5: for i = 1, 2, ..., L do:
 6:   for each k ∈ Ā do:
 7:     learn λ and α_t, β from 𝕏
 8:   end for
 9:   k ← action selection from λ and α_t, β   // Eq. (2)
10:   v_i ← performance measure from interactions
11:   𝕏.add(v_i)
12: end for
``` **Algorithm 1** Approximated Solution to the Teaching Task ### Action The action space is constructed through sub-skill decomposition. Sub-skill decomposition is well-studied for single-agent tasks [48; 49; 50]. However, extending the same idea to a multi-agent setting is still challenging since task completion relies on the interaction among multiple parties. We observe that in a multi-agent game, the task naturally comprises several roles, of which each agent takes a subset. The well-established leader-follower model is a particular choice of role-based skill decomposition [46; 47; 51; 52]. 
Therefore, in this work we propose to decompose skills based on role allocation. We divide the skill into \(K\) independent teachable sub-skills according to the student's potential roles in the task. The teacher's action space \(\bar{A}=\{k:k\in\mathbb{Z},\ 0\leq k<K\}\) consists of teaching each of the sub-skills. Such a decomposition of skills naturally leads to a partially assistive mode of interaction. ### State The state space is constructed with Item Response Theory (IRT) [11]. IRT provides a parametric form to represent students' skill levels. Given the limited number of interactions, we adopt the simplest form, the one-parameter logistic model (1PL), to model human skills. In the 1PL model, each sub-skill \(k\in\bar{A}\) is assigned a parameter \(\beta^{k}\) representing its difficulty, and a parameter \(\alpha^{k}\), called the _proficiency_, representing a student's knowledge state. The probability that a student has mastered sub-skill \(k\) is given by \(P(k):=\sigma(\alpha^{k}-\beta^{k})\), where \(\sigma\) is the sigmoid function. Hence, instead of representing the state with the student's policy \(\phi\), we use \((\alpha,\beta)^{K}\) to represent the hidden state. That is, \(\bar{s}=(s,(\alpha,\beta)^{K})\) for \(\bar{s}\in\bar{S}\), where the component \((\alpha,\beta)^{K}\) is hidden. For each student and each \(k\in\bar{A}\), we assume that \(\alpha\) changes over time while \(\beta\) does not. ### Transition The transition model consists of two main parts, the target task transition model \(T\) and the student's update function \(U\). While the former is known to the teacher, we need to make assumptions about the latter. Since we define the state space over the student's proficiency \(\alpha\) in Sec. 4.2, the transition model is also constructed over the proficiency. Following previous work on online estimation of student proficiency [53; 54], for each sub-skill we model the student's proficiency over time as a Wiener process: \(U(\alpha_{t+\Delta t}|\alpha_{t})\propto\exp\left(-\frac{(\alpha_{t+\Delta t}- \alpha_{t})^{2}}{2\lambda\Delta t}\right),\) where \(\Delta t\) refers to the step interval and \(\lambda\) is a parameter controlling the "smoothness" with which the student's proficiency varies over time. For each student and for each \(k\in\bar{A}\), we assume \(\lambda\) does not change over time and is learned separately for each sub-skill. To this end, we construct the transition model in the POMDP as \(\bar{T}=\{T,U\}\), where \(T\) is the transition function in the target task. ### Observation The observation is composed of the target task state and the reward received, \((s,r)\). Recall that in Sec. 4.1 we define the action as choosing one sub-skill to train the student, which is a macro-action. For teaching sub-skill \(k\), we redefine the observation as the ratio between the target task rewards achieved by the student's current policy and by the policy to be taught: \(v:=\frac{R(s,\bar{a},a^{\bar{s}})}{R(s,\bar{a},a^{\star})},\) where \(a^{\star}\) is the action generated by the policy to be taught \(\phi^{\star}\) and \(a^{\bar{s}}\) is the action from the student's policy given the same target task state \(s\). Since all the sub-skills are treated equally, we will omit the index \(k\) for simplicity in the following discussion. As a result, \(\bar{o}=(s,v)\) for \(\bar{o}\in\bar{O}\). Unlike the binary response in conventional knowledge tracing, the response \(v\) we have is continuous, and we assume the teacher teaches only one sub-skill at a time. 
Thus, we use the continuous Bernoulli distribution to construct the observation model: \(Z(v|P(k))\propto P(k)^{v}(1-P(k))^{1-v}\), up to its normalizing constant, where \(k\) is the sub-skill being taught when \(v\) is observed. As a result, the observation model can be defined as \(\bar{Z}=\{I,Z\}\), where \(I\) is an identity mapping for the observable target task state, \(I(s)=s\). ### Reward The distance between the student's policy and the policy to be taught can be represented using \(P(k)\). We represent the distance as the average of one minus the mastery probability of each sub-skill: \(D(\phi,\phi^{*})=\frac{1}{K}\sum_{k=0}^{K-1}\left(1-P(k)\right)\). There are other ways to specify the goal according to the decomposition of the skill, e.g., taking the weakest sub-skill or multiplying the probabilities [55]. We choose the sum due to our independence assumption on sub-skills. In this work, we assume the cost is uniform; thus, given a finite horizon of interactions, maximizing the reward function defined in Equation (1) is equivalent to maximizing \(\bar{R}(\bar{s},\bar{a}_{t},\bar{s}^{\prime})=\frac{1}{K}\sum_{k=0}^{K-1}\left(P_{t+1}(k)-P_{t}(k)\right)\), where \(P_{t}(k)=\sigma(\alpha_{t}^{k}-\beta^{k})\). ### Model Learning and Decision Making We use the student's performance during the interactions to estimate both \(\lambda\) and \(\alpha_{t},\beta\). Parameters for each sub-skill are learned separately; thus, we omit \(k\) for simplicity. Let \(v_{1:t}\) denote the sequence of the student's performance measures against the policy to be taught up to step \(t\). We have the posterior \(P(\lambda,\alpha_{t},\beta|v_{1:t})\propto P(v_{1:t}|\lambda,\alpha_{t},\beta) P(\lambda,\alpha_{t},\beta)\). The conditional probability of the observations and the current proficiency can be obtained by integrating out all the previous proficiencies. The likelihood can be approximated through \(P(v_{1:t}|\lambda,\alpha_{t},\beta)\approx\prod_{t^{\prime}=1}^{t}\int P(v_{t^ {\prime}}|\ \lambda,\alpha_{t^{\prime}},\beta)U(\alpha_{t^{\prime}}|\alpha_{t}) \mathrm{d}\alpha_{t^{\prime}}\). An approximation of the log posterior over the student's current proficiency given previous responses can be derived to learn the parameters \(\lambda\) and \(\alpha_{t}\), \(\beta\). Following [53; 54], we employ maximum a posteriori (MAP) estimation to learn these parameters. Given the estimate of the current state from the past history, we use a one-step look-ahead. Such a choice reduces the impact of inaccuracies in the learned model and yields a more efficient solution than a full-blown POMDP solution. At timestep \(t\), the teacher's action is given as \[\bar{a}_{t+1}=\operatorname*{arg\,max}_{k\in\bar{A}}\int U(\alpha_{t+1}^{k}| \alpha_{t}^{k})P_{t+1}(k)\;\mathrm{d}\alpha_{t+1}^{k}-P_{t}(k). \tag{2}\] In practice, the student is asked to perform each sub-skill for a few interactions to initialize the parameters. ### Training on Sub-skills Our overall strategy for training students on each sub-skill is to diversify the scenarios the student encounters during training. Training students on sub-skills naturally leads to a partially assistive partner on unlearned sub-skills, which allows the student to explore the sub-skill freely. We adopt an intuitive assumption: _an agent learns cooperation better with a diverse group of partners_. Such a teaching strategy is effective when dealing with synthetic students [56; 57]. The student could learn from a diverse set of partially assistive partners or learn to cope with them by acquiring new skills. 
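To make the selection rule in Eq. (2) concrete, below is a minimal Python sketch of the teacher's per-step decision. It is our own illustration, not the authors' released code: it assumes point estimates of \((\lambda,\alpha_{t},\beta)\) per sub-skill in place of the full MAP procedure of [53; 54], and approximates the integral over \(\alpha_{t+1}\) by Monte-Carlo sampling from the Wiener-process transition.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mastery(alpha, beta):
    # 1PL model: P(k) = sigma(alpha^k - beta^k).
    return sigmoid(alpha - beta)

def expected_gain(alpha_t, beta, lam, dt=1.0, n_samples=10000, seed=0):
    # One-step look-ahead objective of Eq. (2):
    # E[P_{t+1}(k)] - P_t(k), where alpha_{t+1} ~ N(alpha_t, lam * dt)
    # under the Wiener-process transition model of Sec. 4.3.
    rng = np.random.default_rng(seed)
    alpha_next = alpha_t + np.sqrt(lam * dt) * rng.standard_normal(n_samples)
    return mastery(alpha_next, beta).mean() - mastery(alpha_t, beta)

def select_subskill(params):
    # params: one (alpha_t, beta, lam) tuple per sub-skill k, with lam
    # learned separately for each sub-skill from the interaction history.
    gains = [expected_gain(a, b, l) for (a, b, l) in params]
    return int(np.argmax(gains))

# Two sub-skills: the student is near mastery of sub-skill 0 and far
# from mastery of sub-skill 1; the rule picks the sub-skill with the
# larger expected one-step improvement.
print(select_subskill([(0.2, 0.0, 0.5), (-2.0, 0.0, 0.5)]))  # -> 1
```

In this simplified form, the rule favors sub-skills where the student still has room to improve, which matches the intent of the reward in Sec. 4.5.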
## 5 Experiments

We carried out two human-subject experiments to demonstrate how COACH works, one in simulation (Overcooked-AI [58]) and the other with a real robot (Cooperative Ball Maze). Experiment setups are shown in Figure 2. We investigated the teaching performances of three types of teachers: the **fully-assistive** teacher, who performs optimally with respect to the student's initial capability; the **student-aware** teacher, who behaves according to our teaching strategy; and the **random** teacher. The random teacher in the Cooperative Ball Maze experiment chooses sub-skills randomly, while the random teacher in the Overcooked-AI experiment executes actions randomly.

Figure 2: Experiment setups. (a) Overcooked-AI layout: human participants control the "chef" and the robot controls the "robot". (b) The real robot setup of Cooperative Ball Maze with a simplified setting.

### Setups

**Overcooked-AI.** Overcooked-AI is a benchmark environment for fully cooperative human-AI task performance and has become a well-established domain for studying coordination [59; 60; 61; 62]. The goal of the game is to cook and deliver as much soup as possible in a limited time. We decompose the policy into two sub-skills: _putting ingredients in the pot_ and _delivering the soup_. To put ingredients in the pot, there exists one _efficient strategy_: passing the ingredients over the middle table. In brief, rather than picking up one onion at a time and putting it into the pot, the efficient strategy is to 1) put multiple onions on the middle table; 2) go to the pot; 3) pick up the onions from the middle table; 4) put them into the pot. The overall idea is to reduce the number of movements needed to deliver the same amount of ingredients. We recruited \(N\)=20 participants (8 females and 12 males) and randomly assigned them to three groups, each with a different teaching strategy. Students are trained with different teachers and are evaluated with a common unseen partner. We emulate the human partner in the evaluation using a trained model. Each participant was trained for 5 games and then evaluated for 1 game.

**Cooperative Ball Maze.** The Cooperative Ball Maze game requires coordination from both the robot and the human. Each party holds one side of the maze board and tilts it to move the ball out through one of the two exits. We define two sub-skills: _leading the rotation_ and _following the rotation_. We recruited \(N\)=21 participants (10 females and 11 males) to carry out human-subject experiments. The participants were first evaluated in the two sub-skills, then trained for 20 interactions, and finally re-evaluated in the two sub-skills. Details can be found in the supplementary materials.

### Results

_A fully-assistive teacher impedes the human's acquisition of skills._ In the Overcooked-AI experiment shown in Figure 3(a), we observe that the students trained with a fully-assistive teacher perform worse than the students with a random teacher: it seems that a student becomes "lazy" and free-rides on the teacher when the teacher unilaterally adapts to the student and performs optimally. We further investigate the learning pattern behind the "lazy student" problem and find that _this "laziness" does not lie in the student's reluctance to take actions, but rather in the lack of motivation to explore and improve_. In Figure 3(c), we show the percentage of reward achieved by the student in Overcooked-AI during training. Compared with the student-aware counterpart, the percentage of reward achieved by humans is similar. However, only 17% of the participants in this group found the efficient strategy (Figure 3(b)), which is crucial to achieving high scores in the evaluation.

Figure 3: Results of the Overcooked-AI experiment. (a) Rewards achieved together by the human-robot pairs during training and evaluation. The error bars correspond to the 95% confidence intervals (95%CI). The student-aware teacher outperformed the fully-assistive and the random teachers in terms of the evaluation reward (with one-sided \(p\)-values \(0.001\) and \(0.03\)). (b) Percentage of students who found the efficient strategy. None of the students were aware of this strategy at the beginning of the training. (c) Percentage of reward achieved by the human participants during training.

_A partially assistive or random partner motivates students to explore new strategies._ By leaving some or all of the work to the student, partially assistive and random teachers both motivate the student to acquire new skills. This is shown in Figure 3(b): most of the students under these two teachers found the efficient strategy in Overcooked-AI. However, their performance and the robustness of the learned strategies differ significantly. Though multiple explanations could account for it, we hypothesize that the student under the random teacher learns a single fixed strategy to finish the task alone (Figure 3(c)). Such a strategy, which completes the task alone, cannot utilize the possibly helpful inputs from the partner, therefore resulting in a poorer performance score.

_An individualized curriculum should be designed for each student._ In the post-experiment survey of Cooperative Ball Maze, we asked the participants "which mode of the robot is easier to cooperate with?". Out of the 21 participants, 4 preferred to follow the robot and 17 preferred to lead it. Moreover, as we evaluated the student performance with partners of different sub-skills, we found that the student performances were consistent with their declared preferences (Figure 4(a)). That is to say, the student may have a bias over which strategy to acquire, and tailoring the teaching curriculum to focus on that specific strategy is efficient and more intuitive to the student.

Figure 4: Results of the Cooperative Ball Maze experiment. (a) Evaluation of the performances of the two sub-skills of all participants. The marker styles correspond to the sub-skill preferences of the participants. (b) Evaluation performances. The error bars correspond to the 95%CIs. (c) Improvements after 20 interactions. The error bars correspond to the 95%CIs. The students improve more under student-aware teachers than under both fully-assistive and random teachers (with one-sided \(p\)-values \(0.069\) and \(0.039\)).

As demonstrated in Figure 5(a), after the first 6 trials that estimated the student's proficiency for each sub-skill, the teacher found that this student improved more as the leader; therefore, the teacher allocated 10 trials to perfecting the _leading_ sub-skill and only 4 trials to _following_. Moreover, one participant in the random-teacher group responded that "the robot leading mode is too difficult and I gave up". This demonstrates the importance of an individualized curriculum: though there are multiple equally optimal strategies, the individual may have strong preferences, and teaching a non-preferred strategy may discourage the student from learning anything at all. We refer the readers to the Appendix for the complete data of all participants. 
Figure 5: Sub-skill performances (vertical axis) with respect to training progress (horizontal axis) of two example participants trained by the student-aware teacher. Dots represent the raw scores and lines represent the smoothed scores. The top and bottom figures correspond to the leading and following sub-skills, respectively. (a) Participant 4. The student improved more when trained in the leading sub-skill. (b) Participant 6. The student improved more when trained in the following sub-skill.

## 6 Limitations

**Decomposition into sub-skills.** For many tasks, it is not easy to identify distinct roles that fulfill the local-independence criteria of sub-skills. We manually decompose the skill into a few sub-skills according to the role of the student. Often, such a decomposition may not be possible or may require careful design. We can mitigate this problem with recent progress on skill decomposition in single-agent tasks [45] and role-based task decomposition in multi-agent tasks [63]. However, considerable effort is still needed to verify their efficacy with real humans on real-world tasks.

**Teacher's Knowledge.** In the definition of the teaching task, we assume the teacher has full knowledge of the policies to be taught. However, it can be hard for the robot to know the oracle human policy beforehand. To make the conceptual framework practical, we need to relax the requirement on the teacher's prior knowledge. In our implementation, we reduce this assumption by approximating the distance through the difference in performances. There can be cases where the target performance is hard to know or where such a relaxation results in severe information loss. More task-specific insights are needed to make the framework practical.

**Curriculum design.** In this work, we only design the curriculum over different sub-skills. However, during our experiments, we observe that humans respond differently to the same sub-skill at different difficulty levels. One specific finding is that people may give up learning when the task becomes too difficult. As a result, a finer-grained curriculum over sub-skill training should be generated to further facilitate human learning.

## 7 Conclusion

In this work, we propose a conceptual framework, Cooperative Robot Teaching, that enables robots to teach humans in cooperative tasks. We show that, by abstracting a teaching task over the original two-player cooperative task, the robot can learn to act as a specialized teacher for humans. To be more specific, we model the teaching task as a POMDP with a hidden student policy and propose a partially assistive teaching curriculum to support human learning. We believe that robot teaching fills a gap in the bilateral knowledge transfer in HRI: unlike other HRI tasks where humans instruct robots how to behave, the role is reversed and robots try to instill knowledge back into humans. Despite the challenges that lie ahead, we believe that robot teaching has great potential and is a necessary step forward to bring robots closer to our daily lives.

**Acknowledgments.** This research is supported in part by the National Research Foundation, Singapore under its Medium Sized Centre Program, Center for Advanced Robotics Technology Innovation (CARTIN), and AI Singapore Programme (AISG Award No: AISG2-PhD-2022-01-036[T] and AISG2-PhD-2021-08-014), and by the Science and Engineering Research Council, Agency of Science, Technology and Research, Singapore, under the National Robotics Program (Grant No. 192 25 00054).
2304.02071
Geometrical torque on magnetic moments coupled to a correlated antiferromagnet
The geometrical spin torque mediates an indirect interaction of magnetic moments, which are weakly exchange coupled to a system of itinerant electrons. It originates from a finite spin-Berry curvature and leads to a non-Hamiltonian magnetic-moment dynamics. We demonstrate that there is an unprecedentedly strong geometrical spin torque in case of an electron system, where correlations cause antiferromagnetic long-range order. The key observation is that the anomalous torque is strongly boosted by low-energy magnon modes emerging in the two-electron spin-excitation spectrum due to spontaneous breaking of SU(2) spin-rotation symmetry. As long as single-electron excitations are gapped out, the effect is largely universal, i.e., essentially independent of the details of the electronic structure, but decisively dependent on the lattice dimension and spatial and spin anisotropies. Analogous to the reasoning that leads to the Mermin-Wagner theorem, there is a lower critical dimension at and below which the spin-Berry curvature diverges.
Nicolas Lenzing, David Krüger, Michael Potthoff
2023-04-04T18:44:34Z
http://arxiv.org/abs/2304.02071v1
# Geometrical torque on magnetic moments coupled to a correlated antiferromagnet ###### Abstract The geometrical spin torque mediates an indirect interaction of magnetic moments, which are weakly exchange coupled to a system of itinerant electrons. It originates from a finite spin-Berry curvature and leads to a non-Hamiltonian magnetic-moment dynamics. We demonstrate that there is an unprecedentedly strong geometrical spin torque in case of an electron system, where correlations cause antiferromagnetic long-range order. The key observation is that the anomalous torque is strongly boosted by low-energy magnon modes emerging in the two-electron spin-excitation spectrum due to spontaneous breaking of SU(2) spin-rotation symmetry. As long as single-electron excitations are gapped out, the effect is largely universal, i.e., essentially independent of the details of the electronic structure, but decisively dependent on the lattice dimension and spatial and spin anisotropies. Analogous to the reasoning that leads to the Mermin-Wagner theorem, there is a lower critical dimension at and below which the spin-Berry curvature diverges. Introduction.A magnetic moment coupled to a system of itinerant electrons via a local exchange interaction of strength \(J\) experiences a spin torque which leads to precession dynamics. For several magnetic moments \(\mathbf{S}_{m}\) (with \(m=1,...,M\)), usually described as classical fixed-length spins, there are further torques caused by, e.g., indirect exchange interactions mediated by the electron system. These _Hamiltonian_ spin torques, well known in micromagnetics [1] and in the theory of coupled spin-electron dynamics [2; 3; 4; 5; 6; 7; 8], all derive from interaction terms in the quantum-classical Hamiltonian [9] for the spin and electron degrees of freedom. In addition, there is a non-Hamiltonian spin torque that has a purely _geometric_ nature. This geometrical spin torque represents the feedback of the Berry physics [10] on the classical magnetic-moment dynamics. Generally, such feedback effects have been pointed out early [11; 12; 13] but have not been studied in spin dynamics theory until recently [14]. For weak \(J\) compared to the typical energy scales of the electron system, the classical spin dynamics is slow, such that the electron system accumulates a geometrical phase which is gauge independent in case of a cyclic motion [15; 16; 10]. This Berry phase is closely related to the Berry curvature, a two-form which, when integrated in classical parameter space over a two-dimensional surface bounded by a closed path \(\mathcal{C}\) yields the Berry phase associated with \(\mathcal{C}\). For example, in molecular physics [17] and when treating the coordinates of the nuclei classically, the feedback of the Berry physics produces an additional geometrical force, where the Berry curvature plays the role of a magnetic field in the nuclei equations of motion. This effect is known as "geometrical magnetism" [18; 19]. The geometrical spin torque resulting from the spin-Berry curvature (SBC) [14] is the analogous concept in the field of atomistic spin dynamics [4; 20]. As opposed to the closely related geometrical friction term [18; 19], i.e., Gilbert damping [21], it is energy conserving. But, importantly, the SBC is non-Hamiltonian and emerges for weak \(J\), i.e., in the limit of slow classical spin dynamics. 
However, the effects are typically weak [22] for a solid [23], such that it appears difficult to disentangle the effect of the geometrical spin torque from other contributions [24]. In this Letter we study the geometrical spin torque for magnetic moments coupled to a magnetic solid: a correlated \(D\)-dimensional antiferromagnetic (AF) insulator. This is a generic situation realized, e.g., by magnetic impurities in the bulk or by magnetic adatoms on the surface of the antiferromagnet. We demonstrate that the magnitude of the SBC is governed by the magnon-excitation spectrum. This has very general consequences: the SBC must diverge for \(D=1\) but is regular for \(D\geq 3\), see Tab. 1. For \(D=2\) the SBC generically exhibits a logarithmic divergence as a function of any perturbation causing a gap in the magnon dispersion, such as magnetic anisotropies or external magnetic fields. The magnitude of the SBC and thus the impact on the magnetic-moment dynamics is studied for the Hubbard model at half-filling and zero temperature as a prototype of a correlation-induced insulator. Time-reversal symmetry (TRS).Within adiabatic spin-dynamics theory [14; 22], the geometrical spin torque is obtained from the SBC of the electron system, see Eq. (2) below. Importantly, a finite SBC generally requires TRS breaking in the electron system [22]. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \hline lattice & spin-Berry & distance & magnetic \\ dimension & curvature (SBC) & dependence & ground state \\ \hline \hline 1 & divergent & — & — \\ 2 & log. divergent & — & stable \\ 3 & regular & \(1/R\) & stable \\ \hline \(D\geq 4\) & \(\sim\int_{0}^{\Lambda_{\text{model}}}dk\,k^{D-3}\) & \(1/R^{D-2}\) & stable \\ \hline \hline \end{tabular} \end{table} Table 1: Spin-Berry curvature of a spontaneously symmetry-broken antiferromagnetic state with gapped single-particle excitations. \(\mathbf{k}\): wave vector. See text for discussion. \(J\) is strong, as assumed in Ref. [14], TRS is broken by the classical spin moment itself, as this acts like a local symmetry-breaking field. TRS breaking can be waived only at the cost of working with a non-Abelian extension of the theory well beyond the adiabatic limit [25], where the dynamics is governed by the generically finite non-Abelian spin-Berry curvature. Another approach is to replace the electron system by an entirely classical model composed of "slow" and "fast" spin moments [26; 27]. This circumvents the necessity of TRS breaking altogether but still exhibits the feedback of holonomy effects in purely classical systems [28]. For magnetic moments coupled to _quantum_ systems and in the physically relevant weak-\(J\) regime, a finite SBC can be achieved with an external magnetic field, or with a (staggered) orbital field as considered recently [22] with the Haldane model [29] as a prototype of a TRS-breaking Chern insulator [30]. However, fine tuning of the parameters is required to achieve considerable effects [22]. Here we consider an electron system in which correlations induce a TRS-breaking AF state. The AF order not only enables a finite SBC but also strongly boosts its magnitude due to magnon modes in the spin-excitation spectrum. 
_Dynamics of magnetic moments._ We are interested in the slow dynamics of \(M\) magnetic moments, described as classical spins \(\mathbf{S}_{m}\) of unit length, which are coupled to a correlated electron system with Hamiltonian \(H_{\rm el}\) via a local exchange interaction \(H_{\rm int}=J\sum_{m=1}^{M}\mathbf{s}_{i_{m}}\mathbf{S}_{m}\). Here, \(i_{m}\) is the site to which the \(m\)-th moment is coupled, and \(\mathbf{s}_{i}=1/2\sum_{\sigma\sigma^{\prime}}c_{i\sigma}^{\dagger}\mathbf{\tau}_{\sigma\sigma^{\prime}}c_{i\sigma^{\prime}}\), where \(\mathbf{\tau}\) is the vector of Pauli matrices, is the local spin moment at site \(i\) of the electron system. The total Hamiltonian is \(H=H(\mathbf{S})=H_{\rm el}+H_{\rm int}(\mathbf{S})\) and depends on the configuration \(\mathbf{S}=(\mathbf{S}_{1},...,\mathbf{S}_{M})\) of the magnetic moments. Assuming that the electron system at any instant of time \(t\) is in its instantaneous ground state for the spin configuration \(\mathbf{S}(t)\), i.e., \(|\Psi(t)\rangle=|\Psi_{0}(\mathbf{S}(t))\rangle\), the equation of motion of adiabatic spin dynamics is given by [14; 22] \[\dot{\mathbf{S}}_{m}=(\mathbf{T}_{m}^{\rm(H)}+\mathbf{T}_{m}^{\rm(geo)})\times\mathbf{S}_{m}\;. \tag{1}\] Here \(\mathbf{T}_{m}^{\rm(H)}\times\mathbf{S}_{m}\) with \(\mathbf{T}_{m}^{\rm(H)}=\partial\langle H(\mathbf{S})\rangle/\partial\mathbf{S}_{m}=J\langle\mathbf{s}_{i_{m}}\rangle\) is the conventional (Hamiltonian) spin torque, where \(\langle\cdots\rangle\) is the instantaneous ground-state expectation value. _Geometrical spin torque._ The second term, the geometrical spin torque \(\mathbf{T}_{m}^{\rm(geo)}\times\mathbf{S}_{m}\), is necessary to enforce the constraint \(|\Psi(t)\rangle=|\Psi_{0}(\mathbf{S}(t))\rangle\) and has been derived within a quantum-classical Lagrange formalism in Refs. [14; 22]. This assumes that the ground state is non-degenerate (otherwise non-Abelian spin-dynamics theory [25] must be used) and that \(J\) is sufficiently weak so that the classical spin dynamics is much slower than typical relaxation time scales of the quantum system \(H_{\rm el}\). Alternatively, the term may be derived within adiabatic response theory [18; 19; 31] as the first nontrivial correction in a systematic expansion of the response of a driven system with respect to the driving speed, when applied to spin dynamics [32]. It is given by \[\mathbf{T}_{m}^{\rm(geo)}=\sum_{\alpha}\sum_{m^{\prime}\alpha^{\prime}}\Omega_{m^{\prime}m,\alpha^{\prime}\alpha}(\mathbf{S})\dot{S}_{m^{\prime}\alpha^{\prime}}\mathbf{e}_{\alpha}\;, \tag{2}\] with \(\alpha=x,y,z\) and the \(\alpha\)-th unit vector \(\mathbf{e}_{\alpha}\), and where \[\Omega_{mm^{\prime},\alpha\alpha^{\prime}}(\mathbf{S})=\frac{\partial}{\partial S_{m\alpha}}A_{m^{\prime}\alpha^{\prime}}(\mathbf{S})-\frac{\partial}{\partial S_{m^{\prime}\alpha^{\prime}}}A_{m\alpha}(\mathbf{S}) \tag{3}\] is the spin-Berry curvature. At each spin configuration \(\mathbf{S}\), this is a real antisymmetric tensor (\(\Omega_{m^{\prime}m,\alpha^{\prime}\alpha}=-\Omega_{mm^{\prime},\alpha\alpha^{\prime}}\)), which is invariant under local gauge transformations of the ground states \(|\Psi_{0}(\mathbf{S})\rangle\mapsto e^{i\phi(\mathbf{S})}|\Psi_{0}(\mathbf{S})\rangle\).
It is the exterior derivative of the spin-Berry connection \(\mathbf{A}_{m}=i\langle\Psi_{0}|\frac{\partial}{\partial\mathbf{S}_{m}}|\Psi_{0}\rangle\), which describes parallel transport of the ground state \(|\Psi_{0}(\mathbf{S})\rangle\) on the manifold of spin configurations \(\mathcal{M}\). For \(M\) classical spins \(\mathbf{S}_{m}\in S^{2}\), this is given by the \(M\)-fold Cartesian product of 2-spheres \(\mathcal{M}\equiv S^{2}\times\cdots\times S^{2}\). _Spontaneous antiferromagnetic order._ We consider a coupling of the magnetic spin moments to the single-band Hubbard model [33; 34] on a \(D\)-dimensional hypercubic lattice as a prototypical model for itinerant magnetic order. Its Hamiltonian is \(H_{\rm el}=-t\sum_{ij}^{\rm n.n.}\sum_{\sigma=\uparrow,\downarrow}c_{i\sigma}^{\dagger}c_{j\sigma}+U\sum_{i}n_{i\uparrow}n_{i\downarrow}\), where the nearest-neighbor hopping \(t=1\) fixes the energy and (with \(\hbar\equiv 1\)) the time scales. \(c_{i\sigma}\) annihilates an electron at site \(i\) with spin projection \(\sigma\), and \(n_{i\sigma}=c_{i\sigma}^{\dagger}c_{i\sigma}\). The sums over \(i,j\) are restricted to nearest neighbors, and \(L\) is the total number of sites. It is well known [35; 36; 37; 38; 39] that at half-filling, for repulsive Hubbard-\(U\), and for \(D\geq 2\), the ground state of the system in the thermodynamic limit \(L\to\infty\) develops long-range AF correlations. SU(2) spin-rotation symmetry and therewith TRS are spontaneously broken, and the ordered state is characterized by a finite staggered magnetization \(\mathbf{m}=m\mathbf{e}_{z}\) with \(m=L^{-1}\sum_{i}z_{i}\langle n_{i\uparrow}-n_{i\downarrow}\rangle\) and \(z_{i}=\pm 1\) for \(i\) in sublattice A or B, respectively. We assume \(m>0\) for sublattice A. At weak \(U\), AF order is driven by the Slater mechanism and is perturbatively accessible [36; 38]. Within self-consistent Hartree-Fock theory [40], the one-electron excitation spectrum displays a gap \(\Delta=Um\) at wave vector \(Q=(\pi,\pi,...)\) in the conventional Brillouin zone. The two-electron spin-excitation spectrum is well described by the standard random-phase approximation (RPA), but for the symmetry-broken AF state [41; 42; 43; 44]. In the strong-\(U\) limit, the one-electron spectrum is dominated by a large Hubbard gap \(\Delta\sim U\) and well-developed local spin moments, coupled via Anderson's superexchange [35; 38]. Here, the model maps onto the Heisenberg spin-1/2 Hamiltonian with AF exchange \(J_{\rm H}=4t^{2}/U\) and AF long-range order, see Refs. [45; 46; 47] for example. To compute the low-energy magnon dispersion and states, we can apply spin-wave theory (SWT) [47] to the AF Heisenberg model and use the Holstein-Primakoff transformation [48] at linear order. Linear SWT is motivated by the fact that single-magnon decay requires overlap with the two-magnon continuum, so that the picture of a stable magnon gas is protected by kinematic restrictions at low energies [49; 50; 51; 52]. _Spin-Berry curvature of an antiferromagnet._ To compute the geometrical spin torque, we make use of a Lehmann-type representation of the SBC starting from Eq. (3).
This is straightforwardly derived [22] using a resolution of the unity, \(\mathbf{1}=\sum_{n}|\Psi_{n}(\mathbf{S})\rangle\langle\Psi_{n}(\mathbf{S})|\), with an orthonormal basis of instantaneous eigenstates of \(H_{\rm el}+H_{\rm int}(\mathbf{S})\): \[\Omega_{mm^{\prime},\alpha\alpha^{\prime}}=-2J^{2}{\rm Im}{\sum_{n\neq 0}\frac{\langle\Psi_{0}|s_{i_{m}}^{\alpha}|\Psi_{n}\rangle\,\langle\Psi_{n}|s_{i_{m^{\prime}}}^{\alpha^{\prime}}|\Psi_{0}\rangle}{(E_{n}-E_{0})^{2}}}\,. \tag{4}\] Note that, due to the \(J^{2}\) prefactor, the \(\mathbf{S}\) dependence of the eigenenergies and eigenstates will provide corrections to Eq. (4) only at order \(J^{3}\). Since we refer to the weak-\(J\) limit, these are neglected in the following. In the AF phase and assuming that the order parameter is aligned to the \(z\) axis, \(\langle\mathbf{s}_{i}\rangle=(-1)^{i}m\mathbf{e}_{z}\), there is a remaining SO(2) symmetry of the energy eigenstates under spin rotations around \(\mathbf{e}_{z}\). This unbroken spin-rotation symmetry, together with the spatial inversion and translation symmetries of \(H_{\rm el}\) and the antisymmetry \(\Omega_{mm^{\prime},\alpha\alpha^{\prime}}=-\Omega_{m^{\prime}m,\alpha^{\prime}\alpha}\) [see Eq. (3)], implies that the spin-Berry curvature tensor is entirely fixed by a single real number \(\Omega\equiv\Omega_{mm^{\prime},xy}=-\Omega_{mm^{\prime},yx}\) for each fixed pair of sites \(i_{m}\), \(i_{m^{\prime}}\). All other elements must vanish, as is detailed by the symmetry analysis in Sections A and B of the Supplemental Material (SM) [53]. As a first step, for weak \(U\), we compute the SBC via \[\Omega_{mm^{\prime}}=-iJ^{2}\frac{\partial}{\partial\omega}\chi_{i_{m}i_{m^{\prime}},xy}(\omega)\Big{|}_{\omega=0}+\mathcal{O}(J^{3})\,, \tag{5}\] where \(\chi_{ii^{\prime},\alpha\alpha^{\prime}}(\omega)=L^{-1}\sum_{\mathbf{k}}e^{i\mathbf{k}(\mathbf{R}_{i}-\mathbf{R}_{i^{\prime}})}\chi_{\alpha\alpha^{\prime}}(\mathbf{k},\omega)\) is the real-space retarded susceptibility, obtained by the RPA (see SM, Sec. C [53]). The relation Eq. (5) is easily derived by comparing the representation Eq. (4) of the SBC with the Lehmann representation of the susceptibility (SM, Secs. A, B [53]). Therewith, the susceptibility in the symmetry-broken AF state is seen to play a dual role for the spin dynamics: (i) via Eq. (5) and Eq. (2) its frequency derivative at \(\omega=0\) yields the geometrical spin torque \(\mathbf{T}_{m}^{(\rm geo)}\times\mathbf{S}_{m}\), and (ii) the static susceptibility yields, in the weak-\(J\) regime, the conventional RKKY spin torque \(\mathbf{T}_{m}^{(\rm H)}\times\mathbf{S}_{m}\) with \(\mathbf{T}_{m}^{(\rm H)}=\partial H_{\rm RKKY}/\partial\mathbf{S}_{m}\), where \(H_{\rm RKKY}=J^{2}\sum\chi_{i_{m}i_{m^{\prime}},\alpha\alpha^{\prime}}(\omega=0)S_{m\alpha}S_{m^{\prime}\alpha^{\prime}}\) is the perturbative RKKY Hamiltonian of the AF state. For the Hubbard model on the \(D=2\) square lattice the spin-excitation spectrum \(\chi_{+-}(\mathbf{k},\omega)\), see Fig. 1 (left) for \(U=2\) and \(U=4\), consists of a continuum at high frequencies \(\omega>\Delta=Um\) (\(\Delta\approx 0.75\) for \(U=2\), \(\Delta\approx 2.76\) for \(U=4\)) and, furthermore, within the gap an undamped transversal and doubly degenerate magnon mode. This mode takes most of the spectral weight. The magnon contribution to the derivative \(\partial_{\omega}\chi_{xy}(\mathbf{k},\omega)\) on sublattice A (Fig. 1, right)
is even more pronounced, especially for \(\omega=0\), where it is related to the SBC by Eq. (5). _Goldstone theorem, implications._ In a second step, we exploit the fact that the spin-excitation spectrum of an AF insulator has a universal structure at low frequencies. This is due to Goldstone's theorem, which enforces the presence of gapless magnon modes [54; 55; 56]. In the collinear AF state and corresponding to the two broken generators of the spin SU(2) symmetry, there are two degenerate modes with a linear and isotropic dispersion in the vicinity of the \(\Gamma\) point in the magnetic Brillouin zone (mBz). Linear SWT applied to the Heisenberg model that emerges in the strong-\(U\) limit captures this physics, i.e., the dispersion close to \(\Gamma\) is given by \(\frac{1}{2}J_{\rm H}\omega(\mathbf{k})=c_{\rm s}k+\mathcal{O}(k^{2})\), where \(c_{\rm s}\) is the spin-wave velocity. Using the magnon energies and eigenstates, we can compute the SBC in this limit from Eq. (4) directly (SM, Secs. D, E [53]), ending up with \[\Omega_{mm^{\prime}}=\mp\frac{2J^{2}}{J_{\rm H}^{2}}\frac{1}{(2\pi)^{D}}\int_{\rm mBz}\!d^{D}k\frac{\cos(\mathbf{k}(\mathbf{R}_{i_{m}}\!-\!\mathbf{R}_{i_{m^{\prime}}}))}{\omega(\mathbf{k})^{2}}\,, \tag{6}\] if both \(i_{m}\) and \(i_{m^{\prime}}\) belong to sublattice A (\(-\) sign) or B (\(+\) sign), and \(\Omega_{mm^{\prime}}=0\) otherwise. For \(D=2\), the linear dispersion close to \(\Gamma\) then implies a \(1/k^{2}\) singularity of the integrand and thus a logarithmic infrared divergence. For \(D\geq 3\), the local (\(m=m^{\prime}\)) SBC is finite. We note that the same arguments as invoked for the Mermin-Wagner theorem [47; 57], i.e., a divergence due to the low-energy spin excitations, here lead to a lower critical dimension (\(D_{\rm c}=3\)) that is shifted by one, see Tab. 1. The numerical value for the \(D=3\) local SBC is \(\Omega_{\rm loc}\approx-0.084\,J^{2}/J_{\rm H}^{2}=-0.084\,J^{2}U^{2}/16t^{4}\). Figure 1: _Left_: Transversal retarded ground-state spin susceptibility \({\rm Im}\,\chi_{+-}(\mathbf{k},\omega)\) for \(U=2\) and \(U=4\) along high-symmetry directions in the conventional \(D=2\) Brillouin zone, as obtained by RPA. _Right:_ Frequency derivative \({\rm Im}\,\partial_{\omega}\chi_{\rm xy}(\mathbf{k},\omega)\) (absolute values) in the mBz, related to the SBC at \(\omega=0\). White dotted lines: Slater gap \(\Delta=Um\) (onset of the continuum). Lorentzian broadening \(\omega\rightarrow\omega+i\eta\) with \(\eta=0.045\). Energy scale: \(t=1\). When scaling the hopping as \(t=t^{*}/\sqrt{D}\) with \(t^{*}=\mathrm{const}\) [58; 59], the modulus of the SBC decreases monotonically with \(D\), and the SBC approaches a finite mean-field value \(|\Omega_{\mathrm{loc}}|\to J^{2}U^{2}/32t^{*}{}^{4}\) for \(D\to\infty\) (SM, Sec. F [53]). _Magnitude of the SBC._ SWT predicts a \(U^{2}\) dependence of the SBC in the Heisenberg limit for strong \(U\). For \(U=0\), on the other hand, TRS of the resulting paramagnetic state implies that it must vanish. For \(U\to 0\), there is an intricate competition between the exponential suppression of the order parameter \(m\propto e^{-1/U}\), i.e., of the "strength" of TRS breaking and thus of the SBC, and, on the other hand, the exponential closure of the single-electron Slater gap \(\Delta=Um\) and thus of the onset of the continuum in the spin-excitation spectrum, resulting in continuum contributions that favor a large SBC.
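To make the dimension dependence of Eq. (6) explicit, the following minimal Python sketch evaluates the radial form of the local (\(R=0\)) integral with an assumed isotropic model dispersion \(\omega(k)=\sqrt{\Delta^{2}+c_{\rm s}^{2}k^{2}}\), where a small gap \(\Delta\) (as generated, e.g., by a magnetic anisotropy) regularizes the infrared region; the cutoff, the omitted angular prefactor, and all parameter values are illustrative assumptions, not the actual SWT dispersion used in the text.

```python
import numpy as np
from scipy.integrate import quad

# Radial form of the local (R = 0) integral in Eq. (6):
#   Int_mBz d^D k / omega(k)^2  ->  (angular factor) * Int_0^Lambda dk k^(D-1) / omega(k)^2,
# with the assumed model dispersion omega(k) = sqrt(gap^2 + (c_s * k)^2).
c_s, Lam = 1.0, np.pi  # spin-wave velocity and momentum cutoff (illustrative)

def radial_integral(D, gap):
    integrand = lambda k: k**(D - 1) / (gap**2 + (c_s * k)**2)
    val, _ = quad(integrand, 0.0, Lam, limit=200)
    return val

for gap in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"gap={gap:7.0e}   D=2: {radial_integral(2, gap):8.3f}"
          f"   D=3: {radial_integral(3, gap):8.3f}")
# The D=2 value grows like log(1/gap), i.e., it diverges for gap -> 0,
# while the D=3 value saturates to a finite number, as summarized in Tab. 1.
```

Consistent with Fig. 3 (right), the same sketch also illustrates how a small magnon gap turns the \(D=2\) divergence into a large but finite SBC.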
Our numerical results for the local SBC in \(D=3\), as obtained from weak-coupling RPA and strong-coupling SWT, are displayed in Fig. 2. With increasing \(U\) we find a smooth crossover from the Slater to the Heisenberg limit with a monotonically increasing \(|\Omega_{\mathrm{loc}}|\). The nonlocal SBC at large distances \(R\equiv\|\mathbf{R}_{i_{m}}-\mathbf{R}_{i_{m^{\prime}}}\|\) is again governed by the linear dispersion at low frequencies. Carrying out the integration in Eq. (6) for \(R\to\infty\), we find \(\Omega(R)\propto 1/R^{D-2}\) (see Tab. 1 and SM, Sec. F [53]). For \(D=3\) this implies that the geometrical spin torque mediates a long-range coupling in the spin dynamics. Compared to previous studies [14; 22; 24; 25; 26; 27], the \(D=3\) value of the local SBC \(|\Omega_{\mathrm{loc}}|\approx 0.084\,J^{2}/J_{\mathrm{H}}^{2}\) is several orders of magnitude larger for realistic parameters \(J,J_{\mathrm{H}}\ll t,U\). Renormalization of \(c_{\rm s}\to c_{\rm s}^{\prime}\approx 1.1c_{\rm s}\) due to magnon interactions [60] leads to a slightly smaller SBC, \(|\Omega_{\mathrm{loc}}|\to(c_{\mathrm{s}}/c_{\mathrm{s}}^{\prime})^{2}|\Omega_{\mathrm{loc}}|\). There are at least two routes that lead to an even larger \(|\Omega_{\mathrm{loc}}|\): namely, we can take advantage of the formally infinite SBC in \(D=2\) and regularize the theory (i) by dimensional crossover to \(D=3\) [61; 62; 63], i.e., by switching on a small hopping \(t_{\perp}\) in the third dimension (Fig. 2), implying \(J_{\mathrm{H}}^{\perp}\ll J_{\mathrm{H,x}}=J_{\mathrm{H,y}}=J_{\mathrm{H}}\), see Fig. 3 (left), or (ii) by switching on a magnetic anisotropy to open a small gap in the magnon spectrum (Fig. 3, right), i.e., by adding an Ising term \(\delta\,J_{\mathrm{H}}S_{z}S_{z}\) to the standard Heisenberg coupling \(J_{\mathrm{H}}\mathbf{S}_{i}\mathbf{S}_{j}\). A moderate \(J_{\mathrm{H}}^{\perp}/J_{\mathrm{H}}=0.1\) yields a SBC \(|\Omega_{\mathrm{loc}}|\approx 0.22\,J^{2}/J_{\mathrm{H}}^{2}\). About the same enhancement is obtained for an anisotropy parameter \(\delta\sim 10^{-2}\). _Geometrical spin dynamics._ For the AF ordered phase, Eq. (1) tells us that the dominating effect in the magnetic-moment dynamics is a precession around the staggered magnetization \(\mathbf{m}\) on a time scale \(1/J\). This effect dominates the weaker (and slower) anisotropic RKKY-type exchange on the scale \(J^{2}\). Importantly, the SBC \(\Omega\sim J^{2}\) enters the equations of motion as a renormalization _factor_ (for \(M>1\) classical spins as a matrix factor) rather than a summand and thus does _not compete_ with the stronger direct exchange of order \(J\) (SM, Sec. G [53]). For \(M=1\) this factor amounts to \(1/(1-\Omega_{\mathrm{loc}}S_{z})\), such that the most pronounced effects are found for a SBC of intermediate strength, \(\Omega_{\mathrm{loc}}=\mathcal{O}(1)\). This holds true for \(M=2\) as well, as is detailed in the SM, Sec. G [53]. Note that a singular renormalization indicates a breakdown of the theory, as this is the point where the condition for nearly adiabatic spin dynamics is invalidated. Note further that the precession comes with an inverted orientation beyond the singular point.
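As a toy illustration of this renormalization, the sketch below integrates the \(M=1\) equation of motion under the assumption that it reduces to \(\dot{\mathbf{S}}=[1/(1-\Omega_{\mathrm{loc}}S_{z})]\,\mathbf{T}^{\rm(H)}\times\mathbf{S}\) with a static Hamiltonian torque \(\mathbf{T}^{\rm(H)}=Jm\,\mathbf{e}_{z}\); the field strength and the values of \(\Omega_{\mathrm{loc}}\) are illustrative, not taken from the model calculation.

```python
import numpy as np
from scipy.integrate import solve_ivp

Jm = 1.0  # assumed strength of the Hamiltonian torque along e_z

def rhs(t, S, Omega_loc):
    # geometrical renormalization factor 1/(1 - Omega_loc * S_z);
    # S_z is conserved along this orbit, so the factor stays constant
    prefac = 1.0 / (1.0 - Omega_loc * S[2])
    return prefac * np.cross(Jm * np.array([0.0, 0.0, 1.0]), S)

S0 = np.array([np.sin(0.4), 0.0, np.cos(0.4)])  # unit spin, tilted from e_z
for Omega_loc in (0.0, 0.5, 2.0):  # 2.0 lies beyond the singular point
    sol = solve_ivp(rhs, (0.0, 20.0), S0, args=(Omega_loc,), max_step=0.01)
    # read off the precession frequency from the phase of S_x + i S_y
    phase = np.unwrap(np.angle(sol.y[0] + 1j * sol.y[1]))
    freq = (phase[-1] - phase[0]) / (sol.t[-1] - sol.t[0])
    print(f"Omega_loc = {Omega_loc:3.1f}:  precession frequency = {freq:+.3f}")
# Omega_loc = 0 gives the bare frequency, Omega_loc = 0.5 renormalizes it,
# and beyond the singularity (Omega_loc * S_z > 1) the orientation inverts.
```

The sign change of the printed frequency for the last parameter set reproduces the inverted precession orientation mentioned above.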
_Conclusions._ A hitherto unknown but generic interplay of electron correlations, spontaneous symmetry breaking, gapless Goldstone bosons, and a holonomy on the configuration space of classical spin degrees of freedom leads to non-Hamiltonian effects, such as renormalization of precession frequencies, inverted orientation of the precessional motion, or long-range interactions, in the spin dynamics. This is due to a geometrical spin torque which is finite for correlated AF ground states in lattice models with dimension \(D\geq 3\) and diverges for \(D\leq 2\), caused by the same mechanism that leads to the Mermin-Wagner theorem, however, shifted by one dimension. With a SBC \(\Omega_{\mathrm{loc}}=\mathcal{O}(1)\) for typical parameters, the effect is unexpectedly large. It is boosted by electron correlations and further enhanced by spatial and by spin anisotropies. We expect a strong overall impact on the phenomenology of atomistic spin dynamics. Figure 3: SWT results (dots) for anisotropic systems. _Left, dimensional crossover:_ local SBC for \(D=3\) but with a spatially anisotropic nearest-neighbor Heisenberg exchange \(J_{\mathrm{H}}^{\perp}\leq J_{\mathrm{H}}\). _Right, spin anisotropy:_ SBC as a function of the coupling anisotropy parameter \(\delta\). _Acknowledgments._ This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the research unit QUAST, FOR 5249 (project P8), project ID 449872909, and through the Cluster of Excellence "Advanced Imaging of Matter" - EXC 2056 - project ID 390715994.
2310.06388
Phase space of electron- and muon-neutrino and antineutrino scattering off nuclei
We discuss the electron and muon neutrino and antineutrino double differential cross sections on carbon in the quasielastic as well as in the multinucleon and one pion production channels. By projecting them in the transferred momentum - transferred energy plane and in the neutrino energy - lepton scattering angle plane, as well as by performing simple considerations on the position of the quasielastic and Delta peaks and on their broadening, we explain the surprising dominance of the muon neutrino and antineutrino cross sections over the electron ones in particular kinematical conditions.
M. Martini, M. Ericson, G. Chanfray
2023-10-10T07:51:51Z
http://arxiv.org/abs/2310.06388v2
# Phase space of electron- and muon-neutrino and antineutrino scattering off nuclei ###### Abstract We discuss the electron and muon neutrino and antineutrino charged current quasielastic double differential cross sections on carbon by projecting them in the transferred momentum-energy plane. This visually allows one to easily understand the surprising dominance of the muon neutrino and antineutrino cross sections over the electron ones in particular kinematical conditions. One of the main objectives of present [1; 2] and future [3; 4; 5] accelerator-based neutrino oscillation experiments is the search for Charge-Parity (CP) violation in the leptonic sector. The best way to observe this phenomenon would be the measurement of a different appearance probability for electron neutrinos and electron antineutrinos from intense beams of muon neutrinos and muon antineutrinos. The next-generation long-baseline (LBL) experiments will have unprecedented statistics of detected neutrinos thanks to intense beams and huge detector sizes. However, these features are not sufficient to guarantee their success in the potential discovery of CP violation. In contrast to old bubble-chamber experiments, where the interaction of the neutrinos occurs with hydrogen, the use of relatively heavy nuclear targets (carbon, oxygen, argon), while allowing for a substantial increase of the event rate, requires a quantitative description of the nuclear response to weak interactions [6; 7]. A precise and simultaneous knowledge of the \(\nu_{\mu}\), \(\nu_{e}\), \(\bar{\nu}_{\mu}\) and \(\bar{\nu}_{e}\) cross sections on the target nucleus will indeed be crucial for the success of the LBL experiments. In this connection, the last fifteen years have been characterized by numerous \(\nu_{\mu}\) and \(\bar{\nu}_{\mu}\) cross-section measurements. On the contrary, the equivalent data for \(\nu_{e}\) and \(\bar{\nu}_{e}\) are scarce and unlikely to reach the same level of precision as the \(\nu_{\mu}\) and \(\bar{\nu}_{\mu}\) ones. A theoretical investigation of the difference between electron and muon cross sections is hence particularly important. In charged current processes \[\nu_{l}+A\to l^{-}+X \tag{1}\] \[\bar{\nu}_{l}+A\to l^{+}+X, \tag{2}\] where \(l\) denotes the generic flavour (which can be \(e\) or \(\mu\)), \(\nu_{e}\) (\(\bar{\nu}_{e}\)) cross sections are expected to be larger than the \(\nu_{\mu}\) (\(\bar{\nu}_{\mu}\)) ones due to the differences in the mass of the outgoing charged lepton, which imply different kinematic limits. This is certainly true for the total neutrino cross section \(\sigma_{\nu_{l}}\) as a function of the neutrino energy \(E_{\nu_{l}}\). However, this hierarchy can be inverted in specific kinematical conditions in the case of differential cross sections, such as \(\frac{d\sigma}{d\cos\theta}\), where \(\theta\) is the lepton scattering angle, and \(\frac{d^{2}\sigma}{dE_{l}d\cos\theta}\), where \(E_{l}\) is the charged-lepton energy, or equivalently \(\frac{d^{2}\sigma}{d\omega d\cos\theta}\), where \(\omega\) is the transferred energy, \(\omega=E_{\nu_{l}}-E_{l}\). This surprising inversion of the \(\nu_{e}\) and \(\nu_{\mu}\) cross-section hierarchy was first pointed out in Ref. [8], where it was shown that for forward scattering angles the muon neutrino quasielastic differential cross sections can be larger than the corresponding electron ones, especially for low neutrino energies.
This unexpected feature, and its potentially important impact on the LBL neutrino oscillation results, pushed the community to investigate further in this direction. Hence, several papers on this subject have been published [9; 10; 11; 12]. Ref. [8] already stressed that the surprising dominance of \(\nu_{\mu}\) over \(\nu_{e}\) quasielastic differential cross sections at fixed kinematics for small scattering angles is related to the differences in the momentum transfer \({\bf q}={\bf k}_{\nu_{l}}-{\bf k}_{l}\) between \(\nu_{e}\) and \(\nu_{\mu}\) scattering. Ref. [9] analyzed the \((q,\omega)\) phase space available for the charged current quasielastic (CCQE) interaction and pointed out that the \(\nu_{\mu}\) over \(\nu_{e}\) dominance, appearing in the Fermi-gas-based and Hartree-Fock-based approaches [8], could no longer appear when considering a spectral function approach. However, in the calculations of Ref. [9] nucleon final-state interactions were not taken into account. Refs. [10; 11] used several independent mean-field-based models to conclude that a proper quantum mechanical treatment of Pauli blocking and of the final nucleon's wave function confirms the dominance of \(\nu_{\mu}\) over \(\nu_{e}\) cross sections at forward lepton scattering angles. In Ref. [12] the potential for mis-modeling of \(\nu_{e}/\nu_{\mu}\) and \(\bar{\nu}_{e}/\bar{\nu}_{\mu}\) CCQE cross-section ratios was quantified in order to investigate its impact on neutrino oscillation experiments. In this analysis, large differences between the Hartree-Fock-based and spectral function approaches appeared in the forward-scattering region and, even if less pronounced than in the Hartree-Fock case, a region where \(\nu_{e}/\nu_{\mu}<1\) also appeared in the spectral function case for small neutrino energies. Furthermore, it was also shown that for the antineutrino case a region appears in the \((\theta,E_{\nu})\) phase space where \(\bar{\nu}_{e}/\bar{\nu}_{\mu}<1\). This happens at backward scattering angles for different theoretical models. In the present work we want to complement the analyses of the previous papers mentioned above [8; 9; 10; 11; 12] by discussing simple arguments and by introducing an effective way of presenting results which could be useful for future investigations. For this purpose we choose to represent in the \((q,\omega)\) plane the double differential cross sections \(\frac{d^{2}\sigma(E_{\nu_{l}})}{d\omega d\cos\theta}\) on carbon for fixed values of the neutrino energy and for all the values of the scattering angle. We recall that the values of \(q=|{\bf q}|\) are obtained by the formula \[q=\sqrt{E_{\nu_{l}}^{2}+k_{l}^{2}-2E_{\nu_{l}}k_{l}\cos\theta}, \tag{3}\] where \[k_{l}^{2}=E_{l}^{2}-m_{l}^{2}=(E_{\nu_{l}}-\omega)^{2}-m_{l}^{2}. \tag{4}\] Once \(E_{\nu_{l}},\ \omega\) and \(\cos\theta\) are fixed, \(q\) is uniquely determined; hence it is possible to project \(\frac{d^{2}\sigma}{d\omega d\cos\theta}\) in the \((q,\omega)\) plane, the strength of the cross section being represented by a colour chart. These cross sections, referring exclusively to the genuine CCQE channel, are shown in Figs. 1 and 2 for the four cases \(\nu_{e}\), \(\nu_{\mu}\), \(\bar{\nu}_{e}\) and \(\bar{\nu}_{\mu}\), for the neutrino energies \(E_{\nu_{l}}=175\) MeV and \(E_{\nu_{l}}=575\) MeV, respectively.
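Before turning to the figures, the following minimal Python sketch illustrates how Eqs. (3) and (4) shape the accessible phase space: it scans the \(\theta=0\) boundary for both lepton flavours at \(E_{\nu_{l}}=175\) MeV and tests whether it falls inside the quasielastic ridge, approximated by the free-nucleon line quoted below as Eq. (5) and broadened by an assumed Fermi-motion width \(qk_{F}/M_{N}\); the nucleon mass and the value of \(k_{F}\) are illustrative inputs.

```python
import numpy as np

M_N, k_F = 939.0, 220.0              # nucleon mass and Fermi momentum in MeV (assumed)
m_lep = {"e": 0.511, "mu": 105.66}   # charged-lepton masses in MeV

def q_of(E_nu, omega, cos_theta, m_l):
    E_l = E_nu - omega                                   # energy conservation
    k_l = np.sqrt(np.maximum(E_l**2 - m_l**2, 0.0))      # Eq. (4)
    return np.sqrt(E_nu**2 + k_l**2 - 2.0 * E_nu * k_l * cos_theta)  # Eq. (3)

def omega_QE(q):
    return np.sqrt(q**2 + M_N**2) - M_N                  # free-nucleon QE line, Eq. (5)

E_nu = 175.0                          # MeV, as in Fig. 1
omega = np.linspace(1.0, 40.0, 40)    # sampled transferred energies
for flav in ("e", "mu"):
    q0 = q_of(E_nu, omega, +1.0, m_lep[flav])            # theta = 0 boundary
    inside = np.abs(omega - omega_QE(q0)) < q0 * k_F / M_N
    print(f"nu_{flav}: theta = 0 line inside the broadened QE ridge for "
          f"{inside.sum()} of {inside.size} sampled omega values")
# At this low energy, the muon theta = 0 line crosses the quasielastic region
# while the electron one stays outside of it, anticipating Fig. 1.
```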
In the figures we also show the curves corresponding to the \(\omega-q\) relation given by Eq. (3) for fixed values of the neutrino energy and of the charged-lepton mass, for the two extreme values of the lepton scattering angle, \(\theta=0\) and \(\theta=\pi\). These curves delimit the available phase space. Even if the figures are obtained by employing a particular approach, the Random Phase Approximation (RPA) of Refs. [13; 14; 15], general considerations can nevertheless be made. First of all, some well-known features visually emerge: * The electron (anti)neutrino phase space is larger than the corresponding muon one, due to the different charged-lepton mass, which explains the larger total cross sections in the electron case. * The difference between the electron and muon (anti)neutrino cross sections decreases with increasing neutrino energy. * The antineutrino cross sections decrease more rapidly with increasing \(q\), hence with increasing angle, than the neutrino ones. * The quasielastic response region clearly appears: all the cross sections are peaked at the quasielastic line 1 Footnote 1: We recall that RPA collective effects may shift the position of the QE peak, but the effect remains relatively weak. \[\omega=\frac{q^{2}-\omega^{2}}{2M_{N}}=\sqrt{q^{2}+M_{N}^{2}}-M_{N} \tag{5}\] (\(M_{N}\) being the nucleon mass) and spread around this curve due to Fermi motion. The most important feature which emerges from the figures concerns the inversion of the \(\nu_{e}\) (\(\bar{\nu}_{e}\)) and \(\nu_{\mu}\) (\(\bar{\nu}_{\mu}\)) cross-section hierarchy, and it is the following: * At lower neutrino energies (for example \(E_{\nu_{l}}=175\) MeV, as in Fig. 1) the \(\theta=0\) line largely crosses the quasielastic response region for muon (anti)neutrino scattering, which is not the case for electron (anti)neutrino scattering, where the \(\theta=0\) line is always outside the quasielastic response region. In other words, _for neutrino and antineutrino scattering the \(\theta=0\) muon and electron lines explore two different regions in the \((q,\omega)\) plane, the muon one corresponding to larger quasielastic cross sections_. By increasing the neutrino energy the difference between the muon and electron \(\theta=0\) lines decreases and the two curves explore more and more similar regions in the \((q,\omega)\) plane, as appears in Fig. 2. This argument also allows one to see why at low neutrino energies and backward scattering angles the muon antineutrino cross sections are larger than the electron ones, as first observed in Ref. [12]: * At low neutrino energies for antineutrino scattering the \(\theta=\pi\) muon and electron lines explore two different regions in the \((q,\omega)\) plane, the muon one corresponding to larger quasielastic cross sections, as appears in Fig. 1. Before concluding, it is important to stress that all the previous results and discussions refer to fixed values of the neutrino energy, which is an unknown variable in accelerator-based neutrino experiments, where neutrino beams are not monochromatic. The three variables \(E_{\nu_{l}}\), \(\omega\) and \(q\) are not directly measurable, and up to now the neutrino scattering community has rightly privileged flux-integrated cross sections as a function of measured variables such as \(E_{l}\) and \(\cos\theta\). However, there are several reasons to consider the cross sections also in terms of the variables used in the present work.
First of all, as we have shown, it allows theoretical analyses and an effective visualization of the behaviour of different cross sections. Second, present and future neutrino detectors will allow increasingly exclusive measurements and an increasingly accurate knowledge of the vertex activity. This, combined with more and more accurate neutrino-interaction modeling, will allow one to reconstruct and constrain unmeasured variables. First examples of experimental cross sections presented as a function of directly unmeasurable variables, such as \(\sigma(E_{\nu_{\mu}})\), \(\frac{d\sigma}{d\omega}\) [16] and, more recently, \(\frac{d^{2}\sigma(E_{\nu_{\mu}})}{dk_{\mu}d\cos\theta}\) [17], have already appeared. The final comment is that one could use the representation of the cross sections in terms of the \(q\) and \(\omega\) variables in order to investigate whether it is possible to find patterns allowing one to constrain unmeasured electron (anti)neutrino cross sections starting from the measured muon ones. ## Acknowledgement We thank T. Dieminger, S. Dolan, C. Giganti, Y. Maidannyk, D. Sgalaberna and U. Virginet for useful discussions.
2305.02358
Are all metal-poor stars of second-generation?
Hydrodynamical cosmological simulations predict that the metal-free Population III (Pop III) stars were likely very massive and, therefore, short-lived. However, they left their chemical imprint on their descendants, which can also have masses $ < 0.8 \mathrm {M_{\odot}}$ and still be alive today. The Milky Way stellar halo is one of the oldest and most metal-poor components of the Local Group, and a peculiar class of stars, the so-called Carbon-Enhanced Metal-Poor (CEMP-no) stars, seems to be directly related to Pop III stars. We aim at revealing if all metal-poor halo stars are true second-generation stars or if they have also been enriched by the subsequent generations of normal (Pop II) stars. For this purpose, we compare the measured carbon and iron abundances of the metal-poor halo stars with the ones predicted by our simple parametric model, varying the pollution level from Pop III and normal stars. We find that only the most C-enhanced and Fe-poor stars enclose in their photospheres the pure imprint of Pop III stars, while, as the [C/Fe] decreases, the probability of being also polluted by normal Pop II stars increases.
Irene Vanni, Stefania Salvadori, Ása Skúladóttir
2023-05-03T18:00:10Z
http://arxiv.org/abs/2305.02358v1
# Are all metal-poor stars of second-generation? ###### Abstract Hydrodynamical cosmological simulations predict that the metal-free Population III (Pop III) stars were likely very massive and, therefore, short-lived. However, they left their chemical imprint on their descendants, which can also have masses \(<0.8\)M\({}_{\odot}\) and still be alive today. The Milky Way stellar halo is one of the oldest and most metal-poor components of the Local Group, and a peculiar class of stars, the so-called Carbon-Enhanced Metal-Poor (CEMP-no) stars, seems to be directly related to Pop III stars. We aim at revealing if _all_ metal-poor halo stars are true _second-generation stars_ or if they have also been enriched by the subsequent generations of normal (Pop II) stars. For this purpose, we compare the measured carbon and iron abundances of the metal-poor halo stars with the ones predicted by our simple parametric model, varying the pollution level from Pop III and normal stars. We find that only the most C-enhanced and Fe-poor stars enclose in their photospheres the pure imprint of Pop III stars, while, as the [C/Fe] decreases, the probability of being also polluted by normal Pop II stars increases. stars: abundances - ISM: abundances - Galaxy: halo - cosmology: first stars ## 1 Introduction The Milky Way stellar halo is a promising region to search for the descendants of the first (Pop III) stars. Here the oldest and most metal-poor stars have been spotted, suggesting a link with the chemical elements produced by Pop III stars. The masses of Pop III stars formed in state-of-the-art hydrodynamical simulations cover a wide range, \(0.1<m_{PopIII}<1000\)M\({}_{\odot}\) (e.g. Hirano et al., 2014; Greif, 2015; Hirano & Bromm, 2017). However, there is general theoretical consensus supporting the idea that the first stars were typically more massive than those forming today, with a characteristic mass \(\geq 10\)M\({}_{\odot}\) (e.g. Bromm, 2013). Furthermore, a zero-metallicity star has never been observed, confirming the massive nature of Pop III stars (e.g. Rossi et al., 2021). If Pop III stars were typically massive, most of them would have ended their lives in less than 100 Myr, exploding as supernovae (SNe). Thus, the first SNe polluted the Interstellar Medium (ISM) with their newly synthesized chemical elements. Consequently, normal Population II (Pop II) stars were able to form in this Pop III-enriched ISM (e.g. Schneider et al., 2012). Among these "second-generation" stars, those with masses \(<0.8\)M\({}_{\odot}\) can survive until today, preserving in their photospheres the chemical signatures of Pop III stars. The iron abundance1 of halo stars spans more than 5 dex. Stars with [Fe/H] \(<-1\) are called metal-poor and they are separated into two categories depending on their [C/Fe] values (see Beers & Christlieb 2005): Carbon-Enhanced Metal-Poor (CEMP) stars when [C/Fe] \(\geq+0.7\), and C-normal otherwise. The CEMP stars can be further divided into CEMP-s(/r) and CEMP-no stars depending on whether [Ba/Fe] is super- or sub-solar, respectively. Neutron-capture elements from the slow process, such as barium, are mainly produced by Asymptotic Giant Branch (AGB) stars, and their enhancement in the photosphere of a star is a signature of mass accretion from an AGB companion. The C-excess in CEMP-s stars is thus expected to be acquired during the star's lifetime, via mass transfer from a binary AGB companion (e.g. Abate et al. 2015).
On the other hand, the C-excess in CEMP-no stars is expected to be representative of the environment of formation. This is supported by observations, showing that CEMP-s stars are almost exclusively in binary systems, which is not the case for CEMP-no stars (e.g. Hansen et al. 2016; Arentsen et al. 2019). Footnote 1: [Fe/H]=\(\log\left(\frac{N_{Fe}}{N_{H}}\right)-\log\left(\frac{N_{Fe}}{N_{H}}\right)_{\odot}\). Solar abundances are from Asplund et al. (2009). The fraction of CEMP-no stars increases towards decreasing [Fe/H], suggesting a direct link between the birth clouds of CEMP-no stars and the chemical products of Pop III SNe (e.g. Salvadori et al. 2015; Chiaki et al. 2020). Conversely, the origin of C-normal very metal-poor stars ([Fe/H] \(<-2\)) is still highly debated: their chemical abundances are indeed consistent with either an enrichment from Pop III SNe _only_ (see Hartwig et al. 2019; Welsh et al. 2021), or from both Pop III and normal Pop II SNe (e.g. De Bennassuti et al. 2017; Liu et al. 2021). In this work, we aim at understanding if all very metal-poor stars are true descendants of Pop III stars. To this end we will take advantage of the high-resolution measurements of C-enhanced and C-normal metal-poor stars, and compare those with our theoretical models. We refer the reader to Vanni et al. (in prep) for a comparison with _all_ the chemical abundance ratios measured in metal-poor stars. ## 2 Sample of metal-poor halo stars The _halo sample_, which will be used for the comparison with our model, includes 132 CEMP-no and C-normal halo stars with [Fe/H] \(\leq-2\) which have been observed with high-resolution spectroscopy (R \(\geq 30\,000\)). In particular, this sample includes the stars presented in Cayrel et al. (2004) and Yong et al. (2013), excluding CEMP-s stars, as well as extremely metal-poor stars, [Fe/H] \(\leq-3\), from: Christlieb et al. (2004); Norris et al. (2007); Caffau et al. (2011); Hansen et al. (2014); Keller et al. (2014); Frebel & Norris (2015); Bonifacio et al. (2015); Li et al. (2015); Starkenburg et al. (2018); Bonifacio et al. (2018); Francois et al. (2018); Aguado et al. (2019); Ezzeddine et al. (2019). These chemical abundances are not corrected for non-LTE effects. The stars of our _halo sample_ cover a wide iron range, \(-7.1<\mathrm{[Fe/H]}<-2\), and their metallicity distribution function peaks at [Fe/H] \(\sim-3\) (see e.g. Placco et al. 2014). In Fig. 1 we show the average abundance ratios, the standard deviation, and the maximum range of the scatter for stars in the _halo sample_, divided among: CEMP-no stars (red) with [Fe/H] \(<-4\) (upper panel) and with \(-4<\mathrm{[Fe/H]}<-2\) (middle panel), and finally C-normal stars with [Fe/H] \(<-2\) (gray, lower panel). All the stars in the sample have carbon (and iron) abundance measurements, while for the other elements the measurements are sometimes limited to a few stars (e.g. O or Zn, which are not available for the CEMP-no stars in our sample with \(-4<\mathrm{[Fe/H]}<-2\)). Furthermore, the majority of the most iron-poor stars only have upper or lower limits on abundance values. For this reason, in the top panel of Fig. 1 we include the mean chemical abundance ratios obtained by both including and excluding the upper/lower limits. Furthermore, the upper/lower limits were used to evaluate the range of the scatter. For example, for the C-normal stars we have also used the Caffau et al. (2011) and Starkenburg et al. (2018) stars, which have, respectively, [C/Fe] \(<+0.7\) and [C/Fe] \(<+1.0\).
In Fig. 1 we first notice that the standard deviation and scatter in the abundance ratios are highest for the most Fe-poor CEMP-no stars (top panel), while they gradually decrease towards lower panels, being smallest for C-normal stars (see also Cayrel et al. 2004). The mean abundance ratios of the light elements C, N and O are generally higher for CEMP-no stars than for C-normal stars, even if we consider stars in the same [Fe/H] range. On the other hand, the abundance ratios of the elements heavier than oxygen are consistent within the error bars between the different stellar classes. ## 3 The model Here we briefly summarize the simple and general parametric study presented by Salvadori et al. (2019) and further extended by Vanni et al. (in prep). The model aims to chemically characterize the descendants of Pop III stars: long-lived stars formed in environments predominantly polluted by Pop III SNe, i.e. where the metals from Pop III SNe account for \(\geq 50\%\) of the metals in the ISM. Following the results of hydrodynamical cosmological simulations (e.g. Hirano et al. 2014), we assume that a single Pop III star can form in the primordial star-forming haloes. Then, we evaluate the chemical enrichment of the ISM after: (i) the injection of heavy elements by Pop III SNe with different explosion energies and progenitor masses; and (ii) the subsequent contribution of "normal" Pop II stars exploding as core-collapse SNe. We adopt the yields by Heger & Woosley (2002) for very massive Pop III stars, \(m=[140-260]\)M\({}_{\odot}\), that explode as Pair Instability SNe (_PISN_, \(E_{SN}=10^{52}-10^{53}\) erg), and by Heger & Woosley (2010) for intermediate-mass Pop III stars, \(m=[10-100]\)M\({}_{\odot}\), exploding as _faint_, _core-collapse_, _high-energy_ SNe and _hypernovae_, whose explosion energies are respectively equal to \(E_{SN}=(0.6,1.2,3.0,10.0)\times 10^{51}\) erg. For Pop II stars we adopt the yields of Woosley & Weaver (1995) and Limongi & Chieffi (2018) and show both results to account for the uncertainty due to the choice of the stellar evolutionary model (see e.g. Nomoto et al. 2013). The main unknowns related to early cosmic star formation and metal enrichment are encapsulated into the free parameters, which are varied to obtain the most general model. Figure 1: Chemical abundance pattern of the _halo sample_ in three categories: CEMP-no stars with [Fe/H] \(\leq-4\) (top), CEMP-no with \(-4<\) [Fe/H] \(\leq-2\) (middle), and C-normal stars with [Fe/H] \(\leq-2\) (bottom). Filled points with error bars represent the mean and the standard deviation computed without considering upper/lower limits, while the empty points (top panel) with arrows also include upper and/or lower limits. Shaded areas show the range of the measured chemical abundances, including also upper/lower limits. The model takes into consideration: the fraction of gas converted into stars, or the star formation efficiency, \(f_{*}=[10^{-4}-10^{-1}]\); the metals retained in the ISM, parameterized with the dilution factor \(f_{dil}\); and the mass fraction of metals injected into the ISM by Pop III stars with respect to the total, \(f_{PopIII}=[100-50]\%\). It turns out that the iron abundance of the ISM depends on \(f_{PopIII}\); on the yields of Pop III, \(Y^{III}(m,E_{SN})\), and Pop II stars, \(Y^{II}(m,Z)\); and on the ratio between the first two free parameters, \(f_{*}/f_{dil}\). Conversely, [C/Fe] is affected neither by \(f_{*}\) nor by \(f_{dil}\) (see Salvadori et al. 2019 for details). ## 4 Results
In Fig. 2 we compare the predicted [C/Fe] and [Fe/H] values for the descendants of the first stars (shaded areas) with the chemical abundances measured in the _halo sample_ (points). The colours denote different fractions of metals from Pop III SNe: from the yellow area, where the descendants are completely imprinted by Pop III stars (\(f_{PopIII}=100\%\)), to the purple area, where they are equally imprinted by Pop III and Pop II stars (\(f_{PopIII}=50\%\)). The different panels show the results for different explosion energies of Pop III SNe: from the least energetic faint SNe to the most energetic PISNe. The maximum [C/Fe] that is predicted by the models (Fig. 2) decreases with: (i) increasing explosion energy of Pop III SNe; and (ii) increasing contribution of Pop II SNe to the chemical enrichment. Furthermore, we see that CEMP-no halo stars with [C/Fe] \(\geq+2.5\) and [Fe/H] \(\leq-4\) can _only_ be reproduced by models accounting for a 100% enrichment from Pop III stars, which implies that these objects are _true second-generation stars_. Indeed, even a 10% pollution by Pop II stars is sufficient to lower the maximum to [C/Fe] \(<+2.5\). Which kind of Pop III SNe imprinted these highly C-enhanced ultra iron-poor stars? Different Pop III SN models are able to enrich the ISM with high [C/Fe] and low [Fe/H]: the lightest PISN progenitors, M \(\sim 140-150\rm M_{\odot}\), and massive Pop III stars, \(40\rm M_{\odot}\leq M\leq 100\rm M_{\odot}\), exploding as _faint_ SNe or _core-collapse_ SNe. Still, our models show that the abundance ratios of the other chemical elements measured in these CEMP-no ultra iron-poor stars (Fig. 1) are not consistent with a light PISN enrichment, but rather support a scenario where these stars have been imprinted by massive Pop III stars exploding as low-energy SNe. We can now try to understand if all metal-poor stars are truly second-generation objects. At higher [Fe/H] \(>-4\), the observed properties of both CEMP-no and C-normal halo stars can be reproduced by several models (Fig. 2). At [Fe/H] \(>-4\), only the hypernova case is not able to reproduce the [C/Fe] values measured in CEMP-no stars, with or without a Pop II contribution. To disentangle this degeneracy between different Pop III SN models, we would need to study the abundance ratios of other chemical elements. This will be done in a complementary work, Vanni et al. (in prep). The probability for a star to be uniquely (100%) or predominantly (\(>50\%\)) imprinted by the chemical products of Pop III SNe decreases as [C/Fe] decreases. Furthermore, an increasing contribution from Pop II stars reduces the scatter in [C/Fe] predicted by different models, in agreement with what is observed (Fig. 1). Indeed, we can see in Fig. 2 that the scatter in [C/Fe] spans \(\sim 5\) dex for pure Pop III star descendants, and only \(\sim 2\) dex for the descendants enriched by Pop III stars at a 50% level. Thus, the specific chemical features of the ISM enriched by different Pop III SN types are mostly washed out if Pop III and Pop II stars equally contribute to the ISM enrichment. This is a strong indication that a group of stars showing very similar chemical abundances, i.e. a small star-to-star scatter like that of C-normal halo stars, has likely been polluted by Pop III and normal Pop II stars at the same level, or even predominantly by Pop II stars.
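A minimal numerical sketch of the two-source mixing logic behind Fig. 2 is given below: the ISM [C/Fe] is computed for a metal budget in which a fraction \(f_{PopIII}\) of the metals comes from Pop III SNe and the rest from normal Pop II SNe. The carbon and iron mass fractions per unit metal mass adopted here are illustrative placeholders, not the Heger & Woosley (2002, 2010) or Woosley & Weaver (1995) yields used in the actual calculations.

```python
import numpy as np

A_C, A_Fe = 12.0, 56.0
logNC_sun, logNFe_sun = 8.43, 7.50               # Asplund et al. (2009)
CFe_mass_sun = 10**(logNC_sun - logNFe_sun) * A_C / A_Fe

# assumed (C, Fe) mass fractions per unit metal mass in the ejecta
yields = {"PopIII_faint": (0.35, 1e-4),           # C-rich, Fe-poor faint SN
          "PopII_cc":     (0.08, 0.07)}           # normal core-collapse SN

def CFe_ISM(f_popIII):
    C3, Fe3 = yields["PopIII_faint"]
    C2, Fe2 = yields["PopII_cc"]
    M_C  = f_popIII * C3  + (1.0 - f_popIII) * C2
    M_Fe = f_popIII * Fe3 + (1.0 - f_popIII) * Fe2
    return np.log10(M_C / M_Fe / CFe_mass_sun)    # [C/Fe] of the mixed ISM

for f in (1.0, 0.9, 0.7, 0.5):
    print(f"f_PopIII = {f:.1f}  ->  [C/Fe] = {CFe_ISM(f):+.2f}")
# Even a 10% metal contribution from Pop II SNe pulls [C/Fe] down by
# almost 2 dex from the pure faint-SN value, mirroring the trend of Fig. 2.
```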
## 5 Conclusions By comparing our model results with data, we propose simple diagnostics that exploit measured carbon and iron abundances to identify which metal-poor halo stars are the descendants of Pop III stars (\(>50\%\) of the metals in their ISM of formation) and to estimate the level of imprint from normal Pop II SNe. These are our key findings: * CEMP-no stars with [C/Fe] \(\geq+2.5\) are second-generation stars _solely_ imprinted by massive first stars exploding with low to normal energy (\(E_{SN}\leq 1.2\times 10^{51}\) erg). * CEMP-no stars with [C/Fe] \(<+2.5\) are likely Pop III star descendants, which have been mainly polluted by Pop III SNe with a non-negligible \(\sim(10-30)\%\) contribution from Pop II stars. * The majority of C-normal stars have been enriched by normal Pop II stars at a \(\gtrsim 50\%\) level, and hence they do not show strong chemical features left by Pop III SNe. Conversely, our models show that Pop III SNe can also enrich the ISM to low carbon values. The [C/Fe] values of the pure descendants of Pop III SNe exploding with high energy (\(E_{SN}\geq 3\times 10^{51}\) erg) are mostly located at [C/Fe] \(\lesssim 0\). Therefore, we expect to find rare C-normal stars that are direct descendants of these very energetic Pop III SNe, thus showing peculiar chemical abundance patterns. This is consistent with the recent discoveries of primordial hypernova descendants (see Skúladóttir et al., 2021; Placco et al., 2021). Ultimately, our simple and general parametric model can be applied to interpret the chemical abundances measured in both ancient stars and in high-redshift absorption systems (see Salvadori et al. in this volume). According to our predictions, a C-excess measured in gaseous environments at high redshifts (\(z>4\)) is a solid proof of chemical enrichment driven by Pop III stars. Figure 2: Carbon-to-iron ratios of Pop III descendants. Colours show different levels of Pop III enrichment: 100% (yellow), 90% (orange), 70% (pink), 50% (purple). Each panel shows different explosion energies of Pop III SNe (see labels): from faint SNe (\(E\sim 6\times 10^{50}\) erg) to hypernovae (\(E\sim 10^{52}\) erg) in the mass range [\(10-100\)]M\({}_{\odot}\); as well as PISN (\(E=10^{52-53}\) erg) for [\(140-260\)]M\({}_{\odot}\). Star symbols show the measured abundances of CEMP-no (filled) and C-normal stars (open). The dotted line is the threshold for carbon enhancement, [C/Fe] \(>+0.7\). Note that the yellow area for the hypernova progenitor case is hidden behind the other colours. The observed scatter of chemical abundance ratios is another key diagnostic. Our model shows a larger scatter for a higher fraction of Pop III contribution. This is supported by observations of metal-poor stars in the Galactic halo (Fig. 1), and we predict that this is also true in the ISM observed at high redshifts. By using a powerful combination of models and data, we have exploited C and Fe abundances to identify first-star descendants and quantify their Pop III enrichment. However, using only these two elements is not sufficient to break degeneracies between different models of Pop III SNe (mass and energy). In an upcoming study, we will thus expand the work presented here to include all the chemical elements available, exploiting to the fullest the very metal-poor stars to better understand the first stars in the Universe. ###### Acknowledgements.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 804240). I.V. and S.S. acknowledge support from the PRIN-MIUR17, prot. n. 20174ARJ5.
2307.14343
Pruning Distorted Images in MNIST Handwritten Digits
Recognizing handwritten digits is a challenging task primarily due to the diversity of writing styles and the presence of noisy images. The widely used MNIST dataset, which is commonly employed as a benchmark for this task, includes distorted digits with irregular shapes, incomplete strokes, and varying skew in both the training and testing datasets. Consequently, these factors contribute to reduced accuracy in digit recognition. To overcome this challenge, we propose a two-stage deep learning approach. In the first stage, we create a simple neural network to identify distorted digits within the training set. This model serves to detect and filter out such distorted and ambiguous images. In the second stage, we exclude these identified images from the training dataset and proceed to retrain the model using the filtered dataset. This process aims to improve the classification accuracy and confidence levels while mitigating issues of underfitting and overfitting. Our experimental results demonstrate the effectiveness of the proposed approach, achieving an accuracy rate of over 99.5% on the testing dataset. This significant improvement showcases the potential of our method in enhancing digit classification accuracy. In our future work, we intend to explore the scalability of this approach and investigate techniques to further enhance accuracy by reducing the size of the training data.
Amarnath R, Vinay Kumar V
2023-05-26T11:44:35Z
http://arxiv.org/abs/2307.14343v1
# Pruning Distorted Images in MNIST Handwritten Digits ###### Abstract Recognizing handwritten digits is a challenging task primarily due to the diversity of writing styles and the presence of noisy images. The widely used MNIST dataset, which is commonly employed as a benchmark for this task, includes distorted digits with irregular shapes, incomplete strokes, and varying skew in both the training and testing datasets. Consequently, these factors contribute to reduced accuracy in digit recognition. To overcome this challenge, we propose a two-stage deep learning approach. In the first stage, we create a simple neural network to identify distorted digits within the training set. This model serves to detect and filter out such distorted and ambiguous images. In the second stage, we exclude these identified images from the training dataset and proceed to retrain the model using the filtered dataset. This process aims to improve the classification accuracy and confidence levels while mitigating issues of underfitting and overfitting. Our experimental results demonstrate the effectiveness of the proposed approach, achieving an accuracy rate of over 99.5% on the testing dataset. This significant improvement showcases the potential of our method in enhancing digit classification accuracy. In our future work, we intend to explore the scalability of this approach and investigate techniques to further enhance accuracy by reducing the size of the training data. handwritten digit recognition, deep learning, MNIST, ambiguous, distortion ## 1 Introduction Handwritten digit recognition is a complex task that finds applications in various fields, including computer vision and machine learning. It involves the identification and classification of digits written by hand, enabling tasks such as character recognition and digit analysis. In this domain, the MNIST dataset serves as a widely used benchmark for evaluating the performance of handwritten digit recognition systems [1, 2]. It consists of a collection of 70,000 handwritten digits, ranging from 0 to 9, and is derived from the writings of 250 individuals. The dataset's standardized characteristics, such as the consistent image size of 28x28 pixels and grayscale format, make it particularly valuable for research and experimentation [1, 2, 5]. While there are other handwritten datasets available in the literature [3], the MNIST dataset stands out due to its comprehensive nature. It provides a diverse range of writing styles, capturing variations in how different individuals write digits. Moreover, the dataset contains samples with various distortions, representing real-world scenarios where digits can appear in different shapes or orientations. This diversity in the dataset enables researchers to explore and develop robust algorithms that can handle these challenges effectively. Although deep learning techniques have achieved significant success in digit classification on the MNIST dataset, achieving maximum accuracy remains a challenging task [4]. This challenge arises from various factors that can impact the model's ability to learn and classify digits accurately. One of these factors is data imbalance, where certain classes of digits may have a disproportionately small number of samples compared to others. This imbalance can hinder the model's performance as it may struggle to learn effectively from limited data, leading to biased predictions. In addition to data imbalance, variances in the data itself can also present challenges. 
Handwritten digits can exhibit irregular shapes, incomplete strokes, and variations in their height, width, density, concavity, and convexity. These variations make it difficult for the model to generalize and capture the essential features necessary for accurate classification. However, the most significant obstacle faced by researchers is the identification of ambiguous images within the training set. These ambiguous images can introduce noise during the training process, leading to issues of overfitting or underfitting and ultimately compromising the accuracy of the model [4; 5]. When ambiguous images are present in the training set, the model may learn to rely on specific artifacts or noise patterns rather than the true characteristics of the digits. As a result, the model becomes biased and lacks the ability to generalize well for the specific task at hand. Data preparation plays a crucial role in building an effective deep network pipeline. However, manually removing noisy images, particularly in large training sets, can be challenging and time-consuming [6; 7]. Figure 1 showcases a selection of ambiguous images randomly extracted from the MNIST training dataset, which includes digits ranging from 0 to 9. The figure displays five sets of randomly chosen samples, organized in columns, where the first column represents the digit 0, and the last column represents the digit 9. Upon observing the figure, it becomes evident that a significant portion of the images is inherently ambiguous, making accurate classification a difficult task even for human observers. These ambiguous images present challenges due to their unclear or distorted visual characteristics. The presence of irregular shapes, incomplete strokes, variations in size, or conflicting elements within the images can cause confusion and lead to difficulties in accurate classification. These challenges highlight the need for automated approaches to identify and remove such ambiguous images from the training dataset, as manual inspection and removal would be labor-intensive and impractical, especially in scenarios involving large datasets. The objective of our research is to detect and eliminate distorted images that cause ambiguity within the training set. However, manually identifying such images can be a challenging task. To overcome this challenge, we propose the use of a deep learning model initially. This model is trained to identify and flag these distorted images automatically. Once the distorted images are identified, we proceed to verify them through human inspection. This step ensures that the flagged images are indeed ambiguous and warrant removal from the training set. After the verification process, the model is retrained using the filtered images from the training set, which no longer include the identified distorted images. In this paper, we present a deep neural network architecture specifically designed to identify and remove such distorted images from the MNIST dataset. Our approach aims to improve the overall accuracy of the digit classification model by reducing variations caused by ambiguous images. By filtering out these images, we enhance the quality of the training data, leading to more reliable and accurate predictions. Furthermore, our approach increases confidence levels in the training and validation losses, providing a more robust assessment of the model's performance. 
It also addresses issues related to overfitting and underfitting, which can occur when the model is excessively influenced by the presence of distorted or ambiguous images during training. The inspiration for our approach comes from previous studies [6, 7, 8] that have tackled similar challenges in data preparation and noise removal.

Figure 1: Five sets of samples consisting of digits ranging from 0 to 9 from the training dataset.

By building upon the knowledge and techniques established in these studies, we aim to contribute to the field of digit classification and further improve the accuracy and reliability of deep learning models. Previous research [2, 9] has shown that deep learning techniques can significantly improve handwritten digit recognition accuracy. For instance, LeCun et al. [2] achieved an error rate of 0.7% using convolutional neural networks (CNNs) for digit recognition. Later studies reported even better results using more advanced deep learning architectures, such as ResNet (He et al., 2016) [10] and Capsule Networks (Sabour et al., 2017) [11]. However, these works have not extensively addressed the issue of identifying and eliminating the distorted images that cause lower accuracy rates. In recent studies, researchers have introduced various approaches [12, 13, 14, 15] aimed at identifying and removing noisy, ambiguous, or distorted images from datasets used in different image classification tasks. These techniques have demonstrated success in enhancing the quality of training data and improving classification accuracy. Motivated by the positive outcomes of these techniques, our research aims to apply similar methods to the MNIST dataset. Specifically, we focus on detecting and eliminating ambiguous images that can introduce noise during the training process. By addressing this issue, we aim to enhance the reliability and performance of the classification model. To achieve our goal, we propose a deep neural network architecture that encompasses preprocessing, feature extraction, and classification layers. By following a systematic approach, our method aims to improve classification accuracy, ensure an unbiased and generalized model, and enhance confidence levels in the model's predictions. The key steps involved in our approach are as follows:

1. Development of a model capable of detecting ambiguous images in the training dataset. This model serves as a means to automatically identify and flag images that exhibit characteristics of ambiguity.
2. Manual validation and removal of the identified ambiguous images from the training dataset. By conducting a thorough inspection, we ensure the accurate identification of these images and their subsequent exclusion from the training data.
3. Retraining the deep neural network using the remaining images from the training dataset. This step allows the model to learn from a refined dataset, free from the noise introduced by the ambiguous images. By retraining the model, we aim to improve its classification accuracy and boost confidence levels in its predictions.
4. Reporting our findings, including accuracies and losses, after cleaning the training dataset. We present the outcomes of our approach, showcasing the impact of removing ambiguous images on the model's performance and highlighting the improvements achieved.

Our approach offers the following significant contributions:

* Improved Confidence Level: Our approach has successfully enhanced the confidence level of predictions for digit recognition.
By detecting and removing ambiguous images, which are challenging even for humans to recognize, we have increased the reliability and trustworthiness of our model's predictions.

* Increased Accuracy: Through the identification and elimination of ambiguous images, we have achieved a higher accuracy rate. By removing these sources of confusion and noise from the training dataset, our model can focus on learning from more reliable and representative examples, leading to improved classification performance.
* Effective Neural Network Design: Our neural network design is straightforward yet effective in addressing both underfitting and overfitting issues. By carefully designing the architecture and training process, we have created a model that balances capturing important patterns in the data against overgeneralization or memorization.
* High Accuracy without Augmented Data: Remarkably, we have achieved high accuracy on the MNIST dataset without utilizing any augmented data for training. Our model's performance demonstrates its capability to learn and generalize effectively from the available dataset. Moreover, we have also demonstrated that our model performs well even with a reduced dataset, suggesting its robustness and adaptability.

The paper is structured as follows: Section 2 provides a comprehensive overview of the existing literature on handwritten digit classification in the MNIST dataset. Section 3 describes the MNIST dataset's characteristics, including the data distribution, image characteristics, and classification challenges. In Section 4, we present our proposed method in detail, including the deep neural network architecture, preprocessing steps, and the identification and removal of ambiguous images. Section 5 presents experimental results. Finally, in Section 6, we summarize the paper's contributions and conclude our study.

## 2 Related Works

In this section, we review the most significant studies on MNIST digit classification using deep learning. Convolutional neural networks (CNNs) have been widely used for MNIST digit classification, achieving high accuracy rates. LeNet-5, introduced by Yann LeCun et al. [2] in 1998, was the first successful CNN for handwritten digit recognition. LeNet-5 includes two convolutional layers, two subsampling layers, and three fully connected layers. Another popular CNN is AlexNet, which won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012 [16]. AlexNet includes five convolutional layers, three fully connected layers, and a ReLU activation function. Other successful CNN models include VGG-Net [17], GoogLeNet [18], ResNet [10], and InceptionNet [20]. In 2019, Baldominos et al. [21] conducted a literature survey on the research and developments related to the MNIST dataset. In our study, we have examined the surveys mentioned in their paper that report a test error rate below 0.5%. Specifically, we present the findings in Table 1, which corresponds to test accuracies achieved with augmentation, and Table 2, which represents test accuracies without augmentation.
Their contributions have been discussed in the literature [21].

\begin{table} \begin{tabular}{|l|c|} \hline **Technique** & **Test Dataset Error Rate \textless{} 0.5\%** \\ \hline NN 6-layer 5,700 hidden units [22] & 0.35\% \\ \hline MSRV C-SVDDNet [23] & 0.35\% \\ \hline Committee of 25 NN 2-layer 800 hidden units [24] & 0.39\% \\ \hline RNN [25] & 0.45\% \\ \hline CNN (2 conv, 1 dense, relu) with DropConnect [26] & 0.21\% \\ \hline Committee of 25 CNNs [27] & 0.23\% \\ \hline CNN with APAC [28] & 0.23\% \\ \hline CNN (2 conv, 1 relu, relu) with dropout [26] & 0.27\% \\ \hline Committee of 7 CNNs [30] & 0.27\% \\ \hline Deep CNN [31] & 0.35\% \\ \hline CNN (2 conv, 1 dense), unsup pretraining [32] & 0.39\% \\ \hline CNN, XE loss [33] & 0.40\% \\ \hline \end{tabular} \end{table} Table 1: Side-by-side comparison of the most competitive (error rate < 0.5%) results found in the state of the art for the MNIST database with data augmentation or preprocessing

In a recent study published in 2022, researchers [44] utilized a decoder network, which proved to be a more convenient and quicker training approach compared to encoder-decoder architectures. However, the authors did not mention the presence of noisy images in the MNIST dataset. In contrast, a more recent study published in 2023 [45] explicitly addressed the issue of distorted images by incorporating samples from the MNIST dataset. The authors demonstrated how these distorted images can significantly decrease the accuracy rate, even when using state-of-the-art deep learning networks. They emphasized the limitations of current deep learning models in comparison to human capabilities, highlighting that existing image distortions used for evaluating deep learning models primarily rely on mathematical transformations rather than human cognitive functions. Furthermore, the 1998 paper by Yann LeCun et al. [2] brings attention to the existence of genuinely ambiguous and underrepresented digits within the training set, which presents difficulties in their recognition. Based on the literature survey above, only a limited number of papers discuss distorted images within the MNIST dataset, and there is a lack of comprehensive research dedicated to identifying and removing such images from the training set. Inspired by the recent work highlighted in papers [45, 46] and the foundational paper [2], we recognize the implications of these noisy images for accuracy. Consequently, in this study, we propose a deep neural network architecture explicitly designed to eliminate ambiguous images from the MNIST training dataset. Our approach effectively addresses issues of overfitting and underfitting by minimizing variations in training and validation losses, resulting in enhanced accuracy for the digit classification model. By tackling the crucial task of identifying and removing ambiguous images, our methodology provides a practical solution that elevates the overall accuracy of deep learning models utilized for handwritten digit recognition.

## 3 MNIST Dataset

The MNIST dataset is a widely used benchmark dataset in the field of image classification. It was introduced in a research paper by Yann LeCun et al. [1, 2].
\begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Technique** & **Test Dataset Error Rate \textless{} 0.5\%** \\ \hline Batch-normalized maxout network-in-network [35] & 0.24\% \\ \hline Committees of evolved CNNs (CEA-CNN) [36] & 0.24\% \\ \hline Genetically evolved committee of CNNs [37] & 0.25\% \\ \hline Committees of 7 neuroevolved CNNs [38] & 0.28\% \\ \hline CNN with gated pooling function [39] & 0.29\% \\ \hline Inception-Recurrent CNN + LSUV + EVE [40] & 0.29\% \\ \hline Recurrent CNN [41] & 0.31\% \\ \hline CNN with norm. layers and piecewise linear activation units [42] & 0.31\% \\ \hline CNN (5 conv, 3 dense) with full training [43] & 0.32\% \\ \hline \end{tabular} \end{table} Table 2: Side-by-side comparison of the most competitive (error rate < 0.5%) results found in the state of the art for the MNIST database without data augmentation or preprocessing

The dataset consists of 70,000 images of handwritten digits, which are split into 60,000 training images and 10,000 test images. Each image is a grayscale 28x28 pixel image, centered on a white background. The digits are written by 250 individuals, which results in variations in writing styles, orientations, and distortions. The goal of the MNIST digit classification problem is to develop an algorithm that can accurately classify the digits in the images into their corresponding classes (0-9). The MNIST dataset is considered a challenging problem due to its variability, such as disconnected or incomplete strokes and skewed or rotated digits with irregular shapes, which makes it difficult to classify the digits accurately. The MNIST dataset, although widely popular, is not immune to challenges. One significant obstacle is the existence of ambiguous images that can affect classification model accuracy [45]. With such images, it is often difficult for both humans and algorithms to distinguish between certain digits [46], such as 4 and 9, 1 and 7, 6 and 0, 7 and 4, and so on. When such noisy or distorted images are included in the training dataset, the learning rate can decrease, and the model's accuracy can be compromised. Figure 2 illustrates some of these images, where the digits from 0 to 9 are displayed in ascending order.

Figure 2: A sample set of noisy images from training data

In addition to the presence of noisy images, the MNIST dataset poses other challenges as well. One such challenge is the variation in writing styles and quality among different individuals. This results in some digits being more difficult to recognize than others, making the dataset more complex for classification models. Additionally, the dataset is imbalanced, with certain digits having significantly more samples than others [2, 47, 50, 51]. This can bias the model towards the more frequently occurring digits and lead to poorer classification performance on less common digits. The distribution of samples across the 10 classes is illustrated in Figure 3.

Figure 3: Distribution of digits in Training Set

## 4 Proposed Network Design

The proposed method consists of three components: a preprocessing layer, a feature extraction layer, and a classification layer, as depicted in Figure 4. We used the TensorFlow and Keras frameworks for this study, as they are both widely used and make it easy to develop deep learning models [48, 49]. Figure 4 shows that the preprocessing layer performs normalization of the input images to a standard size of 28x28 and scales the pixel values between 0 and 1. This step helps to reduce the variations in input images, making them more suitable for training deep neural networks; a minimal sketch of this step is given below.
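To make the preprocessing step concrete, here is a minimal sketch using the standard `keras.datasets.mnist` loader; the function name `load_and_scale` is ours for illustration and does not appear in the paper.

```python
from tensorflow import keras

def load_and_scale():
    """Load MNIST and scale pixel values from [0, 255] to [0, 1]."""
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    # Images already have the standard 28x28 size; add a channel axis
    # so they match the (28, 28, 1) input expected by the network.
    x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
    x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
    return (x_train, y_train), (x_test, y_test)
```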
The feature extraction layer includes three convolutional layers, each with the ReLU activation function and batch normalization applied to the output. To address overfitting, we also used dropout layers in combination with batch normalization. Dropout regularization randomly sets a fraction of the neurons to zero during training, which helps to prevent over-reliance on specific features. Batch normalization normalizes the activations of the neurons to reduce the impact of internal covariate shift, which can improve generalization performance. The classification layer is composed of multiple fully connected layers that map the extracted features to the output classes; here, too, we introduce dropout regularization and batch normalization to tackle overfitting.

Figure 4: Deep Learning Framework

Figure 5 displays the layers and learnable parameters of the proposed architecture. The feature extraction component consists of multiple layers that extract relevant features from the input grayscale images of dimensions 28x28. The first layer serves as the input layer and has no learnable parameters, making it a non-trainable layer. The second layer is a convolutional layer with 32 filters, a kernel size of 3x3, and a stride of 1x1; the number of trainable parameters in this layer is 320. The third layer is a batch normalization layer with 32 filters and 128 parameters, which include the gamma weights, beta weights, moving_mean, and moving_variance. The fourth layer is another convolutional layer with 32 filters, a kernel size of 5x5, a stride of 2x2, and 25,632 trainable parameters. The fifth layer is another batch normalization layer with 128 parameters. The sixth layer is a dropout layer with a dropout rate of 0.5 and no trainable parameters. The seventh layer is a convolutional layer with 64 filters, a kernel size of 5x5, a stride of 2x2, and 51,264 trainable parameters. The eighth layer is another batch normalization layer with 256 parameters. The ninth layer is another dropout layer with a dropout rate of 0.5 and no trainable parameters. The final layer in the feature extraction component is a flatten layer producing 3,136 features, with no trainable parameters. Following the feature extraction component, the classification component of the neural network consists of four layers. The first layer is a dense layer with 128 neurons and 401,536 trainable parameters. The second layer is a batch normalization layer with 512 parameters. The third layer is a dropout layer with a rate of 0.4 and no trainable parameters. The final layer is a dense layer with the softmax activation function and 1,290 trainable parameters.

Figure 5: Layers of deep network with parameters

Here are the trainable and non-trainable parameters for each layer, along with the formula to compute them:

Feature extraction layer:

* Input layer: non-trainable (no learnable parameters)
* Conv2D layer: 320 trainable parameters.
Formula: (\(w\) * \(h\) + 1) * \(f\) = (3 * 3 + 1) * 32 = 320
* BatchNormalization layer: 128 parameters. Formula: \(f\) * 4 = 32 * 4 = 128
* Conv2D layer: 25,632 trainable parameters. Formula: (\(w\) * \(h\) * \(f_{p}\) + 1) * \(f\) = (5 * 5 * 32 + 1) * 32 = 25,632
* BatchNormalization layer: 128 parameters. Formula: \(f\) * 4 = 32 * 4 = 128
* Dropout layer: non-trainable (no learnable parameters)
* Conv2D layer: 51,264 trainable parameters. Formula: (\(w\) * \(h\) * \(f_{p}\) + 1) * \(f\) = (5 * 5 * 32 + 1) * 64 = 51,264
* BatchNormalization layer: 256 parameters. Formula: \(f\) * 4 = 64 * 4 = 256
* Dropout layer: non-trainable (no learnable parameters)
* Flatten layer: non-trainable (no learnable parameters)

Classification layer:

* Dense layer: 401,536 trainable parameters. Formula: (\(n_{p}\) * \(n\)) + (1 * \(n\)) = 3,136 * 128 + 128 = 401,536
* BatchNormalization layer: 512 parameters. Formula: \(f\) * 4 = 128 * 4 = 512
* Dropout layer: non-trainable (no learnable parameters)
* Dense layer: 1,290 trainable parameters. Formula: (\(n_{p}\) * \(n\)) + (1 * \(n\)) = 128 * 10 + 10 = 1,290

In the above formulas, "\(w\)" and "\(h\)" represent the width and height of the kernel, "\(f\)" represents the number of filters in the current layer, "\(f_{p}\)" represents the number of filters (channels) in the previous layer, and "\(n\)" and "\(n_{p}\)" represent the number of neurons in the current and previous layers, respectively. The "+ 1" term in the formulas represents the bias term. Note that the batch normalization layers have both trainable and non-trainable parameters: the trainable parameters are the "gamma" and "beta" weights used to adjust the normalized data, while the non-trainable parameters are the "moving_mean" and "moving_variance" used to track the mean and variance of the normalized data across all mini-batches during training. In summary, the proposed neural network architecture has a total of 480,554 trainable parameters and 512 non-trainable parameters. The network consists of 3 convolution layers and 2 dense layers with batch normalization and dropout. This layer setup allows the system to learn sufficient features and handle both overfitting and underfitting effectively: adding extra convolution layers or filters can lead to overfitting, while reducing the number of filters or convolution layers can result in underfitting. To further handle overfitting, we have also incorporated significant dropout, which helps match the validation losses with the training losses. This choice of layers and number of trainable parameters provides a clear understanding of the neural network's design; each design decision is discussed in detail below. In our neural network, convolutional layers were employed to identify the spatial features of the images, while batch normalization layers were used to stabilize the learning process and improve training time. We chose kernel sizes of 3x3 and 5x5, which are effective for filtering corner edges and lengthy lines, respectively. Additionally, we selected filter sizes of 32 and 64 to provide more learnable parameters, which can lead to overfitting if not handled properly. To prevent overfitting and enhance the generalization ability of the model, we added dropout layers, which randomly drop out some nodes during training, preventing the model from relying too much on specific features or patterns in the data and further improving the network's ability to handle noisy or ambiguous images. A runnable sketch of the complete architecture is given below.
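The following Keras sketch reproduces the architecture just described. One assumption on our part: we use `padding="same"`, since that choice reproduces both the reported per-layer parameter counts and the 3,136-unit flatten layer (7x7x64); parameter counts are independent of padding, so the numbers in the comments match the text either way.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, strides=1, padding="same",
                  activation="relu"),   # 320 params
    layers.BatchNormalization(),        # 128 params
    layers.Conv2D(32, kernel_size=5, strides=2, padding="same",
                  activation="relu"),   # 25,632 params
    layers.BatchNormalization(),        # 128 params
    layers.Dropout(0.5),
    layers.Conv2D(64, kernel_size=5, strides=2, padding="same",
                  activation="relu"),   # 51,264 params
    layers.BatchNormalization(),        # 256 params
    layers.Dropout(0.5),
    layers.Flatten(),                   # 7 * 7 * 64 = 3,136 features
    layers.Dense(128, activation="relu"),  # 401,536 params
    layers.BatchNormalization(),        # 512 params
    layers.Dropout(0.4),
    layers.Dense(10, activation="softmax"),  # 1,290 params
])
model.summary()  # 480,554 trainable / 512 non-trainable parameters
```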
To compute the trainable parameters for each layer, we derived a formula based on the number of input channels, output channels, kernel size, and other hyperparameters used in that particular layer. The non-trainable parameters, such as the moving_mean and moving_variance in the batch normalization layers, are used to track the mean and variance of the input data during training, which is essential for the normalization process. The proposed neural network architecture and the number of trainable and non-trainable parameters were determined based on previous research and experimentation in the field of image classification. The deep architecture with multiple layers and a large number of trainable parameters enables the model to learn complex features and patterns in the input images, which can improve the accuracy of the classification task. Overall, our proposed neural network architecture provides a novel approach to image classification that considers the specific spatial and structural features of the images. Through the careful selection of layers and hyperparameters, we demonstrate the potential for improving the accuracy of image classification tasks. Determining optimal hyperparameters for a neural network is an important step in achieving high accuracy on a given dataset. Below are the hyperparameters used for the proposed neural network architecture for classifying the MNIST dataset:

1. Learning rate: A good starting point for the learning rate is 0.0001.
2. Batch size: The batch size determines the number of samples that are processed at once during training. As we use batch normalization, we recommend a batch size of 128, which provides better accuracy.
3. Number of epochs: As the MNIST training set contains 60,000 images, we experimented with 200 epochs; however, we found that early stopping at around 90 epochs, with a patience of 7, yields better accuracy.
4. Dropout rate: We use dropout rates of 0.4 and 0.5 at different positions to address the overfitting problem.
5. Number of filters: A good starting point for the number of filters is 32 or 64. The optimal number may depend on the complexity of the dataset and the size of the network, and the filters were chosen accordingly.

It is worth noting that determining optimal hyperparameters can be a time-consuming and iterative process, and it may be necessary to try different combinations of hyperparameters to find the best values for a given dataset. A minimal training configuration implementing these choices is sketched below.
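The following sketch wires the listed hyperparameters into a Keras training run. Here `x_train`, `y_train`, `x_val`, and `y_val` denote the 80:20 split introduced in the next section, and `restore_best_weights` is our choice rather than something stated in the text.

```python
from tensorflow import keras

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # learning rate 0.0001
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=7, restore_best_weights=True
)
history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    batch_size=128,
    epochs=200,  # early stopping typically triggers around epoch 90
    callbacks=[early_stop],
)
```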
## 5 Experimental Results

As described in Section 3, the MNIST dataset consists of 70,000 grayscale images of handwritten digits from 0 to 9, with 60,000 images reserved for training and 10,000 images for testing. The images are of size 28x28 pixels, and each pixel is represented by a single value between 0 and 255, indicating the grayscale intensity.

### Stage 1: Training & Testing With Complete Data

To improve the training process, the 60,000 training images were further divided into two subsets: 80% (48,000 images) were used for actual training, and 20% (12,000 images) were allocated for validation purposes. This division allowed for monitoring the model's performance on an unseen dataset and adjusting the model's hyperparameters accordingly. The choice of an 8:2 ratio split is consistent with prior research papers [52, 53]; this ratio can be altered, but investigating alternative ratios falls outside the scope of this paper and opens an area for future research. To ensure that the testing images were not utilized during training or validation, they were excluded from both the training and validation sets. K-fold cross-validation with the n_splits=5 parameter was used to split the training and validation data five times without shuffling (see the sketch at the end of this subsection). This approach ensured that there were no duplicates in the split sets and that the model could be evaluated thoroughly. The training and validation accuracy and loss graphs were plotted to assess the model's performance, as shown in Figure 6. The variance between the training and validation losses and accuracies was minimal from epoch 40 onward, indicating that the model did not suffer from overfitting or underfitting. The models generated from the five splits were evaluated on the testing set of 10,000 images, as well as on the training and validation datasets for all 5 folds. Figure 7 presents the confusion metrics for the five-fold models that were tested on the training, validation, and testing datasets. The metrics are provided in tabular form; each row of the table represents a model tested on the datasets, and each column represents the performance of the model on the corresponding dataset, i.e., training, validation, and testing. The metrics provide a measure of the models' ability to correctly classify the input images into their respective classes.

Figure 6: The training and validation accuracies and losses for the 5-fold cross-validation. The rows labeled 1 to 5 represent the five different folds used in the cross-validation.
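The cross-validation loop described above might be written as follows; `build_model()` is assumed to return a freshly initialized copy of the network from Section 4, and `x_all`, `y_all` denote the full 60,000-image training pool.

```python
from sklearn.model_selection import KFold
from tensorflow import keras

kfold = KFold(n_splits=5, shuffle=False)  # five 80:20 splits, no shuffling
fold_models = []
for fold, (train_idx, val_idx) in enumerate(kfold.split(x_all), start=1):
    model = build_model()  # fresh weights for each fold
    early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=7)
    model.fit(
        x_all[train_idx], y_all[train_idx],
        validation_data=(x_all[val_idx], y_all[val_idx]),
        batch_size=128, epochs=200, callbacks=[early_stop],
    )
    fold_models.append(model)
```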
Additionally, the evaluation accuracies and losses for each of the five split models are provided in Table 3. The average accuracy for the test dataset was 99.48%, indicating that the model performed exceptionally well on the unseen test data. The models were also purposely evaluated on the training and validation datasets, and the results are tabulated in Table 4.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **k-fold** & **Dataset** & **Accuracy** & **Loss** \\ \hline \multirow{3}{*}{1} & train & 0.9998 & 0.0014 \\ \cline{2-4} & val & 0.9946 & 0.0183 \\ \cline{2-4} & test & 0.9944 & 0.0161 \\ \hline \multirow{3}{*}{2} & train & 0.9995 & 0.0022 \\ \cline{2-4} & val & 0.9945 & 0.0184 \\ \cline{2-4} & test & 0.9943 & 0.0166 \\ \hline \multirow{3}{*}{3} & train & 0.9998 & 0.0011 \\ \cline{2-4} & val & 0.9946 & 0.0174 \\ \cline{2-4} & test & 0.9948 & 0.0169 \\ \hline \multirow{3}{*}{4} & train & 0.9998 & 0.0010 \\ \cline{2-4} & val & 0.9940 & 0.0185 \\ \cline{2-4} & test & 0.9939 & 0.0191 \\ \hline \multirow{3}{*}{5} & train & 0.9998 & 0.0012 \\ \cline{2-4} & val & 0.9942 & 0.0208 \\ \cline{2-4} & test & 0.9942 & 0.0192 \\ \hline \end{tabular} \end{table} Table 3: The accuracies and losses of the models for each fold

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Datasets** & \multicolumn{6}{c|}{**5-fold**} \\ \cline{2-7} & **Average Accuracy** & **Min Accuracy** & **Max Accuracy** & **Average Loss** & **Min Loss** & **Max Loss** \\ \hline train & 0.9997 & 0.9995 & 0.9998 & 0.00138 & 0.001 & 0.0022 \\ \hline val & 0.9944 & 0.994 & 0.9946 & 0.01868 & 0.0174 & 0.0208 \\ \hline test & 0.9943 & 0.9939 & 0.9948 & 0.01758 & 0.0161 & 0.0192 \\ \hline \end{tabular} \end{table} Table 4: Average, minimum, and maximum accuracies and losses of the model on the training, validation, and testing datasets

Figure 7: The confusion metrics for the five-fold models tested on the training, validation, and testing datasets. Each row represents a model, tested on the corresponding dataset in each column.

Our objective is to identify noisy images in the training dataset. To accomplish this, we selected the 4th model from the 5-fold evaluation; this model exhibits low accuracy and high losses, making it suitable for our purpose. Upon analyzing this model, we discovered that a total of 81 images (72 from the validation set and 9 from the training set) were incorrectly classified out of 60,000 images. In addition, 61 images from the test data were also classified incorrectly. Refer to Tables 5 and 6 for the complete list of misclassified images from the training, validation, and testing datasets. We also noted that some images were correctly classified but with a confidence level lower than or equal to 0.99. Although this threshold value was chosen arbitrarily, it implies that these correctly classified images may still contain noise or ambiguity. The flagging rule is sketched below.
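As a sketch of this flagging rule, the following function marks an image as noisy if the model misclassifies it (WC) or classifies it correctly with confidence at or below the 0.99 threshold stated above (CN); the name `flag_noisy` is ours for illustration.

```python
import numpy as np

def flag_noisy(model, x, y, threshold=0.99):
    """Return indices of images that are wrongly classified (WC) or
    correctly classified with confidence <= threshold (CN)."""
    probs = model.predict(x, verbose=0)      # softmax outputs, shape (n, 10)
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    wrongly_classified = preds != y                        # WC
    correct_but_noisy = (preds == y) & (conf <= threshold)  # CN
    return np.where(wrongly_classified | correct_but_noisy)[0]
```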
For further analysis, please refer to Table 7, which presents the total count of noisy images in the training and validation datasets across all 5-fold evaluations.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Actual**} & \multicolumn{3}{c|}{**Fold 1 (Model 1)**} & \multicolumn{3}{c|}{**Fold 2 (Model 2)**} & \multicolumn{3}{c|}{**Fold 3 (Model 3)**} & \multicolumn{3}{c|}{**Fold 4 (Model 4)**} & \multicolumn{3}{c|}{**Fold 5 (Model 5)**} \\ \cline{2-16} & **WC** & **CN** & **TC** & **WC** & **CN** & **TC** & **WC** & **CN** & **TC** & **WC** & **CN** & **TC** & **WC** & **CN** & **TC** \\ \hline 0 & 2 & 10 & 12 & 3 & 8 & 11 & 2 & 6 & 8 & 3 & 4 & 7 & 3 & 7 & 10 \\ \hline 1 & 4 & 19 & 23 & 11 & 18 & 29 & 10 & 14 & 24 & 3 & 13 & 16 & 15 & 28 & 43 \\ \hline 2 & 3 & 15 & 18 & 11 & 20 & 31 & 5 & 10 & 15 & 6 & 13 & 19 & 6 & 13 & 19 \\ \hline 3 & 5 & 10 & 15 & 4 & 12 & 16 & 6 & 7 & 13 & 3 & 6 & 9 & 5 & 6 & 11 \\ \hline 4 & 12 & 32 & 44 & 13 & 43 & 56 & 4 & 18 & 22 & 16 & 35 & 51 & 10 & 20 & 30 \\ \hline 5 & 15 & 25 & 40 & 17 & 28 & 45 & 10 & 12 & 22 & 16 & 19 & 35 & 11 & 24 & 35 \\ \hline 6 & 3 & 3 & 6 & 4 & 11 & 15 & 4 & 9 & 13 & 4 & 7 & 11 & 5 & 5 & 10 \\ \hline 7 & 15 & 21 & 36 & 10 & 21 & 31 & 12 & 12 & 24 & 16 & 14 & 30 & 13 & 29 & 42 \\ \hline 8 & 7 & 19 & 26 & 6 & 24 & 30 & 9 & 23 & 32 & 7 & 13 & 20 & 7 & 2 & 9 \\ \hline 9 & 10 & 26 & 36 & 10 & 28 & 38 & 12 & 28 & 40 & 7 & 12 & 19 & 6 & 17 & 23 \\ \hline Tot & 76 & 180 & 256 & 89 & 213 & 302 & 74 & 139 & 213 & 81 & 136 & 217 & 81 & 151 & 232 \\ \hline \end{tabular} \end{table} Table 7: Noisy images in training dataset

The following abbreviations are used in the table:

* WC: Wrongly (incorrectly) Classified Images
* CN: Correctly Classified Images with Noise, where the Confidence Level is less than 0.9
* TC: Total Count = WC + CN

Models 1, 2, 3, 4, and 5 identified a total of 1286, 1651, 1216, 1182, and 1113 images as distorted, respectively, with some overlap and some unique identifications among the models. Consequently, only the noisy images identified by all models are removed from the training set, resulting in the removal of 489 images from the 60,000 training and validation images. Table 8 displays the count of images in each class of the training and validation dataset before and after cleaning. The distribution of the cleaned training and validation dataset is illustrated in Figure 8. Despite the uneven distribution of data in the training and validation dataset, the model is still trained in order to evaluate the system's performance. The same steps as before are followed, including the 5-fold evaluation; however, the data distribution in the split between the training and validation sets may differ in each of the 5 folds.

### Stage 2: Training on the Reduced Training Data

We removed 489 distorted images from the original training dataset of 60,000, resulting in a reduced dataset of 59,511 images; a sketch of this cleaning step is given below. We trained the model using the same process as described in the previous section, with a training-validation split ratio of 80:20, resulting in 47,609 images for training and 11,902 for validation. We again used 5-fold cross-validation. Figure 9 displays the training and validation accuracies and losses for the 5 iterations.
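Putting the pieces together, the cleaning step described above might look like this sketch; `flag_noisy` and `fold_models` come from the earlier sketches, and the counts in the comments are the paper's reported values.

```python
import numpy as np

# Flag noisy images with each fold model, keep only those flagged by all
# five models (their intersection), and drop them from the 60,000-image
# pool before retraining.
flagged = [set(flag_noisy(m, x_all, y_all).tolist()) for m in fold_models]
common_noisy = set.intersection(*flagged)        # 489 images in the paper
keep = np.setdiff1d(np.arange(len(x_all)),
                    np.fromiter(common_noisy, dtype=int))
x_clean, y_clean = x_all[keep], y_all[keep]      # 59,511 images remain
```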
After 40 epochs, the model reached convergence with a training accuracy of 0.999 and a validation accuracy of 0.9973, while the testing accuracy remained around 0.9943. Table 9 summarizes the losses and accuracies of the model across all 5-fold iterations for the training, validation, and testing datasets, and Table 10 shows the average accuracies and losses.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Datasets** & \multicolumn{6}{c|}{**5-fold**} \\ \cline{2-7} & **Average Accuracy** & **Min Accuracy** & **Max Accuracy** & **Average Loss** & **Min Loss** & **Max Loss** \\ \hline train & 0.9999 & 0.9999 & 0.9999 & 0.0001 & 0.0001 & 0.0001 \\ \hline val & 0.9972 & 0.9969 & 0.9999 & 0.00856 & 0.008 & 0.0093 \\ \hline test & 0.9939 & 0.9935 & 0.9945 & 0.01892 & 0.0181 & 0.0194 \\ \hline \end{tabular} \end{table} Table 10: Average, minimum, and maximum accuracies and losses of the model on the training, validation, and testing datasets

Figure 10 presents the confusion metrics for the five-fold models that were tested on the training, validation, and testing datasets. The metrics are provided in tabular form; each row of the table represents a model tested on the datasets, and each column represents the performance of the model on the corresponding dataset, i.e., training, validation, and testing. Tables 11 and 12 give the complete list of misclassified images from the training, validation, and testing datasets. Recall that before cleaning, a total of 81 images (72 from the validation set and 9 from the training set) were misclassified out of 60,000 images. Our results demonstrate that removing the distorted images improved training and validation accuracy to above 0.999 and 0.997, respectively. We also observed misclassifications in classes 4 and 9, which may be attributed to the edges and curves in those digits.

Table 12: Wrongly Classified images in Testing Dataset of 10000 images

### Failure Case - A Case Study

The proposed model has one limitation: it is rotation-variant, meaning that it cannot effectively process rotated digits. Table 13 displays some examples of such images.

### Comparative Analysis

Table 14 presents a comparative analysis of recent papers on the topic. Our proposed model provides the following distinct advantages when compared to these papers:

1. First, no augmented data was used in building the deep network, yet it achieves maximum accuracy.
2. Second, the confidence level of the predicted classes has been improved by removing distorted, noisy, and ambiguous images.
Most of the other papers either do not discuss or provide limited information on the confidence level of each predicted image.

3. Additionally, while many of the proposed models utilized the entire MNIST training dataset of 60,000 images for training, our model used only 80% of the training data for actual training and reserved the remaining 20% for validation.

Although some of the literature reports an accuracy above 99.5% on the testing dataset, our discussion indicates that reaching such accuracies requires correctly classifying images that even humans find difficult to distinguish. This suggests that many of these papers may have trained their models on the entire training data without filtering out the noisy images, which can lead to overfitting.

\begin{table} \begin{tabular}{|l|c|} \hline **Approach** & **Testing Dataset Accuracy** \\ \hline Dynamic Routing Between Capsules, 2017 [11] & 99.75\% \\ \hline Let's keep it simple: Using simple architectures to outperform deeper and more complex architectures, 2016 [54] & 99.75\% \\ \hline Batch-Normalized Maxout Network in Network, 2015 [35] & 99.76\% \\ \hline APAC: Augmented Pattern Classification with Neural Networks, 2015 [28] & 99.77\% \\ \hline Multi-Column Deep Neural Networks for Image Classification, 2012 [27] & 99.77\% \\ \hline No routing needed between capsules, 2021 [19] & 99.83\% \\ \hline Ensembles: Regularization of Neural Networks using DropConnect, 2013 [28] & 99.79\% \\ \hline Ensembles: RMDL - Random Multimodel Deep Learning for Classification, 2018 [29] & 99.82\% \\ \hline Ensembles: No routing needed between capsules, 2021 [19] & 99.87\% \\ \hline **Ours** & **99.75\% (Cleaned Validation) / 99.43\% (Testing)** \\ \hline \end{tabular} \end{table} Table 14: Comparative analysis - state of the art

### Contradiction Study

In their publication from 2021 [19], the authors claimed to have successfully addressed the presence of ambiguous and distorted images, achieving the highest accuracy among the existing literature. Similarly, the original dataset paper [2] also acknowledged the existence of such distorted images, which inherently complicates the recognition task. While the paper from 2021 [19] outperformed other state-of-the-art models in terms of accuracy, our discussion above suggests that the resulting models exhibit bias and lack generalization capability.

## 6 Conclusion

In summary, this study demonstrates that removing distorted images can lead to a significant increase in classification accuracy and confidence level. Our novel deep network model, applied to the MNIST dataset, improved the validation accuracy from 99.44% to 99.72%. The model also effectively addressed overfitting and underfitting issues, as evidenced by the increased confidence level for correctly classified images and the decreased confidence level for misclassified ones. Although our research was limited to the MNIST dataset, it can be extended to other datasets. Additionally, we used an 8:2 ratio for training and validation, as suggested by previous literature, but this ratio can be varied in future research. To achieve even higher accuracy on the testing data, future work may involve further reducing the training data and exploring data augmentation techniques for image rotations. While some literature may demonstrate higher accuracy rates than our proposed approach, it is evident that our model overcomes bias and exhibits generalized capability.
This paper emphasizes the importance of data quality and preprocessing in developing accurate deep-learning models for image classification tasks, and our proposed approach shows promise in improving confidence levels in model predictions.
2306.03516
COPR: Consistency-Oriented Pre-Ranking for Online Advertising
Cascading architecture has been widely adopted in large-scale advertising systems to balance efficiency and effectiveness. In this architecture, the pre-ranking model is expected to be a lightweight approximation of the ranking model, which handles more candidates with strict latency requirements. Due to the gap in model capacity, the pre-ranking and ranking models usually generate inconsistent ranked results, thus hurting the overall system effectiveness. The paradigm of score alignment is proposed to regularize their raw scores to be consistent. However, it suffers from inevitable alignment errors and error amplification by bids when applied in online advertising. To this end, we introduce a consistency-oriented pre-ranking framework for online advertising, which employs a chunk-based sampling module and a plug-and-play rank alignment module to explicitly optimize consistency of ECPM-ranked results. A $\Delta NDCG$-based weighting mechanism is adopted to better distinguish the importance of inter-chunk samples in optimization. Both online and offline experiments have validated the superiority of our framework. When deployed in Taobao display advertising system, it achieves an improvement of up to +12.3\% CTR and +5.6\% RPM.
Zhishan Zhao, Jingyue Gao, Yu Zhang, Shuguang Han, Siyuan Lou, Xiang-Rong Sheng, Zhe Wang, Han Zhu, Yuning Jiang, Jian Xu, Bo Zheng
2023-06-06T09:08:40Z
http://arxiv.org/abs/2306.03516v2
# COPR: Consistency-Oriented Pre-Ranking for Online Advertising ###### Abstract. Cascading architecture has been widely adopted in large-scale advertising systems to balance efficiency and effectiveness. In this architecture, the pre-ranking model is expected to be a lightweight approximation of the ranking model, which handles more candidates with strict latency requirements. Due to the gap in model capacity, the pre-ranking and ranking models usually generate inconsistent ranked results, thus hurting the overall system effectiveness. The paradigm of score alignment is proposed to regularize their raw scores to be consistent. However, it suffers from inevitable alignment errors and error amplification by bids when applied in online advertising. To this end, we introduce a consistency-oriented pre-ranking framework for online advertising, which employs a chunk-based sampling module and a plug-and-play rank alignment module to explicitly optimize consistency of ECPM-ranked results. A \(\Delta NDCG\)-based weighting mechanism is adopted to better distinguish the importance of inter-chunk samples in optimization. Both online and offline experiments have validated the superiority of our framework. When deployed in Taobao display advertising system, it achieves an improvement of up to +12.3% CTR and +5.6% RPM. pre-ranking, cascading architecture, consistency, online advertising
## 1. Introduction In large-scale advertising systems, a cascading architecture (Fig. 1) is widely adopted: the pre-ranking phase scores tens of thousands of candidates with a lightweight model, and the ranking phase refines the surviving candidates with a more powerful model. In both phases, ads are ranked according to their ECPM: \[ECPM=pCTR\times bid, \tag{1}\] where \(pCTR\) is the predicted click-through rate and \(bid\) is the advertiser's bid. Due to the gap in model capacity, the pre-ranking and ranking models usually generate inconsistent ranked results on the same candidate set. Such **inconsistency** hinders the overall system effectiveness. For example, top ads selected from the pre-ranking phase could be less competitive in the ranking phase, causing waste of computational resources. Also, ads which are preferred in the ranking phase could be unfortunately discarded in the pre-ranking phase, leading to sub-optimal results. Some pioneering studies (Wang et al., 2018; Wang et al., 2018) propose to align the pre-ranking and ranking models via distillation on pCTR scores.
The pre-ranking model is encouraged to generate the same scores as the ranking model (Wang et al., 2018) or to generate high scores for the top candidates selected by the ranking model (Wang et al., 2018). Although exhibiting encouraging performance, the paradigm of **score alignment** suffers from the following issues, especially when applied to the advertising system: * **Inevitable alignment errors.** Due to its simpler architecture and fewer parameters for efficiency concerns, the capacity of the pre-ranking model is limited, making it difficult to well approximate the original scores of the complex ranking model. Thus even with explicit optimization, there still exist errors in aligning their scores to be exactly the same. * **Error amplification in ECPM ranks2.** In both the pre-ranking and ranking phases, ads are ranked according to their ECPM as in Eq. (1), which is jointly determined by the pCTR score and the bid. Thus the influence of alignment errors can be amplified due to the existence of bids. As shown in Table 1, when multiplied by the corresponding bids, even a tiny difference in the pCTR scores of the pre-ranking and ranking models leads to completely different ranked results; a toy numerical sketch of this effect is given at the end of this introduction. Footnote 2: We use **ECPM rank** to denote the order of an ad in the ECPM-ranked list. The above issues call for rethinking the necessity of strictly aligning pCTR scores in the advertising system. Essentially, given a set of candidates, it is **not their absolute pCTR scores but their relative ECPM ranks** that determine the results of each phase. Therefore, to achieve consistent results, the pre-ranking model is not required to output the same pCTR scores as the ranking model. Instead, it only needs to output scores which yield the same ECPM ranks when multiplied by bids. In this way, the requirement of score alignment can be relaxed to that of **rank alignment**, which is easier to meet. Moreover, when optimizing pCTR scores for consistent ECPM ranks, the influence of bids can be taken into account beforehand, thus alleviating the issue of error amplification. To this end, we introduce a Consistency-Oriented Pre-Ranking (**COPR**) framework for online advertising, which explicitly optimizes the pre-ranking model towards consistency with the ranking model. Particularly, we collect historical logs of the ranking phase, where each log records an ECPM-ranked list of candidates. COPR segments the list into fixed-sized chunks, and each chunk is endowed with a certain level of priority from the view of the ranking phase. With pairs of ads sampled from different chunks, COPR learns a plug-and-play rank alignment module which aims to consistently distinguish their priority using scores at the pre-ranking phase. Moreover, we adopt a \(\Delta NDCG\)-based weighting mechanism to better distinguish the importance of inter-chunk samples in optimization. Our main contributions can be summarized as follows: * To the best of our knowledge, we are the first to explicitly optimize the pre-ranking model towards consistency with the ranking model in the widely-used cascading architecture for online advertising. * We propose a novel consistency-oriented pre-ranking framework named COPR, which employs a chunk-based sampling module and a plug-and-play rank alignment module for effective improvement of consistency. * We conduct extensive experiments on public and industrial datasets. Both offline and online results validate that the proposed COPR framework significantly outperforms state-of-the-art baselines. When deployed in Taobao display advertising system, it achieves an improvement of up to +12.3% CTR and +5.6% RPM.
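The error-amplification point can be made concrete with a toy computation in the spirit of Table 1; all numbers below are illustrative assumptions of ours, not values from the paper:

```python
import numpy as np

# Two candidate ads: the pre-ranking pCTRs deviate from the ranking
# pCTRs by well under 1%, and both models agree on the pCTR order.
ranking_pctr    = np.array([0.0300, 0.0290])
preranking_pctr = np.array([0.0301, 0.0288])
bids            = np.array([1.00, 1.04])

print(np.argsort(-ranking_pctr))              # [0 1]
print(np.argsort(-preranking_pctr))           # [0 1]  -> pCTR orders match
print(np.argsort(-ranking_pctr * bids))       # [1 0]
print(np.argsort(-preranking_pctr * bids))    # [0 1]  -> ECPM orders flip
```

Even though the raw scores are nearly aligned, multiplying by the bids flips the ECPM order between the two phases, which is exactly the failure mode rank alignment targets.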
## 2. Related Work In this section, we briefly review studies about pre-ranking. Located in the middle of the cascading architecture, the pre-ranking system has played an indispensable role in many large-scale industrial systems (Wang et al., 2018; Wang et al., 2018). The development of a pre-ranking model is mainly for balancing system effectiveness and efficiency, as the downstream ranking model usually cannot deal with tens of thousands of candidates. To this end, techniques such as dual-tower modeling (Wang et al., 2018; Wang et al., 2018) are commonly adopted. However, this paradigm limits feature interactions between users and items to the form of vector product, which often results in extensive performance degradation. Another line of work strives to enhance high-order feature interactions and explores ways to reduce online latency. Li et al. (Li et al., 2018) add fine-grained and early feature interactions between the two towers. Wang et al. (Wang et al., 2018) propose to use fully-connected layers and employ various techniques from the perspectives of both modeling efficiency and engineering optimization. Specifically, a Squeeze-and-Excitation module is used for feature selection.
Besides, knowledge distillation from the ranking model has been widely explored in training pre-ranking models. RankFlow (Kang et al., 2017) regularizes the pre-ranking and ranking models to generate the same scores for the same candidates. Despite encouraging performance, there still exist inevitable errors in score alignment due to the discrepancy in model capacity. When applied in online advertising, the influence of such errors is amplified by the bids of ads, yielding inconsistent ECPM-ranked results. In this paper, we propose to relax the objective of score alignment to rank alignment, where bids of ads are incorporated and the consistency of ranked results between the two phases can be explicitly optimized in an effective manner. ## 3. Methodology In this section, we first introduce background knowledge about the pre-ranking model, and then describe our proposed COPR framework as illustrated in Fig. 2. ### Background **Training Data.** When the advertising system serves online traffic as in Fig. 1, hundreds of ads are ranked through the ranking phase and recorded to logs, which we refer to as **ranking logs**.
Each log contains a ranked list of ads with descending ECPM: \[\mathbf{R}=[(ad_{1},pCTR_{1},bid_{1}),...,(ad_{M},pCTR_{M},bid_{M})], \tag{2}\] where \(pCTR_{i}\) is the score output by the ranking model for the \(i\)-th ad and \(bid_{i}\) denotes its bid. \(M\) is the number of candidates. Then the top \(N\) ads are displayed to the user. User feedback \(y\) (click/non-click) on each displayed ad is recorded to **impression logs**: \[\mathbf{I}=[(ad_{1},y_{1}),...,(ad_{N},y_{N})]. \tag{3}\] **Base Model.** The base model for pre-ranking is usually a lightweight CTR model. Here we adopt the architecture of COLD (Kang et al., 2017). The input features consist of three parts: user features \(\mathbf{U}\) such as age and gender, ad features \(\mathbf{A}\) such as brand and category, and context features \(\mathbf{C}\) such as time and device. After pre-selecting a concise set of features, COLD feeds them into embedding layers and concatenates their embeddings into a compact representation \(\mathbf{x}\): \[\mathbf{x}=E(\mathbf{U})\oplus E(\mathbf{A})\oplus E(\mathbf{C}). \tag{4}\] Then it employs a prediction net consisting of multiple fully-connected layers to estimate the CTR: \[\hat{y}=Sigmoid(MLP(\mathbf{x}))\in[0,1]. \tag{5}\] To accurately predict user click \(y\), the model is optimized with the cross-entropy loss over impression logs \(\mathbf{I}\): \[L_{\text{ctr}}=\sum_{\mathbf{I}}[-y\log(\hat{y})-(1-y)\log(1-\hat{y})]. \tag{6}\] ### Consistency-Oriented Pre-Ranking Though the pre-ranking model is expected to well approximate the ranking model in the cascading system, their gap in model capacity often hinders a satisfactory approximation. Thus, in addition to \(L_{\text{ctr}}\), we aim to explicitly optimize the pre-ranking model towards consistent results with the ranking model over \(\mathbf{R}\). #### 3.2.1. Chunk-Based Sampling Given candidates \(\{ad_{i}\}_{i=1}^{M}\) in ranking logs, an ideal pre-ranking model should output scores that yield the same ECPM-ranked list as Eq. (2). Considering its limited capacity, it could be hard to rank hundreds of ads all in correct positions. To reduce the learning difficulty, we partition the ranked list into \(D=\frac{M}{K}\) fixed-sized chunks, each consisting of \(K\) adjacent ads, as shown in Fig. 3. We regard ads in the same chunk as candidates with the same priority in the ranking phase. The pre-ranking model is not required to distinguish ads in the same chunk. Instead, it only needs to consistently rank candidates at the granularity of chunks. For each chunk, we randomly sample a candidate and endow it with the priority related to this chunk. In this way, for each ranked list, we obtain a concise sub-list: \[\mathbf{R}_{chunk}=[(ad_{s_{d}},pCTR_{s_{d}},bid_{s_{d}},D-d)]_{d=1}^{D}, \tag{7}\] where \(s_{d}\) is the index of the sampled ad in chunk \(d\) and \(D-d\) denotes its priority, the larger the better. The above chunk-based sampling has two-fold advantages (see the sampling sketch below): 1) It provides a flexible way to control the granularity of consistency, which makes the objective reachable for the lightweight pre-ranking model. By increasing the chunk size \(K\), the objective of consistency gradually shifts from fine-grained to coarse-grained. 2) It effectively reduces the size of ranked lists in logs by \(K\) times while still maintaining coverage of the original lists, which is critical for efficient training in industrial machine learning systems. In our production implementation, \(K\) is set to 10. Figure 2. The framework of consistency-oriented pre-ranking. Figure 3. Illustration of chunk-based sampling.
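A minimal sketch of the chunk-based sampling step, assuming each ranking log is a list of (ad_id, pCTR, bid) tuples already sorted by descending ECPM; the function and variable names are ours:

```python
import random

def chunk_sample(ranked_log, K=10):
    """Build R_chunk (Eq. 7) from a ranking log sorted by descending ECPM.

    ranked_log: list of (ad_id, pctr, bid) tuples of length M.
    Returns one sampled (ad_id, pctr, bid, priority) per chunk of K ads,
    where priority D - d is larger for earlier (better) chunks.
    """
    D = len(ranked_log) // K
    sub_list = []
    for d in range(1, D + 1):
        chunk = ranked_log[(d - 1) * K : d * K]     # K adjacent ads
        ad_id, pctr, bid = random.choice(chunk)      # sample one candidate
        sub_list.append((ad_id, pctr, bid, D - d))   # endow chunk priority
    return sub_list

# Toy log with M = 30 ads -> D = 3 chunks of K = 10.
log = [(i, 0.05 - 0.001 * i, 1.0) for i in range(30)]
print(chunk_sample(log, K=10))
```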
#### 3.2.2. Rank Alignment In the following, we introduce how to modify the base model with a plug-and-play rank alignment module. Instead of regularizing the difference between \(\hat{y}_{i}\) in Eq. (5) and \(pCTR_{i}\) in Eq. (7) as score alignment methods do (Kang et al., 2017; Li et al., 2018), we propose to relax the objective to rank alignment on a properly-adjusted pCTR score. Particularly, we employ a relaxation net to learn a factor \(\alpha>0\), with which we adjust the original pCTR score: \[\alpha=ReLU(MLP(\mathbf{x}))+10^{-6}\in\mathbb{R}^{+},\qquad\tilde{y}=\alpha\cdot\hat{y}, \tag{8}\] where \(\tilde{y}\) denotes the adjusted pCTR. Thus the ECPM at the pre-ranking phase can be accordingly estimated as \(\tilde{y}\cdot bid\), based on which we aim to correctly rank each inter-chunk pair in \(\mathbf{R}_{chunk}\). Here we adopt the pairwise logistic loss for its relatively good performance and its simplicity of implementation (Beng et al., 2017; Li et al., 2018): \[L_{rank}=\sum_{i<j}\log[1+e^{-(\tilde{y}_{s_{i}}\cdot bid_{s_{i}}-\tilde{y}_{s_{j}}\cdot bid_{s_{j}})}]. \tag{9}\] For each pair of \(ad_{s_{i}}\) and \(ad_{s_{j}}\) sampled from different chunks with \(i<j\), we optimize \(L_{rank}\) by encouraging \(\tilde{y}_{s_{i}}\cdot bid_{s_{i}}>\tilde{y}_{s_{j}}\cdot bid_{s_{j}}\), which means \(ad_{s_{i}}\) would be ranked before \(ad_{s_{j}}\) by ECPM in the pre-ranking phase. If all inter-chunk pairs can be correctly ranked, we achieve consistent ECPM-ranked results between the pre-ranking and ranking phases over \(\mathbf{R}_{chunk}\). Note that by introducing the relaxation factor \(\alpha\), we slightly modify the original pCTR score to achieve consistent ranked results if necessary. To maintain the original value as much as possible, \(\alpha\) should stay around 1. Thus we add a symmetric regularization to penalize the deviation of \(\alpha\) from 1: \[L_{reg}=\begin{cases}\alpha-1&\alpha>1\\ \frac{1}{\alpha}-1&\alpha\leq 1\end{cases}. \tag{10}\] It is worth mentioning that the proposed rank alignment module does not rely on specific assumptions about the architecture of the base model. It is a plug-and-play component that can be added to any pre-ranking model to improve consistency.
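As a sketch of Eqs. (9) and (10), assuming the sampled sub-list is ordered by chunk priority (index i < j means chunk i outranks chunk j); the helper names are ours:

```python
import numpy as np

def rank_alignment_loss(y_adj, bids):
    """Pairwise logistic loss of Eq. (9) over one sampled sub-list R_chunk.

    y_adj: adjusted pCTRs (alpha * y_hat), ordered by chunk priority.
    bids:  corresponding bids at the pre-ranking phase.
    """
    ecpm = y_adj * bids
    loss = 0.0
    D = len(ecpm)
    for i in range(D):
        for j in range(i + 1, D):
            # encourage ecpm[i] > ecpm[j] for every inter-chunk pair
            loss += np.log1p(np.exp(-(ecpm[i] - ecpm[j])))
    return loss

def alpha_regularizer(alpha):
    """Symmetric penalty of Eq. (10), keeping alpha close to 1."""
    return alpha - 1.0 if alpha > 1.0 else 1.0 / alpha - 1.0

print(rank_alignment_loss(np.array([0.031, 0.028, 0.020]),
                          np.array([1.0, 1.1, 0.9])))
```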
#### 3.2.3. \(\Delta NDCG\)-Based Pair Weighting \(L_{rank}\) in Eq. (9) fails to consider the relative importance of different pairs in consistency optimization. In practice, consistently ranking ads from chunk 1 and chunk 10 is more important than ranking those from chunk 11 and chunk 20, since only the top ads will be sent to the ranking phase and displayed to users. This calls for a weighting mechanism that accounts for the chunk-related priorities of candidates. Intuitively, if a pair \((ad_{s_{i}},ad_{s_{j}})\) in \(L_{rank}\) is mistakenly ranked, the consistency between the pre-ranking and ranking phases will be hurt, so its weight in \(L_{rank}\) should be determined by the negative impact. As each sampled \(ad_{s_{d}}\) in \(\mathbf{R}_{chunk}\) is endowed with priority \(D-d\), we use NDCG (Beng et al., 2017; Li et al., 2018) to measure the utility of any ranked list \(p\) of these candidates: \[DCG=\sum_{i=1}^{D}\frac{2^{p_{i}}-1}{\log(i+1)},\qquad IDCG=\sum_{i=1}^{D}\frac{2^{D-i}-1}{\log(i+1)}, \tag{11}\] where \(p_{i}\) denotes the priority of the \(i\)-th ad in the permutation and IDCG is the ideal DCG, achieved by \(\mathbf{R}_{chunk}\) itself. If we swap the positions of \(ad_{s_{i}}\) and \(ad_{s_{j}}\) in \(\mathbf{R}_{chunk}\), the utility of the list experiences a drop, which can be normalized as: \[\Delta NDCG(i,j)=\frac{2^{D-i}-2^{D-j}}{IDCG}\Big[\frac{1}{\log(i+1)}-\frac{1}{\log(j+1)}\Big]. \tag{12}\] This utility drop is used to re-weight inter-chunk pairs in consistency optimization: \[L_{rank}=\sum_{i<j}\Delta NDCG(i,j)\log[1+e^{-(\tilde{y}_{s_{i}}\cdot bid_{s_{i}}-\tilde{y}_{s_{j}}\cdot bid_{s_{j}})}]. \tag{13}\] Thus the objective function of COPR can be formulated as: \[L=\underbrace{L_{ctr}}_{CTR\ Loss}+\underbrace{\lambda_{1}L_{rank}+\lambda_{2}L_{reg}}_{Consistency\ Loss}, \tag{14}\] where \(\lambda_{1}>0\) and \(\lambda_{2}>0\) are weights for the corresponding loss terms. By minimizing \(L\), we explicitly optimize the pre-ranking model towards consistency with the ranking model via a plug-and-play rank alignment module.
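The weighting of Eq. (12) is straightforward to compute; the sketch below (our own helper names) combines it with the pairwise loss into the weighted objective of Eq. (13). The logarithm base follows Eq. (11) as printed:

```python
import numpy as np

def idcg(D):
    # ideal DCG of R_chunk, where the chunk at position i has priority D - i
    return sum((2 ** (D - i) - 1) / np.log(i + 1) for i in range(1, D + 1))

def delta_ndcg(i, j, D):
    # normalized utility drop from swapping positions i < j (Eq. 12), 1-indexed
    return ((2 ** (D - i) - 2 ** (D - j)) / idcg(D)
            * (1 / np.log(i + 1) - 1 / np.log(j + 1)))

def weighted_rank_loss(y_adj, bids):
    # Eq. (13): Delta-NDCG-weighted pairwise logistic loss over adjusted ECPM
    ecpm = y_adj * bids
    D = len(ecpm)
    return sum(delta_ndcg(i, j, D) * np.log1p(np.exp(-(ecpm[i-1] - ecpm[j-1])))
               for i in range(1, D + 1) for j in range(i + 1, D + 1))

print(weighted_rank_loss(np.array([0.031, 0.028, 0.020]),
                         np.array([1.0, 1.1, 0.9])))
```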
### System Deployment We introduce the deployment of COPR in three stages: data generation, model training, and online serving, as shown in Fig. 4. **Data Generation.** During online serving, hundreds of ads are ranked by the ranking model and recorded to ranking logs, on which we perform chunk-based sampling. The content of each sample includes the user index, ad index, and chunk index, as well as the bid. Note that the bid at the ranking phase could differ from that at the pre-ranking phase (Li et al., 2018). In this case, we record the pre-ranking bid since it is the one that enters \(L_{rank}\) in model training. When ads are displayed to users in the client, we also record user feedback in impression logs, which are used in calculating \(L_{ctr}\). **Model Training.** The training procedure is performed on our ODL (Online Deep Learning) (Li et al., 2018) platform, which consumes real-time streaming data to continuously update model parameters. After training for a fixed number of steps, the learnt model is delivered to the Model Center, which manages all online models. **Online Serving.** Once a new version of the pre-ranking model is ready, the pre-ranking server loads it from the Model Center to replace the online version in service. Figure 4. Overview of system pipeline. ## 4. Experiments In this section, we conduct experiments on both a public dataset and a production dataset to validate the effectiveness of COPR in improving consistency and overall system performance. ### Experiment Setup **Taobao Dataset.** It is a public dataset3 with 26 million impression logs of 1 million users and 0.8 million items over 8 days. Item price is used as the bid. Impressions of the first 7 days are used to train DIN (Zhou et al., 2017) as the ranking model. For each impression, we sample 10 candidates and collect ECPM-ranked results from the ranking model to train pre-ranking models. Logs of the last day are used for evaluation. To simulate the cascading process, we sample 100 candidates for each impression, among which the pre-ranking and ranking models sequentially select the top 10 and the top 1 candidates to display. Footnote 3: [https://tianchi.aliyun.com/dataset/dataDetail?dataId=56](https://tianchi.aliyun.com/dataset/dataDetail?dataId=56) **Production Dataset.** It contains 8 days of impression logs and ranking logs collected from our system shown in Fig. 4. These logs are of the magnitude of billions. The first week of logs is used for training and the last day is used for evaluation. According to the scenario the logs come from, the production dataset is further divided into two subsets: **Homepage** and **Post-Purchase**. **Baselines.** COPR is compared with the following baselines: * **Base** adopts the architecture of COLD (Zhou et al., 2017) and is trained on impression logs. * **Distillation** (Dong et al., 2018) directly distills predicted scores of the ranking model on impression logs. * **RankFlow** regularizes the pre-ranking and ranking models to generate the same scores for the same candidates, as described in Section 2. We also evaluate **COPR w/o \(\Delta NDCG\)**, an ablation that removes the weighting mechanism of Section 3.2.3. ### Offline Results As shown in Fig. 5, Fig. 6, and Fig. 7, COPR outperforms the baselines on HR@K, NDCG@K, and MAP@K in both scenarios, which demonstrates the stable improvement of consistency achieved by our proposed framework. Moreover, we still observe the gap between COPR and COPR w/o \(\Delta NDCG\), which shows that the weighting mechanism also works in the large-scale production dataset. ### Online Results To evaluate system performance in the production environment, we perform online A/B tests on two scenarios, where these methods are used to serve real users and advertisers. From Table 3 we find that Distillation, RankFlow, and COPR all perform better than the production baseline, among which COPR achieves the largest improvement, with a lift of up to +12.3% CTR and +5.6% RPM. With this impressive performance, **COPR has been successfully deployed to serve the main traffic of Taobao display advertising system in the pre-ranking phase since October 2022**. ### Qualitative Analysis Given ranked results from the pre-ranking and ranking phases, we calculate the average pre-ranking position of the candidates at each ranking position, based on which we draw the Ranking-PreRanking Curve (**RPC**); a sketch of this computation is given after Table 3. The ideal RPC occurs when the two results are exactly the same. #### 4.4.1. Error Amplification in ECPM Rank As shown in Fig. 8 (Left), the RPC by pCTR of RankFlow is close to the ideal curve, showing good alignment of raw pCTR scores between the two phases. However, after ranking by ECPM, the RPC of RankFlow largely deviates from the ideal one. This verifies that the involvement of bids in ECPM amplifies the influence of errors in score alignment, leading to more inconsistent ECPM-ranked results; the analysis is consistent with the example in Table 1. Hence we confirm that score alignment alone is not enough for the cascading architecture in online advertising. #### 4.4.2. More Consistent ECPM Rank Fig. 8 (Right) shows the RPC by ECPM of different methods. We observe that, compared with Base and RankFlow, the RPC of COPR is closer to the ideal curve at almost every ranking position. This qualitatively shows that the ECPM-ranked results given by COPR are more consistent with the results of the ranking phase. It can be attributed to the design of our consistency-oriented framework, where the rank alignment module directly optimizes towards this objective; the incorporation of bids also helps alleviate the above-mentioned error amplification. ## 5. Conclusion In this paper, we introduce a consistency-oriented pre-ranking framework for online advertising, which employs a chunk-based sampling module and a plug-and-play rank alignment module to explicitly optimize the consistency of ECPM-ranked results. A \(\Delta NDCG\)-based weighting mechanism is also adopted to better distinguish the importance of inter-chunk samples in optimization. Both online and offline experiments have validated the superiority of our framework. When deployed in Taobao display advertising system, it achieves an improvement of up to +12.3% CTR and +5.6% RPM. Figure 5. HR@K of different methods in the scenario of Homepage (Left) and Post-Purchase (Right). Figure 6. NDCG@K of different methods in the scenario of Homepage (Left) and Post-Purchase (Right). Figure 8. Left: RPC by pCTR and ECPM of RankFlow. Right: RPC by ECPM of different methods.
\begin{table} \begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Homepage} & \multicolumn{2}{c}{Post-Purchase} \\ \cline{2-5} & CTR & RPM & CTR & RPM \\ \hline Base & - & - & - & - \\ Distillation & +2.2\% & +0.1\% & +3.6\% & +0.6\% \\ RankFlow & +8.3\% & +2.9\% & +6.8\% & +2.3\% \\ \hline COPR w/o \(\Delta NDCG\) & +11.5\% & +5.0\% & +9.6\% & +3.7\% \\ COPR & **+12.3\%** & **+5.6\%** & **+10.8\%** & **+4.4\%** \\ \hline \hline \end{tabular} \end{table} Table 3. Relative improvement over the production baseline in online A/B Test. Best results are highlighted in bold. Figure 7. MAP@K of different methods in the scenario of Homepage (Left) and Post-Purchase (Right).
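For reference, the RPC diagnostic used in the qualitative analysis can be computed as below; the function name and toy input are our own assumptions about an otherwise standard computation (average pre-ranking position per ranking position):

```python
import numpy as np

def rpc(ranking_orders, preranking_orders):
    """Average pre-ranking position for each ranking position.

    Both inputs: lists of permutations (arrays of ad indices), one per query,
    each sorted by descending ECPM in the respective phase.
    """
    M = len(ranking_orders[0])
    totals = np.zeros(M)
    for r_ord, p_ord in zip(ranking_orders, preranking_orders):
        pre_pos = {ad: pos for pos, ad in enumerate(p_ord)}
        for pos, ad in enumerate(r_ord):
            totals[pos] += pre_pos[ad]
    return totals / len(ranking_orders)

# Identical orders give the ideal curve rpc[k] = k.
print(rpc([np.arange(5)], [np.arange(5)]))
```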
2308.15800
On dual groups of symmetric varieties and distinguished representations of $p$-adic groups
Let $X=H\backslash G$ be a symmetric variety over a $p$-adic field. Assume $G$ is split. Let $\widehat{G}$ be the Langlands dual group of $G$. There is a complex group $\widehat{G}_X$ whose root datum is naturally constructed from that of $\widehat{G}$. In this paper, we construct a homomorphism $\widehat{\varphi}_X:\widehat{G}_X\times\operatorname{SL}_2(\mathbb{C})\to \widehat{G}$ naturally and somewhat explicitly, and make a few conjectures on how $\widehat{\varphi}_X$ is related to $H$-distinguished representations of $G$. We will also show that the local Langlands parameter of the trivial representation of $G$ factors through $\widehat{\varphi}_X$ for any symmetric variety $X=H\backslash G$. Our group $\widehat{G}_X$ is different from the dual group by Sakellaridis-Venkatesh. However, we will show that our conjectures are consistent with various known examples and conjectures, especially in the framework of the theory of Kato-Takano on relative cuspidality and relative square integrability.
Shuichiro Takeda
2023-08-30T07:10:30Z
http://arxiv.org/abs/2308.15800v3
# Dual groups of symmetric varieties and distinguished representations of \(p\)-adic groups ###### Abstract. Let \(X=H\backslash G\) be a symmetric variety over a \(p\)-adic field. Assume \(G\) is split. In this paper, we construct a complex group \(G^{\vee}_{X}\), which we call the dual group of \(X\), and a natural homomorphism \(\varphi^{\vee}_{X}:G^{\vee}_{X}\times\operatorname{SL}_{2}(\mathbb{C})\to G^{\vee}\), where \(G^{\vee}\) is the Langlands dual group of \(G\), and make a few conjectures on how \(\varphi^{\vee}_{X}\) is related to \(H\)-distinguished representations of \(G\). We will also show that the local Langlands parameter of the trivial representation of \(G\) factors through \(\varphi^{\vee}_{X}\) for any symmetric variety \(X=H\backslash G\). Our group \(G^{\vee}_{X}\) is different from the dual group by Sakellaridis-Venkatesh. However, we will show that our conjectures are consistent with various known examples and conjectures, especially in the framework of the theory of Kato-Takano on relative cuspidality and relative square integrability. ## 1. Introduction Let \(F\) be a non-archimedean local field of characteristic \(0\), and \(G\) a connected reductive group split over \(F\) equipped with an \(F\)-involution \(\theta\). Let \(H\) be the subgroup of \(\theta\)-fixed points of \(G\), so that the quotient \(X:=H\backslash G\) is a symmetric variety. An irreducible admissible representation \((\pi,V)\) of \(G\) is said to be \(H\)-distinguished if there exists a nonzero \(H\)-invariant linear form \(\lambda:V\to\mathbb{C}\), so that \(\lambda(\pi(h)v)=\lambda(v)\) for all \(h\in H\) and \(v\in V\), namely \(\operatorname{Hom}_{H}(\pi,\mathbf{1})\neq 0\). It is then expected that the (conjectural) local Langlands parameter \(\varphi_{\pi}:WD_{F}\to G^{\vee}\) of \(\pi\) should factor through some smaller subgroup of \(G^{\vee}\). (Here, as usual, \(WD_{F}\) is the Weil-Deligne group of \(F\) and \(G^{\vee}\) is the Langlands dual group of \(G\).) Indeed, in their monumental work [13], Sakellaridis and Venkatesh proposed a "dual group" \(G^{\vee}_{X}\) of \(X\) and a (conjectural) homomorphism \(G^{\vee}_{X}\times\operatorname{SL}_{2}(\mathbb{C})\to G^{\vee}\) in the broader context of spherical varieties, so that the local Langlands parameter \(\varphi_{\pi}\) should factor through the map \(G^{\vee}_{X}\times\operatorname{SL}_{2}(\mathbb{C})\longrightarrow G^{\vee}\). Later, the construction of the map was completed by Knop and Schalke in [16]. In this paper, we construct another complex group \(G^{\vee}_{X}\) and a map \[\varphi^{\vee}_{X}:G^{\vee}_{X}\times\operatorname{SL}_{2}(\mathbb{C})\longrightarrow G^{\vee}\] though only when \(X=H\backslash G\) is a symmetric variety. To be more specific, the given involution \(\theta\) naturally gives rise to an involution on the root datum of \(G\) and hence of \(G^{\vee}\). This naturally divides the root datum into two root data: the \(\theta\)-invariant root datum and the \(\theta\)-split root datum. We call their corresponding complex groups \(G^{\vee+}\) and \(G^{\vee-}\), respectively. To be more explicit, there are two subtori \(T^{\vee+}\) and \(T^{\vee-}\) of a suitably chosen maximal torus \(T^{\vee}\subseteq G^{\vee}\), where we call the former a \(\theta\)-invariant torus and the latter a \(\theta\)-split torus.
They are the maximal subtori of \(T^{\vee}\) with the property that \[\theta(t)=t\;\;\text{if}\;\;t\in T^{\vee+}\quad\text{and}\quad\theta(t)=t^{-1}\;\;\text{if}\;\;t\in T^{\vee-}.\] We have the natural isogeny \[T^{\vee+}\times T^{\vee-}\longrightarrow T^{\vee},\] and the map \(T^{\vee+}\to T^{\vee}\) extends to an inclusion \(G^{\vee+}\subseteq G^{\vee}\), while the map \(T^{\vee-}\to G^{\vee}\) extends to a map \(\varphi^{\vee-}:G^{\vee-}\to G^{\vee}\). Further, for the group \(G^{\vee+}\), we let \[\varphi^{\vee+}:\operatorname{SL}_{2}(\mathbb{C})\longrightarrow G^{\vee+}\] be the so-called principal \(\operatorname{SL}_{2}\)-homomorphism. We will then show that the images \(\varphi^{\vee+}(\operatorname{SL}_{2}(\mathbb{C}))\) and \(\varphi^{\vee-}(G^{\vee-})\) commute with each other, so that we have a homomorphism \[G^{\vee-}\times\operatorname{SL}_{2}(\mathbb{C})\longrightarrow G^{\vee}.\] We then set \(G^{\vee}_{X}=G^{\vee-}\). Our dual group \(G^{\vee}_{X}\) is not always the same as the one constructed by Sakellaridis and Venkatesh. However, ours seems to be more natural and consistent with the theory of \(H\)-matrix coefficients developed by Kato and Takano [17, 18] and Lagier [14], which was later augmented by the author [13]. Let us recall the theory here. Let \((\pi,V)\) be an \(H\)-distinguished representation with unitary central character, and \(\lambda\in\operatorname{Hom}_{H}(\pi,\mathbf{1})\) nonzero. For each \(v\in V\), define \[\varphi_{\lambda,v}:H\backslash G\longrightarrow\mathbb{C},\quad\varphi_{\lambda,v}(g)=\langle\lambda,\pi(g)v\rangle,\] for \(g\in H\backslash G\). We call \(\varphi_{\lambda,v}\) an \(H\)-matrix coefficient (with respect to \(\lambda\)). We say 1. \((\pi,V)\) is relatively cuspidal if \(\varphi_{\lambda,v}\in C^{\infty}_{c}(Z_{G}H\backslash G)\) for all \(\lambda\in\operatorname{Hom}_{H}(\pi,\mathbf{1})\) and all \(v\in V\); 2. \((\pi,V)\) is relatively square integrable if \(\varphi_{\lambda,v}\in L^{2}(Z_{G}H\backslash G)\) for all \(\lambda\in\operatorname{Hom}_{H}(\pi,\mathbf{1})\) and all \(v\in V\); 3. \((\pi,V)\) is relatively tempered if for all \(\epsilon>0\) we have \(\varphi_{\lambda,v}\in L^{2+\epsilon}(Z_{G}H\backslash G)\) for all \(\lambda\in\operatorname{Hom}_{H}(\pi,\mathbf{1})\) and all \(v\in V\). With the assumption that the residue characteristic of \(F\) is odd, in [18] Kato and Takano have shown that relative cuspidality is detected by vanishing of the "Jacquet module" of \(\lambda\), and in [18] they have established the Casselman type criterion for relative square integrability in terms of exponents. Further, in [13], the author has established the Casselman type criterion for relative temperedness. We then make the following conjecture. **Conjecture 1.1**.: _Let \(\pi\) be an \(H\)-distinguished irreducible admissible representation of \(G\), so that \(\operatorname{Hom}_{H}(\pi,\mathbf{1})\neq 0\)._ 1. _The (conjectural) local Langlands parameter_ \(\varphi_{\pi}:WD_{F}\to G^{\vee}\) _of_ \(\pi\) _factors through_ \[\varphi_{\pi}:WD_{F}\longrightarrow G^{\vee}_{X}\times\operatorname{SL}_{2}(\mathbb{C})\xrightarrow{\varphi^{\vee}_{X}}G^{\vee},\] _where the map_ \(WD_{F}\to\operatorname{SL}_{2}(\mathbb{C})\) _is given by_ \[w\mapsto\begin{pmatrix}|w|^{\frac{1}{2}}&\\ &|w|^{-\frac{1}{2}}\end{pmatrix}\] _for all_ \(w\in WD_{F}\)_._ 2. _Assuming (I) holds._ 1.
_If the image of_ \(WD_{F}\to G_{X}^{\vee}\) _is not in a proper Levi of_ \(G_{X}^{\vee}\) _and the_ \(\operatorname{SL}_{2}(\mathbb{C})\)_-factor of_ \(WD_{F}\) _maps trivially, then_ \(\pi\) _is relatively cuspidal._ 2. \(\pi\) _is relatively square integrable if and only if the image of_ \(WD_{F}\to G_{X}^{\vee}\) _is not in a proper Levi of_ \(G_{X}^{\vee}\)_._ 3. \(\pi\) _is relatively tempered if and only if the image of_ \(WD_{F}\to G_{X}^{\vee}\) _is bounded modulo center._ We will see that the converse of Conjecture (I) does not always hold; namely, even if \(\pi\) has an \(L\)-parameter \(\varphi:WD_{F}\to G^{\vee}\) factoring through \(\varphi_{X}^{\vee}\), it could be the case that \(\pi\) is not \(H\)-distinguished. However, such examples seem to be quite degenerate, and presumably the converse of Conjecture (I) almost always holds. Also, the converse of Conjecture (II-a) fails already in the so-called group case due to the existence of an \(L\)-packet which contains a supercuspidal representation and a non-supercuspidal discrete series representation at the same time. But this seems to be the only obstruction to the converse of Conjecture (II-a), and presumably it can be remedied by considering all the members of the \(L\)-packet, although at this moment the author does not know enough examples to make any precise suggestion on how to modify it. At any rate, we verify that our conjectures are consistent with numerous known examples and conjectures. Note that the trivial representation \(\mathbf{1}\) of \(G\) is \(H\)-distinguished for any \(H\subseteq G\). Hence by Conjecture (I) the \(L\)-parameter of \(\mathbf{1}\) must factor through \(\varphi_{X}^{\vee}\) for any symmetric variety \(X=H\backslash G\). We prove this assertion in Theorem 4.3. Finally, at the end, we propose a way to generalize our theory to the Galois case. ### Notation and assumptions We let \(F\) be a nonarchimedean local field of characteristic \(0\), and \(WD_{F}=W_{F}\times\operatorname{SL}_{2}(\mathbb{C})\) its Weil-Deligne group. For each \(w\in WD_{F}\), we denote its norm by \(|w|\). For a (split) reductive group \(G\) over \(F\), we let \(G^{\vee}\) be the Langlands dual group of \(G\). We usually identify \(G\) with its \(F\)-points \(G(F)\), and denote the center of \(G\) by \(Z_{G}\). By an \(F\)-involution \(\theta\), we mean a group homomorphism \(\theta:G\to G\) defined over \(F\) such that \(\theta^{2}=1\), and we denote its fixed points by \(H\). For each \(\varepsilon\in G\), we let \(\operatorname{Int}(\varepsilon)\) be the automorphism on \(G\) defined by \(g\mapsto\varepsilon g\varepsilon^{-1}\). Note that \(\operatorname{Int}(\varepsilon)\) is an involution if and only if \(\varepsilon^{2}\in Z_{G}\). We let \(\operatorname{Irr}(G)\) be the set of equivalence classes of irreducible admissible representations of \(G\), and denote by \(\mathbf{1}\) the trivial representation of \(G\). A representation \(\pi\in\operatorname{Irr}(G)\) is said to be \(H\)-distinguished if \(\operatorname{Hom}_{H}(\pi,\mathbf{1})\neq 0\), and we call each nonzero \(\lambda\in\operatorname{Hom}_{H}(\pi,\mathbf{1})\) an \(H\)-period. For each \(\pi\in\operatorname{Irr}(G)\) we denote its contragredient by \(\pi^{\vee}\). We use the standard notation for root data of reductive groups.
In particular, for \(\operatorname{GL}_{n}\), we use \(\{e_{1},\ldots,e_{n}\}\) and \(\{e_{1}^{\vee},\ldots,e_{n}^{\vee}\}\) for the standard bases of the character lattice and the cocharacter lattice, respectively, and write \(\alpha_{i}=e_{i}-e_{i+1}\) and \(\alpha_{i}^{\vee}=e_{i}^{\vee}-e_{i+1}^{\vee}\) for the simple roots and coroots, respectively. By \(P_{n_{1},n_{2}}\), where \(n_{1}+n_{2}=n\), we mean the standard \((n_{1},n_{2})\)-parabolic subgroup of \(\operatorname{GL}_{n}\) whose Levi subgroup is \(\operatorname{GL}_{n_{1}}\times\operatorname{GL}_{n_{2}}\). We denote the \(n\times n\) identity matrix by \(I_{n}\). ## 2. Root datum associated with symmetric variety In this section, we let \(G\) be a split reductive group over an arbitrary field \(k\) of characteristic \(0\), and \(\theta\) an involution on \(G\) defined over \(k\). Here \(k\) does not have to be our nonarchimedean local field \(F\). Indeed, what we have in mind is the case when \(k=\mathbb{C}\) and \(G\) is the Langlands dual group of a reductive group over \(F\). ### Additively closed root sub-datum Let \(\Phi=(X,R,X^{\vee},R^{\vee})\) be a root datum, where \(R\) and \(R^{\vee}\) are the sets of roots and coroots, respectively. A quadruple \(\Psi=(X,S,X^{\vee},S^{\vee})\) is said to be a root sub-datum of \(\Phi\) if \(\Psi\) is a root datum with \(S\subseteq R\) and \(S^{\vee}\subseteq R^{\vee}\). Further, \(\Psi\) is said to be additively closed if \(S=\mathbb{Z}S\cap R\); in other words, if \(\alpha,\beta\in S\) are such that \(\alpha+\beta\in R\), then \(\alpha+\beta\in S\). Assume \(\Phi\) is the root datum of a reductive group \(G\). Then a root sub-datum \(\Psi\) of \(\Phi\) generates a subgroup in \(G\) whose root datum is \(\Psi\) if and only if \(\Psi\) is additively closed. (See [1]. Also see the proof of [13, Theorem 7.3].) For each subset \(\Sigma\subseteq R\), we set \[\Sigma^{ac}=\mathbb{Z}\Sigma\cap R=\{\sum_{\alpha\in\Sigma}n_{\alpha}\alpha\,:\,n_{\alpha}\in\mathbb{Z}\}\cap R,\] and call it the additive closure of \(\Sigma\). **Lemma 2.1**.: _With the above notation, the quadruple \((X,\Sigma^{ac},X^{\vee},\Sigma^{ac\vee})\) is an additively closed root sub-datum, where \(\Sigma^{ac\vee}\) has the obvious meaning._ Proof.: By definition \(\Sigma^{ac}\) is additively closed. Hence it remains to show that it is indeed a root datum. First we show that \(\Sigma^{ac}\) is closed under the root reflections. Each \(\alpha\in\Sigma^{ac}\) is written as \(\alpha=\sum n_{i}\alpha_{i}\) for \(n_{i}\in\mathbb{Z}\) and \(\alpha_{i}\in\Sigma\), and hence for each \(\beta\in\Sigma^{ac}\) we have \[s_{\alpha}(\beta)=\beta-\langle\beta,\alpha^{\vee}\rangle\alpha=\beta-\sum n_{i}\langle\beta,\alpha^{\vee}\rangle\alpha_{i}\in\mathbb{Z}\Sigma\cap R.\] To see that \(\Sigma^{ac\vee}\) is closed under the coroot reflections, use \[s_{\alpha^{\vee}}(\beta^{\vee})=s_{\alpha}(\beta)^{\vee}\] for all roots \(\alpha,\beta\). (See [1, Lemma 3.2.4].) We also call the root datum \((X,\Sigma^{ac},X^{\vee},\Sigma^{ac\vee})\) the additive closure of \(\Sigma\). Even if a root sub-datum \(\Psi=(X,S,X^{\vee},S^{\vee})\) is additively closed, its dual \(\Psi^{\vee}=(X^{\vee},S^{\vee},X,S)\) does not have to be additively closed. For example, if \(G=\operatorname{Sp}_{4}\) and \(S\) is the set of all long roots, then \(S\) is additively closed but \(S^{\vee}\), which is the set of all short coroots, is not additively closed. Indeed, in this case, \((X,S,X^{\vee},S^{\vee})\) is the root datum of \(\operatorname{SL}_{2}\times\operatorname{SL}_{2}\) and we have the natural embedding \(\operatorname{SL}_{2}\times\operatorname{SL}_{2}\to\operatorname{Sp}_{4}\). But the dual \((X^{\vee},S^{\vee},X,S)\) is the root datum of \(\operatorname{PGL}_{2}\times\operatorname{PGL}_{2}\) and there is no embedding \(\operatorname{PGL}_{2}\times\operatorname{PGL}_{2}\to\operatorname{Sp}_{4}^{\vee}=\operatorname{SO}_{5}\).
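To spell the \(\operatorname{Sp}_{4}\) example out (a routine verification we include for the reader's convenience): in the standard coordinates the roots and the long roots are \[R=\{\pm e_{1}\pm e_{2},\ \pm 2e_{1},\ \pm 2e_{2}\},\qquad S=\{\pm 2e_{1},\ \pm 2e_{2}\},\] and \(S\) is additively closed since sums such as \(2e_{1}+2e_{2}\) are not roots. On the dual side, \((2e_{i})^{\vee}=e_{i}^{\vee}\) while \((e_{1}+e_{2})^{\vee}=e_{1}^{\vee}+e_{2}^{\vee}\), so \[S^{\vee}=\{\pm e_{1}^{\vee},\ \pm e_{2}^{\vee}\},\qquad e_{1}^{\vee}+e_{2}^{\vee}\in(\mathbb{Z}S^{\vee}\cap R^{\vee})\smallsetminus S^{\vee},\] which shows that \(S^{\vee}\) is not additively closed.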
Indeed, in this case, \((X,S,X^{\vee},S^{\vee})\) is the root datum of \(\operatorname{SL}_{2}\times\operatorname{SL}_{2}\) and we have the natural embedding \(\operatorname{SL}_{2}\times\operatorname{SL}_{2}\to\operatorname{Sp}_{4}\). But the dual \((X^{\vee},S^{\vee},X,S)\) is the root datum of \(\operatorname{PGL}_{2}\times\operatorname{PGL}_{2}\) and there is no embedding \(\operatorname{PGL}_{2}\times\operatorname{PGL}_{2}\to\operatorname{Sp}_{4}^{\vee}=\operatorname{SO}_{5}\). Yet, we should mention the following, though we will not use it in this paper. **Lemma 2.2**.: _Let \(\Psi\) be an additively closed root sub-datum of a root datum \(\Phi\). If \(\Phi\) is simply laced then the dual \(\Psi^{\vee}\) is also additively closed._ Proof.: The lemma follows from the following assertion: if \(\Phi\) is simply laced and \(\alpha\) and \(\beta\) are two distinct roots such that \(\alpha+\beta\) is also a root, then \((\alpha+\beta)^{\vee}=\alpha^{\vee}+\beta^{\vee}\). (See [12, 10.2.2, p.177].) ### Folding a root datum In this subsection, we review the method called "folding a root system", which is used to produce a map \(\varphi:H\to G\) of reductive groups such that the map \(H\to\varphi(H)\) is an isogeny. (This method is discussed in more detail in [13, Section 4].) Let \((X(T),\Phi,X(T)^{\vee},\Phi^{\vee})\) be a root datum of a split reductive group \(G\). Let \(\Delta\subseteq\Phi\) be a set of simple roots, so that \((X(T),\Delta,X(T)^{\vee},\Delta^{\vee})\) is a based root datum. **Definition 2.3**.: Let \(s:\Delta\to\Delta\) be an involution, which naturally induces an involution \(s:\Delta^{\vee}\to\Delta^{\vee}\), so that \((^{s}\alpha)^{\vee}={}^{s}(\alpha^{\vee})\), which we simply write \({}^{s}\alpha^{\vee}\). We call \(s\) a _folding_ if for all \(\alpha,\beta\in\Delta\), we have (a) \(\langle\alpha,{}^{s}\alpha^{\vee}\rangle=0\) whenever \(\alpha\neq{}^{s}\alpha\), and (b) \(\langle\alpha-{}^{s}\alpha,\beta^{\vee}+{}^{s}\beta^{\vee}\rangle=0\). Note that the property \(\langle{}^{s}\alpha,{}^{s}\beta^{\vee}\rangle=\langle\alpha,\beta^{\vee}\rangle\) implies (b). Let \(A\subseteq T\) be a subtorus and \(\varphi_{A}:A\to T\) a homomorphism with finite kernel, so that we have the map \[r:X(T)\longrightarrow X(A),\quad x\mapsto x\circ\varphi_{A},\] given by the restriction via \(\varphi_{A}\). We often write \[r(x)=\overline{x}.\] Note that the image of \(r\) has finite index in \(X(A)\), which induces an injection \[r^{\vee}:X(A)^{\vee}\longrightarrow X(T)^{\vee},\] which implies that \(X(A)\) and \(r^{\vee}(X(A)^{\vee})\) are still dual to each other. We often identify \(r^{\vee}(X(A)^{\vee})\) with \(X(A)^{\vee}\). Now, let \(s:\Delta\to\Delta\) be a folding. For each \(\alpha\in\Delta\) we define \[\overline{\alpha}^{\vee}:=\begin{cases}\alpha^{\vee}&\text{if $\alpha={}^{s}\alpha$;}\\ \alpha^{\vee}+{}^{s}\alpha^{\vee}&\text{otherwise,}\end{cases}\] and set \[\overline{\Delta}=r(\Delta)\quad\text{and}\quad\overline{\Delta}^{\vee}:=\{\overline{\alpha}^{\vee}\,:\,\alpha\in\Delta\}.\] The following is the main theorem on folding. **Proposition 2.4**.: _Let \(s:\Delta\to\Delta\) be a folding such that_ 1. \(r(\alpha)=r(^{s}\alpha)\) _for all_ \(\alpha\in\Delta\)_, and_ 2. 
\(\overline{\Delta}^{\vee}\subseteq X(A)^{\vee}\)_._ _Then the quadruple \((X(A),\overline{\Delta},X(A)^{\vee},\overline{\Delta}^{\vee})\) is a root datum, and if \(H\) is the corresponding split reductive group, there is a homomorphism_ \[\varphi:H\longrightarrow G\] _with finite kernel which extends the map \(\varphi_{A}:A\to T\) of the tori. If \(\varphi_{A}\) is one-to-one, so is \(\varphi\)._ Proof.: See [12, Lemma 4.5]. It should be noted that, in the above proposition, since \(r(\alpha)=r(^{s}\alpha)\), we can write \[\overline{\alpha}=\frac{1}{2}r(\alpha+^{s}\alpha)\] for all \(\overline{\alpha}\in\overline{\Delta}\). ### Two tori A \(k\)-split torus \(A\subseteq G\) is said to be \(\theta\)-invariant if \(\theta(t)=t\) for all \(t\in A\), and said to be \(\theta\)-split if \(\theta(t)=t^{-1}\) for all \(t\in A\). It is known that for each given \(\theta\) there exists a maximal \(k\)-split torus \(T\subset G\) which contains a maximal \(\theta\)-invariant torus \(T^{+}\) and a maximal \(\theta\)-split torus \(T^{-}\), so that the multiplication map \[T^{+}\times T^{-}\longrightarrow T\] is an isogeny. All of \(T,T^{+},T^{-}\) are closed under the action of \(\theta\), and hence \(\theta\) naturally acts on the groups of rational characters \[X(T)=\operatorname{Hom}(T,\mathbb{G}_{m}),\quad X(T^{+})=\operatorname{Hom}(T^{+},\mathbb{G}_{m}),\quad X(T^{-})=\operatorname{Hom}(T^{-},\mathbb{G}_{m})\] and on the groups of rational cocharacters \[X(T)^{\vee}=\operatorname{Hom}(\mathbb{G}_{m},T),\quad X(T^{+})^{\vee}=\operatorname{Hom}(\mathbb{G}_{m},T^{+}),\quad X(T^{-})^{\vee}=\operatorname{Hom}(\mathbb{G}_{m},T^{-})\] of the respective tori. The natural pairing \[\langle-,-\rangle:X(T)\times X(T)^{\vee}\longrightarrow\mathbb{Z}\] restricts to pairings \[X(T^{+})\times X(T^{+})^{\vee}\longrightarrow\mathbb{Z}\quad\text{and}\quad X(T^{-})\times X(T^{-})^{\vee}\longrightarrow\mathbb{Z}\] which we also denote by \(\langle-,-\rangle\). Let \[\Phi(G,T)=(X(T),\Phi,X(T)^{\vee},\Phi^{\vee})\] be the root datum of \(G\) with respect to \(T\). We set \[\Phi^{\theta}=\{\alpha\in\Phi\,:\,\theta(\alpha)=\alpha\}\quad\text{and}\quad\Phi^{\theta\vee}=\{\alpha^{\vee}\in\Phi^{\vee}\,:\,\theta(\alpha^{\vee})=\alpha^{\vee}\},\] namely the sets of all \(\theta\)-invariant roots and coroots, respectively. We choose a set \(\Delta\subseteq\Phi\) of simple roots so that the corresponding ordering has the property \[\alpha>0\quad\text{and}\quad\theta(\alpha)\neq\alpha\quad\Longrightarrow\quad\theta(\alpha)<0. \tag{2.1}\] We call this ordering a \(\theta\)-order. We let \(\Phi^{+}\) be the set of positive roots with respect to this ordering, so that we have \[\theta:\Phi^{+}\smallsetminus\Phi^{\theta}\longrightarrow\Phi^{-}\smallsetminus\Phi^{\theta}\quad\text{and}\quad\theta:\Phi^{\vee+}\smallsetminus\Phi^{\theta\vee}\longrightarrow\Phi^{\vee-}\smallsetminus\Phi^{\theta\vee}.\] By the definition of the action of \(\theta\) it is immediate that \[\langle\theta x,\theta x^{\vee}\rangle=\langle x,x^{\vee}\rangle\] for all \(x\in X(T)\) and \(x^{\vee}\in X(T)^{\vee}\). In particular, \(\theta\) preserves the lengths of roots and coroots. We have the natural restriction map \[p:X(T)\longrightarrow X(T^{-}),\quad x\mapsto x|_{T^{-}},\] where we also write \[\overline{x}=p(x)=x|_{T^{-}}.\] If \(x\in X(T)\) is such that \(\theta(x)=x\) then \(\overline{x}=0\), because in this case for each \(t\in T^{-}\) we have \(x(t)=\theta(x)(t)=x(\theta(t))=x(t^{-1})\), so \(x(t^{2})=1\). 
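To illustrate the two tori in the simplest nontrivial case (this illustration is ours and is not needed in the sequel), take \(G=\operatorname{GL}_{2}\) and \(\theta=\operatorname{Int}(\begin{pmatrix}&1\\ 1&\end{pmatrix})\), so that \(\theta(\operatorname{diag}(a,b))=\operatorname{diag}(b,a)\) on the diagonal torus \(T\). Then \[T^{+}=\{\operatorname{diag}(t,t)\}\quad\text{and}\quad T^{-}=\{\operatorname{diag}(t,t^{-1})\},\] and the multiplication map \(T^{+}\times T^{-}\to T\) is an isogeny with kernel of order \(2\), generated by \((-I_{2},-I_{2})\). The character \(x=e_{1}+e_{2}\) (the determinant) satisfies \(\theta(x)=x\), and indeed \(\overline{x}=0\): we have \(x(\operatorname{diag}(t,t^{-1}))=t\cdot t^{-1}=1\).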
### \(\theta\)-invariant root sub-datum One can readily see that the quadruple \[\Phi(G,T)^{\theta}:=(X(T),\Phi^{\theta},X(T)^{\vee},\Phi^{\theta\vee})\] is an additively closed root sub-datum of \(\Phi(G,T)\) where the sets of simple roots and coroots are, respectively, \[\Delta^{\theta}:=\Delta\cap\Phi^{\theta}\quad\text{and}\quad\Delta^{\theta\vee}:=\Delta^{\vee}\cap\Phi^{\theta\vee}.\] We let \(G^{++}\) be the split reductive group generated by this root datum inside \(G\), so that we have the natural inclusion \[G^{++}\subseteq G,\] and the root datum of \(G^{++}\) is \(\Phi(G,T)^{\theta}\). Indeed, \(G^{++}\) is the Levi subgroup corresponding to \(\Delta^{\theta}\). Now, consider the restriction map \(r:X(T)\to X(T^{+})\). Then the induced map \(\Delta^{\theta}\to r(\Delta^{\theta})\) is a bijection, via which we often identify these two sets, and we have the identity \(r^{\vee}(\Delta^{\theta\vee})=\Delta^{\theta\vee}\). Hence by choosing \(s:\Delta\to\Delta\) to be the identity map, which is certainly a folding, we can apply Proposition 2.4, which gives a based root datum \[\Phi(G,T^{+}):=(X(T^{+}),\Delta^{\theta},X(T^{+})^{\vee},\Delta^{\theta\vee})\] and an inclusion \[G^{+}\subseteq G,\] where \(G^{+}\) is the reductive group whose root datum is \(\Phi(G,T^{+})\). Note that the map \(G^{+}\to G\) given by folding is indeed an inclusion. We call the root datum \(\Phi(G,T^{+})\) the \(\theta\)-invariant root datum. Let us note that the group \(G^{++}\) plays only an auxiliary role to make use of the method of folding. Also note that the main difference between \(G^{++}\) and \(G^{+}\) is that \(T\subseteq G^{++}\) and \(T^{+}\subseteq G^{+}\) (but \(T\nsubseteq G^{+}\)). Let us then quote the following lemma, which will be frequently used in this paper. **Lemma 2.5**.: _Let \(w_{\theta}\) be the longest element in the Weyl group of the \(\theta\)-invariant root datum \(\Phi(G,T^{+})\). For each involution \(\theta\), there exists a (possibly trivial) automorphism \(\theta^{*}\) of the Dynkin diagram of \(G\) such that for each root \(\alpha\in\Phi\), one can write_ \[-\theta\alpha=(\theta^{*}\circ w_{\theta})\alpha.\] _In particular, for each \(\alpha\in\Delta\smallsetminus\Delta^{\theta}\) we have_ \[-\theta\alpha=\theta^{*}\alpha+\gamma\quad\text{for some $\gamma\in\operatorname{span}_{\mathbb{Z}}(\Delta^{\theta})$}.\] _Also \(\theta^{*}\) preserves \(\Delta\smallsetminus\Delta^{\theta}\) setwise and fixes \(\Delta^{\theta}\) pointwise._ Proof.: See [10, 2.8] and [11, 1.7]. (In [11], it is stated that \(-\theta\alpha=\theta^{*}\alpha+\gamma\) where \(\gamma\in\Phi^{\theta}\), which is stronger than the assertion in the lemma. But it seems to the author that \(\gamma\) is not always a root.) ### \(\theta\)-split root datum Next, we consider the root datum associated with the \(\theta\)-split torus \(T^{-}\) via the restriction map \(p:X(T)\to X(T^{-})\) as above. This case requires more work than the \(\theta\)-invariant case. Let \[\overline{\Phi}=p(\Phi)\backslash\{0\}=p(\Phi\backslash\Phi^{\theta}),\] where \(p\) is the projection as above. It has been proven by Helminck and Wang ([11]) that, by tensoring with \(\mathbb{Q}\), \(\overline{\Phi}\) is a root system in \(X(T^{-})\otimes\mathbb{Q}\) with a basis \[\overline{\Delta}:=p(\Delta)\backslash\{0\}=p(\Delta\backslash\Delta^{\theta}).\] This gives rise to a root datum \[\overline{\Phi}(G,T^{-}):=(X(T^{-}),\overline{\Phi},X(T^{-})^{\vee},\overline{\Phi}^{\vee})\] in the usual way, which we call the \(\theta\)-split root datum. 
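Continuing the \(\operatorname{GL}_{2}\) illustration from above (again, this computation is ours), let \(\theta=\operatorname{Int}(\begin{pmatrix}&1\\ 1&\end{pmatrix})\). Then \(\theta(e_{1}-e_{2})=e_{2}-e_{1}\), so \(\Phi^{\theta}=\emptyset\) and \(\Delta=\{e_{1}-e_{2}\}\) is automatically a \(\theta\)-order. Identifying \(X(T^{-})\cong\mathbb{Z}\) via \(\operatorname{diag}(t,t^{-1})\mapsto t\), we get \[\overline{e_{1}-e_{2}}=2,\qquad\overline{\Phi}=\{\pm 2\}\subseteq X(T^{-})\cong\mathbb{Z},\] with coroot the generator of \(X(T^{-})^{\vee}\); this is the root datum of \(\operatorname{SL}_{2}\), and the corresponding subgroup \(G^{-}\) constructed below is the copy of \(\operatorname{SL}_{2}\) inside \(\operatorname{GL}_{2}\). The analogous computation for \(\operatorname{GL}_{3}\) produces a non-reduced root datum, as in Example 2.7 below.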
We choose the ordering on \(\overline{\Phi}\) to be the one determined by \(\overline{\Delta}\), and denote by \(\overline{\Phi}^{+}\) the set of positive roots in \(\overline{\Phi}\). Let us note that we have the natural inclusion \[X(T^{-})\otimes\mathbb{Q}\subseteq X(T)\otimes\mathbb{Q},\quad\overline{x}\mapsto\frac{1}{2}(x-\theta x),\] where \(x\in X(T)\otimes\mathbb{Q}\) is any choice of preimage of \(\overline{x}\) under the restriction map \(X(T)\otimes\mathbb{Q}\to X(T^{-})\otimes\mathbb{Q}\). Since the kernel of this restriction map is \(X(T^{+})\otimes\mathbb{Q}\), which is \(\theta\)-invariant, the expression \(\frac{1}{2}(x-\theta x)\) is independent of the choice of preimage \(x\). Then the Weyl invariant inner product on \(X(T^{-})\otimes\mathbb{Q}\) is naturally restricted from that on \(X(T)\otimes\mathbb{Q}\); namely for each \(\overline{x},\overline{y}\in X(T^{-})\otimes\mathbb{Q}\) we have \[(\overline{x},\overline{y})=(\frac{1}{2}(x-\theta x),\frac{1}{2}(y-\theta y)).\] Using this, we prove the following lemma. **Lemma 2.6**.: _For all \(\overline{\alpha}\in\overline{\Phi}\),_ \[(\overline{\theta\alpha})^{\vee}=\theta(\overline{\alpha}^{\vee}).\] Proof.: For all \(\overline{x}\in X(T^{-})\) we have \[\langle\overline{x},(\overline{\theta\alpha})^{\vee}\rangle=\frac{2(\overline{x},\overline{\theta\alpha})}{(\overline{\theta\alpha},\overline{\theta\alpha})}=\frac{2(x-\theta x,\theta\alpha-\alpha)}{(\theta\alpha-\alpha,\theta\alpha-\alpha)}.\] On the other hand, \[\langle\overline{x},\theta(\overline{\alpha}^{\vee})\rangle=\langle\theta\overline{x},\overline{\alpha}^{\vee}\rangle=\frac{2(\theta\overline{x},\overline{\alpha})}{(\overline{\alpha},\overline{\alpha})}=\frac{2(\theta x-x,\alpha-\theta\alpha)}{(\alpha-\theta\alpha,\alpha-\theta\alpha)}=\frac{2(x-\theta x,\theta\alpha-\alpha)}{(\theta\alpha-\alpha,\theta\alpha-\alpha)}\] by the \(\theta\)-invariance of the inner product \((-,-)\). This implies \((\overline{\theta\alpha})^{\vee}=\theta(\overline{\alpha}^{\vee})\). The \(\theta\)-split root datum \(\overline{\Phi}(G,T^{-})\) is not always reduced. **Example 2.7**.: Let \(G=\operatorname{GL}_{3}\) and \(\theta=\operatorname{Int}(\begin{pmatrix}0&0&1\\ 0&1&0\\ 1&0&0\end{pmatrix})\). One can readily see that \[T^{-}=\{\begin{pmatrix}t&0&0\\ 0&1&0\\ 0&0&t^{-1}\end{pmatrix}\,:\,t\in k^{\times}\},\] and so \[\overline{e_{1}-e_{3}}=2\overline{e_{1}-e_{2}}=2\overline{e_{2}-e_{3}}.\] Thus the root datum is not reduced. Indeed, this is a root datum of type \(BC_{1}\). **Lemma 2.8**.: _For all \(\alpha\in\Phi\) such that \(\alpha-\theta\alpha\notin\Phi\) and \(\alpha\neq-\theta\alpha\), we have_ \[\langle\alpha,\theta\alpha^{\vee}\rangle=0.\] _In particular, if \(\overline{\Phi}(G,T^{-})\) is reduced then \(\langle\alpha,\theta\alpha^{\vee}\rangle=0\) for all \(\alpha\in\Phi\) with \(\alpha\neq-\theta\alpha\)._ Proof.: Since \(\theta\) preserves the length of a root, \(\alpha\) and \(-\theta\alpha\) have the same length. Now, for any root system, if two distinct roots have the same length but their sum is not a root, then they are orthogonal, which gives \(\langle\alpha,\theta\alpha^{\vee}\rangle=0\). Assume \(\overline{\Phi}(G,T^{-})\) is reduced. Noting \(-\overline{\theta\alpha}=\overline{\alpha}\), we have \(\overline{\alpha}-\theta\overline{\alpha}=2\overline{\alpha}\), which is not a root. Hence \(\alpha-\theta\alpha\) cannot be a root. 
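It may be instructive to test Lemma 2.8 against Example 2.7 (this verification is ours). There, \(\theta\) interchanges \(e_{1}\) and \(e_{3}\) and fixes \(e_{2}\); taking \(\alpha=e_{1}-e_{2}\) we get \(\theta\alpha=e_{3}-e_{2}\), so \(\alpha\neq-\theta\alpha\) and \[\alpha-\theta\alpha=e_{1}-e_{3}\in\Phi,\qquad\langle\alpha,\theta\alpha^{\vee}\rangle=\langle e_{1}-e_{2},e_{3}^{\vee}-e_{2}^{\vee}\rangle=1\neq 0.\] Thus the hypothesis \(\alpha-\theta\alpha\notin\Phi\) of the lemma genuinely fails here, in accordance with the fact that \(\overline{\Phi}(G,T^{-})\) is not reduced in that example.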
Let \[\Delta_{\theta}=\{\alpha\in\Delta\smallsetminus\Delta^{\theta}\,:\,\alpha-\theta\alpha\notin\Phi\}\cup\{\alpha-\theta\alpha\,:\,\alpha\in\Delta\text{ and }\alpha-\theta\alpha\in\Phi\}.\] Note that if \(\overline{\Phi}(G,T^{-})\) is reduced, we simply have \(\Delta_{\theta}=\Delta\smallsetminus\Delta^{\theta}\). If \(\overline{\Phi}(G,T^{-})\) is not reduced, then it has an irreducible component of type \(BC_{n}\), in which case we make it type \(C_{n}\) by discarding the "shortest roots". Further, set \[\Sigma_{\theta}=\Delta_{\theta}\cup-\theta(\Delta_{\theta}),\] where by definition \(-\theta(\Delta_{\theta})=\{-\theta\alpha\,:\,\alpha\in\Delta_{\theta}\}\). Note that by our choice of positive roots, for each \(\alpha\in\Delta_{\theta}\) the root \(-\theta\alpha\) is positive, so that \[\Sigma_{\theta}\subseteq\Phi^{+}.\] We set \[\Phi(G,T)_{\theta}:=(X(T),\Sigma_{\theta}^{ac},X(T)^{\vee},\Sigma_{\theta}^{ac\vee}),\] namely the additive closure of \(\Sigma_{\theta}\), which is an additively closed root sub-datum of \(\Phi(G,T)\). Hence there exists a subgroup \[G^{--}\subseteq G\] whose root datum is \(\Phi(G,T)_{\theta}\). Since each root in \(\Sigma_{\theta}^{ac}\) is a \(\mathbb{Z}\)-linear combination of elements from \(\Delta_{\theta}\cup-\theta(\Delta_{\theta})\), and for each \(\alpha\in\Delta_{\theta}\), we have \(\theta\alpha<0\), none of the roots in the additive closure \(\Sigma_{\theta}^{ac}\) is invariant under \(\theta\), namely \(\Sigma_{\theta}^{ac}\cap\Phi^{\theta}=\emptyset\). Similarly for the coroots \(\Sigma_{\theta}^{ac\vee}\). **Proposition 2.9**.: _The set \(\Sigma_{\theta}\) is a basis of the root datum \(\Phi(G,T)_{\theta}\)._ Proof.: By [12, Lemma 3.3], it suffices to check that for any two distinct \(\alpha,\beta\in\Sigma_{\theta}\) we have \(\alpha-\beta\notin\Phi^{+}\). So assume \(\alpha,\beta\in\Sigma_{\theta}=\Delta_{\theta}\cup-\theta(\Delta_{\theta})\). We then have \[\alpha=\alpha^{\prime},-\theta\alpha^{\prime}\text{ or }\alpha^{\prime}-\theta\alpha^{\prime}\quad\text{and}\quad\beta=\beta^{\prime},-\theta\beta^{\prime}\text{ or }\beta^{\prime}-\theta\beta^{\prime}\] for some \(\alpha^{\prime},\beta^{\prime}\in\Delta\smallsetminus\Delta^{\theta}\). We know from Lemma 2.5 that \(-\theta\alpha^{\prime}=\theta^{*}\alpha^{\prime}+\gamma\) for some \(\gamma\in\operatorname{span}_{\mathbb{Z}}(\Delta^{\theta})\), where \(\theta^{*}\alpha^{\prime}\in\Delta\smallsetminus\Delta^{\theta}\), and similarly for \(-\theta\beta^{\prime}\). By using this, one can check that \(\alpha-\beta\) is never a positive root. Now, let \(s\) be the involution on the root datum \(\Phi(G,T)_{\theta}\) defined by \({}^{s}\alpha=-\theta\alpha\). One can then see by using Lemma 2.8 that this is a folding in the sense of [12, Definition 4.1]. Furthermore, the condition of [12, Lemma 4.5] is satisfied by choosing \(r\) to be the restriction map \(p:X(T)\to X(T^{-})\). Hence we have the natural map \[G^{-}\longrightarrow G^{--}\subseteq G,\] which is actually an injection because the map \(T^{-}\to T\) is an injection. (Here we recall that \(G^{--}\) is the reductive group whose root datum is \(\Phi(G,T)_{\theta}\).) Note that the root datum of \(G^{-}\) is \[(X(T^{-}),\overline{\Sigma_{\theta}^{ac}},X(T^{-})^{\vee},\overline{\Sigma_{\theta}^{ac}}^{\vee}),\] which is equal to \(\overline{\Phi}(G,T^{-})\) if \(\overline{\Phi}(G,T^{-})\) is reduced. 
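As a quick illustration of these definitions in the setting of Example 2.7 (our computation): there \(\Delta^{\theta}=\emptyset\), and \(\alpha_{1}-\theta\alpha_{1}=\alpha_{2}-\theta\alpha_{2}=e_{1}-e_{3}\in\Phi\), so \[\Delta_{\theta}=\Sigma_{\theta}=\{e_{1}-e_{3}\},\qquad\Sigma_{\theta}^{ac}=\{\pm(e_{1}-e_{3})\},\] since \(-\theta(e_{1}-e_{3})=e_{1}-e_{3}\). The group \(G^{--}\) is then the copy of \(\operatorname{GL}_{2}\times\operatorname{GL}_{1}\) supported on the first and third rows and columns together with the middle diagonal entry, and \(\overline{\Sigma_{\theta}^{ac}}=\{\pm 2\overline{e_{1}-e_{2}}\}\) is exactly the reduced part of type \(C_{1}\) inside the \(BC_{1}\) datum computed in Example 2.7, with \(G^{-}\cong\operatorname{SL}_{2}\).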
Also from the construction, one can see that for each root \(\overline{\alpha}\in\overline{\Sigma_{\theta}^{ac}}\), the corresponding coroot is given by \[\overline{\alpha}^{\vee}=\begin{cases}\alpha^{\vee}&\text{if }\alpha=-\theta\alpha;\\ \alpha^{\vee}-\theta\alpha^{\vee}&\text{otherwise.}\end{cases}\] One can also check that the image of \(\overline{\alpha}^{\vee}\) is indeed in \(T^{-}\) and \(\langle\overline{\alpha},\overline{\alpha}^{\vee}\rangle=2\). ### Sum of \(\theta\)-invariant positive roots Let \[\varphi^{+}:\operatorname{SL}_{2}\longrightarrow G^{+}\subseteq G\] be the principal \(\operatorname{SL}_{2}\)-homomorphism of \(G^{+}\), so that \[\varphi^{+}(\begin{pmatrix}t&\\ &t^{-1}\end{pmatrix})=2\rho_{+}^{\vee}(t),\] where \[2\rho_{+}^{\vee}=\sum_{\alpha^{\vee}\in\Phi^{\theta\vee}\cap\Phi^{\vee+}}\alpha^{\vee}\] is the sum of the \(\theta\)-invariant positive coroots of the \(\theta\)-invariant root datum \(\Phi(G,T^{+})\). In this subsection, we will show that its image \(\varphi^{+}(\operatorname{SL}_{2})\) commutes with \(G^{-}\). Let us start with the following lemma. **Lemma 2.10**.: _For each non-\(\theta\)-invariant root \(\alpha\in\Phi\smallsetminus\Phi^{\theta}\), we have_ \[\langle\alpha-\theta\alpha,2\rho_{+}^{\vee}\rangle=0.\] Proof.: Recall from Lemma 2.5 that we can write \(\theta\alpha=-\theta^{*}w_{\theta}\alpha\), where \(\theta^{*}\) is a diagram automorphism and \(w_{\theta}\) is the longest element of the Weyl group of the \(\theta\)-invariant root datum \(\Phi(G,T^{+})\). Noting that \(w_{\theta}(2\rho_{+}^{\vee})=-2\rho_{+}^{\vee}\) and \(\theta^{*}(2\rho_{+}^{\vee})=2\rho_{+}^{\vee}\), we have \[\langle\alpha-\theta\alpha,2\rho_{+}^{\vee}\rangle =\langle\alpha,2\rho_{+}^{\vee}\rangle-\langle\theta\alpha,2\rho_{+}^{\vee}\rangle\] \[=\langle\alpha,2\rho_{+}^{\vee}\rangle-\langle\alpha,-\theta^{*}w_{\theta}(2\rho_{+}^{\vee})\rangle\] \[=\langle\alpha,2\rho_{+}^{\vee}\rangle-\langle\alpha,2\rho_{+}^{\vee}\rangle\] \[=0,\] where for the second equality we used that \(\theta\) preserves the canonical pairing \(\langle-,-\rangle\). Noting that \(2\rho_{+}^{\vee}:\mathbb{G}_{m}\to T^{+}\), we set \[T^{0}=\text{the image of }2\rho_{+}^{\vee},\] which is a \(1\)-dimensional subtorus of \(T^{+}\). Consider the restriction map \[r:X(T)\longrightarrow X(T^{0}T^{-}).\] Then one can see that for each \(\alpha\in\Delta_{\theta}\) we have \(r(\alpha)=r(-\theta\alpha)\). Hence one can apply the folding argument to obtain the root datum \[(X(T^{0}T^{-}),\overline{\Sigma_{\theta}^{ac}},X(T^{0}T^{-})^{\vee},\overline{\Sigma_{\theta}^{ac}}^{\vee}).\] Then if we denote the corresponding group by \(G^{0-}\), we have the inclusions \[G^{-}\subseteq G^{0-}\subseteq G^{--}.\] Note that \(G^{0-}\) has \(T^{0}T^{-}\) as a maximal torus. **Lemma 2.11**.: _The center of \(G^{0-}\) contains \(T^{0}\), and hence \(G^{-}\) and \(T^{0}\) commute pointwise._ Proof.: Note that the center of \(G^{0-}\) is the intersection of the kernels of all the roots. Each root of \(G^{0-}\) is of the form \(\frac{1}{2}r(\alpha-\theta\alpha)\). Hence by the above lemma, \(T^{0}\) is contained in the kernel of this root. Using this lemma, we can prove our assertion. 
**Proposition 2.12**.: _The image of the principal \(\operatorname{SL}_{2}\)-homomorphism \(\varphi^{+}:\operatorname{SL}_{2}\to G^{+}\) and \(G^{-}\) commute pointwise._ Proof.: Once we have the lemma, it suffices to show that for each root \(\overline{\alpha}\) of \(G^{0-}\) the corresponding root subgroup \(U_{\overline{\alpha}}\) commutes with the images of the unipotent elements \(\varphi^{+}(\begin{pmatrix}1&\\ a&1\end{pmatrix})\) and \(\varphi^{+}(\begin{pmatrix}1&a\\ &1\end{pmatrix})\). But since \(\overline{\alpha}\) is orthogonal to \(2\rho_{+}^{\vee}\), it is an elementary exercise to show that this indeed holds. The details are left to the reader. By this proposition, we have \[G^{-}\times\operatorname{SL}_{2}\longrightarrow G, \tag{2.2}\] where on the \(\operatorname{SL}_{2}\)-factor the homomorphism \(\operatorname{SL}_{2}\to G^{+}\subseteq G\) is the principal \(\operatorname{SL}_{2}\)-homomorphism of \(G^{+}\). ## 3. Dual groups and conjectures Now, we assume \(G\) is a connected reductive group split over our local nonarchimedean field \(F\), and \(\theta\) an \(F\)-involution on \(G\). We let \(H\) be the \(\theta\)-fixed points of \(G\), so that \(X:=H\backslash G\) is a symmetric variety. ### Definition of dual group Let \(G^{\vee}\) be the Langlands dual group of \(G\), so that it is the complex connected reductive group whose root datum is dual to that of \(G\). The involution \(\theta\) naturally dualizes to an involution on \(G^{\vee}\); namely \(\theta\) gives rise to an involution on the root datum of \(G\), and hence an involution on the root datum of \(G^{\vee}\), which in turn gives an involution on \(G^{\vee}\). We call this involution on \(G^{\vee}\) the dual of \(\theta\). We set \[G^{\vee}_{X}=G^{\vee-}\quad\text{and}\quad\varphi^{\vee}_{X}:G^{\vee}_{X}\times\operatorname{SL}_{2}(\mathbb{C})\longrightarrow G^{\vee},\] where \(\varphi^{\vee}_{X}\) is as constructed in (2.2) applied to \(G^{\vee}\), so that the image of \(\operatorname{SL}_{2}(\mathbb{C})\) is in \(G^{\vee+}\) and the map \(\operatorname{SL}_{2}(\mathbb{C})\to G^{\vee+}\) is the principal \(\operatorname{SL}_{2}\)-homomorphism. Let us note that in general \(G^{\vee-}\neq(G^{-})^{\vee}\). With this definition of \(G^{\vee}_{X}\) and \(\varphi^{\vee}_{X}\), we formulate our conjectures as stated in Conjecture 1.1 in the introduction. ### Difference from Sakellaridis-Venkatesh Our dual group \(G^{\vee}_{X}\) is not always the same as the one defined by Sakellaridis-Venkatesh, as indicated by the following two examples. **Example 3.1**.: Let \(X=T\backslash\operatorname{SL}_{2}\), where \(T\) is a (nontrivial) torus. This is indeed a symmetric variety by choosing \(\theta=\operatorname{Int}(\begin{pmatrix}&1\\ a&\end{pmatrix})\), where \(a\in F^{\times}\), so that \(T\) is split if and only if \(a\in F^{\times 2}\). This example is briefly discussed in [13, 2.2.5], according to which their dual group of \(X\) is \(\operatorname{SL}_{2}(\mathbb{C})\). In our construction, however, the dual group is \(\operatorname{PGL}_{2}(\mathbb{C})\). Indeed, the maximal torus of \(\operatorname{SL}_{2}^{\vee}=\operatorname{PGL}_{2}(\mathbb{C})\) is already \(\theta\)-split and there is no \(\theta\)-invariant root. Hence the \(\theta\)-split root datum \(\overline{\Phi}(G^{\vee},T^{\vee-})\) is actually that of \(\operatorname{SL}_{2}^{\vee}\) itself. **Example 3.2**.: Let \(X=\operatorname{PO}_{n}\backslash\operatorname{PGL}_{n}\). This is the case ruled out by the theory of Sakellaridis-Venkatesh. (See [13, 2.2].) 
But this is a symmetric variety by choosing \(\theta\) to be inverse-transpose on \(\operatorname{GL}_{n}\), which descends to \(\operatorname{PGL}_{n}\). On the dual side, \(\operatorname{PGL}_{n}^{\vee}=\operatorname{SL}_{n}(\mathbb{C})\) and the dual of \(\theta\) is also inverse-transpose, so the maximal torus of \(\operatorname{SL}_{n}(\mathbb{C})\) is \(\theta\)-split and there is no \(\theta\)-invariant root. One can then see that our dual group of \(X\) is \(\operatorname{SL}_{n}(\mathbb{C})\). We believe that the theory of Sakellaridis-Venkatesh can be modified so that the modified theory is consistent with ours. We will take up this issue in our later work. ## 4. Trivial representation The trivial representation \(\mathbf{1}\) of \(G\) is \(H\)-distinguished for any subgroup \(H\subseteq G\), so in particular it is \(H\)-distinguished for any symmetric variety \(X=H\backslash G\). Hence by Conjecture (I) the local Langlands parameter \(\varphi_{\mathbf{1}}\) of \(\mathbf{1}\) should factor through \(\varphi^{\vee}_{X}\) for any symmetric variety \(X\). In this section, we prove this assertion. Let us first recall that the local Langlands parameter \(\varphi_{\mathbf{1}}\) of the trivial representation \(\mathbf{1}\) is given by \[\varphi_{\mathbf{1}}:WD_{F}\longrightarrow\mathbb{C}^{\times}\xrightarrow{2\rho^{\vee}}G^{\vee},\] where the first map is given by the square root of the norm map, namely \(w\mapsto|w|^{\frac{1}{2}}\) for \(w\in WD_{F}\) (so it is trivial on the \(\operatorname{SL}_{2}(\mathbb{C})\)-factor of \(WD_{F}\)), and the second map is the sum of positive coroots of \(G^{\vee}\). We will prove that this factors through \(\varphi_{X}^{\vee}\). The set of positive coroots of \(G^{\vee}\) is the set of positive roots \(\Phi^{+}\) of \(G\), where we assume that the order of the roots is a \(\theta\)-order (2.1). We set \[\Phi_{\theta}=\Phi\smallsetminus\Phi^{\theta}\quad\text{and}\quad\Phi_{\theta}^{+}=\Phi_{\theta}\cap\Phi^{+}=\Phi^{+}\smallsetminus\Phi^{\theta+}.\] Then if \[2\rho=\sum_{\alpha\in\Phi^{+}}\alpha\] is the sum of positive roots then we can decompose it as \[2\rho=2\rho_{+}+2\rho_{-}=\sum_{\alpha\in\Phi^{\theta+}}\alpha+\sum_{\alpha\in\Phi_{\theta}^{+}}\alpha,\] where \(2\rho_{+}=\sum_{\alpha\in\Phi^{\theta+}}\alpha\) is the sum of positive \(\theta\)-invariant roots, and \(2\rho_{-}=\sum_{\alpha\in\Phi_{\theta}^{+}}\alpha\) is the sum of positive non-\(\theta\)-invariant roots. Certainly \[2\rho_{+}\in\operatorname{span}_{\mathbb{Z}}(\Delta^{\theta}).\] We prove that an analogue of this holds for \(2\rho_{-}\). Let us start with the following lemma. **Lemma 4.1**.: _The involution \(\alpha\mapsto-\theta\alpha\) stabilizes both \(\Phi_{\theta}\) and \(\Phi_{\theta}^{+}\) setwise._ Proof.: Let \(\alpha\in\Phi_{\theta}\). Then it is written as \[\alpha=\sum_{\alpha_{i}\in\Delta\smallsetminus\Delta^{\theta}}n_{i}\alpha_{i}+\sum_{\beta_{i}\in\Delta^{\theta}}m_{i}\beta_{i},\] where at least one of the \(n_{i}\) is nonzero. Then \[-\theta\alpha=-\sum_{\alpha_{i}\in\Delta\smallsetminus\Delta^{\theta}}n_{i}\theta\alpha_{i}-\sum_{\beta_{i}\in\Delta^{\theta}}m_{i}\beta_{i}.\] We know from Lemma 2.5 that \[-\theta\alpha_{i}=\theta^{*}\alpha_{i}+\gamma_{i}\quad\text{for some $\gamma_{i}\in\operatorname{span}_{\mathbb{Z}}(\Delta^{\theta})$},\] where \(\theta^{*}\) is a (possibly trivial) diagram automorphism and \(\theta^{*}\alpha_{i}\in\Delta\smallsetminus\Delta^{\theta}\). 
Hence we can write \[-\theta\alpha=\sum_{\alpha_{i}\in\Delta\smallsetminus\Delta^{\theta}}n_{i}\theta^{*}\alpha_{i}+\sum_{\beta_{i}\in\Delta^{\theta}}m_{i}^{\prime}\beta_{i}=\sum_{\alpha_{i}\in\Delta\smallsetminus\Delta^{\theta}}n_{i}^{\prime}\alpha_{i}+\sum_{\beta_{i}\in\Delta^{\theta}}m_{i}^{\prime}\beta_{i}\] for some \(n_{i}^{\prime}\)'s and \(m_{i}^{\prime}\)'s, because \(\theta^{*}\) permutes the roots in \(\Delta\smallsetminus\Delta^{\theta}\). Now since \(n_{i}\neq 0\) for some \(i\), we have \(n_{j}^{\prime}\neq 0\) for some \(j\), which implies \(-\theta\alpha\in\Phi_{\theta}\). Further, if \(\alpha\in\Phi_{\theta}^{+}\) then \(n_{i}\geq 0\) for all \(i\), which implies \(n_{i}^{\prime}\geq 0\) for all \(i\), so \(-\theta\alpha\in\Phi_{\theta}^{+}\). The action of \(-\theta\) on the non-\(\theta\)-invariant roots \(\Phi_{\theta}\) divides them into two different types, depending on whether \(-\theta\alpha=\alpha\) or not. We say a root \(\alpha\in\Phi_{\theta}\) is of type 1 if \(-\theta\alpha=\alpha\) and of type 2 if \(-\theta\alpha\neq\alpha\). **Lemma 4.2**.: _With the above notation, set_ \[\Sigma_{\theta}^{1}=\{\alpha\in\Sigma_{\theta}\,:\,\alpha\text{ is of type 1}\}\quad\text{and}\] \[\Sigma_{\theta}^{2}=\{\alpha-\theta\alpha\,:\,\alpha\in\Sigma_{\theta}\quad\text{and}\quad\alpha\text{ is of type 2}\},\] _where we recall \(\Sigma_{\theta}=\Delta_{\theta}\cup-\theta(\Delta_{\theta})\). Then we have_ \[2\rho_{-}\in\operatorname{span}_{\mathbb{Z}}(\Sigma_{\theta}^{1}\cup\Sigma_{\theta}^{2}).\] Proof.: If we partition the set \(\Phi_{\theta}^{+}\) into \(-\theta\)-orbits, each orbit is either a singleton \(\{\alpha\}\) or a set \(\{\alpha,-\theta\alpha\}\) of two elements, depending on whether \(\alpha\) is of type 1 or type 2. With this said, one can readily see that \[2\rho_{-}=\frac{1}{2}\sum_{\alpha\in\Phi_{\theta}^{+}}(\alpha-\theta\alpha),\] because if \(\alpha\) is of type 1 then \(\alpha-\theta\alpha=2\alpha\), and if of type 2 then there are two occurrences of \(\alpha-\theta\alpha\) in the sum. Hence it suffices to show that \(\alpha\in\operatorname{span}_{\mathbb{Z}}(\Sigma_{\theta}^{1}\cup\Sigma_{\theta}^{2})\) for \(\alpha\in\Phi_{\theta}^{+}\) of type 1 and \(\alpha-\theta\alpha\in\operatorname{span}_{\mathbb{Z}}(\Sigma_{\theta}^{1}\cup\Sigma_{\theta}^{2})\) for \(\alpha\in\Phi_{\theta}^{+}\) of type 2. Now, each \(\alpha\in\Phi_{\theta}^{+}\) is written as \[\alpha=\sum_{\alpha_{i}\in\Delta\smallsetminus\Delta^{\theta}}n_{i}\alpha_{i}+\sum_{\beta_{i}\in\Delta^{\theta}}m_{i}\beta_{i}\] with \(n_{i}\geq 0\) and \(m_{i}\geq 0\) for all \(i\) and \(n_{i}\neq 0\) for some \(i\). Since \(-\theta\beta_{i}=-\beta_{i}\), we have \[\alpha-\theta\alpha=\sum_{\alpha_{i}\in\Delta\smallsetminus\Delta^{\theta}}n_{i}(\alpha_{i}-\theta\alpha_{i}).\] Hence, if \(\alpha\) is of type 2, we have \(\alpha-\theta\alpha\in\operatorname{span}_{\mathbb{Z}}(\Sigma_{\theta}^{1}\cup\Sigma_{\theta}^{2})\). Assume \(\alpha\) is of type 1. Since \(\alpha-\theta\alpha=2\alpha\), we have \[\alpha=\frac{1}{2}\sum_{\alpha_{i}\in\Delta\smallsetminus\Delta^{\theta}}n_{i}(\alpha_{i}-\theta\alpha_{i}).\] We need to show that the right-hand side is indeed in \(\operatorname{span}_{\mathbb{Z}}(\Sigma_{\theta}^{1}\cup\Sigma_{\theta}^{2})\). We know from Lemma 2.5 that for each \(\alpha_{i}\) \[-\theta\alpha_{i}=\theta^{*}\alpha_{i}+\gamma_{i}\] for some \(\gamma_{i}\in\operatorname{span}_{\mathbb{Z}}(\Delta^{\theta})\). 
One can then see that \[-\theta(\theta^{*}\alpha_{i})=\alpha_{i}+\gamma_{i}.\] Now, let \(I\) be the set of indices of the roots \(\Delta\smallsetminus\Delta^{\theta}\). Since \(\theta^{*}\) permutes the elements in \(\Delta\smallsetminus\Delta^{\theta}\), it naturally acts on \(I\), which we write \(i\mapsto i^{*}\). Furthermore, if \(\alpha_{i}\in\Delta\smallsetminus\Delta^{\theta}\) is of type 1, then since \(-\theta\alpha_{i}=\theta^{*}\alpha_{i}+\gamma_{i}=\alpha_{i}\) and \(\theta^{*}\alpha_{i}\in\Delta\smallsetminus\Delta^{\theta}\), we have \(\theta^{*}\alpha_{i}=\alpha_{i}\). So if we decompose \(I=I_{1}\cup I_{2}\), where if \(i\in I_{1}\) then \(\alpha_{i}\) is of type 1 and if \(i\in I_{2}\) then \(\alpha_{i}\) is of type 2, then \(\theta^{*}\) acts both on \(I_{1}\) and \(I_{2}\). Then we can write \[\alpha =\sum_{\alpha_{i}\in\Delta\smallsetminus\Delta^{\theta}}n_{i}\alpha_{i}+\sum_{\beta_{i}\in\Delta^{\theta}}m_{i}\beta_{i}\] \[=\sum_{i\in I_{1}}\ell_{i}\alpha_{i}+\sum_{i\in I_{2}}n_{i}\alpha_{i}+\sum_{\beta_{i}\in\Delta^{\theta}}m_{i}\beta_{i},\] and \[-\theta\alpha =\sum_{i\in I_{1}}\ell_{i}\alpha_{i}+\sum_{i\in I_{2}}n_{i}(\theta^{*}\alpha_{i}+\gamma_{i})-\sum_{\beta_{i}\in\Delta^{\theta}}m_{i}\beta_{i}\] \[=\sum_{i\in I_{1}}\ell_{i}\alpha_{i}+\sum_{i\in I_{2}}n_{i}\theta^{*}\alpha_{i}+\sum_{i\in I_{2}}n_{i}\gamma_{i}-\sum_{\beta_{i}\in\Delta^{\theta}}m_{i}\beta_{i}\] \[=\sum_{i\in I_{1}}\ell_{i}\alpha_{i}+\sum_{i\in I_{2}}n_{i^{*}}\alpha_{i}+\sum_{i\in I_{2}}n_{i}\gamma_{i}-\sum_{\beta_{i}\in\Delta^{\theta}}m_{i}\beta_{i}.\] Since \(-\theta\alpha=\alpha\), we have \(n_{i}=n_{i^{*}}\) for \(i\in I_{2}\). Hence we can write \[\alpha-\theta\alpha=2\sum_{i\in I_{1}}\ell_{i}\alpha_{i}+2\sum_{i\in I_{2}/\sim}n_{i}(\alpha_{i}+\theta^{*}\alpha_{i}+\gamma_{i})=2\sum_{i\in I_{1}}\ell_{i}\alpha_{i}+2\sum_{i\in I_{2}/\sim}n_{i}(\alpha_{i}-\theta\alpha_{i}),\] where \(I_{2}/\sim\) is the set of equivalence classes under the action of \(\theta^{*}\) on \(I_{2}\). Hence \(\alpha=\frac{1}{2}(\alpha-\theta\alpha)=\sum_{i\in I_{1}}\ell_{i}\alpha_{i}+\sum_{i\in I_{2}/\sim}n_{i}(\alpha_{i}-\theta\alpha_{i})\in\operatorname{span}_{\mathbb{Z}}(\Sigma^{1}_{\theta}\cup\Sigma^{2}_{\theta})\). Now, we are ready to prove our theorem. **Theorem 4.3**.: _Let \(\varphi_{\mathbf{1}}:WD_{F}\to G^{\vee}\) be the local Langlands parameter of the trivial representation. Then for any symmetric variety \(X=H\backslash G\), the parameter \(\varphi_{\mathbf{1}}\) factors through \(\varphi_{X}^{\vee}:G_{X}^{\vee}\times\operatorname{SL}_{2}(\mathbb{C})\to G^{\vee}\)._ Proof.: By definition, the restriction of \(\varphi_{X}^{\vee}\) to the maximal torus of \(\operatorname{SL}_{2}(\mathbb{C})\) is the sum of the \(\theta\)-invariant positive coroots. Also by construction, the image of \(G_{X}^{\vee}\) contains the images of the cocharacters of the form \(\alpha\) if \(\alpha\) is of type 1 and \(\alpha-\theta\alpha\) if of type 2, where we identify the cocharacters of \(G^{\vee}\) with the characters of \(G\). Hence the image of any \(\mathbb{Z}\)-linear combination of these cocharacters is contained in the image of \(G_{X}^{\vee}\). Hence, by Lemma 4.2 and the decomposition \(2\rho=2\rho_{+}+2\rho_{-}\), the image of the cocharacter \(2\rho^{\vee}:\mathbb{C}^{\times}\to G^{\vee}\) is contained in the image of \(\varphi_{X}^{\vee}\). The theorem follows. ## 5. Examples In this section, we verify our conjectures by numerous examples. We occasionally assume that our \(p\)-adic field \(F\) has odd residual characteristic. 
Whenever we do so (except in Section 5.5), it is because the Casselman-type criteria of [11, 12] are used. ### Group Case Let us consider the case known as the group case; namely \(G=H\times H\), where the involution \(\theta\) switches the two factors. Then the \(\theta\)-fixed points are the diagonal \(\Delta H\), which is of course isomorphic to \(H\). Then \(\pi\in\operatorname{Irr}(H\times H)\) is \(\Delta H\)-distinguished if and only if \(\pi=\tau\otimes\tau^{\vee}\), where \(\tau\in\operatorname{Irr}(H)\) and \(\tau^{\vee}\) is its contragredient. On the dual side, \(G^{\vee}=H^{\vee}\times H^{\vee}\), and by the choice of positive roots (2.1) the root data of the two copies of \(H^{\vee}\) have opposite orderings. Also the \(\theta\)-split torus is of the form \(\{(s,s^{-1})\}\subseteq S\times S\), where \(S\) is a maximal torus of \(H^{\vee}\) (any choice will do). Note that no root of \(H^{\vee}\times H^{\vee}\) is fixed by \(\theta\), which implies \(G^{\vee+}\) is trivial and hence the restriction of \(\varphi_{X}^{\vee}:G_{X}^{\vee}\times\operatorname{SL}_{2}(\mathbb{C})\to G^{\vee}\) to \(\operatorname{SL}_{2}(\mathbb{C})\) is trivial. One can then see that \(\varphi_{X}^{\vee}\) is given by \[H^{\vee}\longrightarrow H^{\vee}\times H^{\vee},\quad h\mapsto(h,c_{H^{\vee}}(h)),\] where \(c_{H^{\vee}}\) is an involution on \(H^{\vee}\) which acts on the torus \(S\subseteq H^{\vee}\) as \(c_{H^{\vee}}(s)=w_{H^{\vee}}(s^{-1})\), where \(w_{H^{\vee}}\) is the longest element of the Weyl group of \(H^{\vee}\). This involution is known as the Chevalley involution, and it is conjectured by Prasad in [10, Conjecture 2] that the Chevalley involution of a local Langlands parameter corresponds to the contragredient. Hence our Conjecture (I) is equivalent to this conjecture of Prasad. Also, in the group case the relative matrix coefficients are the usual matrix coefficients, and hence \(\tau\otimes\tau^{\vee}\) is relatively cuspidal (resp. relatively square integrable, resp. relatively tempered) if and only if \(\tau\) is cuspidal (resp. square integrable, resp. tempered). Thus the three statements in Conjecture (II) are well-known (conjectural) properties of the local Langlands correspondence. ### Linear period of \(\operatorname{GL}_{2n}\) Consider the case \(X=(\operatorname{GL}_{n}\times\operatorname{GL}_{n})\backslash\operatorname{GL}_{2n}\), so we consider a \(\operatorname{GL}_{n}\times\operatorname{GL}_{n}\)-period of \(\operatorname{GL}_{2n}\), which is often known as a linear period. Then \(X\) is a symmetric variety with \(\theta=\operatorname{Int}(\begin{pmatrix}&J_{n}\\ J_{n}&\end{pmatrix})\), where \(J_{n}\) is the \(n\times n\) anti-diagonal identity matrix. One can readily check that the dual of \(\theta\) acts on \(\operatorname{GL}_{2n}^{\vee}=\operatorname{GL}_{2n}(\mathbb{C})\) in the same way. One can then see that the \(\theta\)-split torus \(T^{\vee-}\) of \(\operatorname{GL}_{2n}(\mathbb{C})\) is \[T^{\vee-}=\{\operatorname{diag}(t_{1},\dots,t_{n},t_{n}^{-1},\dots,t_{1}^{-1})\,:\,t_{i}\in\mathbb{C}^{\times}\},\] and there is no \(\theta\)-invariant root. Then the usual choice \(\Delta=\{\alpha_{1},\dots,\alpha_{n},\alpha_{n+1},\dots,\alpha_{2n-1}\}\) gives a \(\theta\)-order on the root datum of \(\operatorname{GL}_{2n}(\mathbb{C})\), where \(\alpha_{i}=e_{i}-e_{i+1}\). 
One can then see that \[\overline{\alpha}_{i}=\overline{\alpha}_{2n-i}=\overline{e_{i}-e_{i+1}}\quad\text{if $1\leq i\leq n-1$},\quad\text{and}\quad\overline{\alpha}_{n}=2\overline{e_{n}}.\] Hence the \(\theta\)-split root datum is that of \(\operatorname{Sp}_{2n}\), so \(G_{X}^{\vee}=\operatorname{Sp}_{2n}(\mathbb{C})\). Since there is no \(\theta\)-invariant root, the restriction of \(\varphi_{X}^{\vee}:G_{X}^{\vee}\times\operatorname{SL}_{2}(\mathbb{C})\to G^{\vee}\) to \(\operatorname{SL}_{2}(\mathbb{C})\) is trivial, and hence we just write \(\varphi_{X}^{\vee}:G_{X}^{\vee}\to G^{\vee}\), which is nothing but the inclusion \(\operatorname{Sp}_{2n}(\mathbb{C})\subseteq\operatorname{GL}_{2n}(\mathbb{C})\); namely we have \[\varphi_{X}^{\vee}:\operatorname{Sp}_{2n}(\mathbb{C})\hookrightarrow\operatorname{GL}_{2n}(\mathbb{C}).\] Now, Conjecture (I) implies that if \(\pi\in\operatorname{Irr}(\operatorname{GL}_{2n})\) is \(\operatorname{GL}_{n}\times\operatorname{GL}_{n}\)-distinguished then its \(L\)-parameter \(\varphi_{\pi}\) factors through the inclusion \(\operatorname{Sp}_{2n}(\mathbb{C})\subseteq\operatorname{GL}_{2n}(\mathbb{C})\), or equivalently \(\varphi_{\pi}\) is of symplectic type. If \(n=1\) then \(X=T\backslash\operatorname{GL}_{2}\), where \(T\) is a split torus. This is the well-known case studied by Waldspurger [20], according to which \(\pi\) has a \(T\)-period if and only if \(\pi\) has a trivial central character. Hence Conjecture (I) along with its converse holds. Furthermore, by explicitly calculating the Jacquet modules, one can check that all of Conjecture (II) along with the converse of (II-a) holds, assuming the residue characteristic of \(F\) is odd. The details are left to the reader. The case for general \(n\) has been worked out by many people, especially by relating the linear period with the Shalika period. (See [11, 12, 13, 14, 15, 16, 17].) In particular, it is known that a square integrable \(\pi\in\operatorname{Irr}(\operatorname{GL}_{2n})\) is \(\operatorname{GL}_{n}\times\operatorname{GL}_{n}\)-distinguished if and only if \(\pi\) is of symplectic type. (See [11, Proposition 6.1].) This proves Conjecture (I) and its converse for square integrable \(\pi\). Also, when \(\pi\) is the generalized Steinberg representation \(St_{k}(\rho)\) in the sense of [11], Theorem 6.1 of [11] says that \(\pi\) is \(\operatorname{GL}_{n}\times\operatorname{GL}_{n}\)-distinguished if and only if \(\rho\) is of symplectic type for \(k\) odd and of orthogonal type for \(k\) even. But the \(L\)-parameter of \(St_{k}(\rho)\) is of the form \(\varphi_{\rho}\otimes S_{k}\), where \(\varphi_{\rho}\) is the \(L\)-parameter of \(\rho\) and \(S_{k}\) is the \(k\)-dimensional irreducible representation of \(\operatorname{SL}_{2}(\mathbb{C})\). Since \(S_{k}\) is of orthogonal type for \(k\) odd and of symplectic type for \(k\) even, we know that \(St_{k}(\rho)\) is always of symplectic type. Thus Conjecture (I) holds for \(St_{k}(\rho)\). Assume \(F\) has odd residual characteristic. In [16, Theorem 6.3], Smith constructed a large family of relative discrete series \(\operatorname{GL}_{n}\times\operatorname{GL}_{n}\)-distinguished representations which are not themselves discrete series. Later, in [16, Theorem 3.3], he showed that they are of symplectic type and their \(L\)-parameters are not in a proper Levi of \(\operatorname{Sp}_{2n}(\mathbb{C})\), showing one direction of Conjecture (II-b) for this family of representations. 
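Let us also make the \(n=1\) case of the symplectic-type condition completely explicit (this reformulation is ours, though each step is standard): since \(\operatorname{Sp}_{2}(\mathbb{C})=\operatorname{SL}_{2}(\mathbb{C})\), a parameter \(\varphi_{\pi}:WD_{F}\to\operatorname{GL}_{2}(\mathbb{C})\) factors through \(\varphi_{X}^{\vee}\) if and only if \[\det\circ\varphi_{\pi}=\mathbf{1},\] and under the local Langlands correspondence for \(\operatorname{GL}_{2}\) the character \(\det\circ\varphi_{\pi}\) corresponds to the central character \(\omega_{\pi}\). So for \(n=1\) the condition of Conjecture (I) is exactly the triviality of \(\omega_{\pi}\), matching Waldspurger's criterion quoted above.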
Also, when \(n=2\) and \(F\) has odd residual characteristic, some relatively cuspidal \(\operatorname{GL}_{2}\times\operatorname{GL}_{2}\)-distinguished representations have been studied. Indeed, by making use of the local Langlands correspondence for \(\operatorname{GSp}_{4}\) [12], the author can prove Conjectures (I) and (II) together with the converses of Conjectures (I) and (II-a). This result will appear elsewhere. ### \(\operatorname{GL}_{n-1}\times\operatorname{GL}_{1}\)-period of \(\operatorname{GL}_{n}\) Consider the case \(X=(\operatorname{GL}_{n-1}\times\operatorname{GL}_{1})\backslash\operatorname{GL}_{n}\). This is a symmetric variety with \(\theta=\operatorname{Int}(\begin{pmatrix}&&1\\ &I_{n-2}&\\ 1&&\end{pmatrix})\), where \(I_{n-2}\) is the \((n-2)\times(n-2)\) identity matrix. The dual of \(\theta\) acts in the same way on the dual \(\operatorname{GL}_{n}^{\vee}=\operatorname{GL}_{n}(\mathbb{C})\), and the \(\theta\)-invariant torus \(T^{\vee+}\) and the \(\theta\)-split torus \(T^{\vee-}\) are respectively \[T^{\vee+}=\{\operatorname{diag}(1,t_{2},\ldots,t_{n-1},1)\,:\,t_{i}\in\mathbb{C}^{\times}\};\] \[T^{\vee-}=\{\operatorname{diag}(t,1,\ldots,1,t^{-1})\,:\,t\in\mathbb{C}^{\times}\}.\] The usual choice of the simple roots \(\Delta=\{\alpha_{1},\ldots,\alpha_{n-1}\}\) gives a \(\theta\)-order, and \(\Delta^{\theta}=\{\alpha_{2},\ldots,\alpha_{n-2}\}\). Then \(\Delta_{\theta}=\{\alpha_{1},\alpha_{n-1}\}\) for \(n>3\) and \(\Delta_{\theta}=\{\alpha_{1}+\alpha_{n-1}\}\) for \(n=3\). (Note that, as we have seen in Example 2.7, if \(n=3\), then \(\alpha_{1}-\theta\alpha_{1}=\alpha_{1}+\alpha_{2}\) is a root.) Noting \(\overline{\alpha}_{1}=\overline{\alpha}_{n-1}\), one can see that the \(\theta\)-split root datum is that of \(\operatorname{SL}_{2}(\mathbb{C})\). Furthermore, in this case, \(-\theta\alpha_{1}=-\theta(e_{1}-e_{2})=e_{2}-e_{n}\), and hence \(\alpha_{1}-\theta\alpha_{1}=e_{1}-e_{n}\), which is orthogonal to all the roots in \(\Delta^{\theta}\). This implies that the two groups \(G^{\vee+}\) and \(G^{\vee-}\) commute with each other, because the former is generated by \(\Delta^{\theta}\) and the latter by \(\{\alpha_{1}-\theta\alpha_{1}\}\). Indeed, one can see that \(G^{\vee-}\times G^{\vee+}=\operatorname{SL}_{2}(\mathbb{C})\times\operatorname{GL}_{n-2}(\mathbb{C})\) and this embeds into \(\operatorname{GL}_{n}(\mathbb{C})\) as \[\varphi_{X}^{\vee}:\operatorname{SL}_{2}(\mathbb{C})\times\operatorname{GL}_{n-2}(\mathbb{C})\longrightarrow\operatorname{GL}_{n}(\mathbb{C}),\quad(\begin{pmatrix}a&b\\ c&d\end{pmatrix},g)\mapsto\begin{pmatrix}a&&b\\ &g&\\ c&&d\end{pmatrix}.\] Accordingly, the map \(\varphi_{X}^{\vee}:\operatorname{SL}_{2}(\mathbb{C})\times\operatorname{SL}_{2}(\mathbb{C})\rightarrow\operatorname{GL}_{n}(\mathbb{C})\) is given by pre-composing the above embedding with the principal \(\operatorname{SL}_{2}(\mathbb{C})\to\operatorname{GL}_{n-2}(\mathbb{C})\) in the second factor. It is good to observe that the sum of positive coroots of \(\operatorname{GL}_{n}(\mathbb{C})\) in fact factors through \(\varphi_{X}^{\vee}\), so that the \(L\)-parameter of the trivial representation factors through it. To see it, recall that the sum of positive coroots of \(G^{\vee+}=\operatorname{GL}_{n-2}(\mathbb{C})\) factors through the principal \(\operatorname{SL}_{2}(\mathbb{C})\to\operatorname{GL}_{n-2}(\mathbb{C})\). 
The remaining positive coroots are \(e_{1}^{\vee}-e_{2}^{\vee},\ldots,e_{1}^{\vee}-e_{n}^{\vee},e_{2}^{\vee}-e_{n}^{\vee},\ldots,e_{n-1}^{\vee}-e_{n}^{\vee}\). The sum of them is \((n-1)(e_{1}^{\vee}-e_{n}^{\vee})\), which has its image in the image of the first \(\operatorname{SL}_{2}(\mathbb{C})\). In the literature, instead of \(\operatorname{GL}_{n-1}\times\operatorname{GL}_{1}\)-distinguished representations, \(\operatorname{GL}_{n-1}\)-distinguished representations have been more often studied. But if \(\pi\in\operatorname{Irr}(\operatorname{GL}_{n})\) has a trivial central character, then \(\pi\) is \(\operatorname{GL}_{n-1}\)-distinguished if and only if it is \(\operatorname{GL}_{n-1}\times\operatorname{GL}_{1}\)-distinguished, because \(Z_{\operatorname{GL}_{n}}\operatorname{GL}_{n-1}=\operatorname{GL}_{n-1}\times\operatorname{GL}_{1}\), where \(Z_{\operatorname{GL}_{n}}\) is the center of \(\operatorname{GL}_{n}\). Hence the \(\operatorname{GL}_{n-1}\times\operatorname{GL}_{1}\)-distinguished representations are subsumed under the \(\operatorname{GL}_{n-1}\)-distinguished representations. Now, for \(n=3\), Prasad [10, Theorem 2, 169] gave a complete list of distinguished representations. According to Prasad's list, \(\pi\in\operatorname{Irr}(\operatorname{GL}_{3})\) is \(\operatorname{GL}_{2}\times\operatorname{GL}_{1}\)-distinguished if and only if \(\pi\) is either trivial or of the form \(\operatorname{Ind}_{P_{2,1}}^{\operatorname{GL}_{3}}\rho\otimes\mathbf{1}\), where \(P_{2,1}\) is the \((2,1)\)-parabolic subgroup and \(\rho\) is an infinite dimensional representation of \(\operatorname{GL}_{2}\). (Since we are assuming the central character of \(\pi\) is trivial, the cases (2) and (3) of Prasad's list [10, Theorem 2, 169] never happen.) We know that the \(L\)-parameter of \(\mathbf{1}\) factors through \(\varphi_{X}^{\vee}\). For \(\pi=\operatorname{Ind}_{P_{2,1}}^{\operatorname{GL}_{3}}\rho\otimes\mathbf{1}\), the corresponding \(L\)-parameter is of the form \(\varphi_{\rho}\oplus 1\), where \(\varphi_{\rho}\) is the \(L\)-parameter of \(\rho\). Since the central character of \(\rho\) is trivial, it certainly factors through \(\varphi_{X}^{\vee}\). Hence Conjecture (I) is satisfied. As for Conjecture (II), Kato and Takano have shown that if \(\rho\) is supercuspidal then \(\operatorname{Ind}_{P_{2,1}}^{\operatorname{GL}_{3}}\rho\otimes\mathbf{1}\) is relatively cuspidal ([11, Proposition 8.2.3]), and if \(\rho\) is square integrable then \(\operatorname{Ind}_{P_{2,1}}^{\operatorname{GL}_{3}}\rho\otimes\mathbf{1}\) is relatively square integrable ([11, 5.1]). One can also prove that if \(\rho\) is tempered then \(\operatorname{Ind}_{P_{2,1}}^{\operatorname{GL}_{3}}\rho\otimes\mathbf{1}\) is relatively tempered, by using the same computation as in [11, 5.1]. For this case, the converse of Conjecture (I) already fails when \(\rho\) is not infinite dimensional. Indeed, one can readily see that the \(L\)-parameter of \(\operatorname{Ind}_{P_{2,1}}^{\operatorname{GL}_{3}}\mathbf{1}\otimes\mathbf{1}\) also factors through \(\varphi_{X}^{\vee}\). Yet, this induced representation is not in Prasad's list. But this is the only case where the \(L\)-parameter factors through \(\varphi_{X}^{\vee}\) but the corresponding representation is not \(\operatorname{GL}_{2}\times\operatorname{GL}_{1}\)-distinguished. For general \(n\), to the best of our knowledge, there is no exhaustive list of \(\operatorname{GL}_{n-1}\times\operatorname{GL}_{1}\)-distinguished representations. 
But it is conjectured by Prasad ([10, Conjecture 1]) that if \(\pi\in\operatorname{Irr}(\operatorname{GL}_{n})\) has the trivial central character then \(\pi\) is \(\operatorname{GL}_{n-1}\times\operatorname{GL}_{1}\)-distinguished if and only if either \(\pi=\mathbf{1}\) or \(\pi=\operatorname{Ind}_{P_{2,n-2}}^{\operatorname{GL}_{n}}\rho\otimes\mathbf{1}_{\operatorname{GL}_{n-2}}\) for an infinite dimensional representation \(\rho\) of \(\operatorname{GL}_{2}\) with trivial central character, where \(P_{2,n-2}\) is the \((2,n-2)\)-parabolic. Hence this conjecture implies Conjecture (I). Also, it is observed by Kato and Takano [11, Proposition 8.2.3] that no irreducible supercuspidal representation of \(\operatorname{GL}_{n}\) is \(\operatorname{GL}_{n-1}\times\operatorname{GL}_{1}\)-distinguished. This is consistent with Conjecture (I), because the image of \(\varphi_{X}^{\vee}\) is in a proper Levi of \(\operatorname{GL}_{n}(\mathbb{C})\). Finally, Kato and Takano have shown that if \(\rho\) is a supercuspidal representation of \(\operatorname{GL}_{2}\) with trivial central character then the induced representation \(\operatorname{Ind}_{P_{2,n-2}}^{\operatorname{GL}_{n}}\rho\otimes\mathbf{1}\) is irreducible, \(\operatorname{GL}_{n-1}\times\operatorname{GL}_{1}\)-distinguished, and relatively cuspidal. This is consistent with Conjectures (I) and (II-a). ### Symplectic period of \(\operatorname{GL}_{2n}\) Let us consider \(X=\operatorname{Sp}_{2n}\backslash\operatorname{GL}_{2n}\). Let \[J_{n}=\begin{pmatrix}\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}&&\\ &\ddots&\\ &&\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\end{pmatrix}\] be the block diagonal matrix with \(n\) blocks \(\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\), and define the involution \(\theta\) by \(\theta(g)=J_{n}{}^{t}g^{-1}J_{n}^{-1}\). Then the set of \(\theta\)-fixed points is \(\operatorname{Sp}_{2n}\). The dual of \(\theta\) acts on the dual group \(\operatorname{GL}_{2n}(\mathbb{C})\) in the same way, and \[T^{\vee+}=\{\operatorname{diag}(t_{1},t_{1}^{-1},t_{2},t_{2}^{-1},\ldots,t_{n},t_{n}^{-1})\,:\,t_{i}\in\mathbb{C}^{\times}\};\] \[T^{\vee-}=\{\operatorname{diag}(t_{1},t_{1},t_{2},t_{2},\ldots,t_{n},t_{n})\,:\,t_{i}\in\mathbb{C}^{\times}\}.\] One can readily see that \[\theta(e_{i}-e_{i+1})=\begin{cases}e_{i}-e_{i+1}&\text{if $i$ is odd};\\ -(e_{i-1}-e_{i+2})&\text{if $i$ is even},\end{cases}\] and hence the usual choice of simple roots \(\Delta=\{e_{1}-e_{2},\ldots,e_{2n-1}-e_{2n}\}\) gives a \(\theta\)-order. Note that \[\Delta^{\theta}=\{e_{i}-e_{i+1}\,:\,i\text{ is odd}\}\quad\text{and}\quad\Delta_{\theta}=\{e_{i}-e_{i+1}\,:\,i\text{ is even}\}.\] (We have chosen the above \(J_{n}\) instead of the ones usually used to define \(\operatorname{Sp}_{2n}\) precisely because this choice makes the usual choice of the simple roots give a \(\theta\)-order.) Since all the roots in \(\Delta^{\theta}\) are orthogonal, one can see that \[G^{\vee+}=\operatorname{SL}_{2}(\mathbb{C})\times\operatorname{SL}_{2}(\mathbb{C})\times\cdots\times\operatorname{SL}_{2}(\mathbb{C}),\] which embeds in \(\operatorname{GL}_{2n}(\mathbb{C})\) block diagonally. 
On the other hand, the \(\theta\)-split root datum is that of \(\operatorname{GL}_{n}(\mathbb{C})\), so that \(G^{\vee}_{X}=G^{\vee-}=\operatorname{GL}_{n}(\mathbb{C})\), and this \(\operatorname{GL}_{n}(\mathbb{C})\) embeds into \(\operatorname{GL}_{2n}(\mathbb{C})\) as \[\begin{pmatrix}a_{11}&\cdots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{n1}&\cdots&a_{nn}\end{pmatrix}\mapsto\begin{pmatrix}a_{11}I_{2}&\cdots&a_{1n}I_{2}\\ \vdots&\ddots&\vdots\\ a_{n1}I_{2}&\cdots&a_{nn}I_{2}\end{pmatrix},\] where \(I_{2}\) is the \(2\times 2\) identity matrix. If \(V_{n}\) is an \(n\)-dimensional complex vector space and \(V_{2}\) a \(2\)-dimensional complex vector space, then the map \(\varphi^{\vee}_{X}:\operatorname{GL}_{n}(\mathbb{C})\times\operatorname{SL}_{2}(\mathbb{C})\to\operatorname{GL}_{2n}(\mathbb{C})=\operatorname{GL}(V_{n}\otimes V_{2})\) is realized as the representation \(std_{n}\otimes S_{2}\), where \(std_{n}\) is the standard representation of \(\operatorname{GL}_{n}(\mathbb{C})\) on \(V_{n}\) and \(S_{2}\) is the standard representation of \(\operatorname{SL}_{2}(\mathbb{C})\) on \(V_{2}\); namely we have \[\varphi^{\vee}_{X}:\operatorname{GL}_{n}(\mathbb{C})\times\operatorname{SL}_{2}(\mathbb{C})\longrightarrow\operatorname{GL}(V_{n}\otimes V_{2}),\quad\varphi^{\vee}_{X}=std_{n}\otimes S_{2}.\] One can then see that the image of each \(L\)-parameter \(\varphi:WD_{F}\to G^{\vee}_{X}\times\operatorname{SL}_{2}(\mathbb{C})\to\operatorname{GL}_{2n}(\mathbb{C})\) is of the form \[\begin{pmatrix}\varphi_{\rho}(w)|w|^{\frac{1}{2}}\\ &\varphi_{\rho}(w)|w|^{-\frac{1}{2}}\end{pmatrix},\] where \(\varphi_{\rho}:WD_{F}\to G^{\vee}_{X}=\operatorname{GL}_{n}(\mathbb{C})\) is the first component of \(\varphi\). Hence, if \(\pi\in\operatorname{Irr}(\operatorname{GL}_{2n})\) is \(\operatorname{Sp}_{2n}\)-distinguished then Conjecture (I) implies that \(\pi\) has to be a constituent of \[I_{\rho}:=\operatorname{Ind}_{P_{n,n}}^{\operatorname{GL}_{2n}}\left(\rho|\det|^{\frac{1}{2}}\otimes\rho|\det|^{-\frac{1}{2}}\right)\] for some \(\rho\in\operatorname{Irr}(\operatorname{GL}_{n})\), where \(P_{n,n}\) is the \((n,n)\)-parabolic. Further, Conjecture (II) implies that (a) if \(\rho\) is supercuspidal then \(\pi\) is relatively cuspidal, (b) \(\rho\) is square integrable if and only if \(\pi\) is relatively square integrable, and (c) \(\rho\) is tempered if and only if \(\pi\) is relatively tempered. Now, the \(\operatorname{Sp}_{2n}\)-distinguished representations were first studied by Heumos and Rallis [10], in which they showed that for square integrable \(\rho\) the Langlands quotient \(\pi_{\rho}\) of \(I_{\rho}\) is distinguished. Then, assuming the residue characteristic of \(F\) is odd, Kato and Takano proved that if \(\rho\) is supercuspidal then \(\pi_{\rho}\) is relatively cuspidal ([11, Proposition 8.3.4]), and Smith ([14, Theorem 6.1]) proved that if \(\rho\) is square integrable then \(\pi_{\rho}\) is relatively square integrable. All these results are consistent with our conjectures. ### Orthogonal period of \(\operatorname{GL}_{n}\) Consider the case \(X=\operatorname{O}_{n}\backslash\operatorname{GL}_{n}\), where \(\operatorname{O}_{n}\) is the split orthogonal group. This is a symmetric variety with the involution \(\theta\) given by \(\theta(g)={}^{t}g^{-1}\). The dual of \(\theta\) acts on the dual group \(\operatorname{GL}_{n}(\mathbb{C})\) in the same way, and the maximal torus of \(\operatorname{GL}_{n}(\mathbb{C})\) is \(\theta\)-split. 
Further, for each root \(\alpha\), we have \(\theta\alpha=-\alpha\). From these, one can readily see that \(G^{\vee+}=1\) and \(G_{X}^{\vee}=G^{\vee-}=\operatorname{GL}_{n}(\mathbb{C})\). Hence \(\varphi_{X}^{\vee}:G_{X}^{\vee}\times\operatorname{SL}_{2}(\mathbb{C})\to\operatorname{GL}_{n}(\mathbb{C})\) is trivial on the \(\operatorname{SL}_{2}(\mathbb{C})\)-factor and restricts to the identity map \(\operatorname{GL}_{n}(\mathbb{C})\to\operatorname{GL}_{n}(\mathbb{C})\) on \(G_{X}^{\vee}\), so Conjecture (I) trivially holds. An interesting question, however, is the converse of Conjecture (I). The converse certainly fails for the simple reason that if the central character \(\omega_{\pi}\) of \(\pi\in\operatorname{Irr}(\operatorname{GL}_{n})\) is not trivial upon restriction to \(\operatorname{O}_{n}\) (namely if \(\omega_{\pi}(-1)=-1\)), then \(\pi\) cannot be \(\operatorname{O}_{n}\)-distinguished. Yet, it seems this is the only obstruction for \(\pi\) to be \(\operatorname{O}_{n}\)-distinguished. Indeed, very recently, Zou [15] has shown that, assuming the residue characteristic of \(F\) is odd, a supercuspidal \(\pi\) is \(\operatorname{O}_{n}\)-distinguished if and only if \(\omega_{\pi}(-1)=1\). Considering that \(\varphi_{X}^{\vee}\) is merely the identity, it is probably safe to conjecture that \(\pi\in\operatorname{Irr}(\operatorname{GL}_{n})\) is \(\operatorname{O}_{n}\)-distinguished if and only if \(\omega_{\pi}(-1)=1\). ## 6. Galois case Let \(E/F\) be a quadratic extension of our local field \(F\), and \(G\) a reductive group over \(F\) which is split over \(E\). An important case is the so-called Galois case; namely \(\theta\) is the Galois conjugation on \(G(E)\), so that \(H=G(F)\) and our symmetric variety is \(X=G(F)\backslash G(E)\). For this case, the involution \(\theta\) is not defined over \(E\); rather it is an \(F\)-involution on the non-split group \(R_{E/F}G\), where \(R_{E/F}\) is the restriction of scalars as usual. For this case, our theory does not apply because \(R_{E/F}G\) is not split. In this section, however, we propose a way to generalize our theory to the Galois case, although our generalization does not seem to be completely satisfactory because it does not explain some of the known results already in the case \(G=\operatorname{GL}_{n}\). ### Dual groups for Galois case The Langlands dual group \((R_{E/F}G)^{\vee}\) is \(G^{\vee}\times G^{\vee}\), and the \(L\)-group is \((G^{\vee}\times G^{\vee})\rtimes W_{F}\), where the action of \(W_{F}\) for the semidirect product factors through \(W_{F}/W_{E}=\operatorname{Gal}(E/F)\) with the nontrivial element acting by switching the two factors, so that we can write it as \((G^{\vee}\times G^{\vee})\rtimes\operatorname{Gal}(E/F)\). Hence it makes sense to define the dual of the involution \(\theta\) on \((R_{E/F}G)^{\vee}\) as the involution switching the two factors. This is precisely the same as the group case; hence it makes sense to set \(G^{\vee}_{X}=G^{\vee}\), and \[\varphi_{X}^{\vee}:G^{\vee}\longrightarrow(G^{\vee}\times G^{\vee})\rtimes\operatorname{Gal}(E/F),\quad g\mapsto(g,c_{G^{\vee}}(g))\rtimes 1,\] where \(c_{G^{\vee}}\) is the Chevalley involution on \(G^{\vee}\). Then we might as well make our conjectures exactly in the same way as Conjectures (I) and (II). Then Conjecture (I) is translated as follows: Let \(\pi\in\operatorname{Irr}(R_{E/F}G(F))=\operatorname{Irr}(G(E))\). 
If \(\pi\) is \(G(F)\)-distinguished, its (conjectural) \(L\)-parameter \(\varphi_{\pi}\) should factor through \(\varphi_{X}^{\vee}\), namely \[\varphi_{\pi}:WD_{F}\longrightarrow G^{\vee}\xrightarrow{\varphi_{X}^{\vee}}(G^{\vee}\times G^{\vee})\rtimes\operatorname{Gal}(E/F).\] One can readily check that \(\pi\) has such a parameter if and only if \(\pi^{\vee}=\pi^{\theta}\), where \(\pi^{\vee}\) is the contragredient of \(\pi\) and \(\pi^{\theta}\) is the Galois conjugate of \(\pi\), which is defined by \(\pi^{\theta}(g)=\pi(\theta(g))\). Hence Conjecture (I) is equivalent to saying that if \(\pi\) is \(G(F)\)-distinguished then \(\pi^{\vee}=\pi^{\theta}\). Of course, the converse of Conjecture (I) says that if \(\pi^{\vee}=\pi^{\theta}\) then \(\pi\) is \(G(F)\)-distinguished. Conjecture (II), however, does not seem to immediately generalize to the Galois case, as indicated in the \(\operatorname{GL}_{n}\) case below, and hence our proposed theory probably requires more modifications. ### Galois case for \(\operatorname{GL}_{n}\) When \(G=\operatorname{GL}_{n}\), the Galois case has been well-studied. First, it was shown by Flicker [11] that if \(\pi\in\operatorname{Irr}(\operatorname{GL}_{n}(E))\) is \(\operatorname{GL}_{n}(F)\)-distinguished, then \(\pi^{\vee}=\pi^{\theta}\). (Also see [10, 2.3].) This is precisely Conjecture (I). The analogue of Conjecture (II), on the other hand, does not hold in this case. The direct analogue would say the following. Assume \(\pi\) is \(\operatorname{GL}_{n}(F)\)-distinguished, so that its \(L\)-parameter \(\varphi_{\pi}\) factors through \(\varphi_{X}^{\vee}\), namely \[\varphi_{\pi}:WD_{F}\longrightarrow\operatorname{GL}_{n}(\mathbb{C})\longrightarrow(\operatorname{GL}_{n}(\mathbb{C})\times\operatorname{GL}_{n}(\mathbb{C}))\rtimes\operatorname{Gal}(E/F),\] where we should note that the Weil-Deligne group is for \(F\). Now if \(\pi\) is, say, relatively square integrable, then the first arrow \(\varphi_{\pi}:WD_{F}\to\operatorname{GL}_{n}(\mathbb{C})\) has to be irreducible. Hence the restriction \(\varphi_{\pi}|_{WD_{E}}\) (viewed as an \(n\)-dimensional representation) is either irreducible or a sum of two irreducibles \(\varphi_{1}\oplus\varphi_{2}\). Hence \(\pi\) must be either square integrable or of the form \(\operatorname{Ind}_{P}^{\operatorname{GL}_{n}(E)}\rho_{1}\otimes\rho_{2}\), where \(\rho_{i}\) is the square integrable representation of \(\operatorname{GL}_{n/2}(E)\) corresponding to \(\varphi_{i}\). The cuspidal case is similar. But this is not consistent with the results of Kato-Takano [10, Theorem 3.5] and Smith [14, Theorem 6.3].
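To make the first conjecture concrete in the simplest Galois case, here is a minimal worked illustration of our own — it is not part of the results quoted above — for \(G=\operatorname{GL}_{1}\). A character \(\chi\in\operatorname{Irr}(E^{\times})\) is \(F^{\times}\)-distinguished exactly when \(\chi|_{F^{\times}}=1\), while the condition \(\chi^{\vee}=\chi^{\theta}\) unwinds as \[\chi(\theta(x))=\chi(x)^{-1}\ \text{for all}\ x\in E^{\times}\iff\chi(x\,\theta(x))=1\ \text{for all}\ x\in E^{\times}\iff\chi|_{N_{E/F}(E^{\times})}=1.\] Since \(N_{E/F}(E^{\times})\) has index \(2\) in \(F^{\times}\) by local class field theory, distinction indeed forces \(\chi^{\vee}=\chi^{\theta}\), as Conjecture (I) predicts; on the other hand, any extension to \(E^{\times}\) of the quadratic character of \(F^{\times}\) attached to \(E/F\) satisfies \(\chi^{\vee}=\chi^{\theta}\) without being distinguished, so the converse already fails for \(n=1\).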
2304.12262
Rapid Decay for Principal Étale Groupoids
This work concerns a generalization of property (RD) from discrete groups to twisted étale groupoids equipped with a length function. We show that, under the assumption that the étale groupoid is principal, twisted property (RD) is equivalent to polynomial growth. This generalizes a result of Chen and Wei concerning rapid decay for metric spaces with bounded geometry. Additionally, some permanence properties of groupoid (RD) are established.
Alex Weygandt
2023-04-24T17:02:25Z
http://arxiv.org/abs/2304.12262v2
# Rapid Decay for Principal Etale Groupoids ###### Abstract. This work concerns a generalization of property (RD) from discrete groups to twisted etale groupoids equipped with a length function. We show that, under the assumption that the etale groupoid is principal, twisted property (RD) is equivalent to polynomial growth. This generalizes a result of Chen and Wei concerning rapid decay for metric spaces with bounded geometry. Additionally, some permanence properties of groupoid (RD) are established. Key words and phrases:Etale groupoid; rapid decay; property (RD); polynomial growth 1991 Mathematics Subject Classification: 22A22, 46L05 This work was partially supported by NSF 1952693. ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Groupoids, twists, and their algebras * 2.2 Rapid decay for twisted etale groupoids * 3 Principal Groupoids * 4 Permanence Results ## 1. Introduction Let \(\Gamma\) be a discrete group, and let \(\ell\) be a length function on \(\Gamma\). This means that \(\ell\) is a mapping from \(\Gamma\) to the closed half-line \([0,\infty)\) which is subadditive, maps the identity of \(\Gamma\) to \(0\), and is inverse-invariant. We then say that \(\Gamma\) has _property (RD)_ (short for _rapid decay_) with respect to \(\ell\) if there exist constants \(C,t\geq 0\) such that for all finitely supported \(f:\Gamma\to\mathbb{C}\), we have \(\|f\|_{C^{*}_{r}\Gamma}\leq C\|f\|_{\ell,t}\). Here, \(\|\cdot\|_{C^{*}_{r}\Gamma}\) is the operator norm on \(\ell^{2}\Gamma\) given by convolution, and \(\|f\|_{\ell,t}=(\sum_{\gamma\in\Gamma}|f(\gamma)|^{2}(1+\ell(\gamma))^{2t})^{1/2}\) is the weighted \(\ell^{2}\)-norm. First shown to hold for free groups by Haagerup in [5], property (RD) for groups was formally defined by Jolissaint in [9]. In that paper, they showed that property (RD) is preserved under taking subgroups and extensions, and that it holds for groups of polynomial growth as well as for groups acting geometrically on hyperbolic spaces. Building on the latter, de la Harpe showed in [6] that property (RD) is enjoyed by all Gromov hyperbolic groups. Hence property (RD) is satisfied by a large class of groups, and in the past \(30+\) years many other classes of groups have been shown to satisfy property (RD). For a more thorough survey of the history, and of classes of groups for which property (RD) is known, unknown, or conjectured, we refer the reader to [1]. One of the first major applications of property (RD) to noncommutative geometry came in [4], where they used it to show that hyperbolic groups satisfy the Novikov conjecture. They noted that property (RD) for a finitely generated group \(\Gamma\) (with the word length function) gave a means of generating trace-like maps on the \(K\)-theory of \(C_{r}^{*}\Gamma\) from well-behaved cocycles on \(\Gamma\) (for a groupoid generalization see Proposition 6.4 in [7]). The next substantial application of property (RD) came from Lafforgue in [12], whose analysis showed that the Baum-Connes conjecture holds for a large class of groups satisfying property (RD). Here, it was important that property (RD) for \(\Gamma\) provides a Banach subalgebra \(A\) of \(C_{r}^{*}\Gamma\) containing \(\mathbb{C}\Gamma\) such that (i) the inclusion map \(A\hookrightarrow C_{r}^{*}\Gamma\) induces an isomorphism on \(K\)-theory, and (ii) the \(A\)-norm of elements of \(\mathbb{C}\Gamma\) depends only on the magnitude of their coefficients (see [11]). Several generalizations and analogues of rapid decay have appeared.
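Before recalling these generalizations, let us record — as an illustration of our own; the computation is the standard one behind the polynomial-growth criterion — why polynomial growth forces (RD). For finitely supported \(f:\Gamma\to\mathbb{C}\) and \(t\geq 0\), the triangle inequality and Cauchy-Schwarz give \[\|f\|_{C^{*}_{r}\Gamma}\leq\sum_{\gamma\in\Gamma}|f(\gamma)|=\sum_{\gamma\in\Gamma}|f(\gamma)|(1+\ell(\gamma))^{t}(1+\ell(\gamma))^{-t}\leq\|f\|_{\ell,t}\left(\sum_{\gamma\in\Gamma}(1+\ell(\gamma))^{-2t}\right)^{1/2}.\] If \(\ell\) is a word length whose balls satisfy \(|B(e,r)|\leq C(1+r)^{n}\), then the last sum is finite as soon as \(2t>n+1\), so \(\Gamma\) has (RD) with constant \(C'=(\sum_{\gamma}(1+\ell(\gamma))^{-2t})^{1/2}\); for instance \(\Gamma=\mathbb{Z}^{d}\) qualifies with \(n=d\). With this basic mechanism in mind, we return to the variants of (RD).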
Considering only functions on the group which are constant on spheres, one obtains what is called radial rapid decay, as considered by Valette in [22]. In the appendix of [15], Chatterji defines a rapid decay property for groups with a given \(2\)-cocycle. In another direction, one can consider rapid decay for representations of groups on \(L^{p}\) spaces for \(p\neq 2\), as done in [13]. Leaving the realm of groups, one can study a rapid decay property for metric spaces, as done in [3] and [8] (also considered below). In another direction, (RD) was also generalized to the setting of quantum groups, as in [23]. In [7], property (RD) was extended to the setting of etale groupoids. In their work, it is shown that several useful consequences of the group rapid decay property extend to this generalized setting, and some examples of groupoids admitting (RD) are given. Aside from groups, all examples of groupoids with the (RD) property satisfy the polynomial growth condition, defined below. In the present work, we show that for a large class of etale groupoids, this is about as much as one can expect. The goal of the present work is to introduce a rapid decay type property for etale groupoids with twists, the latter as introduced in [17]. This simultaneously generalizes the results of [7] and the appendix of [15]. We show that, under mild topological assumptions, principality conditions on the etale groupoid imply that rapid decay (with or without twists) is equivalent to polynomial growth of the length function (see Theorems 3.2 and 3.3). The rest of the paper is organized as follows. In section 2, we provide some background information on groupoids, twists, and their operator algebras, then define property (RD) and list some known consequences. Section 3 contains the main results of the paper, where we show that for continuous length functions and principal groupoids, property (RD) is equivalent to polynomial growth. The last section is devoted to studying some permanence properties of property (RD). ## 2. Preliminaries ### Groupoids, twists, and their algebras In this section we recall the definitions of groupoids, twists, and the (reduced) \(C^{*}\)-algebras associated to such data. For additional background information, we refer the reader to [16, 24, 20]. By a _groupoid_ we mean a small category in which every morphism is invertible. We will typically denote groupoids by the calligraphic letters \(\mathcal{G}\) and \(\mathcal{H}\). Given a groupoid \(\mathcal{G}\), the set of objects of \(\mathcal{G}\) (considered a subset of \(\mathcal{G}\) by identifying an object with its identity morphism) is called the _unit space_ of \(\mathcal{G}\), and is denoted \(\mathcal{G}^{(0)}\). Additionally, we denote the source and range maps by \(s,r:\mathcal{G}\to\mathcal{G}^{(0)}\), respectively; we denote the inverse map \(\mathcal{G}\to\mathcal{G}\) by \(\gamma\mapsto\gamma^{-1}\); and we consider composition in the category to be a map from \(\mathcal{G}^{(2)}:=\{(\gamma,\sigma)\in\mathcal{G}\times\mathcal{G}:s(\gamma)=r(\sigma)\}\) to \(\mathcal{G}\), written \((\gamma,\sigma)\mapsto\gamma\sigma\). Given two groupoids \(\mathcal{G}\), \(\mathcal{H}\), a _groupoid homomorphism_ from \(\mathcal{G}\) to \(\mathcal{H}\) is a map \(\varphi:\mathcal{G}\to\mathcal{H}\) that is compatible with the source, range, product, and inversion maps. Let \(\mathcal{G}\) be a groupoid.
Given \(x\in\mathcal{G}^{(0)}\), the _source fiber_ of \(x\) is the subset \(\mathcal{G}_{x}:=\{\gamma\in\mathcal{G}:s(\gamma)=x\}\), the _range fiber_ at \(x\) is \(\mathcal{G}^{x}:=\{\gamma\in\mathcal{G}:r(\gamma)=x\}\), and the _isotropy group_ at \(x\) is \(\mathcal{G}_{x}^{x}=\mathcal{G}_{x}\cap\mathcal{G}^{x}\). The _isotropy subgroupoid_ of \(\mathcal{G}\) is the subgroupoid \(\mathrm{Iso}(\mathcal{G})=\sqcup_{x\in\mathcal{G}^{(0)}}\mathcal{G}_{x}^{x}\). Note that there are inclusions \(\mathcal{G}^{(0)}\subset\mathrm{Iso}(\mathcal{G})\subset\mathcal{G}\). We say that \(\mathcal{G}\) is a _group bundle_ if \(\mathrm{Iso}(\mathcal{G})=\mathcal{G}\), and we say that \(\mathcal{G}\) is _principal_ if \(\mathrm{Iso}(\mathcal{G})=\mathcal{G}^{(0)}\). In this paper, a _topological groupoid_ is a groupoid \(\mathcal{G}\) equipped with a locally compact and Hausdorff topology such that all the structure maps are continuous, where the domain of the composition map, \(\mathcal{G}^{(2)}\), is given the relative product topology.1 A topological groupoid is said to be _etale_ if the source map is a local homeomorphism. A subset \(K\) of \(\mathcal{G}\) is a _bisection_ if it is contained in an open set \(U\) of \(\mathcal{G}\) such that the restrictions of the source and range maps to \(U\) are homeomorphisms onto open subsets of \(\mathcal{G}^{(0)}\). Footnote 1: Our assumptions that the topology be locally compact and Hausdorff are common, although not universal. We now list some of the basic facts about etale groupoids which will be used in the sequel. For proofs, one can consult the references listed at the beginning of this section. **Proposition 2.1**.: _Let \(\mathcal{G}\) be an etale groupoid._ 1. _The unit space_ \(\mathcal{G}^{(0)}\) _is a clopen subset of_ \(\mathcal{G}\)_._ 2. _For each_ \(x\in\mathcal{G}^{(0)}\)_, the source and range fibres_ \(\mathcal{G}_{x}\) _and_ \(\mathcal{G}^{x}\) _are discrete subspaces of_ \(\mathcal{G}\)_._ 3. _The collection of open bisections of_ \(\mathcal{G}\) _forms a basis for the topology of_ \(\mathcal{G}\)_._ 4. _The product map_ \(\mathcal{G}^{(2)}\to\mathcal{G}\)_,_ \((\gamma,\sigma)\mapsto\gamma\sigma\) _is an open map._ A rich source of examples of groupoids comes from the notion of an action of a groupoid on a space. Before defining groupoid actions, we recall the notion of a fibered product: If \(Y_{1},Y_{2},Z\) are sets and \(f_{i}:Y_{i}\to Z\) are surjective functions, the _fibered product of \(Y_{1}\) and \(Y_{2}\) over \(Z\) relative to the maps \(f_{1},f_{2}\)_ is the space \[Y_{1}\ {}_{f_{1}}\!*_{f_{2}}Y_{2}:=\{(y_{1},y_{2})\in Y_{1}\times Y_{2}:f_{1}(y_{1})=f_{2}(y_{2})\}.\] When \(Y_{1},Y_{2},Z\) are topological spaces and \(f_{1}\) and \(f_{2}\) are continuous, we endow \(Y_{1}\ {}_{f_{1}}\!*_{f_{2}}Y_{2}\) with the relative product topology it inherits from being a subspace of the product space \(Y_{1}\times Y_{2}\). **Definition 2.2**.: Let \(\mathcal{G}\) be a groupoid and let \(Y\) be a set. An _action of \(\mathcal{G}\) on \(Y\)_ is the data of a surjective map \(p:Y\to\mathcal{G}^{(0)}\), called the _anchor map_ for the action, and a map \[\mathcal{G}\ {}_{s}\!*_{p}Y\to Y,\qquad(\gamma,y)\mapsto\gamma\cdot y,\] such that the following conditions are satisfied: * If \(y\in Y\) and \((\gamma,y)\in\mathcal{G}\ {}_{s}\!*_{p}Y\), then \(p(\gamma\cdot y)=r(\gamma)\) and \(p(y)\cdot y=y\).
* If \((\eta,y)\in\mathcal{G}\ {}_{s}\!*_{p}Y\) and \((\gamma,\eta)\in\mathcal{G}^{(2)}\), then \(\gamma\cdot(\eta\cdot y)=(\gamma\eta)\cdot y\). When \(\mathcal{G}\) is a topological groupoid and \(Y\) is a locally compact Hausdorff space, an action of \(\mathcal{G}\) on \(Y\) is said to be _continuous_ if the anchor map \(p:Y\to\mathcal{G}^{(0)}\) and the product map \(\mathcal{G}\ {}_{s}\!*_{p}Y\to Y\) are continuous. Let \(\mathcal{G}\) be a groupoid, let \(Y\) be a set, and suppose \(\mathcal{G}\) acts on \(Y\) with anchor map \(p\). We define the _transformation groupoid_, denoted \(\mathcal{G}\ltimes Y\), associated to this action, as follows: as a set, \(\mathcal{G}\ltimes Y=\mathcal{G}\ {}_{s}\!*_{p}Y\). The source, range, and inverse maps are given, for \((\gamma,y)\in\mathcal{G}\ltimes Y\), as follows: \[s(\gamma,y)=(p(y),y),\qquad r(\gamma,y)=(p(\gamma\cdot y),\gamma\cdot y),\qquad(\gamma,y)^{-1}=(\gamma^{-1},\gamma\cdot y).\] The product in \(\mathcal{G}\ltimes Y\) is defined as follows: if \(((\gamma,y),(\eta,z))\in(\mathcal{G}\ltimes Y)^{(2)}\), then \(y=\eta\cdot z\) and \[(\gamma,\eta\cdot z)(\eta,z)=(\gamma\eta,z).\] When \(\mathcal{G}\) is a topological groupoid, \(Y\) is a locally compact Hausdorff space, and the action is continuous, \(\mathcal{G}\ltimes Y\) is a topological groupoid. Moreover, when \(\mathcal{G}\) is etale, so is \(\mathcal{G}\ltimes Y\). We now lay out our notation for the various vector spaces and algebras associated to groupoids we use in our analysis. Fix an etale groupoid \(\mathcal{G}\). The space \(C_{c}(\mathcal{G})\) of continuous and compactly supported functions \(\mathcal{G}\to\mathbb{C}\) is _a priori_ a vector space. We give it the structure of a \(*\)-algebra, with product and involution given by the formulas \[(f*g)(\gamma)=\sum_{\alpha\beta=\gamma}f(\alpha)g(\beta),\qquad f^{*}(\gamma)=\overline{f(\gamma^{-1})}\] for \(f,g\in C_{c}(\mathcal{G})\) and \(\gamma\in\mathcal{G}\).3 Footnote 3: Proving that \(f*g\) defined above lies in \(C_{c}(\mathcal{G})\) makes use of the étale condition. For \(f\in C_{c}(\mathcal{G})\), the sup-norm is denoted \(\|f\|_{\infty}=\sup_{\gamma\in\mathcal{G}}|f(\gamma)|\). While this is a \(C^{*}\)-norm for the pointwise operations, it fails to be submultiplicative for the product and involution defined above. In order to define a \(C^{*}\)-norm on \(C_{c}(\mathcal{G})\), we look at a natural class of representations of this algebra on Hilbert spaces. For each \(x\in\mathcal{G}^{(0)}\), let \(\mathbb{C}\mathcal{G}_{x}\) denote the space of functions \(\mathcal{G}_{x}\to\mathbb{C}\) with finite support, and let \(\ell^{2}\mathcal{G}_{x}\) denote the Hilbert space of square summable functions \(\mathcal{G}_{x}\to\mathbb{C}\). We define a representation \(\lambda_{x}:C_{c}(\mathcal{G})\to\mathbb{B}(\ell^{2}\mathcal{G}_{x})\), called the _(left) regular representation_ at \(x\), as follows: for \(f\in C_{c}(\mathcal{G})\), the operator \(\lambda_{x}(f)\in\mathbb{B}(\ell^{2}\mathcal{G}_{x})\) acts on \(\xi\in\ell^{2}\mathcal{G}_{x}\) via the formula \[[\lambda_{x}(f)\xi](\gamma)=\sum_{\eta\in\mathcal{G}_{x}}f(\gamma\eta^{-1})\xi(\eta)\] for all \(\gamma\in\mathcal{G}_{x}\).
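For orientation, here is the simplest non-group example, included as an illustration of our own with the notation just introduced. Let \(\mathcal{R}_{n}=[n]\times[n]\) be the pair groupoid on \([n]=\{1,\ldots,n\}\), with \(r(i,j)=i\), \(s(i,j)=j\), \((i,j)^{-1}=(j,i)\), and \((i,j)(j,k)=(i,k)\); it is etale (and principal) for the discrete topology. Then \(C_{c}(\mathcal{R}_{n})\cong M_{n}(\mathbb{C})\) as a \(*\)-algebra, since the convolution and involution become \[(f*g)(i,k)=\sum_{j=1}^{n}f(i,j)g(j,k),\qquad f^{*}(i,j)=\overline{f(j,i)},\] and for each unit \(x\in[n]\) the source fibre is \((\mathcal{R}_{n})_{x}=[n]\times\{x\}\cong[n]\), with \(\lambda_{x}(f)\) acting on \(\ell^{2}[n]\) as the matrix \((f(i,j))_{i,j}\). In particular, all the regular representations agree here up to the obvious identifications, and the norm defined next is the usual operator norm, so \(C^{*}_{r}\mathcal{R}_{n}\cong M_{n}(\mathbb{C})\).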
We then define the _reduced \(C^{*}\)-norm_ on \(C_{c}(\mathcal{G})\) by the formula \[\|f\|_{C_{r}^{*}\mathcal{G}}=\sup_{x\in\mathcal{G}^{(0)}}\|\lambda_{x}(f)\|_{\mathbb{B}(\ell^{2}\mathcal{G}_{x})}.\] The _reduced \(C^{*}\)-algebra_ of \(\mathcal{G}\), denoted \(C_{r}^{*}\mathcal{G}\), is then the completion of \(C_{c}(\mathcal{G})\) with respect to this norm. We now proceed to define twists over etale groupoids. Our notation will follow that of [20]. **Definition 2.3**.: Let \(\mathcal{G}\) be an etale groupoid. By a _twist_ over \(\mathcal{G}\) we mean a sequence of topological groupoids \[\mathcal{G}^{(0)}\times\mathbb{T}\xrightarrow{i}\mathcal{E}\xrightarrow{\pi}\mathcal{G},\] where \(\mathcal{G}^{(0)}\times\mathbb{T}\) is considered as a trivial group bundle over \(\mathcal{G}^{(0)}\), and \(i\) and \(\pi\) are continuous groupoid homomorphisms which restrict to homeomorphisms on unit spaces (we identify \(\mathcal{E}^{(0)}\) with \(\mathcal{G}^{(0)}\) via \(\pi\)), such that * the sequence is short-exact, meaning that \(i\) is injective, \(\pi^{-1}(\mathcal{G}^{(0)})=i(\mathcal{G}^{(0)}\times\mathbb{T})\), and \(\pi\) is surjective, * for all \(\varepsilon\in\mathcal{E}\) and \(z\in\mathbb{T}\), we have \(i(r(\varepsilon),z)\varepsilon=\varepsilon i(s(\varepsilon),z)\), and * every \(\gamma\in\mathcal{G}\) admits an open neighborhood \(U\subset\mathcal{G}\) and a continuous section \(S:U\to\mathcal{E}\) for the map \(\pi\) (meaning \(\pi\circ S=\mathrm{id}_{U}\)), such that the map \(U\times\mathbb{T}\to\pi^{-1}(U)\) given by \((\eta,z)\mapsto i(r(\eta),z)S(\eta)\) is a homeomorphism. The second condition is often seen as requiring that the image of \(i\) is "central" in \(\mathcal{E}\), and the third condition means that we can view the map \(\pi\) as a "locally trivial \(\mathcal{G}\)-bundle." As we have no need to consider multiple twists over the same groupoid, in the sequel we shall simply refer to the groupoid \(\mathcal{E}\) as a "twist" over \(\mathcal{G}\), leaving the maps \(i\) and \(\pi\) implicit, or we shall say "let \(\mathcal{E}\xrightarrow{\pi}\mathcal{G}\) be a twist" if the bundle map \(\pi\) needs to be made explicit. Let \(\mathcal{E}\) be a twist over an etale groupoid \(\mathcal{G}\). For \(\varepsilon\in\mathcal{E}\) and \(z\in\mathbb{T}\), we denote by \(z\cdot\varepsilon\) the element of \(\mathcal{E}\) given by \(i(r(\varepsilon),z)\varepsilon\). Similarly, let \(\varepsilon\cdot z=\varepsilon i(s(\varepsilon),z)\), so that \(z\cdot\varepsilon=\varepsilon\cdot z\) by the second condition in Definition 2.3. If \(\varepsilon_{1},\varepsilon_{2}\in\mathcal{E}\) and \(\pi(\varepsilon_{1})=\pi(\varepsilon_{2})\), then by [20, Lemma 11.1.3] there is a unique \([\varepsilon_{1},\varepsilon_{2}]\in\mathbb{T}\) such that \(\varepsilon_{1}=[\varepsilon_{1},\varepsilon_{2}]\cdot\varepsilon_{2}\). Let \(\Sigma_{c}(\mathcal{G},\mathcal{E})=\{f\in C_{c}(\mathcal{E}):f(z\cdot\varepsilon)=zf(\varepsilon)\text{ for all }z\in\mathbb{T},\varepsilon\in\mathcal{E}\}\). With pointwise addition and scalar multiplication, this is a \(\mathbb{C}\)-vector space. It is a \(*\)-vector space, with involution given by \((f^{*})(\varepsilon)=\overline{f(\varepsilon^{-1})}\) for \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\) and \(\varepsilon\in\mathcal{E}\).
To define a multiplication on \(\Sigma_{c}(\mathcal{G},\mathcal{E})\), fix a (not necessarily continuous) section \(\rho:\mathcal{G}\to\mathcal{E}\) for the map \(\pi\), and for \(f,g\in\Sigma_{c}(\mathcal{G},\mathcal{E})\) define \(f*g\in\Sigma_{c}(\mathcal{G},\mathcal{E})\) by \[(f*g)(\varepsilon)=\sum_{\gamma\in\mathcal{G}_{s(\varepsilon)}}f(\varepsilon\rho(\gamma)^{-1})g(\rho(\gamma)).\] By the \(\mathbb{T}\)-equivariance of functions in \(\Sigma_{c}(\mathcal{G},\mathcal{E})\), the above formula is independent of the chosen section \(\rho\). For each \(x\in\mathcal{G}^{(0)}\), define a representation \(\lambda_{x}^{\rho}\) of \(\Sigma_{c}(\mathcal{G},\mathcal{E})\) on \(\ell^{2}\mathcal{G}_{x}\) by extension of the above convolution formula: \[[\lambda_{x}^{\rho}(f)\xi](\gamma)=\sum_{\eta\in\mathcal{G}_{x}}f\left(\rho(\gamma)\rho(\eta)^{-1}\right)\xi(\eta).\] Up to unitary equivalence, this representation is independent of the chosen section \(\rho\). We define \(C_{r}^{*}(\mathcal{G},\mathcal{E})\) to be the completion of \(\Sigma_{c}(\mathcal{G},\mathcal{E})\) with respect to the norm \[\|f\|_{C_{r}^{*}(\mathcal{G},\mathcal{E})}=\sup_{x\in\mathcal{G}^{(0)}}\|\lambda_{x}^{\rho}(f)\|_{\mathbb{B}(\ell^{2}\mathcal{G}_{x})}.\] **Example 2.4**.: Let \(\mathcal{G}\) be an etale groupoid. If we consider \(\mathcal{G}\times\mathbb{T}\) as a topological groupoid, with the product topology and pointwise operations, then \[\mathcal{G}^{(0)}\times\mathbb{T}\xrightarrow{i}\mathcal{G}\times\mathbb{T}\xrightarrow{\pi}\mathcal{G},\] where \(i\) is the inclusion map and \(\pi(\gamma,z)=\gamma\), defines a twist over \(\mathcal{G}\), called the _trivial twist_ over \(\mathcal{G}\). There is a natural identification of \(\Sigma_{c}(\mathcal{G},\mathcal{G}\times\mathbb{T})\) with \(C_{c}(\mathcal{G})\), sending \(f\in\Sigma_{c}(\mathcal{G},\mathcal{G}\times\mathbb{T})\) to the map \(\mathcal{G}\ni\gamma\mapsto f(\gamma,1)\in\mathbb{C}\), which extends to an isomorphism from \(C_{r}^{*}(\mathcal{G},\mathcal{G}\times\mathbb{T})\) onto \(C_{r}^{*}\mathcal{G}\). ### Rapid decay for twisted etale groupoids In this subsection, we describe the basic properties of length functions on groupoids, and give a definition of property (RD) for twisted etale groupoids. We begin by recalling the definition of a length function on \(\mathcal{G}\), as given in [7]. **Definition 2.5**.: Let \(\mathcal{G}\) be a groupoid. By a _length function_ on \(\mathcal{G}\) we mean a map \(\ell:\mathcal{G}\to[0,\infty)\) satisfying the following conditions: * \(\ell(x)=0\) for any \(x\in\mathcal{G}^{(0)}\), * \(\ell(\gamma^{-1})=\ell(\gamma)\) for any \(\gamma\in\mathcal{G}\), and * \(\ell(\gamma\eta)\leq\ell(\gamma)+\ell(\eta)\) for any \((\gamma,\eta)\in\mathcal{G}^{(2)}\). Now suppose that \(\mathcal{G}\) is a topological groupoid. We say that the length function \(\ell\) is _continuous_ if it is continuous as a map from \(\mathcal{G}\) to \([0,\infty)\). A weaker condition that one can ask for is that \(\ell\) be _locally bounded_, meaning that \(\sup_{\gamma\in K}\ell(\gamma)\) is finite for all compact sets \(K\). As a partial converse to this definition, following [14] we say that \(\ell\) is _proper_ if for every subset \(K\subset\mathcal{G}\setminus\mathcal{G}^{(0)}\), finiteness of the quantity \(\sup_{\gamma\in K}\ell(\gamma)\) implies that \(K\) is pre-compact. **Example 2.6**.: 1.
Suppose \(\Gamma\) is a discrete group, \(\ell\) is a length function on \(\Gamma\), and suppose that \(\Gamma\) acts by homeomorphisms on the locally compact Hausdorff space \(X\). On the transformation groupoid \(\Gamma\ltimes X\) one can define a length function (still denoted \(\ell\)) by the formula \(\ell(\gamma,x)=\ell(\gamma)\). This induced length function is continuous, and proper if the length on \(\Gamma\) is proper and \(X\) is assumed to be compact. 2. More generally, suppose \(\mathcal{G}\) and \(\mathcal{H}\) are groupoids, and that \(\varphi:\mathcal{H}\to\mathcal{G}\) is a groupoid homomorphism. If \(\ell\) is a length function on \(\mathcal{G}\), then the formula \((\varphi^{*}\ell)(\eta)=\ell(\varphi(\eta))\) defines a length function on \(\mathcal{H}\). 3. Let \(\mathcal{G}\) be an etale groupoid, and assume that \(\mathcal{G}\) is compactly generated, meaning that there is a pre-compact subset \(K\subset\mathcal{G}\) such that every \(\gamma\in\mathcal{G}\) can be written as a product of elements of \(K\cup K^{-1}\). Given such a \(K\), we define a length \(\ell\) on \(\mathcal{G}\) by \(\ell(x)=0\) for \(x\in\mathcal{G}^{(0)}\), and by \[\ell(\gamma)=\min\{n\in\mathbb{N}:\gamma=\gamma_{1}\cdots\gamma_{n}\text{ for some }\gamma_{k}\in K\cup K^{-1}\}\] for \(\gamma\in\mathcal{G}\setminus\mathcal{G}^{(0)}\). Given a length function \(\ell\) on a groupoid \(\mathcal{G}\), for each \(x\in\mathcal{G}^{(0)}\) one can define a pseudometric \(\rho_{\ell,x}\) on the source fibre \(\mathcal{G}_{x}\) by the formula \(\rho_{\ell,x}(\gamma_{1},\gamma_{2})=\ell(\gamma_{1}\gamma_{2}^{-1})\). For \(\gamma\in\mathcal{G}_{x}\), \(r>0\), the _closed ball of radius \(r\) centered at \(\gamma\)_ with respect to this metric will be denoted \(B_{\ell}(\gamma,r)=\{\eta\in\mathcal{G}_{x}:\ell(\gamma\eta^{-1})\leq r\}\). In Section 4 of [14], they study the geometric structure a length function imposes on an etale groupoid. One particularly nice result, which we will use, is their _local slice lemma_, which we repeat below for convenience. **Lemma 2.7** ([14, Lemma 5.10]).: _Let \(\mathcal{G}\) be a \(\sigma\)-compact, etale groupoid, and let \(\ell\) be a continuous and proper length function on \(\mathcal{G}\). For every \(x\in\mathcal{G}^{(0)}\) and every pair of constants \(R,\varepsilon>0\), there exist a number \(R^{\prime}\in[R,R+\varepsilon)\), an open neighborhood \(V\subset\mathcal{G}^{(0)}\) of \(x\), an open subset \(W\) of \(\mathcal{G}\), and a homeomorphism \(\Phi:B_{\ell}(x,R^{\prime})\times V\to W\), satisfying the following conditions:_ 1. \(\Phi(x,y)=y\) _for any_ \(y\in V\)_,_ 2. \(\Phi(\gamma,x)=\gamma\) _for any_ \(\gamma\in B_{\ell}(x,R^{\prime})\)_,_ 3. \(\Phi(B_{\ell}(x,R^{\prime})\times\{y\})=B_{\ell}(y,R^{\prime})\) _for every_ \(y\in V\)_, and_ 4. \(|\ell(\gamma\eta^{-1})-\ell(\Phi(\gamma,y)\Phi(\eta,y)^{-1})|<\varepsilon\) _for all_ \(\gamma,\eta\in B_{\ell}(x,R^{\prime})\) _and all_ \(y\in V\)_._ This lemma is particularly useful, as it allows us to translate data from a finite subset of source fibres to nearby source fibres. We provide one modest example. **Corollary 2.8**.: _Let \(\mathcal{G}\) be a \(\sigma\)-compact, etale groupoid, and let \(\ell\) be a continuous and proper length function on \(\mathcal{G}\)._
For every \(x_{0}\in\mathcal{G}^{(0)}\) and \(R>0\), there is some open neighborhood \(V\subset\mathcal{G}^{(0)}\) of \(x_{0}\) such that \(|B_{\ell}(x,R)|=|B_{\ell}(x_{0},R)|\) for all \(x\in V\)._ We now define some seminorms on \(C_{c}(\mathcal{G})\) which will be relevant for our discussion of rapid decay type properties for \(\mathcal{G}\). **Definition 2.9**.: Let \(\mathcal{G}\) be an etale groupoid, let \(\mathcal{E}\xrightarrow{\pi}\mathcal{G}\) be a twist over \(\mathcal{G}\), and let \(\ell\) be a length function on \(\mathcal{G}\). Fix a section \(\rho:\mathcal{G}\to\mathcal{E}\) for the map \(\pi\). For each \(x\in\mathcal{G}^{(0)}\) and \(t\geq 0\), define seminorms on \(\Sigma_{c}(\mathcal{G},\mathcal{E})\) by \[\|f\|_{\mathcal{E},\ell,t,s,x}=\left(\sum_{\gamma\in\mathcal{G}_{x}}|f(\rho(\gamma))|^{2}(1+\ell(\gamma))^{2t}\right)^{1/2},\] \[\|f\|_{\mathcal{E},\ell,t,s}=\sup_{x\in\mathcal{G}^{(0)}}\|f\|_{\mathcal{E},\ell,t,s,x},\] \[\|f\|_{\mathcal{E},\ell,t}=\max\{\|f\|_{\mathcal{E},\ell,t,s},\|f^{*}\|_{\mathcal{E},\ell,t,s}\}.\] **Definition 2.10**.: Let \(\mathcal{G}\) be an etale groupoid, let \(\mathcal{E}\) be a twist over \(\mathcal{G}\), and let \(\ell\) be a length function on \(\mathcal{G}\). We say that \(\mathcal{G}\) has _\(\mathcal{E}\)-twisted rapid decay_ (or \(\mathcal{E}\)-(RD) for short) with respect to the length function \(\ell\) if there exist constants \(C,t\geq 0\) such that \[\|f\|_{C_{r}^{*}(\mathcal{G},\mathcal{E})}\leq C\|f\|_{\mathcal{E},\ell,t}\] for all \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\). In the case that \(\mathcal{E}=\mathcal{G}\times\mathbb{T}\) is the trivial twist, we identify \(\Sigma_{c}(\mathcal{G},\mathcal{E})\) with \(C_{c}(\mathcal{G})\), remove the \(\mathcal{E}\) in the subscript of the norms, and simply say that \(\mathcal{G}\) has the _rapid decay property_, or property (RD), when it has \(\mathcal{E}\)-(RD) for this twist. Immediately from the definition, one can see that this generalizes the notion of rapid decay for discrete groups: if \(\Gamma\) is a discrete group with a length function \(\ell\), then \(\Gamma\) has property (RD) with respect to \(\ell\) as in Definition 2.10 if and only if it satisfies property (RD) with respect to \(\ell\) as described in the first paragraph of the present work. We now proceed to show that, for a fixed length function, (RD) implies \(\mathcal{E}\)-(RD) for all twists \(\mathcal{E}\). This has been shown in the group case, but in order to adapt the proof to this setting, we need a lemma. **Lemma 2.11**.: _If \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\), then the map \(\mathcal{G}\to\mathbb{C}\), \(\gamma\mapsto|f(\rho(\gamma))|\), belongs to \(C_{c}(\mathcal{G})\)._ Proof.: First, we show the map has compact support. Observe that if \(\gamma\in\mathcal{G}\) and \(|f(\rho(\gamma))|\neq 0\), then \(\rho(\gamma)\in\mathrm{supp}(f)\). Thus \(\gamma=\pi(\rho(\gamma))\in\pi(\mathrm{supp}(f))\). As the latter set is compact, it follows that the set \[\{\gamma\in\mathcal{G}:|f(\rho(\gamma))|\neq 0\}\] has compact closure. Next, we show continuity. Fix \(\gamma_{0}\in\mathcal{G}\) and \(\varepsilon>0\). There is an open neighborhood \(U\subset\mathcal{G}\) of \(\gamma_{0}\) and a continuous section \(\rho_{U}:U\to\mathcal{E}\) of \(\pi\). As \(f\) is continuous, there is an open neighborhood \(V\subset\mathcal{E}\) of \(\rho_{U}(\gamma_{0})\) such that \(|f(\rho_{U}(\gamma_{0}))-f(\delta)|<\varepsilon\) whenever \(\delta\in V\).
Letting \(U_{0}=\rho_{U}^{-1}(V)\subset\mathcal{G}\), we see that \(U_{0}\) is an open neighborhood of \(\gamma_{0}\), and if \(\gamma\in U_{0}\), we have \[||f(\rho(\gamma_{0}))|-|f(\rho(\gamma))||=||f(\rho_{U}(\gamma_{0}))|-|f(\rho_{U}(\gamma))||\leq|f(\rho_{U}(\gamma_{0}))-f(\rho_{U}(\gamma))|<\varepsilon,\] where the first equality follows from the fact that \(f(z\cdot\delta)=zf(\delta)\) for all \(z\in\mathbb{T}\) and \(\delta\in\mathcal{E}\). **Proposition 2.12**.: _Let \(\mathcal{G}\) be an etale groupoid, and let \(\ell\) be a length function on \(\mathcal{G}\). If \(\mathcal{G}\) has (RD) with respect to \(\ell\), then it has \(\mathcal{E}\)-(RD) with respect to \(\ell\) for any twist \(\mathcal{E}\) over \(\mathcal{G}\)._ Proof.: With the above lemma, we can proceed as in the proof of Lemma 6.7 in [15]. Let \(\mathcal{E}\xrightarrow{\pi}\mathcal{G}\) be a twist over \(\mathcal{G}\), and let \(\rho:\mathcal{G}\to\mathcal{E}\) be a section for \(\pi\). As \(\mathcal{G}\) has property (RD), there are constants \(C,t\geq 0\) such that \(\|g\|_{C_{r}^{*}\mathcal{G}}\leq C\|g\|_{\ell,t}\) for all \(g\in C_{c}(\mathcal{G})\). Fix \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\), and define \(g\in C_{c}(\mathcal{G})\) by \(g(\gamma)=|f(\rho(\gamma))|\). If \(x\in\mathcal{G}^{(0)}\) and \(\xi\in\ell^{2}\mathcal{G}_{x}\), then for any \(\gamma\in\mathcal{G}_{x}\) we have \[|[\lambda_{x}^{\rho}(f)\xi](\gamma)|\leq\sum_{\eta\in\mathcal{G}_{x}}|f(\rho(\gamma\eta^{-1}))||\xi(\eta)|=[\lambda_{x}(g)|\xi|](\gamma).\] Summing over \(\gamma\in\mathcal{G}_{x}\) and taking a square root yields \[\|\lambda_{x}^{\rho}(f)\xi\|_{\ell^{2}\mathcal{G}_{x}}\leq\|\lambda_{x}(g)|\xi|\|_{\ell^{2}\mathcal{G}_{x}}\leq\|g\|_{C_{r}^{*}\mathcal{G}}\|\xi\|_{\ell^{2}\mathcal{G}_{x}}\leq C\|g\|_{\ell,t}\|\xi\|_{\ell^{2}\mathcal{G}_{x}}=C\|f\|_{\mathcal{E},\ell,t}\|\xi\|_{\ell^{2}\mathcal{G}_{x}}.\] Taking suprema, the above inequality implies \(\|f\|_{C_{r}^{*}(\mathcal{G},\mathcal{E})}\leq C\|f\|_{\mathcal{E},\ell,t}\). As mentioned in the introduction, one of the main motivations for studying rapid decay in the group setting was that it yields a nice subalgebra of the reduced group \(C^{*}\)-algebra. This was also shown to be the case for groupoids with (RD) in [7], and in the twisted setting we obtain a similar result. Let \(S_{\ell}(\mathcal{G},\mathcal{E})\) denote the completion of \(\Sigma_{c}(\mathcal{G},\mathcal{E})\) with respect to the topology induced by the family of norms \(\{\|\cdot\|_{\infty}\}\cup\{\|\cdot\|_{\mathcal{E},\ell,t}:t\in\mathbb{Z}_{\geq 0}\}\). With minor modifications, the proofs of Lemma 3.3 and Proposition 3.4 in [7] yield the following result: **Proposition 2.13**.: _Let \(\mathcal{G}\) be an etale groupoid, let \(\ell\) be a length function on \(\mathcal{G}\), and let \(\mathcal{E}\) be a twist over \(\mathcal{G}\). Then \(\mathcal{G}\) has \(\mathcal{E}\)-(RD) with respect to \(\ell\) if and only if \(S_{\ell}(\mathcal{G},\mathcal{E})\subset C_{r}^{*}(\mathcal{G},\mathcal{E})\). Moreover, if the length function \(\ell\) is continuous, and \(\mathcal{G}\) has \(\mathcal{E}\)-(RD) with respect to \(\ell\), then \(S_{\ell}(\mathcal{G},\mathcal{E})\) is a dense Frechet \(*\)-subalgebra of \(C_{r}^{*}(\mathcal{G},\mathcal{E})\)._ The proof of Theorem 4.2 in [7] also generalizes to this setting, and we obtain the following result.
**Proposition 2.14**.: _Let \(\mathcal{G}\) be an etale groupoid, let \(\mathcal{E}\) be a twist over \(\mathcal{G}\), and let \(\ell\) be a continuous length function on \(\mathcal{G}\). If \(\mathcal{G}\) has \(\mathcal{E}\)-(RD) with respect to \(\ell\), then \(S_{\ell}(\mathcal{G},\mathcal{E})\) is an inverse closed subalgebra of \(C_{r}^{*}(\mathcal{G},\mathcal{E})\), and the inclusion induces an isomorphism at the level of \(K\)-theory._ We end this section by giving another class of groupoids with length functions for which property (RD) holds. Let \(\mathcal{G}\) be an etale groupoid, and let \(\ell\) be a length on \(\mathcal{G}\). We say that \(\mathcal{G}\) has _polynomial growth_ with respect to \(\ell\) if there exist constants \(C,n>0\) such that \(|B_{\ell}(x,r)|\leq C(1+r)^{n}\) for all \(x\in\mathcal{G}^{(0)}\) and \(r>0\). **Proposition 2.15** ([7, Proposition 3.5]).: _Let \(\mathcal{G}\) be an etale groupoid, and let \(\ell\) be a length function on \(\mathcal{G}\). If \(\mathcal{G}\) has polynomial growth with respect to \(\ell\), then \(\mathcal{G}\) has property (RD) with respect to \(\ell\), and hence has \(\mathcal{E}\)-(RD) for all twists \(\mathcal{E}\) over \(\mathcal{G}\)._ In [7], all examples of groupoids with rapid decay had polynomial growth with respect to the given length function. In the next section, we shall see that under the assumption that the groupoid is principal, this is about as much as one can expect. ## 3. Principal Groupoids Recall that a groupoid \(\mathcal{G}\) is called principal if \(\operatorname{Iso}(\mathcal{G})=\mathcal{G}^{(0)}\), or equivalently, if the map \(s\times r:\mathcal{G}\to\mathcal{G}^{(0)}\times\mathcal{G}^{(0)}\) is injective. As a topological analogue of principality, a topological groupoid \(\mathcal{G}\) is said to be _topologically principal_ if the set of units \(x\in\mathcal{G}^{(0)}\) such that \(\mathcal{G}_{x}^{x}=\{x\}\) is dense in \(\mathcal{G}^{(0)}\). In this section, we show that, under some continuity conditions on the length function, (topologically) principal groupoids admit property (RD) only when they have polynomial growth. This generalizes some known results; see [3] and [8]. Our strategy is inspired by the proof of Theorem 2.1 of [3], but the details require attention in this more general setting. **Lemma 3.1**.: _Let \(\mathcal{G}\) be an etale groupoid, and let \(\ell\) be a continuous length function on \(\mathcal{G}\). Suppose that for every \(C>0\), \(D>0\), there exist \(R>0\), \(x\in\mathcal{G}^{(0)}\), and a finite set \(F\subset B_{\ell}(x,R)\), such that_ _(i) \(|F|>C(1+R)^{D}\), and_ _(ii) the restriction of the range map to \(F\) is injective._ _Then \(\mathcal{G}\) does not have \(\mathcal{E}\)-(RD) with respect to \(\ell\) for any twist \(\mathcal{E}\) over \(\mathcal{G}\)._ **Proof.** Fix \(C,t>0\). By assumption, there exist \(R>0\), \(x_{0}\in\mathcal{G}^{(0)}\), and a finite set \(F\subset B_{\ell}(x_{0},R)\) such that the restriction of the range map to \(F\) is injective and \[|F|>C^{2}(1+R)^{6t}.\] Without loss of generality, we may assume that \(x_{0}\in F\). For each \(\gamma\in F\), fix a pre-compact open bisection neighborhood \(W_{\gamma}\) of \(\gamma\) such that * \(W_{x_{0}}\subset\mathcal{G}^{(0)}\), * \(s(W_{\gamma})\subset W_{x_{0}}\) for all \(\gamma\in F\), and * the collection \(\{r(W_{\gamma}):\gamma\in F\}\) of subsets of \(\mathcal{G}^{(0)}\) is pairwise disjoint. Put \(Z=FF^{-1}\). For each \(g\in Z\), there exist (unique) \(\gamma_{1},\gamma_{2}\in F\) such that \(g=\gamma_{1}\gamma_{2}^{-1}\).
Define an open neighborhood \(V_{g}\) of \(g\) as follows: If \(\gamma_{1}=\gamma_{2}\) we set \(V_{g}=r(W_{\gamma_{1}})\), and otherwise \(V_{g}=W_{\gamma_{1}}W_{\gamma_{2}}^{-1}\). Now fix a pre-compact open bisection neighborhood \(U_{g}\) of \(g\) such that \(\overline{U}_{g}\subset V_{g}\), and such that \(|\ell(\gamma)-\ell(g)|<R\) for all \(\gamma\in U_{g}\). Let \(\mathcal{E}\xrightarrow{\pi}\mathcal{G}\) be a twist, and let \(\rho:\mathcal{G}\to\mathcal{E}\) be a (not necessarily continuous) section for \(\pi\). Then we may assume that for each \(g\in Z\), the open bisection \(U_{g}\) admits a continuous section \(\rho_{g}:U_{g}\to\mathcal{E}\) for \(\pi\), such that the map \[\Psi_{g}:U_{g}\times\mathbb{T}\to\pi^{-1}(U_{g}),\qquad(\gamma,z)\mapsto z\cdot\rho_{g}(\gamma)\] is a homeomorphism. For each \(g\in Z\), fix a function \(\tilde{f}_{g}\in C_{c}(\mathcal{G})\) such that \(0\leq\tilde{f}_{g}(\gamma)\leq 1\) for all \(\gamma\in\mathcal{G}\), with \(\tilde{f}_{g}(g)=1\) and \(\operatorname{supp}(\tilde{f}_{g})\subset U_{g}\). Define \(f_{g}\in\Sigma_{c}(\mathcal{G},\mathcal{E})\) for \(g\in Z\) to be the function such that \(\operatorname{supp}(f_{g})\subset\pi^{-1}(U_{g})\), and \[f_{g}(\rho_{g}(\gamma))=[\rho_{g}(g),\rho(\gamma_{1})\rho(\gamma_{2})^{-1}]\tilde{f}_{g}(\gamma)\] whenever \(\gamma\in U_{g}\), where \(\gamma_{1}\) and \(\gamma_{2}\) are the (unique) elements of \(F\) such that \(g=\gamma_{1}\gamma_{2}^{-1}\). Since the sets \(U_{g}\subset V_{g}\) are pairwise disjoint, the sum \(f=\sum_{g\in Z}f_{g}\) defines an element of \(\Sigma_{c}(\mathcal{G},\mathcal{E})\). Now let \(\xi=|F|^{-1/2}\chi_{F}\in\ell^{2}\mathcal{G}_{x_{0}}\), where \(\chi_{F}\) denotes the indicator function for the set \(F\subset\mathcal{G}_{x_{0}}\). If \(\gamma\in\mathcal{G}_{x_{0}}\), then \([\lambda_{x_{0}}^{\rho}(f)\xi](\gamma)=0\) unless \(\gamma\in F\). In this case, we set \(Z^{\gamma}=Z\cap\mathcal{G}^{r(\gamma)}\), and we have \[[\lambda_{x_{0}}^{\rho}(f)\xi](\gamma)=|F|^{-1/2}\sum_{\eta\in F}f(\rho(\gamma)\rho(\eta)^{-1})=|F|^{-1/2}\sum_{g\in Z^{\gamma}}f_{g}(\rho(\gamma)\rho(g^{-1}\gamma)^{-1})=|F|^{-1/2}\sum_{g\in Z^{\gamma}}\tilde{f}_{g}(g)=|F|^{1/2}.\] Squaring and summing over \(\gamma\in\mathcal{G}_{x_{0}}\), we obtain \(\|\lambda_{x_{0}}^{\rho}(f)\xi\|_{\ell^{2}\mathcal{G}_{x_{0}}}^{2}=|F|^{2}\), so \(\|f\|_{C_{r}^{*}(\mathcal{G},\mathcal{E})}\geq|F|\). Next we estimate \(\|f\|_{\mathcal{E},\ell,t}\). We claim that, for all \(x\in\mathcal{G}^{(0)}\) we have \[\left|\bigcup_{g\in Z}\mathcal{G}_{x}\cap U_{g}\right|\leq|F|.\] To see this, first note that if \(g\in Z\), then \(s(U_{g})\subset r(W_{\eta})\) for some \(\eta\in F\). Hence if \(x\in\mathcal{G}^{(0)}\) and \(x\notin r(W_{\eta})\) for all \(\eta\in F\), then \(\bigcup_{g\in Z}\mathcal{G}_{x}\cap U_{g}=\varnothing\), and the estimate holds. Now suppose \(x\in r(W_{\eta})\) for some (necessarily unique) \(\eta\in F\). If \(g\in Z\) and \(\mathcal{G}_{x}\cap U_{g}\) is nonempty, then \(g=\gamma\eta^{-1}\) for some \(\gamma\in F\), so \[\left(\bigcup_{g\in Z}\mathcal{G}_{x}\cap U_{g}\right)\subset\left(\bigcup_{\gamma\in F}\mathcal{G}_{x}\cap U_{\gamma\eta^{-1}}\right).\] As each \(U_{g}\) is a bisection, the cardinality of the set on the right-hand side is clearly no more than \(|F|\). This proves the claim.
Now for \(x\in\mathcal{G}^{(0)}\), the claim implies \[\|f\|_{\mathcal{E},\ell,t,s,x}^{2}=\sum_{\gamma\in\mathcal{G}_{x}}|f(\rho(\gamma))|^{2}(1+\ell(\gamma))^{2t}\leq\sum_{\gamma\in\cup_{g\in Z}(\mathcal{G}_{x}\cap U_{g})}(1+\ell(\gamma))^{2t}<|F|(1+3R)^{2t},\] since \(\ell(g)\leq 2R\) for all \(g\in Z\), and hence \(\ell(\gamma)<3R\) on each \(U_{g}\). Similarly, we obtain \(\|f^{*}\|_{\mathcal{E},\ell,t,s,x}^{2}<|F|(1+3R)^{2t}\), and it follows that \[\|f\|_{\mathcal{E},\ell,t}\leq|F|^{1/2}(1+3R)^{t}.\] Combining our estimates, we obtain \[\|f\|_{C_{r}^{*}(\mathcal{G},\mathcal{E})}\geq|F|>C|F|^{1/2}(1+R)^{3t}\geq C|F|^{1/2}(1+3R)^{t}\geq C\cdot\|f\|_{\mathcal{E},\ell,t}.\] Since \(C,t>0\) were arbitrary, it follows that \(\mathcal{G}\) does not have \(\mathcal{E}\)-(RD) with respect to the length \(\ell\). **Theorem 3.2**.: _Let \(\mathcal{G}\) be a principal, etale groupoid, and let \(\ell\) be a continuous length function on \(\mathcal{G}\). The following are equivalent:_ 1. \(\mathcal{G}\) _has polynomial growth with respect to_ \(\ell\)_._ 2. \(\mathcal{G}\) _has_ \(\mathcal{E}\)_-(RD) with respect to_ \(\ell\) _for all twists_ \(\mathcal{E}\) _over_ \(\mathcal{G}\)_._ 3. \(\mathcal{G}\) _has_ \(\mathcal{E}\)_-(RD) with respect to_ \(\ell\) _for some twist_ \(\mathcal{E}\) _over_ \(\mathcal{G}\)_._ **Proof.** If \(\mathcal{G}\) has polynomial growth, then by [7, Proposition 3.5] \(\mathcal{G}\) has (RD) with respect to \(\ell\). Proposition 2.12 now implies that \(\mathcal{G}\) has \(\mathcal{E}\)-(RD) with respect to \(\ell\) for any twist \(\mathcal{E}\) over \(\mathcal{G}\). Thus (1) implies (2). Obviously, (2) implies (3), so we focus on showing (3) implies (1). We prove the contrapositive, so assume that \(\mathcal{G}\) does not have polynomial growth with respect to \(\ell\). Then for each \(C,D>0\) there exist \(x\in\mathcal{G}^{(0)}\) and \(r>0\) such that \(|B_{\ell}(x,r)|>C(1+r)^{D}\). Let \(F\) be a finite subset of \(B_{\ell}(x,r)\) with \(|F|>C(1+r)^{D}\). Since \(\mathcal{G}\) is principal, the range map is injective on each source fibre, so \(F\) satisfies conditions (i) and (ii) of Lemma 3.1, and therefore \(\mathcal{G}\) cannot have property \(\mathcal{E}\)-(RD) with respect to \(\ell\) for any twist; that is, (3) does not hold. \(\square\) A mild adjustment of our assumptions allows us to apply the local slice lemma, and we obtain the following. **Theorem 3.3**.: _Let \(\mathcal{G}\) be a \(\sigma\)-compact, topologically principal, etale groupoid, and let \(\ell\) be a continuous and proper length function on \(\mathcal{G}\). The following are equivalent:_ 1. \(\mathcal{G}\) _has polynomial growth with respect to_ \(\ell\)_._ 2. \(\mathcal{G}\) _has_ \(\mathcal{E}\)_-(RD) with respect to_ \(\ell\) _for all twists_ \(\mathcal{E}\) _over_ \(\mathcal{G}\)_._ 3. \(\mathcal{G}\) _has_ \(\mathcal{E}\)_-(RD) with respect to_ \(\ell\) _for some twist_ \(\mathcal{E}\) _over_ \(\mathcal{G}\)_._ **Proof.** The only part that differs from the proof of Theorem 3.2 is the proof of _(3)\(\Rightarrow\)(1)_. Suppose \(\mathcal{G}\) does not have polynomial growth with respect to the length function \(\ell\). Fix \(C,D>0\). We can find some \(x_{0}\in\mathcal{G}^{(0)}\) and some \(R_{0}>0\) such that \[|B_{\ell}(x_{0},R_{0})|>C(1+R_{0})^{D}.\] Choose \(\varepsilon>0\) such that \[|B_{\ell}(x_{0},R_{0})|>C(1+R_{0}+\varepsilon)^{D},\] and moreover assume that \(\varepsilon\) is chosen small enough so that \(B_{\ell}(x_{0},R_{0}+\varepsilon)=B_{\ell}(x_{0},R_{0})\). By the local slice lemma, we can find some \(R\in[R_{0},R_{0}+\varepsilon)\) and some open set \(V\subset\mathcal{G}^{(0)}\) containing \(x_{0}\), such that \(|B_{\ell}(x,R)|=|B_{\ell}(x_{0},R_{0})|\) for all \(x\in V\).
Since \(\mathcal{G}\) is topologically principal, we can find some \(x\in V\) with trivial isotropy. Then \[|B_{\ell}(x,R)|=|B_{\ell}(x_{0},R_{0})|>C(1+R_{0}+\varepsilon)^{D}>C(1+R)^{D}.\] Now \(F=B_{\ell}(x,R)\) satisfies the conditions of Lemma 3.1, and therefore \(\mathcal{G}\) cannot have \(\mathcal{E}\)-(RD) with respect to \(\ell\) for any twist \(\mathcal{E}\). \(\square\) Now we consider the special case of coarse groupoids. As an application of the above result, we obtain a generalization of Theorem 2.1 from [3]. We begin by briefly recalling their construction (for more details, one can consult [21, 18]). Let \((X,d)\) be a discrete metric space, and assume for ease of exposition that it has _bounded geometry_, meaning that for each \(r>0\), there is a uniform bound on the cardinality of the balls \(B(x,r)\) as \(x\) varies over \(X\). For \(r\geq 0\), let \(E_{r}=\{(x,y)\in X\times X:d(x,y)\leq r\}\) denote the tube of radius \(r\), and let \(\overline{E_{r}}\) denote its closure in \(\beta(X\times X)\), the Stone-Cech compactification of \(X\times X\). As a space, the _coarse groupoid_ of \(X\) is \(\mathcal{G}(X)=\cup_{r\geq 0}\overline{E_{r}}\subset\beta X\times\beta X\). By Theorem 10.20 of [18], the pair groupoid structure on \(X\times X\) extends continuously to \(\mathcal{G}(X)\), making it a principal, \(\sigma\)-compact, etale groupoid, with unit space homeomorphic to \(\beta X\), and range and source maps respectively the unique extensions of the first and second factor maps \(X\times X\to X\). We recall the notions necessary to define metric rapid decay for the bounded geometry metric space \((X,d)\). First, we recall the definition of the uniform Roe algebra associated to \(X\). For a function \(k:X\times X\to\mathbb{C}\), the _propagation_ of \(k\) is the quantity \(\operatorname{prop}(k)=\sup\{d(x,y):x,y\in X,k(x,y)\neq 0\}\). Let \(\mathbb{C}_{u}(X)\) be the space of those functions \(k\) that are bounded (meaning \(\sup\{|k(x,y)|:x,y\in X\}\) is finite) and of finite propagation. This is a \(*\)-algebra, which admits a canonical action on the Hilbert space \(\ell^{2}(X)\) of square-summable functions \(X\to\mathbb{C}\). The \(C^{*}\)-algebra generated by \(\mathbb{C}_{u}(X)\) is called the _uniform Roe algebra_ of \(X\), and is denoted \(C^{*}_{u}(X)\). **Definition 3.4**.: Let \(k:X\times X\to\mathbb{C}\) be given. For \(t\geq 0\), we define the quantities \(\|k\|_{BS,t},\|k\|_{BS^{*},t}\in[0,\infty]\) by \[\|k\|_{BS,t}=\left(\sup_{y\in X}\sum_{x\in X}|k(x,y)|^{2}(1+d(x,y))^{2t}\right)^{1/2},\] \[\|k\|_{BS^{*},t}=\max\{\|k\|_{BS,t},\|k^{*}\|_{BS,t}\},\] where \(k^{*}:X\times X\to\mathbb{C}\) is defined by \(k^{*}(x,y)=\overline{k(y,x)}\). We denote by \(BS_{2}(X)\) the space of all functions \(k:X\times X\to\mathbb{C}\) such that \(\|k\|_{BS^{*},t}<\infty\) for all \(t\geq 0\). Note that \(BS_{2}(X)\) is a Frechet space with the topology given by the family of seminorms \(\{\|\cdot\|_{BS^{*},t}:t\in\mathbb{Z}_{\geq 0}\}\). **Definition 3.5**.: We say that \(X\) has property (MRD), or has _(metric) rapid decay_, if \(BS_{2}(X)\) is contained in \(C^{*}_{u}(X)\).4 Footnote 4: Our definition differs slightly from those given in [3] and [8], to account for the self-adjointness of the norms in Definition 2.9, which was adapted from [7]. This is only a matter of convention; one can easily adapt these results to that setting. We briefly outline the construction of a canonical length function on \(\mathcal{G}(X)\), as done in Section 5 of [14].
First, observe that for each \(r\geq 0\), the restriction of the metric \(d:E_{r}\to[0,r]\) extends to a continuous map \(\ell:\overline{E_{r}}\to[0,r]\), and that these extensions respect the inclusions \(\overline{E_{r}}\subset\overline{E_{r^{\prime}}}\) for \(r^{\prime}\geq r\), producing a well-defined length \(\ell\) on \(\mathcal{G}(X)\), which is clearly continuous. Properness of this length is also readily verified, as for each \(r\geq 0\) we have \(\ell^{-1}([0,r])\subset\overline{E_{r}}\), a compact subset of \(\mathcal{G}(X)\). **Lemma 3.6**.: _Let \(X\) be a discrete metric space with bounded geometry. If \(X\) has property \((MRD)\), then \(\mathcal{G}(X)\) has property (RD) with respect to the length function defined above._ **Proof.** Suppose \(X\) has \((MRD)\). The inclusion of \(BS_{2}(X)\) into \(C_{u}^{*}(X)\) has closed graph, hence is continuous by the closed graph theorem, and there exist \(C,t\geq 0\) such that \(\|k\|_{C_{u}^{*}(X)}\leq C\|k\|_{BS^{*},t}\) for all \(k\in BS_{2}(X)\). If now \(f\in C_{c}(\mathcal{G}(X))\), let \(k_{f}:X\times X\to\mathbb{C}\) denote the restriction of \(f\) to \(X\times X\subset\mathcal{G}(X)\). Then we have the estimate \[\|k_{f}\|_{BS,t}=\sup_{y\in X}\left(\sum_{x\in X}|f(x,y)|^{2}(1+\ell(x,y))^{2t}\right)^{1/2}=\sup_{y\in X}\|f\|_{\ell,t,s,y}\leq\|f\|_{\ell,t,s}.\] Note that \((k_{f})^{*}=k_{f^{*}}\), so \(\|k_{f}\|_{BS^{*},t}\leq\|f\|_{\ell,t}\). By Proposition 10.29 in [18], we have \(\|f\|_{C_{r}^{*}\mathcal{G}}=\|k_{f}\|_{C_{u}^{*}(X)}\), and thus \[\|f\|_{C_{r}^{*}\mathcal{G}}\leq C\|f\|_{\ell,t}.\] \(\square\) As in Definition 1.7 of [3], we say the metric space \((X,d)\) has _polynomial growth_ if there exist constants \(C,n>0\) such that \(|B(x,r)|\leq C(1+r)^{n}\) for all \(x\in X\) and \(r\geq 0\). An application of the local slice lemma yields the following result: **Lemma 3.7**.: _Let \(X\) be a discrete metric space with bounded geometry. Then \(\mathcal{G}(X)\) has polynomial growth if and only if \(X\) has polynomial growth._ **Proof.** The forward implication is clear, so assume that \(X\) has polynomial growth, and fix \(C>0\), \(d\in\mathbb{N}\) such that for all \(R>0\), we have \[\sup_{x\in X}|B(x,R)|\leq C(1+R)^{d}.\] Fix an ultrafilter \(\omega_{0}\in\beta X\) and \(R>0\). Choose \(\varepsilon>0\) such that \(B_{\ell}(\omega,R+\varepsilon)=B_{\ell}(\omega,R)\) for all \(\omega\in\beta X\). By Corollary 2.8, there is an open neighborhood \(V\) of \(\omega_{0}\) in \(\beta X\) such that \(|B_{\ell}(\omega,R)|=|B_{\ell}(\omega_{0},R)|\) for all \(\omega\in V\). Fixing some \(x_{0}\in V\cap X\), we have \[|B_{\ell}(\omega_{0},R)|=|B_{\ell}(x_{0},R)|=|B(x_{0},R)|\leq C(1+R)^{d}.\] Therefore, \(\mathcal{G}(X)\) has polynomial growth. \(\square\) Combining Lemmas 3.6 and 3.7, we see that Theorem 3.2 generalizes a previous theorem of Chen and Wei. **Theorem 3.8** ([3, Theorem 2.1]).: _Let \(X\) be a discrete metric space with bounded geometry. Then \(X\) has property \((MRD)\) if and only if \(X\) has polynomial growth._ ## 4. Permanence Results In this last section, we list some permanence properties enjoyed by the rapid decay property. We give conditions for products of (RD) groupoids to have (RD) (see Proposition 4.2), and as a consequence we obtain examples of groupoids which are not groups, do not have polynomial growth, and yet satisfy (RD). Other than this, the main result is Theorem 4.5, which gives conditions under which (RD) transfers from the domain of a groupoid homomorphism to its codomain; we also give a few corollaries to this result.
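One further remark before the permanence results; the remark is ours, but it follows immediately from Theorem 3.8 and the facts already cited. As a group, the free group \(\mathbb{F}_{2}\) has property (RD) by Haagerup's theorem [5]; viewed instead as a metric space via a word metric — equivalently, through its coarse groupoid, which is principal — it fails (MRD), since its growth is exponential: \[|B(e,r)|=1+\sum_{k=1}^{r}4\cdot 3^{k-1}=2\cdot 3^{r}-1\qquad(r\geq 1).\] Thus rapid decay for a group is a genuinely representation-theoretic phenomenon rather than a metric one, and principality is exactly what forces the dichotomy of Theorems 3.2 and 3.3.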
But first, we give a simple result regarding inclusions of groupoids. **Proposition 4.1**.: _Suppose \(\mathcal{H}\) is an etale groupoid, and that \(\mathcal{G}\subset\mathcal{H}\) is an open subgroupoid. Let \(\ell\) be a length function on \(\mathcal{H}\), let \(\mathcal{F}\xrightarrow{\pi}\mathcal{H}\) be a twist over \(\mathcal{H}\), and let \(\mathcal{E}=\pi^{-1}(\mathcal{G})\). Then \(\mathcal{E}\) is a twist over \(\mathcal{G}\), and if \(\mathcal{H}\) has property \(\mathcal{F}\)-(RD) with respect to \(\ell\), then \(\mathcal{G}\) has property \(\mathcal{E}\)-(RD) with respect to \(\tilde{\ell}\), where \(\tilde{\ell}\) is the restriction of \(\ell\) to \(\mathcal{G}\)._ Proof.: It is straightforward to check that \(\mathcal{E}\) defines a twist over \(\mathcal{G}\). As \(\mathcal{G}\subset\mathcal{H}\) is open, \(\mathcal{E}\) is an open subgroupoid of \(\mathcal{F}\), so extension-by-zero yields inclusions \(\Sigma_{c}(\mathcal{G},\mathcal{E})\subset\Sigma_{c}(\mathcal{H},\mathcal{F})\) and \(\ell^{2}\mathcal{G}_{x}\subset\ell^{2}\mathcal{H}_{x}\) for all \(x\in\mathcal{G}^{(0)}\). Moreover, these latter inclusions are isometric. Let \(\rho:\mathcal{H}\to\mathcal{F}\) be a section for the bundle map \(\pi\), and let \(\tilde{\rho}\) denote the restriction of \(\rho\) to \(\mathcal{G}\). For any \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\), \(x\in\mathcal{G}^{(0)}\), and \(\xi\in\ell^{2}\mathcal{G}_{x}\), we have \[\|\lambda_{x}^{\tilde{\rho}}(f)\xi\|_{\ell^{2}\mathcal{G}_{x}}^{2}=\|\lambda_{x}^{\rho}(f)\xi\|_{\ell^{2}\mathcal{H}_{x}}^{2}\leq\|\lambda_{x}^{\rho}(f)\|_{\mathbb{B}(\ell^{2}\mathcal{H}_{x})}^{2}\|\xi\|_{\ell^{2}\mathcal{H}_{x}}^{2}=\|\lambda_{x}^{\rho}(f)\|_{\mathbb{B}(\ell^{2}\mathcal{H}_{x})}^{2}\|\xi\|_{\ell^{2}\mathcal{G}_{x}}^{2}.\] It follows that \(\|\lambda_{x}^{\tilde{\rho}}(f)\|_{\mathbb{B}(\ell^{2}\mathcal{G}_{x})}\leq\|\lambda_{x}^{\rho}(f)\|_{\mathbb{B}(\ell^{2}\mathcal{H}_{x})}\), and thus \(\|f\|_{C^{*}_{r}(\mathcal{G},\mathcal{E})}\leq\|f\|_{C^{*}_{r}(\mathcal{H},\mathcal{F})}\). For \(t\geq 0\) and \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\), we have \(\|f\|_{\mathcal{F},\ell,t,s,x}=\|f\|_{\mathcal{E},\tilde{\ell},t,s,x}\) whenever \(x\in\mathcal{G}^{(0)}\) and \(\|f\|_{\mathcal{F},\ell,t,s,x}=0\) whenever \(x\in\mathcal{H}^{(0)}\setminus\mathcal{G}^{(0)}\). Taking suprema, it follows that \(\|f\|_{\mathcal{F},\ell,t,s}=\|f\|_{\mathcal{E},\tilde{\ell},t,s}\), and taking adjoints we obtain \(\|f\|_{\mathcal{F},\ell,t}=\|f\|_{\mathcal{E},\tilde{\ell},t}\). Assuming \(\mathcal{H}\) has \(\mathcal{F}\)-(RD) with respect to \(\ell\), there are constants \(C,t\geq 0\) such that \(\|h\|_{C^{*}_{r}(\mathcal{H},\mathcal{F})}\leq C\|h\|_{\mathcal{F},\ell,t}\) for all \(h\in\Sigma_{c}(\mathcal{H},\mathcal{F})\), and thus \[\|f\|_{C^{*}_{r}(\mathcal{G},\mathcal{E})}\leq\|f\|_{C^{*}_{r}(\mathcal{H},\mathcal{F})}\leq C\|f\|_{\mathcal{F},\ell,t}=C\|f\|_{\mathcal{E},\tilde{\ell},t}.\] We now consider the case of products of etale groupoids. It is known (see Lemma 3.1 of [2] for instance) that products of (RD) groups satisfy (RD). At the time of this writing, it is not known to the author if the same holds in this more general setting, but we can obtain a partial result. Before stating the result, let us note the following simple construction. Let \(\mathcal{G}\) and \(\mathcal{H}\) be etale groupoids, and let \(\mathcal{E}\xrightarrow{\pi}\mathcal{G}\) be a twist over \(\mathcal{G}\).
One can define a twist \[(\mathcal{G}\times\mathcal{H})^{(0)}\times\mathbb{T}\xrightarrow{\tilde{i}}\mathcal{E}\times\mathcal{H}\xrightarrow{\tilde{\pi}}\mathcal{G}\times\mathcal{H}\] over \(\mathcal{G}\times\mathcal{H}\), where \(\tilde{i}(x,y,z)=(i(x,z),y)\) for \(x\in\mathcal{G}^{(0)}\), \(y\in\mathcal{H}^{(0)}\), \(z\in\mathbb{T}\), and where \(\tilde{\pi}(\varepsilon,\eta)=(\pi(\varepsilon),\eta)\) for \(\varepsilon\in\mathcal{E}\), \(\eta\in\mathcal{H}\). **Proposition 4.2**.: _Let \(\mathcal{G}\) and \(\mathcal{H}\) be etale groupoids, and let \(\mathcal{E}\) be a twist over \(\mathcal{G}\). Suppose that \(\mathcal{H}\) is compact, and that \(\mathcal{G}\) has property \(\mathcal{E}\)-(RD) with respect to the length \(\ell\). Then \(\mathcal{G}\times\mathcal{H}\) has property \(\mathcal{E}\times\mathcal{H}\)-(RD) with respect to the length function \(\tilde{\ell}\), where \(\tilde{\ell}(\gamma,\eta)=\ell(\gamma)\)._ **Proof.** Fix a finite cover \(\{U_{1},\ldots,U_{n}\}\) of \(\mathcal{H}\) by open bisections, and let \((h_{1},\ldots,h_{n})\) be a partition of unity for \(\mathcal{H}\) subordinate to \((U_{1},\ldots,U_{n})\). For \(f\in\Sigma_{c}(\mathcal{G}\times\mathcal{H},\mathcal{E}\times\mathcal{H})\), define \(f^{(k)}\in\Sigma_{c}(\mathcal{G}\times\mathcal{H},\mathcal{E}\times\mathcal{H})\) for \(k\in[n]:=\{1,\ldots,n\}\) by \(f^{(k)}(\varepsilon,\eta)=f(\varepsilon,\eta)h_{k}(\eta)\), and define \(f_{\eta}\in\Sigma_{c}(\mathcal{G},\mathcal{E})\) for \(\eta\in\mathcal{H}\) by \(f_{\eta}(\varepsilon)=f(\varepsilon,\eta)\). For \(x\in\mathcal{G}^{(0)}\), \(y\in\mathcal{H}^{(0)}\), \(\xi\in\ell^{2}(\mathcal{G}\times\mathcal{H})_{(x,y)}\), and \(\eta\in\mathcal{H}_{y}\), define \(\xi_{\eta}\in\ell^{2}\mathcal{G}_{x}\) by \(\xi_{\eta}(\gamma)=\xi(\gamma,\eta)\). Now fix \(f\in\Sigma_{c}(\mathcal{G}\times\mathcal{H},\mathcal{E}\times\mathcal{H})\), and let \((x,y)\in\mathcal{G}^{(0)}\times\mathcal{H}^{(0)}\) and \(\xi\in\ell^{2}(\mathcal{G}\times\mathcal{H})_{(x,y)}\) be given. Let \(Z=\{(\eta,k)\in\mathcal{H}_{y}\times[n]:r(\eta)\in r(U_{k})\}\). For \((\eta,k)\in Z\), there is a unique \(\zeta=\zeta(\eta,k)\in\mathcal{H}_{y}\) such that \(\eta\zeta^{-1}\in U_{k}\). Let \(\rho:\mathcal{G}\to\mathcal{E}\) be a section for the bundle map \(\pi\), and let \(\tilde{\rho}=\rho\times\mathrm{id}_{\mathcal{H}}\) be the corresponding section for the bundle map \(\tilde{\pi}\). Observe that for \(\gamma\in\mathcal{G}_{x}\) and \((\eta,k)\in Z\) we have \[[\lambda^{\tilde{\rho}}_{(x,y)}(f^{(k)})\xi](\gamma,\eta)=h_{k}(\eta\zeta(\eta,k)^{-1})[\lambda^{\rho}_{x}(f_{\eta\zeta(\eta,k)^{-1}})\xi_{\zeta(\eta,k)}](\gamma).\] We estimate: \[\|\lambda^{\tilde{\rho}}_{(x,y)}(f)\xi\|^{2}_{\ell^{2}(\mathcal{G}\times\mathcal{H})_{(x,y)}}=\sum_{\gamma\in\mathcal{G}_{x}}\sum_{\eta\in\mathcal{H}_{y}}\left|\sum_{k=1}^{n}[\lambda^{\tilde{\rho}}_{(x,y)}(f^{(k)})\xi](\gamma,\eta)\right|^{2}\leq n\sum_{(\eta,k)\in Z}\sum_{\gamma\in\mathcal{G}_{x}}|[\lambda^{\tilde{\rho}}_{(x,y)}(f^{(k)})\xi](\gamma,\eta)|^{2}\leq n\sum_{(\eta,k)\in Z}\|\lambda^{\rho}_{x}(f_{\eta\zeta(\eta,k)^{-1}})\xi_{\zeta(\eta,k)}\|^{2}_{\ell^{2}\mathcal{G}_{x}}\leq n\sum_{(\eta,k)\in Z}\|f_{\eta\zeta(\eta,k)^{-1}}\|^{2}_{C^{*}_{r}(\mathcal{G},\mathcal{E})}\|\xi_{\zeta(\eta,k)}\|^{2}_{\ell^{2}\mathcal{G}_{x}}.\] Since \(\mathcal{G}\) has property \(\mathcal{E}\)-(RD) with respect to \(\ell\), there exist constants \(C,t\geq 0\) such that \(\|g\|_{C^{*}_{r}(\mathcal{G},\mathcal{E})}\leq C\|g\|_{\mathcal{E},\ell,t}\) for all \(g\in\Sigma_{c}(\mathcal{G},\mathcal{E})\).
For \(\eta\in\mathcal{H}\), one checks that \(\|f_{\eta}\|_{\mathcal{E},\ell,t}\leq\|f\|_{\mathcal{E}\times\mathcal{H},\tilde{ \ell},t}\), and hence \[\|\lambda_{(x,y)}^{\tilde{\rho}}(f)\xi\|_{\ell^{2}(\mathcal{G}\times\mathcal{H}) _{(x,y)}}^{2}\leq nC^{2}\|f\|_{\mathcal{E}\times\mathcal{H},\tilde{\ell},t}^{2} \sum_{(\eta,k)\in Z}\|\xi_{\zeta(\eta,k)}\|_{\ell^{2}\mathcal{G}_{x}}^{2}.\] For fixed \(k\), the map \(\eta\mapsto\zeta(\eta,k)\) is injective, and thus \[\sum_{(\eta,k)\in Z}\|\xi_{\zeta(\eta,k)}\|_{\ell^{2}\mathcal{G}_{x}}^{2}\leq n \|\xi\|_{\ell^{2}(\mathcal{G}\times\mathcal{H})_{(x,y)}}^{2}.\] Combining our estimates, we obtain \[\|\lambda_{(x,y)}^{\tilde{\rho}}(f)\xi\|_{\ell^{2}(\mathcal{G}\times\mathcal{H })_{(x,y)}}\leq nC\|f\|_{\mathcal{E}\times\mathcal{H},\tilde{\ell},t}\| \xi\|_{\ell^{2}(\mathcal{G}\times\mathcal{H})_{(x,y)}}.\] Taking the supremum over \(\xi\in\ell^{2}(\mathcal{G}\times\mathcal{H})_{(x,y)}\), then over \((x,y)\in\mathcal{G}^{(0)}\times\mathcal{H}^{(0)}\), we obtain \[\|f\|_{C_{r}^{*}(\mathcal{G}\times\mathcal{H},\mathcal{E}\times\mathcal{H})} \leq nC\|f\|_{\mathcal{E}\times\mathcal{H},\tilde{\ell},t}.\]

This allows us to conclude that when \(\Gamma\) is a (discrete) group with property (RD), \(\mathcal{E}\) is the trivial twist, and \(\mathcal{H}\) is any compact etale groupoid, the product \(\Gamma\times\mathcal{H}\) admits property (RD). In particular, if \(\mathbb{F}\) is a finitely generated free group, equipped with the word length function with respect to a free generating set, and \(\mathcal{R}_{n}\) denotes the full equivalence relation on the \(n\)-set \([n]=\{1,\ldots,n\}\), then \(\mathbb{F}\times\mathcal{R}_{n}\) is a property (RD) groupoid which fails to have polynomial growth. This is to be expected, as \(C_{r}^{*}(\mathbb{F}\times\mathcal{R}_{n})\cong M_{n}(C_{r}^{*}\mathbb{F})\), and in the language of Theorem 4.2 in [7], \(S_{2}^{\ell}(\mathbb{F}\times\mathcal{R}_{n})\) is just \(M_{n}(S_{2}^{\ell}(\mathbb{F}))\), and spectral invariance of this subalgebra is known by Theorem 2.1 in [19].

Next, we consider a fairly simple observation regarding the relationship between property (RD) for translation groupoids and property (RD) for the acting group.

**Proposition 4.3**.: _Let \(\Gamma\) be a discrete group with a length function \(\ell\), and suppose \(\Gamma\) acts on a compact space \(X\). If \(\Gamma\ltimes X\) has (RD) with respect to the length function induced by \(\ell\), then \(\Gamma\) has (RD) with respect to \(\ell\)._

Proof.: Let us write \(\mathcal{G}=\Gamma\ltimes X\), and define the length function \(\ell_{\Gamma\curvearrowright X}\) on \(\Gamma\ltimes X\) by \(\ell_{\Gamma\curvearrowright X}(\gamma,x)=\ell(\gamma)\) for all \(\gamma\in\Gamma\) and \(x\in X\). If \(f\in\mathbb{C}\Gamma\), define \(\hat{f}\in C_{c}(\mathcal{G})\) by \(\hat{f}(\gamma,x)=f(\gamma)\). For any \(x\in X\) and \(\xi\in\ell^{2}\Gamma\), define \(\hat{\xi}_{x}\in\ell^{2}\mathcal{G}_{x}\) by \(\hat{\xi}_{x}(\gamma,x)=\xi(\gamma)\) for all \(\gamma\in\Gamma\). One then checks that \[[\lambda_{x}(\hat{f})\hat{\xi}_{x}](\gamma,x)=[\lambda(f)\xi](\gamma),\qquad \|\hat{f}\|_{\ell_{\Gamma\curvearrowright X},t,s,x}=\|f\|_{\ell,t}\] for all \(f\in\mathbb{C}\Gamma\), \(\xi\in\ell^{2}\Gamma\), \(\gamma\in\Gamma\), \(x\in X\), and \(t\geq 0\); the same computation applies to adjoints, so that taking suprema yields \(\|\hat{f}\|_{\ell_{\Gamma\curvearrowright X},t}=\|f\|_{\ell,t}\).
If \(\mathcal{G}\) has (RD) with respect to \(\ell_{\Gamma\curvearrowright X}\), there exist \(C,t\geq 0\) such that \(\|g\|_{C_{r}^{*}\mathcal{G}}\leq C\|g\|_{\ell_{\Gamma\curvearrowright X},t}\) for all \(g\in C_{c}(\mathcal{G})\). For \(f\in\mathbb{C}\Gamma\), we then have \[\|f\|_{C_{r}^{*}\Gamma}=\|\hat{f}\|_{C_{r}^{*}\mathcal{G}}\leq C\|\hat{f}\|_{ \ell_{\Gamma\curvearrowright X},t}=C\|f\|_{\ell,t}.\] Since \(C,t\geq 0\) do not depend on \(f\), the result follows.

By Theorems 3.2 and 3.3, the converse to this result fails drastically. We now attempt to generalize this result as far as possible. To that end, let \(\mathcal{H},\mathcal{G}\) be groupoids, and let \(\varphi:\mathcal{H}\to\mathcal{G}\) be a homomorphism. Given a twist \(\mathcal{G}^{(0)}\times\mathbb{T}\stackrel{{i}}{{\to}}\mathcal{E}\stackrel{{\pi}}{{\to}}\mathcal{G}\) over \(\mathcal{G}\), we can construct a pullback twist \(\mathcal{H}^{(0)}\times\mathbb{T}\to\varphi^{*}\mathcal{E}\to\mathcal{H}\) over \(\mathcal{H}\), together with a map \(\varphi^{*}\mathcal{E}\to\mathcal{E}\) making the evident diagram of extensions commute. Here, \(\varphi^{*}\mathcal{E}=\mathcal{E}\,{}_{\pi}{\times}_{\varphi}\,\mathcal{H}=\{(\varepsilon,\eta)\in\mathcal{E}\times\mathcal{H}:\pi(\varepsilon)=\varphi(\eta)\}\) is the fibered product, and all induced maps are the obvious ones. Note that the twist \(\mathcal{E}\times\mathcal{H}\) over \(\mathcal{G}\times\mathcal{H}\) considered in Proposition 4.2 is just the pullback of the twist \(\mathcal{E}\) along the projection \(\mathcal{G}\times\mathcal{H}\to\mathcal{G}\) onto the first factor.

We say that \(\varphi\) is _\(n\)-regular_ (for some \(n\in\mathbb{N}\)) if \(\varphi(\mathcal{H}^{(0)})=\mathcal{G}^{(0)}\), and \(|\varphi^{-1}(\gamma)\cap\mathcal{H}_{y}|=n\) for all \(y\in\mathcal{H}^{(0)}\) and \(\gamma\in\mathcal{G}_{\varphi(y)}\). Note that this also implies that \(\varphi\) is surjective, and that \(|\varphi^{-1}(\gamma)\cap\mathcal{H}^{y}|=n\) for all \(y\in\mathcal{H}^{(0)}\) and \(\gamma\in\mathcal{G}^{\varphi(y)}\). It is worth noting that \(1\)-regular groupoid homomorphisms correspond to groupoid actions. Indeed, if \(\mathcal{G}\) acts on a set \(Y\), then the projection map \(\pi:\mathcal{G}\ltimes Y\to\mathcal{G}\) is \(1\)-regular. Conversely, let \(\varphi:\mathcal{H}\to\mathcal{G}\) be a \(1\)-regular groupoid homomorphism, and set \(p=\varphi^{(0)}\) and \(Y=\mathcal{H}^{(0)}\). If \(y\in Y\) and \(\gamma\in\mathcal{G}_{p(y)}\), then there is a unique \(\eta_{\gamma,y}\in\mathcal{H}_{y}\) such that \(\varphi(\eta_{\gamma,y})=\gamma\), and the action of \(\mathcal{G}\) on \(Y\) is given by \(\gamma\cdot y=r(\eta_{\gamma,y})\). Moreover, the map \(\mathcal{H}\to\mathcal{G}\ltimes Y\), \(\eta\mapsto(\varphi(\eta),s(\eta))\), is a groupoid isomorphism.

**Lemma 4.4**.: _Let \(\mathcal{G}\) and \(\mathcal{H}\) be etale groupoids, let \(\mathcal{E}\) be a twist over \(\mathcal{G}\) with section \(\rho:\mathcal{G}\to\mathcal{E}\), and let \(\varphi:\mathcal{H}\to\mathcal{G}\) be an \(n\)-regular groupoid homomorphism._

1. _If_ \(y\in\mathcal{H}^{(0)}\) _and_ \(\xi\in\mathbb{C}\mathcal{G}_{\varphi(y)}\)_, then_ \(\hat{\varphi}_{y}\xi:=\xi\circ\varphi\) _belongs to_ \(\mathbb{C}\mathcal{H}_{y}\)_. Moreover, the mapping_ \(\xi\mapsto\hat{\varphi}_{y}\xi\) _extends to a bounded linear map_ \(\ell^{2}\mathcal{G}_{\varphi(y)}\to\ell^{2}\mathcal{H}_{y}\) _such that_ \(\|\hat{\varphi}_{y}\xi\|_{\ell^{2}\mathcal{H}_{y}}=n^{1/2}\|\xi\|_{\ell^{2} \mathcal{G}_{\varphi(y)}}\) _for all_ \(\xi\in\ell^{2}\mathcal{G}_{\varphi(y)}\)_._
2. _If_ \(\varphi\) _is continuous and proper, then_ \(\|\hat{\varphi}f\|_{\varphi^{*}\mathcal{E},\varphi^{*}\ell,t}=n^{1/2}\|f\|_{ \mathcal{E},\ell,t}\) _for any_ \(t\geq 0\)_, any_ \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\)_, and any length_ \(\ell\) _on_ \(\mathcal{G}\)_._

3. _If_ \(\varphi\) _is continuous and proper, then_ \(\lambda_{y}^{\varphi^{*}\rho}(\hat{\varphi}f)\hat{\varphi}_{y}=n\hat{\varphi} _{y}\lambda_{\varphi(y)}^{\rho}(f)\) _for any_ \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\) _and_ \(y\in\mathcal{H}^{(0)}\)_._

Proof.: If \(y\in\mathcal{H}^{(0)}\) and \(\xi\in\mathbb{C}\mathcal{G}_{\varphi(y)}\), then \[|\operatorname{supp}(\hat{\varphi}_{y}\xi)|=|\varphi^{-1}(\operatorname{supp }(\xi))\cap\mathcal{H}_{y}|=n|\operatorname{supp}(\xi)|,\] so \(\hat{\varphi}_{y}\xi\in\mathbb{C}\mathcal{H}_{y}\). Moreover, we have \[\|\hat{\varphi}_{y}\xi\|_{\ell^{2}\mathcal{H}_{y}}^{2}=\sum_{\eta\in\mathcal{H}_{ y}}|\hat{\varphi}_{y}\xi(\eta)|^{2}=\sum_{\gamma\in\mathcal{G}_{\varphi(y)}}| \varphi^{-1}(\gamma)\cap\mathcal{H}_{y}|\cdot|\xi(\gamma)|^{2}=n\|\xi\|_{\ell^{ 2}\mathcal{G}_{\varphi(y)}}^{2},\] and _(i)_ follows in the familiar way. Now assume that \(\varphi\) is a continuous, proper, \(n\)-regular groupoid homomorphism, and let \(\rho:\mathcal{G}\to\mathcal{E}\) be the given section for the bundle map \(\mathcal{E}\to\mathcal{G}\). If \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\), then proceeding as in the above calculation, one sees that \[\|\hat{\varphi}f\|_{\varphi^{*}\mathcal{E},\varphi^{*}\ell,t,s,y}=n^{1/2}\|f\|_ {\mathcal{E},\ell,t,s,\varphi(y)}\] for all \(y\in\mathcal{H}^{(0)}\). Since \(\varphi\) maps \(\mathcal{H}^{(0)}\) onto \(\mathcal{G}^{(0)}\), we obtain \[\|\hat{\varphi}f\|_{\varphi^{*}\mathcal{E},\varphi^{*}\ell,t,s} =\sup_{y\in\mathcal{H}^{(0)}}\|\hat{\varphi}f\|_{\varphi^{*} \mathcal{E},\varphi^{*}\ell,t,s,y}=n^{1/2}\sup_{y\in\mathcal{H}^{(0)}}\|f\|_{ \mathcal{E},\ell,t,s,\varphi(y)}\] \[=n^{1/2}\sup_{x\in\mathcal{G}^{(0)}}\|f\|_{\mathcal{E},\ell,t,s, x}=n^{1/2}\|f\|_{\mathcal{E},\ell,t,s}.\] As \(\hat{\varphi}\) is a \(*\)-homomorphism, we have \[\|(\hat{\varphi}f)^{*}\|_{\varphi^{*}\mathcal{E},\varphi^{*}\ell,t,s}=\|\hat{ \varphi}(f^{*})\|_{\varphi^{*}\mathcal{E},\varphi^{*}\ell,t,s}=n^{1/2}\|f^{*} \|_{\mathcal{E},\ell,t,s},\] and _(ii)_ follows. To prove _(iii)_, let \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\), \(y\in\mathcal{H}^{(0)}\), \(\xi\in\ell^{2}\mathcal{G}_{\varphi(y)}\), and \(\eta\in\mathcal{H}_{y}\) be given. We have \[\left[\lambda_{y}^{\varphi^{*}\rho}(\hat{\varphi}f)(\hat{\varphi }_{y}\xi)\right](\eta) =\sum_{\kappa\in\mathcal{H}_{y}}f\left(\rho(\varphi(\eta))\rho( \varphi(\kappa))^{-1}\right)\xi(\varphi(\kappa))\] \[=\sum_{\gamma\in\mathcal{G}_{\varphi(y)}}|\varphi^{-1}(\gamma) \cap\mathcal{H}_{y}|\cdot f(\rho(\varphi(\eta))\rho(\gamma)^{-1})\xi(\gamma)\] \[=n\sum_{\gamma\in\mathcal{G}_{\varphi(y)}}f(\rho(\varphi(\eta)) \rho(\gamma)^{-1})\xi(\gamma)\] \[=n[\lambda_{\varphi(y)}^{\rho}(f)\xi](\varphi(\eta))\] \[=n[\hat{\varphi}_{y}\lambda_{\varphi(y)}^{\rho}(f)\xi](\eta).\]

**Theorem 4.5**.: _Let \(\varphi:\mathcal{H}\to\mathcal{G}\) be a homomorphism of etale groupoids that is continuous, proper, and \(n\)-regular for some \(n\in\mathbb{N}\). Let \(\mathcal{E}\) be a twist over \(\mathcal{G}\).
If \(\ell\) is a length function on \(\mathcal{G}\), and \(\mathcal{H}\) has property \(\varphi^{*}\mathcal{E}\)-(RD) with respect to the length \(\varphi^{*}\ell\), then \(\mathcal{G}\) has property \(\mathcal{E}\)-(RD) with respect to \(\ell\)._

Proof.: Let \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\), \(x\in\mathcal{G}^{(0)}\), and \(\xi\in\ell^{2}\mathcal{G}_{x}\) with \(\|\xi\|_{\ell^{2}\mathcal{G}_{x}}=1\) be given. Let \(\rho:\mathcal{G}\to\mathcal{E}\) be a section for the bundle map \(\mathcal{E}\to\mathcal{G}\). Fix some \(y\in\mathcal{H}^{(0)}\) such that \(\varphi(y)=x\). Applying _(i)_ and _(iii)_ of the previous lemma, we see that \[\|\lambda_{x}^{\rho}(f)\xi\|_{\ell^{2}\mathcal{G}_{x}} =n^{-1/2}\|\hat{\varphi}_{y}\lambda_{x}^{\rho}(f)\xi\|_{\ell^{2} \mathcal{H}_{y}}\] \[=n^{-3/2}\|\lambda_{y}^{\varphi^{*}\rho}(\hat{\varphi}f)\hat{ \varphi}_{y}\xi\|_{\ell^{2}\mathcal{H}_{y}}\] \[\leq n^{-3/2}\|\lambda_{y}^{\varphi^{*}\rho}(\hat{\varphi}f)\|_{ \mathbb{B}(\ell^{2}\mathcal{H}_{y})}\|\hat{\varphi}_{y}\xi\|_{\ell^{2} \mathcal{H}_{y}}\] \[\leq n^{-1}\|\hat{\varphi}f\|_{C_{r}^{*}(\mathcal{H},\varphi^{*} \mathcal{E})}.\] Taking suprema, we obtain \(\|f\|_{C_{r}^{*}(\mathcal{G},\mathcal{E})}\leq n^{-1}\|\hat{\varphi}f\|_{C_{r }^{*}(\mathcal{H},\varphi^{*}\mathcal{E})}\). Assuming \(\mathcal{H}\) has property \(\varphi^{*}\mathcal{E}\)-(RD) with respect to \(\varphi^{*}\ell\), there exist constants \(C,t\geq 0\) such that \(\|h\|_{C_{r}^{*}(\mathcal{H},\varphi^{*}\mathcal{E})}\leq C\|h\|_{\varphi^{*} \mathcal{E},\varphi^{*}\ell,t}\) for all \(h\in\Sigma_{c}(\mathcal{H},\varphi^{*}\mathcal{E})\). Applying _(ii)_ from the previous lemma, the above estimate now yields \[\|f\|_{C_{r}^{*}(\mathcal{G},\mathcal{E})}\leq n^{-1}\|\hat{\varphi}f\|_{C_{r }^{*}(\mathcal{H},\varphi^{*}\mathcal{E})}\leq Cn^{-1}\|\hat{\varphi}f\|_{ \varphi^{*}\mathcal{E},\varphi^{*}\ell,t}=Cn^{-1/2}\|f\|_{\mathcal{E},\ell,t}.\] As \(f\in\Sigma_{c}(\mathcal{G},\mathcal{E})\) was arbitrary, \(\mathcal{G}\) has property \(\mathcal{E}\)-(RD) with respect to \(\ell\).

First, observe that Proposition 4.3 is a corollary of the above result: the projection map \(\pi:\Gamma\ltimes X\to\Gamma\) is \(1\)-regular for every continuous action, and proper when the space \(X\) is compact. Generalizing this observation, we now turn our attention to groupoid actions. Let \(\mathcal{G}\) be an etale groupoid, and suppose it admits a left action on the locally compact Hausdorff space \(Y\) with anchor map \(p:Y\to\mathcal{G}^{(0)}\). Let \(\mathcal{H}=\mathcal{G}\ltimes Y\), and let \(\pi:\mathcal{H}\to\mathcal{G}\) be the projection map: \(\pi(\gamma,y)=\gamma\). Then \(\pi\) is a continuous and \(1\)-regular groupoid homomorphism, so for the above result to apply we only need to supply conditions for \(\pi\) to be a proper map. This turns out to be the case when \(p\) is a finite cover. To prove this, we require a lemma.

**Lemma 4.6**.: _Let \(I\) be a directed set, and let \(I_{1},\ldots,I_{n}\) be subsets of \(I\). If the union \(\cup_{k=1}^{n}I_{k}\) is cofinal in \(I\), then there is some \(k\in\{1,\ldots,n\}\) such that \(I_{k}\) is cofinal in \(I\)._

Proof.: Write \(I_{0}=I_{1}\cup\cdots\cup I_{n}\). The result is trivial if \(n=1\), so assume \(n\geq 2\), and suppose that \(I_{1},\ldots,I_{n-1}\) are not cofinal in \(I\). Then for each \(k\in\{1,\ldots,n-1\}\), there is some \(i_{k}\in I\) such that whenever \(i\in I\) and \(i\geq i_{k}\) we must have \(i\notin I_{k}\). Fix some \(i_{0}\in I\) such that \(i_{0}\geq i_{k}\) for each \(k<n\).
Let \(i\in I\) be given. Since \(I\) is directed, there is some \(j\in I\) with \(j\geq i\) and \(j\geq i_{0}\), and since \(I_{0}\) is cofinal in \(I\), there is some \(i_{n}\in I_{0}\) with \(i_{n}\geq j\). For each \(k\in\{1,\ldots,n-1\}\), we have \(i_{n}\geq i_{k}\), so \(i_{n}\notin I_{k}\). This forces \(i_{n}\in I_{n}\), and it follows that \(I_{n}\) is cofinal in \(I\).

**Proposition 4.7**.: _Let \(\mathcal{G}\) be an etale groupoid, and suppose \(\mathcal{G}\) acts on a locally compact space \(X\) such that the anchor map \(p:X\to\mathcal{G}^{(0)}\) is a finite cover. Then the projection map \(\pi:\mathcal{G}\ltimes X\to\mathcal{G}\) is proper._

Proof.: Let \(K\subset\mathcal{G}\) be compact, and let \((\gamma_{i})_{i\in I}\) be a net in \(\pi^{-1}(K)\). For each \(i\in I\) write \(\gamma_{i}=(\sigma_{i},x_{i})\), where \(\sigma_{i}\in K\) and \(x_{i}\in X\). As \(K\) is compact, by passing to a subnet we may assume that the net \((\sigma_{i})_{i\in I}\) converges to some \(\sigma\in K\). Let us write \(p^{-1}(s(\sigma))=\{w_{1},\ldots,w_{n}\}\), and fix an open neighborhood \(V\) of \(s(\sigma)\) in \(\mathcal{G}^{(0)}\) that is evenly covered by the sets \(\{V_{1},\ldots,V_{n}\}\), where for each \(k\in\{1,\ldots,n\}\), \(V_{k}\subset X\) is an open neighborhood of \(w_{k}\). For \(1\leq k\leq n\) let us write \(I_{k}=\{i\in I:x_{i}\in V_{k}\}\), and \(I_{0}=I_{1}\sqcup\cdots\sqcup I_{n}\). As \(I_{0}=\{i\in I:p(x_{i})\in V\}\), and \(p(x_{i})=s(\sigma_{i})\to s(\sigma)\in V\), \(I_{0}\) is cofinal in \(I\). Lemma 4.6 now implies that there is some \(k\in\{1,\ldots,n\}\) such that \(I_{k}\) is cofinal in \(I\). Let \(U\subset X\) be an open neighborhood of \(w_{k}\). Then \(p(U\cap V_{k})\) is an open neighborhood of \(s(\sigma)\), so there is some \(i_{0}\in I\) such that \(p(x_{i})\in p(U\cap V_{k})\) whenever \(i\geq i_{0}\). Since \(I_{k}\) is cofinal in \(I\), we may assume that \(i_{0}\in I_{k}\). Thus \(x_{i}\in U\cap V_{k}\) whenever \(i\in I_{k}\) and \(i\geq i_{0}\). It follows that the net \((\gamma_{i})_{i\in I_{k}}\) converges to \((\sigma,w_{k})\in\pi^{-1}(K)\), and therefore \(\pi^{-1}(K)\) is compact.

**Corollary 4.8**.: _Let \(\mathcal{G}\) be an etale groupoid, and let \(\ell\) be a length function on \(\mathcal{G}\). Suppose that \(\mathcal{G}\) admits an action on a locally compact space \(Y\) and that the anchor map \(p:Y\to\mathcal{G}^{(0)}\) is a finite covering map. If \(\mathcal{G}\ltimes Y\) has property (RD) with respect to the length function \(\pi^{*}\ell\), then \(\mathcal{G}\) has property (RD) with respect to \(\ell\)._

As a last application, we consider blow ups. Let \(\mathcal{G}\) be an etale groupoid, let \(Y\) be a locally compact space, and let \(p:Y\to\mathcal{G}^{(0)}\) be a surjective local homeomorphism. We denote by \(\mathcal{G}[p]\) the blow up of \(\mathcal{G}\) by the map \(p\). This is, by definition, the groupoid whose underlying space is the fibered product \[Y\,{}_{p}{\times}_{r}\,\mathcal{G}\,{}_{s}{\times}_{p}\,Y=\{(w,\gamma,y)\in Y\times\mathcal{G}\times Y:p(w)=r(\gamma),\ s(\gamma)=p(y)\},\] with the obvious groupoid operations. With the subspace topology coming from the product topology on \(Y\times\mathcal{G}\times Y\), \(\mathcal{G}[p]\) is an etale groupoid with unit space homeomorphic to \(Y\). Given a local homeomorphism \(p:Y\to\mathcal{G}^{(0)}\), we define a map \(p_{0}:\mathcal{G}[p]\to\mathcal{G}\) by \(p_{0}(w,\gamma,y)=\gamma\). This is a continuous groupoid homomorphism. Moreover, we have a result analogous to Proposition 4.7 for finite covers.

**Proposition 4.9**.: _Let \(\mathcal{G}\) be an etale groupoid, and let \(p:Y\to\mathcal{G}^{(0)}\) be an \(n\)-fold covering map.
Then the map \(p_{0}:\mathcal{G}[p]\to\mathcal{G}\) is an \(n\)-regular and proper groupoid homomorphism._ **Proof.** It is clear that \(p_{0}\) is \(n\)-regular when \(p\) is an \(n\)-fold cover. The proof that \(p_{0}\) is a proper map is similar to the proof of Proposition 4.7, and will be omitted. **Corollary 4.10**.: _Let \(\mathcal{G}\) be an etale groupoid with a length function \(\ell\), and let \(p:Y\to\mathcal{G}^{(0)}\) be a finite covering map. If \(\mathcal{G}[p]\) has property (RD) with respect to the length induced by \(\ell\), then \(\mathcal{G}\) has (RD) with respect to \(\ell\)._
2307.04150
On Horizon Molecules and Entropy in Causal Sets
We review the different proposals and attempts to identify the ``horizon molecules" that would give a kinematical estimation for the black hole entropy in causal set theory. The proposals are presented according to their chronological appearance in scientific literature. The review is neither very technical nor merely descriptive; it is aimed to provide the reader with a lucid introduction to the necessary concepts and mathematical background, and give him or her a broad view on the subject, by focusing on the main technical and conceptual issues that summarize the progress made in the last two decades.
Djamel Dou
2023-07-09T10:58:01Z
http://arxiv.org/abs/2307.04150v1
# On Horizon Molecules and Entropy in Causal Sets

###### Abstract

We review the different proposals and attempts to identify the "horizon molecules" that would give a kinematical estimation of the black hole entropy in causal set theory. The proposals are presented according to their chronological appearance in the scientific literature. The review is neither very technical nor merely descriptive; it aims to provide the reader with a lucid introduction to the necessary concepts and mathematical background, and to give him or her a broad view of the subject, by focusing on the main technical and conceptual issues that summarize the progress made in the last two decades.

Keywords: Causal Sets, Quantum Gravity, Black Holes, Entropy, Horizon Molecules, Statistical Geometry

## 1 Introduction

Although the energy scale at which quantum effects on spacetime are expected to show up is well beyond the range of any foreseeable laboratory-based experiments, the theoretical consequences of quantum mechanics and general relativity have been major reasons for studying quantum gravity and searching for a more fundamental structure of spacetime. Most importantly, the discovery of the close relationship between certain laws of black hole physics and the ordinary laws of thermodynamics, on the one hand, and the discovery of the quantum-induced radiation by black holes (BH), on the other hand, appear to be two major pieces of a puzzle that fit together so perfectly that there can be little doubt that this "fit" is of deep significance [1; 2; 3; 4; 5]. Today, well into the fifth decade of its development, this merger remains intellectually stimulating and puzzling at once. One of the most puzzling aspects is the fact that a black hole possesses an entropy equal to one quarter of its horizon area expressed in units of the Planck area. In spite of five decades of intensive research, debates and genuine advances in different directions, especially within the context of string theory and 2+1 gravity [3; 6; 7; 8; 9] (see also [10] for loop quantum gravity results), it is fair to say that the physical origin of this entropy, and all the questions accompanying the thermodynamics of BHs, still lack satisfactory answers, and the debate is far from settled. In particular, it remains uncertain what "degrees of freedom" or microstates the entropy refers to, or what unavailable information it quantifies. Moreover, a well accepted criterion for selecting one approach out of the different approaches to quantum gravity, or to a fundamental theory of nature, is its success in solving the black hole thermodynamics puzzles in a satisfactory and general manner, in particular in revealing the statistical mechanics behind BH entropy. It is also generally believed that the puzzles of the BH are not independent, and that they will all be solved once one of them is really solved. For this and other reasons, providing a controllable calculation of BH entropy has been a prime target of all theories of, and proposals for, quantum gravity. Indeed, in the current climate, the role being played by BH thermodynamics in this connection looks more and more analogous to the role played historically by the thermodynamics of a box of gas and of black body radiation in revealing the underlying atomicity and quantum nature of everyday matter and radiation. This analogy can be brought out more clearly by recalling some facts about thermodynamics in the presence of event horizons.
A well accepted definition of entropy is as a measure of missing or "unavailable" information about a physical system, and from this point of view one would have to expect some amount of entropy to accompany an event horizon, since it is by definition an information hider par excellence. The BH entropy could therefore be understood as a consequence of having an event horizon which hides information about a region of spacetime, and here the notion of entanglement entropy comes into play. This originates from the well known observation that an observer outside the horizon has no access to the degrees of freedom behind the horizon. For this reason, the outside observer would describe the world with a reduced density matrix obtained by tracing out the inaccessible degrees of freedom behind the horizon. If the exterior modes and the interior modes are correlated ("entangled"), the resulting density operator is thermal even if the global state of the system is pure [11; 12]. Now, what modes or missing information the BH entropy refers to generally remains a mystery. Nevertheless, in the presence of a horizon, one should in principle associate to each quantum field an "entanglement entropy" that necessarily results from tracing out the interior modes of the field, given that these modes are necessarily correlated with the exterior ones. In the continuum, this entanglement entropy turns out to be infinite, at least when calculated for a free field on a fixed background spacetime. However, if one imposes a short distance cutoff on the field degrees of freedom, one obtains instead a finite entropy; and if the cutoff is chosen around the Planck length, then this entropy has the same order of magnitude as that of the horizon [13; 14]. Based on this appealing result, there have been many speculations attributing the black hole entropy to the sum of all the entanglement entropies of the fields in nature [5]. Whether the entanglement of quantum fields furnishes all of the entropy or only part of it, contributions of this type must be present, and any consistent theory must provide for them in its thermodynamic accounting. It is not, of course, the aim of this introduction to give an account of the developments in different directions that have surrounded the entanglement entropy in connection with black holes, and the reader is referred, for instance, to [15] and references therein. However, there is a growing consensus that entanglement entropy, and in general quantum entanglement and holography, will play a central role in revealing a finer structure of spacetime, possibly leading to a radical revision of our perception of the universe. At present, and without having at hand a viable and more fundamental theory of spacetime, it is hard to expect a resolution of the problem of the divergence of the entanglement entropy, which is very likely deeply linked to other issues of BH thermodynamics. Nevertheless, the finiteness of the BH entropy, on the one hand, and the behavior of the entanglement entropy in the continuum picture, on the other, seem to point directly towards an underlying discrete structure of spacetime. The situation actually appears to be similar to that of an ordinary box of gas, where we know that, fundamentally, the finiteness of the entropy rests on the finiteness of the number of molecules, and to a lesser extent on the discreteness of their quantum states.
Indeed, at temperatures high enough to avoid quantum degeneracy, the entropy is, up to a logarithmic factor, merely the number of molecules composing the gas. The similarity with the BH becomes evident when we remember that the picture of the horizon as composed of discrete constituents gives a good account of the entropy if we suppose that each such constituent occupies roughly one unit of Planck area and carries roughly one bit of entropy [2]. A proper statistical derivation along these lines would of course require a knowledge of the dynamics of these constituents. However, in analogy with the gas, one may still anticipate that the horizon entropy can be estimated by counting suitable discrete structures, analogs of the gas molecules, without referring directly to their dynamics. Clearly, this type of estimation can succeed only if well defined discrete entities can be identified which are available to be counted. Within a continuum theory, it is hard to think of such entities. However, in causal set theory [16], the elements of the causal set serve as "spacetime atoms", and one can ask whether these elements, or some related structures, are suited to play the role of "horizon molecules". The idea of considering a certain causal set structure as a potential candidate for the horizon molecules was first taken up in 1999, using causal links. This proposal was partially successful and gave promising results in two dimensions. It was subsequently followed by other proposals that refined it or looked for more suitable definitions of the horizon molecules that would work in higher spacetime dimensions. In this review, we go through the different horizon molecules proposals that emerged in the last two decades or so within the causal set approach to quantum gravity. The different proposals will be presented according to their chronological appearance in the literature. We therefore first focus on the causal links proposal that appeared in [17; 18], which historically was the first proposal and so far seems to be the simplest one; in spite of the fact that it has turned out to be unsuccessful beyond two dimensions, this proposal remains pedagogically useful and conceptually stimulating. As a consequence of the failure of the links proposal in higher dimensions, other horizon molecules proposals were put forward in subsequent years, aiming to succeed where the first proposal failed [19; 20; 21]. These later proposals will then be reviewed, and their main results will be reported and discussed. This review is not intended to be a fully comprehensive survey of the subject; however, we hope that the material presented herein will offer the beginning researcher, or the interested theoretical physicist in general, an accessible introduction, with enough background, tools and concepts to understand the above-mentioned efforts and developments to identify the horizon molecules in causal set theory, and will direct the reader to the still open issues.

## 2 Background and Terminology

In this section we give the essential mathematical definitions and terminology related to the causal set picture of spacetime. We shall limit ourselves to the necessary background relevant to this review. For a more comprehensive and extensive introduction to the causal set hypothesis we refer the reader to [22; 23]; for a recent and broad review with a fuller set of references see [24].
**Definition 1** A causal set (or a causet for short) \(\mathcal{C}\) is a set endowed with an order relation \(\prec\) satisfying the following axioms:

1. Acyclic (antisymmetric): \(\forall p,q\in\mathcal{C},\ p\prec q\) and \(q\prec p\Rightarrow p=q\),
2. Transitive: \(\forall p,q,r\in\mathcal{C},\ p\prec q\prec r\Rightarrow p\prec r\),
3. Reflexive: \(\forall p\in\mathcal{C},\ p\prec p\),
4. Locally finite: \(\forall p,q\in{\cal C},\ |I[p,q]|<\infty\), where \(I[p,q]={\rm Fut}(p)\cap{\rm Past}(q)\).

Here \(|.|\) stands for the cardinality of the set, and Fut and Past denote the future and the past of a given point, \[{\rm Fut}(p)=\{q\in{\cal C}|p\prec q,q\neq p\}\] \[{\rm Past}(p)=\{q\in{\cal C}|q\prec p,q\neq p\}\.\] Notice here that the reflexivity axiom is a matter of convention, and we could instead have used the irreflexive convention. \({\rm Fut}(p)\) and \({\rm Past}(p)\) are to be compared with the notions of chronological future and past, \(I^{+}(p)\) and \(I^{-}(p)\), in continuum Lorentzian geometry. \(I[p,q]\) is referred to as the causal or order interval, the analogue of the Alexandrov interval in the continuum. The discreteness of the causal set is encoded in the local finiteness axiom. The acyclicity axiom ensures that causets do not have closed causal loops. An important concept for the description of causets, and one that we shall frequently need, is the _Link_.

**Definition 2**: Let \(p\) and \(q\in{\cal C}\), \(p\prec q\), \(q\neq p\). If \(|I[p,q]|=0\), we say there is a link between \(p\) and \(q\) and write \(p\prec\cdot q\).

The knowledge of all links is equivalent to the knowledge of all relations among elements: \(p\prec q\) iff there are elements \(q_{1},q_{2},\ldots,q_{n}\) such that \(p\prec\cdot q_{1}\prec\cdot q_{2}\prec\cdot\ldots\prec\cdot q_{n}\prec\cdot q\). Therefore links are irreducible relations and in some sense are the building blocks of the causet.

**Definition 3**: Let \({\cal C}^{\prime}\subset{\cal C}\). \(p\in{\cal C}^{\prime}\) is said to be _maximal_ (resp. _minimal_) in \({\cal C}^{\prime}\) iff it is in the past (resp. future) of no other element in \({\cal C}^{\prime}\).

An extended notion of the maximality and minimality condition, which will be needed later, is the notion of maximal- and minimal-but-\(n\).

**Definition 4**: Let \({\cal C}^{\prime}\subset{\cal C}\). \(p\in{\cal C}^{\prime}\) is said to be _maximal-but-\(n\)_ (resp. _minimal-but-\(n\)_) in \({\cal C}^{\prime}\) iff it is in the past (resp. future) of exactly \(n\) elements in \({\cal C}^{\prime}\).

The basic hypothesis of the causal set approach to quantum gravity is that _"spacetime, ultimately, is discrete and its underlying structure is that of a locally finite, partially ordered set which continues to make sense even when the standard geometrical picture ceases to do so"_. The macroscopic spacetime continuum we experience must be recovered as an approximation to the causet. The causal set proposal can roughly be summarized in the following two points:

1. Quantum Gravity is a quantum theory of causal sets.
2. A continuum spacetime \(({\cal M},g)\) is an approximation of an underlying causal set \(C\sim({\cal M},g)\), where (a) Order \(\sim\) Causal Order, (b) Number \(\sim\) Spacetime Volume.

Point or step (2) is not to be viewed as independent of step (1). Actually, the quantum theory of causal sets should dictate how the continuum picture emerges as an approximation, and this could ultimately involve a more sophisticated notion of approximation. For instance, in view of the fact that not all causets admit a realization as spacetimes of a given dimension while respecting conditions (2a) and (2b), the process by which the continuum 4-d spacetime picture, or that of higher dimensional spacetimes with compactified extra dimensions, is reached may involve some sort of coarse-graining, in which the manifold picture would be a scale dependent approximation of the causal set. However, in the absence of a quantum dynamics of causets, a systematic way of defining a coarse-graining that would automatically fit our expectations is yet to be discovered. Nevertheless, we may take point (2) as given and use it as a stepping stone to investigate possible kinematical consequences of the causet approach. In short, and without expanding too much around this point, the intuitive idea at work here is that of a _faithful embedding_, which we define below.
For instance, in view of the fact that not all causets admit a realization as spacetimes with a given dimension while respecting conditions (2a) and (2b), the process by which the continuum 4-d spacetime picture, or that of higher dimensional spacetimes with compactified extra-dimensions, is reached may involve some sort of coarse-graining in which the manifold picture would be a scale dependent approximation of the causal set. However, in the absence of a quantum dynamics of causet, a systematic way of defining a coarse-graining that would fit automatically our expectations is yet to be discovered. Nevertheless, we may use point (2) as a stepping stone (given) to investigate possible kinematical consequences of the causet approach. In short and without expanding too much around this point, the intuitive idea at work here is that of a _faithful embedding_ which we define below. **Definition 5** If \((\mathcal{M},g)\) is a \(d\)-dimensional Lorentzian manifold and \(\mathcal{C}\) a causet, then a faithful embedding of \(\mathcal{C}\) into \(\mathcal{M}\) is an injection map \(f:\mathcal{C}\hookrightarrow\mathcal{M}\) of the causet into the manifold that satisfies the following requirements: 1. The causal relations induced by the embedding agree with those of \(\mathcal{C}\) itself, i.e. \(x\prec y\Leftrightarrow f(x)\in J^{-}(f(y))\) where \(J^{-}(p)\) stands for the causal past of \(p\) in \(\mathcal{M}\); 2. The embedded points are distributed uniformly at density \(\varrho_{c}=l_{c}^{-d}\) with respect to the spacetime volume measure of \((\mathcal{M},g)\). 3. The characteristic length over which the geometry varies appreciably is everywhere much greater than the mean spacing between the embedded points. \(l_{c}\) is referred to as the discreteness scale. When these conditions are satisfied, the spacetime \((\mathcal{M},g)\) is said to be a continuum approximation to \(\mathcal{C}\) and we write \(\mathcal{C}\sim(\mathcal{M},g)\). To ensure covariance the above embedding is realized by randomly sprinkling in points until the required density is reached. Therefore from the point of view of \(\mathcal{M}\) the causet resembles a "random lattice", e.g "a regular" lattice cannot do the job since it is not uniform in all frames or coordinate systems. A natural choice for obtaining or creating a faithfully embedded causet is via a Poisson point process; under which the probability to find \(n\) elements in a spacetime region of volume \(V\) is given by \[(\varrho_{c}V)^{n}\frac{e^{-\varrho_{c}V}}{n!}. \tag{1}\] This makes \(f(\mathcal{C})\) a random causet and thereby any function \(F:\mathcal{C}\to\mathbb{R}\) is a random variable. For more detailed discussion of the issue of faithful embedding and the probabilistic nature of the process we refer the reader to [24] and references therein. ## 3 Horizons molecules as causal links As discussed in the introduction, the expectation is that the BH entropy can be understood as entanglement in a sufficiently generalized sense, and we may hope to estimate its leading behavior by counting suitable discrete structures that measure the potential entanglement in some way between in-outside discrete structures. Moreover, and owing to the fact that the entropy essentially measures the horizon area in Planck units, the problem is reduced to coming up with this measure in the causal set picture. It is worthy of note here that it seems far from obvious that such structures must exist. 
## 3 Horizon molecules as causal links

As discussed in the introduction, the expectation is that the BH entropy can be understood as entanglement in a sufficiently generalized sense, and we may hope to estimate its leading behavior by counting suitable discrete structures that in some way measure the potential entanglement between the inside and outside discrete structures. Moreover, owing to the fact that the entropy essentially measures the horizon area in Planck units, the problem is reduced to coming up with this measure in the causal set picture. It is worthy of note here that it seems far from obvious that such structures must exist. If they do, then they provide a relatively simple order theoretic measure of the area of a cross section of a null surface, and, unlike what one's Euclidean intuition might suggest, it is known that such measures are not easy to come by. For example, no one knows such a measure of spacelike distance between two sprinkled points that works in general, though some progress has been made in Minkowski spacetime [25]. It follows from the above discussion that a natural and the simplest candidate for the structure we seek is a _link_ crossing the horizon. Indeed, we may think heuristically of "information flowing along links" and producing entanglement when it flows across the horizon during the course of the causet's growth (or "time development"). Since links are irreducible causal relations (in some sense the building blocks of the causet), it seems natural that by counting links between elements that lie outside the horizon and elements that lie inside, one would measure the degree of entanglement between the two regions. Equally, it seems natural that the number of such causal links, if supplemented with extra conditions, might turn out to be proportional to the horizon area and play the role of the horizon molecules. In what follows we discuss in some detail the links proposal for horizon molecules and its applications in different 1+1 geometrical setups.

### The general Setup

Let us consider a causet \(\mathcal{C}\) obtained via Poisson random sprinkling in a black hole background \(\mathcal{M}\) with density \(\varrho_{c}\), so that this causal set is faithfully embeddable in this geometrical background by definition. Let \({\cal H}\) be a BH horizon and let \(\Sigma\) be an achronal hypersurface intersecting the horizon, Figure 1. The goal is to come up with a measure of the area of the resulting cross section between \({\cal H}\) and \(\Sigma\), which in turn would measure the horizon entropy and define _horizon molecules_. A natural and intuitive candidate for such molecules is to take them to be made of pairs of points \((p,q)\), with \(p\) lying outside the black hole and to the past of \(\Sigma\), while \(q\) is inside the black hole and to the future of \(\Sigma\), and with \(p\prec\cdot q\), i.e. the relation \(p\prec q\) is a link. If no further conditions are imposed on \(p\) and \(q\), the expected number of such links can easily be shown to diverge. To see what conditions must be imposed on the pairs \((p,q)\), let us remember that, intuitively, what we are trying to estimate is not the total sum of all "lost information" but only that corresponding "to a given time", meaning in the vicinity of the given hypersurface \(\Sigma\). Hence, to associate the same causal link with more than one hypersurface would be to "overcount" it in forming our estimate, and it is this overcounting that seems to be the source of the above mentioned divergence. Therefore, further conditions need to be imposed to give a definition of the horizon molecules which is truly proper to \(\Sigma\) rather than to some earlier or later hypersurface.

Figure 1: A typical geometrical setting showing a typical causal link crossing the horizon.

Several possibilities suggest themselves for this purpose, but none seems to be clearly best, as the end result (the leading order) will be shown to be insensitive to which choice one makes. Below we pick a specific choice or definition of horizon molecules, which will be referred to as the "_causal links proposal_", and the general issue will be discussed further in subsection 3.5.
Actually, working explicitly through this particular choice, and seeing its success in 1+1 dimensions and its failure, due to an IR divergence, in higher dimensions, will be instructive and will help the reader appreciate the motivations behind the redefinitions of the horizon molecules that subsequently departed from the original links proposal.

**The causal links proposal (Dou-Sorkin 1999)**: A horizon molecule with respect to a given hypersurface \(\Sigma\) is a pair \((p,q)\) satisfying the following conditions:

1. \(p\in I^{-}(\Sigma)\cap I^{-}({\cal H})\),
2. \(q\in I^{+}(\Sigma)\cap I^{+}({\cal H})\),
3. \(|I[p,q]|=0\), i.e. \(p\prec\cdot q\) is a link,
4. \(p\) is maximal in \(I^{-}(\Sigma)\cap I^{-}({\cal H})\) and \(q\) is minimal in \(I^{+}({\cal H})\).

The \(4^{th}\) condition may seem asymmetric, as symmetric Max and Min conditions on \(p\) and \(q\) would appear more natural. The reason we do not impose a similar condition on \(q\) is that this would give zero in the null hypersurface case, whereas the result should agree for null and spacelike hypersurfaces if both intersect the horizon at the same time; moreover, for stationary black holes the results should agree in all cases. Before we move on, we draw the reader's attention to the fact that throughout this section and the next one, \(p\) will stand for points in \(I^{-}(\Sigma)\cap I^{-}({\cal H})\) and \(q\) for the ones in \(I^{+}(\Sigma)\cap I^{+}({\cal H})\). Let us now see how to count the expected number of these horizon molecules by reducing it to the calculation of an integral over the manifold \({\cal M}\). Remember that the probability of finding or sprinkling \(n\) points in some region of spacetime, \({\cal R}\), is given by the Poisson distribution \[P(n,{\cal R})=\frac{(\varrho_{c}{\rm vol}({\cal R}))^{n}}{n!}e^{-\varrho_{c}{ \rm vol}({\cal R})}\,\] where \({\rm vol}({\cal R})\) is the spacetime volume of \({\cal R}\). Consider first an infinitesimal region \(\Delta{\cal R}\); the probability of sprinkling a single point in it is, to leading order, \[P(1,\Delta{\cal R})\approx\varrho_{c}{\rm vol}(\Delta{\cal R})\equiv\varrho_{c} \Delta V. \tag{3.1}\] Consider now two infinitesimal regions \(\Delta{\cal R}_{p}\subset I^{-}(\Sigma)\cap I^{-}({\cal H})\) and \(\Delta{\cal R}_{q}\subset I^{+}(\Sigma)\cap I^{+}({\cal H})\). The probability of having a pair of points \((p,q)\), with \(p\in I^{-}(\Sigma)\cap I^{-}({\cal H})\) and \(q\in I^{+}(\Sigma)\cap I^{+}({\cal H})\), sprinkled in \(\Delta{\cal R}_{p}\) and \(\Delta{\cal R}_{q}\) respectively, is given by \[P(p,q|\Delta{\cal R}_{p},\Delta{\cal R}_{q})=\varrho_{c}\Delta V_{p}\varrho_{c }\Delta V_{q}. \tag{3.2}\] If we further require the relation between \(p\) and \(q\) to be a link, then the Alexandrov interval \(A(p,q)\) between \(p\) and \(q\) must contain no sprinkled point, and therefore the probability becomes \[P(p\prec\!\!\cdot q|\Delta{\cal R}_{p},\Delta{\cal R}_{q})=P(0,\mbox{vol}(A(p, q)))\,\varrho_{c}^{2}\Delta V_{p}\Delta V_{q}=\varrho_{c}^{2}e^{-\varrho_{c}\mbox{ vol}(A(p,q))}\Delta V_{p}\Delta V_{q}. \tag{3.3}\] In addition to the link condition, the Max and Min conditions must be imposed on \(p\) and \(q\). The Max and Min conditions are just statements about an extra region in \({\cal M}\) being empty, with no sprinkled points.
If we denote by \({\cal R}(p,q)\) the region resulting from the union of \(A(p,q)\), \(I^{+}(p)\cap I^{-}(\Sigma)\cap I^{-}({\cal H})\) and \(I^{-}(q)\cap I^{+}({\cal H})\), the probability for the above link to become a horizon molecule reduces to \[P({\bf H}(p,q);\Delta{\cal R}_{p},\Delta{\cal R}_{q})=\varrho_{c}^{2}e^{- \varrho_{c}V(p,q)}\Delta V_{p}\Delta V_{q}\, \tag{3.4}\] where \(V(p,q)=\mbox{vol}({\cal R}(p,q))\). To count the expected number of horizon molecules, we remember that the existence of a horizon molecule is a random variable generated by a function whose value is 1 if the horizon molecule conditions are fulfilled and 0 otherwise. With this in mind, it follows that the expected number of horizon molecules is obtained by summing (3.4) over all \(p\in I^{-}(\Sigma)\cap I^{-}({\cal H})\) and \(q\in I^{+}(\Sigma)\cap I^{+}({\cal H})\) in the limit where \(\Delta V_{p}\) and \(\Delta V_{q}\) go to zero. In this limit the sums are replaced by integrals over the domains of \(p\) and \(q\), and we obtain the following final expression for the expected number of horizon molecules \[<{\bf H}_{link}>=\varrho_{c}^{2}\int_{I^{-}(\Sigma)\cap I^{-}({\cal H})}dV_{ p}\int_{I^{+}(\Sigma)\cap I^{+}({\cal H})}dV_{q}\ e^{-\varrho_{c}V(p,q)}. \tag{3.5}\] For a more systematic derivation of the above integral formula see [19]. For horizon molecules as such to be successful, one has to show that in the limit of large density, i.e. when \(l_{c}\) is much smaller than the geometrical length scales of the setting, \(<{\bf H}_{link}>\) has the asymptotic form \[\varrho_{c}^{-\frac{d-2}{d}}<{\bf H}_{link}>=a^{(d)}\int_{\cal J}dV_{\cal J}+ \cdots\, \tag{3.6}\] where the dots refer to terms vanishing in the continuum limit, \({\cal J}:=\Sigma\cap{\cal H}\), and \(dV_{\cal J}\) is the surface measure on \({\cal J}\). Here \(a^{(d)}\) is a constant that depends on the dimension of the spacetime but, in principle, not on the nature of \(\Sigma\), null or spacelike. In two dimensions the leading term in \(<{\bf H}_{link}>\) should be just a constant.

### Horizon molecules and the area law in 2-dimensions

Ideally, one would have used (3.5) to evaluate the expected number of horizon molecules, \(<{\bf H}_{link}>\), in a full four dimensional BH background, e.g. a Schwarzschild BH; however, historically, and for technical reasons, a simplified version was first worked out. This consisted in considering a "dimensionally reduced" two dimensional metric instead of the true four dimensional one. The hope was twofold: first, it would be a warm up exercise for a more realistic four dimensional BH; second, the establishment of the area law in 2-d models would give strong evidence for the validity of this proposal in the full four dimensional case. Stated differently, the four-dimensional answer would differ from the two-dimensional one only by a fixed proportionality coefficient of order one, together with a factor of the horizon area. Now, although the above defined horizon molecules proposal did not work beyond 2-d, in contrast to what had first been hoped, due to IR divergences, the establishment of the area law in 2-d using the above defined horizon molecules makes the calculation worth discussing.
Besides this obvious reason, it will be seen that in \(1+1\) the resulting expected number of links seems to exhibit some interesting features: a sort of universality, giving exactly the same answer for two different geometrical backgrounds, in equilibrium and far from equilibrium, and remaining finite in the strict continuum limit, \(\varrho_{c}\rightarrow\infty\). In the sequel two cases will explicitly be worked out, a 2-d reduced Schwarzschild geometry and a collapsing null shell. We shall set \(\varrho_{c}=1\) in all 2-d models discussed in this section, because the leading term is a dimensionless constant and the subleading ones are easy to express and control in these units.

### An equilibrium black hole: 2-d reduced model

Consider a dimensionally reduced Schwarzschild spacetime obtained from the realistic 4-dimensional BH spacetime, outside a collapsing spherically symmetric star, by identifying each 2-sphere \(S^{2}\) to a point. The resulting two dimensional spacetime has exactly the same causal structure as the S-sector of the 4-dimensional one. The Penrose diagram for this spacetime is depicted in Figure 2. For simplicity the presence of the collapse has been ignored; this of course will not change the argument, since the details of the collapse should be irrelevant; alternatively, one can choose the hypersurface to intersect the horizon far from the collapse, and the result will not be affected by the presence of the collapse. The line element of the resulting spacetime is obtained by omitting the angular coordinates from the four dimensional line element, namely \[ds^{2}=-\frac{4a^{3}}{r}e^{-r/a}dudv\, \tag{3.7}\] where \(a=2M\) is the radius of the BH and \(u\) and \(v\) are the usual Kruskal-Szekeres coordinates, with \(r\) defined implicitly by the equation \[uv=\left(1-\frac{r}{a}\right)e^{r/a}. \tag{3.8}\] The associated volume element is \[dV=\sqrt{-g}\,dudv=\frac{2a^{3}}{r}e^{-r/a}dudv. \tag{3.9}\] Our sign convention is such that \(u\sim t-r\), \(v\sim t+r\), and the horizon \({\cal H}\) coincides with \(u=0\). Let now \(\Sigma\) be an ingoing null hypersurface defined by the equation \(v=v_{0}\). The shaded region depicted in Figure 2 is the region \({\cal R}(p,q)\) with no sprinkled point; its volume \(V(p,q)\) can readily be evaluated using (3.8): \[V=a^{2}+r_{pq}^{2}-r_{pp}^{2}-r_{qq}^{2}\, \tag{3.10}\] where we have introduced the following notation \[u_{i}v_{j}=\left(1-\frac{r_{ij}}{a}\right)e^{r_{ij}/a}. \tag{3.11}\]

Figure 2: An equilibrium BH obtained from the real 4-d Schwarzschild BH by dimensional reduction, keeping only the radial section. The shaded region is required to be free from any sprinkled points and has volume \(V(p,q)\).

Let us note that in two dimensions, and for a null \(\Sigma\), the maximality condition on \(p\) is actually redundant and ensured by the link condition, but it would be needed with a spacelike \(\Sigma\). Using (3.10) and (3.11), the expected number of horizon molecules is given by \[<{\bf H}_{link}>=(2a^{3})^{2}\int_{0}^{v_{0}}dv_{p}\int_{-\infty}^{0}du_{p} \int_{v_{0}}^{\infty}dv_{q}\int_{0}^{1/v_{q}}du_{q}\frac{e^{-r_{pp}/a-r_{qq}/a} }{r_{pp}r_{qq}}\;e^{-V}. \tag{3.12}\]
A change of integration variables from \((u_{p},v_{p},u_{q},v_{q})\) to \((r_{pp},r(u_{p},v_{0})\equiv r_{p0},r_{pq},r_{qq})\), followed by the notational substitutions \(x=r_{pq}\), \(y=r_{p0}\), \(z=r_{pp}\), reduces \(<{\bf H}_{link}>\) to the form \[<{\bf H}_{link}>=4\,I(a)\,J(a)\,\] where \[I(a)=\int_{a}^{\infty}dx\frac{x}{x-a}e^{-x^{2}}\int_{a}^{x}dy\frac{y}{y-a} \int_{a}^{y}e^{z^{2}}dz\, \tag{3.13}\] and \[J(a)=e^{-a^{2}}\int_{0}^{a}e^{r_{qq}^{2}}dr_{qq}. \tag{3.14}\] It is worth noting here that the initial explicit dependence of \(<{\bf H}_{link}>\) on \(v_{0}\) has disappeared, reflecting the stationarity of the black hole. Now, inasmuch as comparison with the Bekenstein-Hawking entropy is meaningful only for macroscopic black holes, it is natural to assume that \(a\gg 1\), and under this condition \(I(a)\) can be shown to have the following asymptotic behavior [17]: \[I(a)=\frac{\pi^{2}}{12}\;a+{\cal O}\left(\frac{1}{a}\right)\.\] On the other hand, integrating by parts shows that \(\int_{0}^{a}e^{r^{2}}dr=\frac{e^{a^{2}}}{2a}\left(1+{\cal O}(a^{-2})\right)\), so that \[J(a)=\frac{1}{2a}+{\cal O}\left(\frac{1}{a^{3}}\right)\.\] Putting everything together, we end up with \[<{\bf H}_{link}>=\frac{\pi^{2}}{6}+{\cal O}\left(\frac{1}{a^{2}}\right). \tag{3.15}\] As the intersection of \(\Sigma\) and \({\cal H}\) in two dimensions is just a point, the area law, if finite, should naturally turn out to be a pure number; therefore (3.15), i.e. the expected number of horizon molecules, is proportional to the area of the horizon in \(1+1\). Some remarks about the above derivation of the area law in 2-d using this horizon molecules proposal are in order. The first remark concerns the locations of the pairs forming the molecules that give the dominant contribution to \(<{\bf H}_{link}>\). It is easy to see that the dominant contribution to the integral \(J(a)\) plainly comes from \(r_{qq}\approx a\); but since \(r_{qq}\) is the radial coordinate \(r\) of the sprinkled point \(q\), and since \(r=a\) is the horizon, this implies that \(q\) resides near the horizon. Similarly, an inspection of \(I(a)\) shows that its dominant contribution likewise comes from \(z\approx y\approx a\), which, since \(z=r_{pp}\) and \(y=r_{p0}\), implies in turn that the sprinkled point \(p\) resides near the horizon as well [17]. Consequently, this counting can be said to be controlled by the near horizon geometry. It should be noted too that from the unboundedness of the region \(I^{+}(\Sigma)\cap I^{+}({\cal H})\) and the finiteness of \(<{\bf H}_{link}>\), we can infer that points \(q\) sitting arbitrarily close to the horizon but far from \(\Sigma\cap{\cal H}\) cannot continue to contribute indefinitely to \(<{\bf H}_{link}>\). Moreover, the fact that \(<{\bf H}_{link}>\) turns out to be just a pure number strongly suggests that the pairs which give the dominant contribution are not only residing near the horizon but are hovering near \(\Sigma\cap{\cal H}\) too. The asymptotic result (3.15) is, moreover, simple enough to be checked numerically, as sketched below.
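As a sanity check on (3.13)-(3.15), the following Python sketch (our own illustration, assuming SciPy is available; it is not taken from [17]) evaluates \(4I(a)J(a)\) by nested quadrature after rewriting the integrands so that the exponentials appear only in the bounded combinations \(e^{z^{2}-y^{2}}\), \(e^{y^{2}-x^{2}}\) and \(e^{r^{2}-a^{2}}\); the apparent singularities at \(y=a\) and \(x=a\) cancel, as explained above.

```python
import numpy as np
from scipy.integrate import quad

def I_of_a(a):
    """I(a) from (3.13), rewritten in overflow-safe form:
    I(a) = int_a^inf dx x/(x-a) int_a^x dy y/(y-a) e^{y^2-x^2} h(y),
    with h(y) = int_a^y e^{z^2-y^2} dz.  The 1/(y-a) and 1/(x-a) poles
    are cancelled by h(y) ~ (y-a) and by the middle integral ~ (x-a)."""
    h = lambda y: quad(lambda z: np.exp(z * z - y * y), a, y)[0]
    g = lambda x: quad(lambda y: y / (y - a) * np.exp(y * y - x * x) * h(y),
                       a, x, limit=100)[0]
    return quad(lambda x: x / (x - a) * g(x), a, np.inf, limit=100)[0]

def J_of_a(a):
    """J(a) from (3.14), written as int_0^a e^{r^2 - a^2} dr."""
    return quad(lambda r: np.exp(r * r - a * a), 0.0, a)[0]

for a in (2.0, 4.0, 8.0):
    print(a, 4.0 * I_of_a(a) * J_of_a(a))   # tends to pi^2/6 as a grows

print(np.pi ** 2 / 6)                       # ~ 1.6449
```

The computation is slow but transparent, since no asymptotic input is used: as \(a\) grows, the printed values approach \(\pi^{2}/6\approx 1.6449\), in line with (3.15).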
It is interesting to look at this result and its features from another point of view. If we inspect the integral \(I(a)\), we note that what makes the near-horizon molecules special is the vanishing of the denominators in \(I(a)\) when the dummy integration variables \(x\) and \(y\) tend to \(a\). It is this divergence which makes the horizon such a strong source for the links, and here we may be reminded of the analogous fact that the strong redshift in the vicinity of the horizon allows modes of arbitrarily high (local) frequency to contribute to the entanglement entropy without influencing the energy as seen from infinity. Notice also that the clustering of \(p\) and \(q\) near the horizon is not simply a consequence of the maximality and minimality conditions we imposed on them. For instance, pairs \((p,q)\) sitting arbitrarily close to the hypersurface \(\Sigma\), with \(q\) arbitrarily close to the horizon, still do not contribute to the leading term in \(I(a)\) if \(p\) is far from the horizon, namely with coordinate \(|u_{p}|\gg 1\).

### A black hole far from equilibrium: 2-d reduced collapsing null matter

We now turn to another case which, though still spherically symmetric, is very far from equilibrium, namely that of a spherically collapsing null shell of matter with stress energy tensor given by \[T_{vv}=\frac{M\delta(b-v)}{4\pi r^{2}}\,\] with the other components identically zero. The collapsing shell forms a Schwarzschild BH. The Penrose diagram for the resulting spacetime (after the dimensional reduction \(S^{2}\to\) point) is shown in Figure 3. Let the shell sweep out the world sheet \(v=b\), and let us choose for our hypersurface \(\Sigma\) a second ingoing null surface defined by \(v=a\), with \(a<b\), so that \(\Sigma\) lies wholly in the flat region. Here \(a\) is of course different from the \(a\) defined in the Schwarzschild case; \(u\) and \(v\) are null coordinates, chosen so that the horizon first forms at \(u=v=0\) and normalized for convenience such that the line element in the flat region is given by \[ds^{2}=-2dudv+r^{2}d\Omega^{2}\.\] Since our interest is again in macroscopic black holes, we will assume as before that the horizon radius at \(\Sigma\cap{\cal H}\) is large in units such that \(\varrho_{c}=1\), which amounts to \(a{\gg}1\); and to simplify matters further, we will also restrict ourselves to a time well before the infalling matter arrives (as judged in the center of mass frame). One thus has the double inequality \(b{\gg}a{\gg}1\). Once again, the calculation will be performed for the two dimensional radial section rather than the full four dimensional spacetime. Since we are assuming that the infalling matter is far to the future of the hypersurface \(\Sigma\), points \(q\) sprinkled into that region should not contribute significantly when our minimality and link conditions are taken into account. For this reason, we shall, for convenience, restrict the counting to pairs \((p,q)\) with \(v_{q}<b\).

Figure 3: A non-stationary BH. The region to the past of the world line of the infalling matter is flat space with an expanding event horizon, whereas the one to its future is the Schwarzschild region. We have depicted an extra null hypersurface \(\Sigma_{0}\) for later reference.

Using the definition of the horizon molecules we introduced above, one obtains for the expected number of horizon molecules \[<{\bf H}_{link}>=\int_{a}^{b}dv_{q}\int_{0}^{v_{q}}du_{q}\int_{- \infty}^{0}du_{p}\int_{0}^{a}dv_{p}e^{-V}\, \tag{3.16}\] where \(V=u_{q}v_{q}-u_{p}(v_{q}-v_{p})-u_{q}^{2}/2\) is the volume of the shaded region in Figure 3. Note here that the contribution of the points \(p\) with \(v_{p}<0\), i.e. to the past of \(\Sigma_{0}\), has been ignored; we will return to its justification below.
The integration over \(v_{p}\) and \(u_{p}\) is easy to perform; after the change of variables \(x=v_{q}\), \(y=v_{q}-u_{q}\), we end up with \[<{\bf H}_{link}>=\int_{a}^{b}\ln\left(\frac{x}{x-a}\right)e^{-x^{2}/2}dx\int _{0}^{x}e^{y^{2}/2}dy. \tag{3.17}\] At this stage it is not difficult to show that the leading behavior of this integral for large \(a\) is given by \[<{\bf H}_{link}>=\frac{\pi^{2}}{6}-l\left(\frac{a}{b}\right)+ \mathcal{O}(1/a^{2})\, \tag{3.18}\] where \(l(x)\equiv\sum_{k=1}^{\infty}x^{k}/k^{2}\) is a convergent series that vanishes in the limit \(x\to 0\). Originally the correction to the leading term in (3.18) was stated to be of order \(1/a\) in [17] and [18], but a careful repetition of the calculation due to Marr showed that the correction is of order \(1/a^{2}\) [19]. Since we have assumed that \(a{\ll}b\), we can write this more simply as \[<{\bf H}_{link}>=\frac{\pi^{2}}{6}+\mathcal{O}(a/b)+\mathcal{O}(1/a^{2}). \tag{3.19}\] Notice that the presence of a negative contribution like \(-l(a/b)\) was to be expected, since we have omitted to count molecules that extend past the shell into the Schwarzschild region. For \(\Sigma\) near the shell, one obviously should not neglect such links, and this counting is incomplete. However, if the collapse is pushed far away from \(\Sigma\), in particular to future infinity, we can safely restrict the counting to the flat region without worrying about the presence of the Schwarzschild region, thereby reducing the problem (even in higher dimensions) to a counting in a flat background geometry. Now, what is striking about the above result is the occurrence of the same numerical coefficient \(\pi^{2}/6\) in both (3.19) and (3.15). This agreement seems at first sight to furnish a nontrivial consistency check of the suggestion that one can attribute the horizon entropy to the horizon molecules made of "causal links" crossing it. As mentioned above, in writing (3.18) we implicitly ignored the contribution of pairs \((p,q)\) with negative \(v_{p}\). No justification for this was given in [17] or in [18]. However, this point was raised and briefly discussed by Marr in [19]. It is easy to write an integral formula for this type of contribution, and perhaps compute it; however, it is not difficult to argue that it should not be considered as part of the horizon molecules associated with \(\Sigma\cap{\cal H}\). This kind of contribution counts the expected number of horizon molecules associated with a hypersurface \(\Sigma_{0}\), Figure 3, which does not intersect the horizon; in other words, such pairs occur before the horizon formation. Therefore they must be regarded as extraneous random statistical fluctuations and not as genuine horizon molecules associated with \(\Sigma\). Actually, if we remember that the geometrical setting we are using is a 2-d reduction of a 4-dimensional one, this extra contribution would turn out to be just of order one in a genuine 4-dimensional counting, and thus a negligible fluctuation around the mean value. Both the integral (3.17) and the asymptotics (3.18) are easy to check numerically, as sketched below.
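Concretely, the following Python sketch (again our own illustration, assuming SciPy) evaluates (3.17) directly by quadrature, with the inner integral rescaled to avoid overflow, and compares the result with the prediction (3.18); for, say, \(a=6\) and \(b=60\) the two agree within the expected \(\mathcal{O}(1/a^{2})\) accuracy.

```python
import numpy as np
from scipy.integrate import quad

def inner(x):
    """int_0^x exp((y^2 - x^2)/2) dy, in overflow-safe form.

    The integrand is concentrated in a layer of width ~1/x below y = x,
    so the range is restricted accordingly (truncation error < e^{-25})."""
    lo = np.sqrt(max(x * x - 50.0, 0.0))
    return quad(lambda y: np.exp((y * y - x * x) / 2.0), lo, x)[0]

def H_link(a, b):
    """Direct quadrature of (3.17); the log singularity at x = a is
    integrable and handled by adaptive subdivision."""
    return quad(lambda x: np.log(x / (x - a)) * inner(x), a, b, limit=400)[0]

def l_series(x, kmax=500):
    """l(x) = sum_{k>=1} x^k / k^2, the series appearing in (3.18)."""
    k = np.arange(1, kmax + 1)
    return float(np.sum(x ** k / k ** 2))

a, b = 6.0, 60.0
print(H_link(a, b))                      # direct evaluation of (3.17)
print(np.pi ** 2 / 6 - l_series(a / b))  # prediction (3.18), up to O(1/a^2)
```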
### On the Min/Max conditions

As we briefly discussed before, in picking the particular "Max/Min" conditions adopted in the definition of the causal links proposal, this choice did not seem unique or particularly sacred, and other variants were possible. Of course, one must be careful not to use something like "\(q\) minimal in \(I^{+}(\Sigma)\)", which would drive \(<{\bf H}_{link}>\) to zero in the limit of null \(\Sigma\), but this does not rule out, for example, a condition like "\(p\) maximal in \(I^{-}(\Sigma)\)". It turns out that there are at least two variants of the "Max/Min" condition that seem to be equivalent, as far as the leading term is concerned. These variants are

**Variant 1**: \(p\) is Max in \(I^{-}(\Sigma)\) and \(q\) is Min in \(I^{+}({\cal H})\cap I^{+}(\Sigma)\).

**Variant 2**: \(p\) is Max in \(I^{-}(\Sigma)\cap I^{-}({\cal H})\) and \(q\) is Min in \(I^{+}({\cal H})\cap I^{+}(\Sigma)\).

If we consider for instance the first variant, it is easy to show that the resulting expected number of horizon molecules has the same asymptotic behavior as the one resulting from the original causal links proposal [17], namely \[<{\bf H}_{link}>=\frac{\pi^{2}}{6}+O(1/a^{2})\.\] Thus, for this variant at least, one obtains exactly the same numerical answer as the original proposal we started with. As for the second variant, although it has not been worked out explicitly, we do not expect the slight change in the volume \(V(p,q)\) to alter the leading term.

Another related feature the links counting must have, if it is to yield the horizon area, is that, within reason, the expected number of horizon molecules should depend only on the intersection \({\cal H}\cap\Sigma\), and not on how the surface \(\Sigma\) is prolonged outside or (especially) inside the horizon \({\cal H}\). For example, one should get the same answer for both of the continuations shown in Figure 4. The case where the difference is confined to the interior black hole region is of particular significance for the entanglement interpretation of horizon entropy, since such a difference cannot, by definition, influence the effective density operator for the external portion of \(\Sigma\) (at least to the extent that unitary quantum field theory is a good guide). For instance, we note that the volume \(V(p,q)\) needed to ensure \(p\) maximal in \(I^{-}({\cal H})\cap I^{-}(\Sigma)\) and \(q\) minimal in \(I^{+}({\cal H})\) is the same for both \(\Sigma_{ext}\cup\sigma_{1}\) and \(\Sigma_{ext}\cup\sigma_{2}\); from this perspective the definition we have so far adopted seems to have an advantage over the other two variants, at least in the case of null \(\Sigma\). However, in view of the fact that the leading order is controlled by contributions coming from links residing near the horizon, we expect the different variants to have the same leading behavior no matter how \(\Sigma\) is prolonged inside or outside the horizon. Indeed, the issue of which Max/Min condition is favored cannot be settled nor properly discussed unless we settle the central issue of how to define the horizon molecules in a way that works in higher dimensions and for both types of hypersurfaces, null and spacelike.

### The spacelike hypersurface in the 2-d reduced case

So far the counting of the horizon molecules has been restricted to null hypersurfaces in 2-d reduced black hole geometries. However, no proposal can be considered fully successful, even in two dimensions, unless it correctly reproduces the same result for both null and spacelike hypersurfaces. In this section we look at this issue by discussing the previous links counting when \(\Sigma\) is a spacelike hypersurface crossing the horizon, under the same Max/Min conditions.
It is first intriguing to discuss one of the heuristic arguments that is sometimes invoked in this context to conclude that the null and spacelike countings should be expected to yield the same result.

Figure 4: Two continuations of a hypersurface to the interior region.

This argument generally goes as follows [18, 21]. Consider a one-parameter family of spacelike hypersurfaces \(\Sigma_{t}\) which continuously deform to a null hypersurface, \(\Sigma=\lim_{t\to\infty}\Sigma_{t}\). On the other hand, the region one sprinkles into, and hence the probability measure, is also continuous with respect to these deformations. Now, because all spacelike hypersurfaces give the same result, the null hypersurface \(\Sigma\), which can be cast as a limit of a sequence of spacelike hypersurfaces, should also give the same result. In the flat case one can equally invoke Lorentz invariance of the counting and the fact that any spacelike hypersurface must give the same result as any other related to it by a boost; in the limit of tilting, a spacelike line becomes null. Note that a similar argument can also be made in the Schwarzschild case, using the time-translation Killing vector instead of the boost Killing vector. Stated mathematically, one would expect the following limit to hold \[\lim_{t\to\infty}<{\bf H}(\Sigma_{t})>=<{\bf H}(\Sigma)>. \tag{3.20}\] In the above equation we do not of course require strict equality; it would be enough for it to hold in the limit \(l_{c}\to 0\), modulo some statistical deviations from the leading mean value. In [21] it was for instance argued that it is the non-commutativity of the two limits, \(t\to\infty\) and \(l_{c}\to 0\), which causes the above identity to fail, namely \[\lim_{l_{c}\to 0}\lim_{t\to\infty}l_{c}^{d-2}<{\bf H}(\Sigma_{t})>\neq\lim_{t\to\infty}\lim_{l_{c}\to 0}l_{c}^{d-2}<{\bf H}(\Sigma_{t})>. \tag{3.21}\] However, we shall see in the fifth section that in some horizon molecules countings the limit \(l_{c}\to 0\) is not at all required for the derivation of the area law, and nonetheless the null and spacelike hypersurfaces give two different results. Therefore, we conclude that the above heuristic argument invoking the non-commutativity of the two limits is at best not generally sustainable, as some countings could be inherently discontinuous and depend on the nature of the hypersurface crossing the horizon.

Let us now consider a spacelike \(\Sigma\) given by \(t=\frac{a}{2}\). To facilitate the discussion it is convenient to introduce a null hypersurface \(\Sigma^{\prime}\) defined by \(v=a\). Again, we shall push the collapse to future infinity and restrict the counting to the flat region, Figure 5. It is not difficult to see that one has to distinguish five cases, each having a different expression for the volume \(V(p,q)\). These cases are depicted in Figure 5. The contributions \(A1,A2,A3\) can easily be seen to be qualitatively of the same order of magnitude as the null contribution we already evaluated; thus they will just give constants of order one, but surely each is less than \(\frac{\pi^{2}}{6}\). Contributions of type \((B)\) and \((C)\) are different, and one cannot directly conclude that they are finite or of order one. For instance, contributions from pairs \((p,q)\) with \(u_{q}\to 0\) and \(t_{p}\to a/2\) could lead to IR divergences. However, it was explicitly shown in [17] that both contributions are finite and of order one.
Now, although it was possible to show that the expected number of such horizon molecules gives a constant of order one, the question whether the spacelike and null cases yield the same result has so far remained open, due to the analytical intractability of the integrals involved. What should be noted in this context is that the difficulty in settling this issue in two dimensions may not solely be of a technical nature, due to the intractability of the integrals; there could be another issue of a conceptual nature at work here. For example, in the non-equilibrium case and for a null hypersurface counting, we argued that contributions coming from the pairs \((p,q)\) with \(v_{p}<0\) are to be viewed as random fluctuations around the mean value, not genuinely associated with \(\Sigma\cap{\cal H}\) and not to be included in the counting, despite the fact that they are not zero; in higher dimensions they are easily seen to be negligible for a macroscopic horizon. For the same reasons we expect similar fluctuations to be present in the above different contributions; in particular, we do not expect arrangements of type-\((B)\) and \((C)\) to be fully associated with \(\Sigma\cap{\cal H}\). However, as will be shown in the next section, the present links counting fails to work beyond two dimensions due to IR divergences, and therefore pursuing this issue would be no more than a mathematical curiosity without physical guidance or relevance.

Figure 5: Different arrangements contributing in the spacelike case, according to the different volume expressions. For the arrangement type-A1 the point \(q\) does not necessarily lie to the future of \(\Sigma^{\prime}\); it could be to the past of it as well.

### The failure of the causal links proposal in higher dimensions

In view of the success and promising results of the causal links counting in two dimensional models, the natural step would of course be to try to apply it to a more realistic four dimensional black hole background. Any direct attempt to do this counting in the Schwarzschild geometry will inevitably encounter mathematical complications that are almost impossible to surmount. However, the results obtained previously in two dimensions enable us to transform the whole problem into a calculation in 4-dimensional (or \(d\)-dimensional) flat spacetime, using the collapsing null-shell model and pushing the collapse world-line to future infinity. One is therefore in principle entitled to consider a flat spacetime with the future light-cone of the origin being the horizon, take a hypersurface \(\Sigma\) (spacelike or null) intersecting the horizon, and compute the expected number of horizon molecules, Figure 6. Although working in flat spacetime drastically simplifies the problem, in \(d>2\) the calculation of \(<{\bf H}_{link}>\) is still complicated enough, and a much more elaborate technique is needed to do the counting explicitly, for both null and spacelike hypersurfaces. The calculation of the volumes needed to ensure the link and Max/Min conditions is lengthy, and it turns out that one has to distinguish many cases depending on the relative positions of the points \(p\) and \(q\), each case making its own contribution to \(<{\bf H}_{link}>\) [17]. For instance, for spacelike \(\Sigma\), with the exception of one contribution coming from an arrangement similar to type-A1 in two dimensions (Figure 5), which could be evaluated and was reported in [17; 18], the remaining arrangements turned out to be either very complicated or intractable.
Nonetheless, at least for one non-trivial arrangement the volume was computed exactly in [17]. The arrangement in question is of type-B, depicted in Figure 5 (its four dimensional analogue). For this particular arrangement it was later realized by the author that its corresponding contribution to \(<{\bf H}_{link}>\) diverges. It is unnecessary to give the details of this calculation, but it is not difficult to qualitatively understand the source of this IR divergence. Let us first take a step back and consider the arrangement type-B depicted in Figure 5, in its 1+1 version. The only potential source of IR divergences comes from the contributions of points \(q\) arbitrarily far away from the intersection point \(\Sigma\cap{\cal H}\); however, the \(e^{-V}\) term appearing in the integrand exponentially suppresses all contributions except those with \(p\) arbitrarily close to the intersection point and \(q\) bound to the horizon, which is not enough to produce any IR divergences. The situation in higher dimensions is quite different, and this can be grasped by considering the 2+1 case depicted in Figure 6, using the qualitative argument given in [19]. In Figure 6 the horizon light cone \({\cal H}\) is intersected by a spacelike hypersurface \(\Sigma\), \(t=a\) say. Consider a point \(p\) and let \(\dot{I}^{+}(p)\) be the boundary of its future light-cone. Unlike in 1+1, in 2+1 dimensions (or higher) the intersection of \(\dot{I}^{+}(p)\) with \({\cal H}\) is no longer a point, but rather a curve (a \(d-2\) dimensional surface in general). This adds a new degree of freedom and allows the existence of new links formed with points \(q\) asymptotically close to \(\dot{I}^{+}(p)\cap{\cal H}\) and arbitrarily far from \(\Sigma\cap{\cal H}\). In addition, and unlike in the 1+1 case, the point \(p\) is not at all required to be arbitrarily close to \(\Sigma\cap{\cal H}\) for the volume \(V\) to vanish; it is enough for it to be arbitrarily close to \(\Sigma\), anywhere far from the intersection of \({\cal H}\) and \(\Sigma\). For these distant pairs \((p,q)\) the interval between \(p\) and \(q\) remains small and is highly likely to be free from additional sprinkled points. For these reasons the factor \(e^{-V}\) is not enough to suppress the contributions of such pairs, as there is an infinite number of potential pairs \((p,q)\) with vanishing volume in the analytic limit. As a consequence, the expected number of causal links, no matter which Max/Min conditions are imposed, will diverge like \(\Lambda_{r}^{d-2}\) in \(d\) dimensions, with \(\Lambda_{r}\) an appropriate IR cutoff. This IR divergence is incurable within the causal links proposal, and a real departure from the links definition is therefore inevitable.

Figure 6: The red curve shows the intersection of \(\dot{I}^{+}(p)\) with the horizon; this curve extends to future infinity, and by considering sprinkled points \(q\) asymptotically approaching this curve, an arbitrarily large number of \(p\prec\!q\) links can be found.

## The triplet proposal

The failure of the causal links proposal beyond 1+1 soon led to different, modified causet structures as new candidates for the horizon molecules. We first note that it is almost obvious that the day cannot be saved by simple modifications of the links proposal, for instance by modifying the Max/Min conditions, as we have exhausted all acceptable variants. The first attempt to depart from the links structure was made by Marr [19]. This attempt was mainly based on a "triad" structure, or "triplet".
Although there are some hints that this modified horizon molecular structure may fail to cure all the IR divergences we observed in the causal links proposal in higher dimensions, we find the triplet proposal worthy of a brief discussion. We shall omit all technical details, as it is similar in spirit to the link counting, and only focus on the main results and their discussion. Let us start by noting that there is actually some suggestion that a certain type of triplet is naturally related to the kind of correlation responsible for entanglement entropy in a quantum field theory framework [26].

**\(\Lambda\)-Triplet (Marr 2007)**: A horizon molecule with respect to a given hypersurface \(\Sigma\) is a triplet \((p,q,r)\) satisfying the following conditions

1. \(p\in I^{-}(\Sigma)\cap I^{-}({\cal H})\),
2. \(q\in I^{+}(\Sigma)\cap I^{+}({\cal H})\),
3. \(r\in I^{-}(\Sigma)\cap I^{+}({\cal H})\),
4. \(|I[p,q]|=0\), \(|I[r,q]|=0\),
5. \(p\) is maximal in \(I^{-}(\Sigma)\cap I^{-}({\cal H})\), \(q\) is minimal-but-one in \(I^{+}({\cal H})\) and \(r\) is minimal in \(I^{-}(\Sigma)\cap I^{+}({\cal H})\).

Notice that the \(4^{th}\) condition automatically requires \(p\) and \(r\) to be causally unrelated, i.e. spacelike related. The condition that \(q\) be minimal-but-one in \(I^{+}({\cal H})\) is meant to ensure that the only point in \(I^{-}(q)\) is \(r\) (Figure 7). Marr first applied the \(\Lambda\)-triplet proposal to the collapsing shell model in 1+1 (pushing the collapsing shell to future infinity) and obtained an expected number of horizon molecules of order one; more precisely she obtained \[<{\bf H}_{\Lambda-triplet}>=\frac{\pi^{2}}{6}-1+{\cal O}((a/b)^{2}). \tag{4.1}\] Using a triplet instead of a simple link (doublet) has therefore reduced the expected number of horizon molecules by one. In contrast to the collapsing shell model in \(1+1\), the case of the 2-d reduced Schwarzschild BH turned out to be analytically intractable for the \(\Lambda\)-triplet.

As mentioned earlier, the underlying motivation that led to the departure from the link structure to the triplets was to kill off the contributions coming from \(p\)-\(q\) links generated by points \(q\) arbitrarily close to \({\cal H}\cap\dot{I}^{+}(p)\) but arbitrarily far from \({\cal H}\cap\Sigma\). However, it has been argued in [19] that although the introduction of a third element in the horizon molecule structure, i.e. \(r\), seems to cure this IR divergence, the \(\Lambda\)-triplet counting still suffers from another IR divergence, of course already present in the links proposal. This IR divergence arises from unsuppressed contributions coming now from points \(p\) asymptotically approaching \(\Sigma\cap\dot{I}^{-}(q)\) as they move further into the past, simultaneously keeping the \(p\)-\(q\) interval small; since \(r\) does not bound \(p\) away from \(\Sigma\) (they are spacelike related), any exponential suppression is avoided. The above qualitative argument seemingly rules out the \(\Lambda\)-triplet as a possible alternative candidate that would work in higher dimensions, and led Marr to consider other possible arrangements for the triplet. The guide, of course, was to cure the IR divergences which plagued the link counting and persisted in the \(\Lambda\)-triplet counting. To that end two different arrangements were considered in [19]: the \(z\)- and \(l\)-triplets.
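For concreteness, the conditions above translate directly into a combinatorial predicate on a sprinkled causet. Below is a schematic check in the same 2-d null-coordinate conventions as the earlier sprinkling sketch (\(I^{\mp}(\Sigma)\) given by \(v\lessgtr a\), \(I^{\mp}({\cal H})\) by \(u\lessgtr 0\)); the helper names are ours.

```python
# A schematic predicate for Marr's Lambda-triplet on a 2-d sprinkling,
# with Sigma: v = a and horizon H: u = 0. Arrays u, v hold the sprinkled
# points; p, q, r are indices into them. Helper names are ours.
import numpy as np

def prec(u, v, i, j):
    """Causal precedence in 2-d null coordinates."""
    return u[i] < u[j] and v[i] < v[j]

def link(u, v, i, j):
    """i precedes j with empty causal interval, i.e. |I[i, j]| = 0."""
    return prec(u, v, i, j) and not np.any(
        (u > u[i]) & (u < u[j]) & (v > v[i]) & (v < v[j]))

def is_lambda_triplet(u, v, p, q, r, a):
    past_S, fut_S = v < a, v > a      # I^-(Sigma), I^+(Sigma)
    past_H, fut_H = u < 0, u > 0      # I^-(H), I^+(H)
    if not (past_S[p] and past_H[p]       # condition 1
            and fut_S[q] and fut_H[q]     # condition 2
            and past_S[r] and fut_H[r]):  # condition 3
        return False
    if not (link(u, v, p, q) and link(u, v, r, q)):  # condition 4
        return False
    # condition 5: p maximal in I^-(Sigma) ∩ I^-(H),
    # r minimal in I^-(Sigma) ∩ I^+(H), q minimal-but-one in I^+(H)
    p_max = not np.any(past_S & past_H & (u > u[p]) & (v > v[p]))
    r_min = not np.any(past_S & fut_H & (u < u[r]) & (v < v[r]))
    pred_q = np.flatnonzero(fut_H & (u < u[q]) & (v < v[q]))
    return p_max and r_min and len(pred_q) == 1 and pred_q[0] == r
```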
The \(z\)-triplet is obtained from the definition of the \(\Lambda\)-triplet by keeping the first three conditions, moving \(r\) to the future of \(p\) to form a link with it, keeping its link relation with \(q\), so that the three points form a path or maximal chain, i.e. \(p\prec\!\cdot r\prec\!\cdot q\), and removing the minimality condition on \(r\) from the \(5^{th}\) condition (Figure 7). As for the \(l\)-triplet, it is a re-arrangement of the \(z\)-triplet obtained by moving \(r\) to the region \(I^{+}(\Sigma)\cap I^{-}({\cal H})\), requiring \(p\) to be maximal in \(I^{-}(\Sigma)\cap I^{-}({\cal H})\) and \(q\) maximal in \(I^{+}({\cal H})\) (Figure 7).

Figure 7: The regions shaded grey are required to be free from any sprinkled points, whilst the green ones contain only one point.

Marr used both the \(l\)- and \(z\)-triplets to count the expected number of horizon molecules for the 1+1 reduced collapsing null shell model and obtained the following results \[<{\bf H}_{z-triplet}>=2-\frac{\pi^{2}}{6}+{\cal O}((a/b)^{2})\, \tag{4.2}\] \[<{\bf H}_{l-triplet}>=1+{\cal O}(a/b). \tag{4.3}\] Moreover, the \(l\)-triplet turned out to be manageable analytically for the 1+1 Schwarzschild static model and gave the same result (leading term) as the collapsing null shell setting, which is a promising result. Let us remember that both the \(z\)- and \(l\)-triplets were introduced with an eye on their use as candidates for horizon molecules in higher dimensions. Although Marr did not report any analytical results concerning the triplet structures in 1+2 or 1+3, she gave a qualitative argument suggesting that the \(z\)- and \(l\)-triplets are in principle free of the IR divergences we discussed above. Her argument goes as follows: the presence of a third element \(r\) in the \(z\)-triplet bounds \(p\) away from \(\Sigma\) and \(q\) from \({\cal H}\), whereas in the \(l\)-triplet the role of \(r\) is reversed; it bounds \(p\) away from \({\cal H}\) and \(q\) from \(\Sigma\). Hence, for both triplets, an arbitrarily large number of \(p\)-\(q\) links is unlikely to build up in higher dimensions.

Let us note that Marr's qualitative argument regarding the would-be role played by \(r\) in killing off the IR divergences in higher dimensions guarantees neither the finiteness of \(<{\bf H}>\) nor the emergence of the area law, for there could exist other less obvious and more subtle sources of divergences. Moreover, the finiteness of the result does not guarantee either that the resulting \(<{\bf H}>\) will scale like the area. Therefore the matter can only be settled by explicit calculation, and this brings in the technical difficulties that one has to deal with when considering higher dimensions, in particular 3+1. These technical difficulties come from the necessity of evaluating the volumes needed to ensure the link and Max/Min conditions, which turned out to be complicated and in some cases intractable even in the flat case and for the link structure [17], let alone the triplet structure. However, there are indications that the \(2+1\) case could be easier to handle analytically. Thus it would be interesting to explicitly test the triplet proposals; in particular the \(z\)-triplet in \(2+1\) is worth revisiting. It is also worth mentioning that the \(z\)- or \(l\)-triplets could work in \(2+1\) but fail beyond this dimension, and one may be led to consider a "diamond" structure containing both types of triplet simultaneously.
## An extended notion of horizon molecules

The definition of the horizon molecules as simple causal links crossing the horizon, supplemented with certain Max/Min conditions, worked nicely and gave promising results in 2-d reduced spacetimes, at least for null hypersurfaces. However, this success did not carry over to higher dimensions, due to pathological IR divergences. The higher cardinality molecule definitions, namely the triplets, proposed by Marr to cure these divergences turned out to be mathematically cumbersome and challenging beyond two dimensions, and so far no one has devised a technique that would allow an analytical investigation of the triplet proposals in higher dimensions. This calculational impasse, the failure of the causal links proposal, and the desire to extend the concept of horizon molecule to all causal horizons, including black hole, acceleration, and cosmological horizons, stimulated two recent sequential and related works by Barton et al [20] and by Machet and Wang [21]. These two works will be the subject of the present section. Our discussion will not cover the technical details presented in [20; 21], but will be limited to introducing the key technical ideas, the results obtained, and their discussion.

### The spacelike hypersurface case

The extended proposal put forward by Barton et al was devised to extend the notion of horizon molecule to more general causal horizons, and thus does not refer to any particular black hole geometry, and to produce the area law in higher dimensions. The definition goes as follows. Let \((\mathcal{M},g)\) be a globally hyperbolic spacetime with a Cauchy surface \(\Sigma\). Let \(\mathcal{H}\) be a causal horizon, defined as the boundary of the past of a future inextendible timelike curve \(\gamma\), i.e. \(\mathcal{H}:=\dot{I}^{-}(\gamma)\), and consider a causet \(\mathcal{C}\) generated on \(\mathcal{M}\) through a random sprinkling with density \(\varrho_{c}\).

**Barton et al proposal (2019)**: A horizon molecule with respect to a spacelike hypersurface \(\Sigma\) (a Cauchy surface) is a pair of elements of \(\mathcal{C}\), \(\{p_{-},p_{+}\}\), such that

1. \(p_{-}\prec p_{+}\),
2. \(p_{-}\in I^{-}(\Sigma)\cap I^{-}(\mathcal{H})\),
3. \(p_{+}\in I^{-}(\Sigma)\cap I^{+}(\mathcal{H})\),
4. \(p_{+}\) is the only element in \(I^{-}(\Sigma)\cap I^{+}(p_{-})\).

These conditions imply that the horizon molecule is a link. An illustration of this type of horizon molecule is depicted in Figure 8. The above definition is easily seen to be generalizable to an \(n\)-molecule \(\{p_{-},p_{1,+},p_{2,+},\cdots,p_{n,+}\}\) by requiring \(p_{-}\prec p_{k,+}\) and \(\{p_{1,+},p_{2,+},\cdots,p_{n,+}\}\) to be the only elements in \(I^{-}(\Sigma)\cap I^{+}(p_{-})\). However, our discussion of this proposal will be limited to the horizon molecule of minimal size, \(n=1\); the cardinality of the horizon molecules actually plays no essential technical role in the derivation of the results obtained in [20]. Before we move to the discussion of the derivation of the results of Barton et al, we find it instructive to compare the above definition with the original definition of horizon molecules as causal links with certain Max/Min conditions. The extended definition requires \(p_{-}\) to be a maximal element in \(I^{-}(\Sigma)\cap I^{-}(\mathcal{H})\), or maximal-but-one in \(I^{-}(\Sigma)\), while no analogous minimality condition is imposed on \(p_{+}\).
The requirement that \(p_{-}\) be maximal-but-one (or but-\(n\)) would for instance drive the expected number of horizon molecules directly to zero if the hypersurface were null (a straight null plane) and the future of the horizon unbounded, as is the case for Rindler space for example. For this and other reasons the null case has motivated an independent work by Machet and Wang [21], to which we will come last.

It was first shown in [20] that \(p_{-}\) is in the chronological past of \(\mathcal{J}=\Sigma\cap\mathcal{H}\). Using steps similar to those used to arrive at (3.5), it is not difficult to obtain the following integral representation for the expected number of such horizon molecules, \[<\mathbf{H}_{1}>=\varrho_{c}\int_{I^{-}(\mathcal{J})}\varrho_{c}V_{+}(p)e^{-\varrho_{c}V(p)}dV_{p}\, \tag{5.1}\] where \[V_{+}(p):=\text{vol}(I^{-}(\Sigma)\cap I^{+}(\mathcal{H})\cap I^{+}(p)),\ \ V(p):=\text{vol}(I^{-}(\Sigma)\cap I^{+}(p))\,\] and \(p\) is used to denote the point \(p_{-}\). Figure 8 illustrates the volumes involved in the counting.

Figure 8: A typical geometrical setting showing a typical horizon molecule \((p_{-},p_{+})\) (\(n=1\)).

The main result shown by Barton et al is that, under certain conceivable assumptions and in the continuum limit, the expected number of such defined horizon molecules, suitably rescaled, is equal to the area of \(\mathcal{J}\), the intersection of \(\Sigma\) and \(\mathcal{H}\), up to a dimension dependent constant of order one. Mathematically stated, we have the following limit \[\lim_{\varrho_{c}\to\infty}\varrho_{c}^{\frac{2-d}{d}}<\mathbf{H}_{1}>=a^{(d)}\int_{\mathcal{J}}dV_{\mathcal{J}}\, \tag{5.2}\] where \(dV_{\mathcal{J}}\) is the area measure on \(\mathcal{J}\), and \(a^{(d)}\) is a constant that only depends on the dimension \(d\). Moreover, the approach to the limit involves finite \(\varrho_{c}\) corrections forming a derivative expansion of local geometric quantities on \(\mathcal{J}\) and increasing powers of \(l_{c}\), the discreteness length. In what follows we shall therefore explicitly keep track of \(\varrho_{c}\) and \(l_{c}\).

#### 5.1.1 Rindler horizon with a flat hypersurface

Before outlining the explicit calculation of [20], it is instructive to present their heuristic argument supporting the validity of (5.2) in general. This heuristic argument actually summarizes the motivation behind defining the horizon molecules as such. Consider Figure 8: the fact that \(p_{-}\) lies in \(I^{-}(\mathcal{J})\) and is required to be maximal in this region means that \(p_{-}\) is close to \(\mathcal{H}\), and as \(\varrho_{c}\to\infty\) it gets closer. The requirement that \(p_{-}\) be maximal-but-one in \(I^{-}(\Sigma)\) pushes \(p_{-}\) towards \(\Sigma\) and prevents it from moving to the past of \(\mathcal{J}\). This tendency can be seen by inspecting the integrand of (5.1), in which the exponential suppresses any contribution from regions with \(\varrho_{c}V(p)\gg 1\). Splitting \(V=V_{-}+V_{+}\), with \(V_{-}(p):=\text{vol}(I^{-}(\Sigma)\cap I^{-}(\mathcal{H})\cap I^{+}(p))\), contributions coming from points far from the horizon are suppressed by \(e^{-\varrho_{c}V_{-}(p)}\), whereas those close to the horizon but far from \(\Sigma\) are suppressed by \(e^{-\varrho_{c}V_{+}(p)}\). Therefore the only region which gives a non-negligible contribution is a small and decreasing subregion of \(I^{-}(\mathcal{J})\), immediately to the past of \(\mathcal{J}\). This strongly suggests that in the limit the integral will only depend on geometric quantities intrinsic to \(\mathcal{J}\).
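The all-flat version of this counting is simple enough to simulate directly. The following is a minimal Monte Carlo sketch for \(d=2\), with \(\Sigma\): \(x^{0}=0\) and \({\cal H}\): \(x^{0}=-x^{1}\) as in the explicit calculation below; the box dimensions and trial count are illustrative choices of ours, and the estimate can be compared with the flat-space constant \(a^{(2)}=1/3\) derived in the next paragraphs.

```python
# A minimal Monte Carlo sketch of the Barton et al counting in the
# all-flat d = 2 setting: Sigma is x0 = 0, horizon H is x0 = -x1, so
# I^+(H) = {x0 + x1 > 0}. We sprinkle at unit density into a box below
# Sigma and count molecules (p_-, p_+) with n = 1. The box must be deep
# and wide enough that edge effects are negligible; numbers are ours.
import numpy as np

rng = np.random.default_rng(2)
T, X, n_trials = 8.0, 16.0, 400    # box: x0 in (-T, 0), x1 in (-X, X)

def count_molecules(t, x):
    fut_H = t + x > 0              # I^+(H)
    n = 0
    for i in np.flatnonzero(~fut_H):            # candidates for p_-
        # elements of I^-(Sigma) ∩ I^+(p_-); all box points have t < 0
        above = np.flatnonzero(t - t[i] > np.abs(x - x[i]))
        # conditions 1-4: exactly one such element, lying in I^+(H)
        if len(above) == 1 and fut_H[above[0]]:
            n += 1
    return n

counts = []
for _ in range(n_trials):
    n_pts = rng.poisson(2 * X * T)              # unit-density sprinkle
    t = rng.uniform(-T, 0.0, n_pts)
    x = rng.uniform(-X, X, n_pts)
    counts.append(count_molecules(t, x))

print(f"<H_1> ~ {np.mean(counts):.3f} "
      f"+/- {np.std(counts) / np.sqrt(n_trials):.3f}   (all-flat d=2: 1/3)")
```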
On dimensional grounds, the only geometric quantity that can appear on the RHS of (5.2) is the area of \(\mathcal{J}\) times a dimensionless constant, \(a^{(d)}\), which is independent of the geometry.

To prove (5.2), Barton et al first proceeded by probing their definition in \(d\)-dimensional Minkowski spacetime, with flat \(\Sigma\) and Rindler horizon \(\mathcal{H}\): in short, all-flat. First an inertial coordinate system is set up, \((x^{0},x^{1},y^{\alpha}),\ \alpha=2,3,\cdots,d-1\); the hypersurface \(\Sigma\) is chosen at \(x^{0}=0\), and \({\cal H}\) is given by \(x^{0}=-x^{1}\). For technical convenience a past-future swapped setup was used instead, so the domain of integration is \(p\in I^{+}({\cal J})\). The integrand is independent of \(y^{\alpha}\), and the scaled expected number of horizon molecules takes the form \[\varrho_{c}^{\frac{2-d}{d}}<{\bf H}_{1}>=\int_{\cal J}d^{d-2}y\,I^{(d,flat)}(l_{c})\, \tag{5.3}\] where \(I^{(d,flat)}(l_{c})\) is a dimensionless function given by \[I^{(d,flat)}(l_{c})=l_{c}^{-(d+2)}\int_{0}^{\infty}dx^{0}\int_{-x^{0}}^{x^{0}}dx^{1}\tilde{V}_{+}(x)e^{-\varrho_{c}\tilde{V}(x)}\, \tag{5.4}\] with \(l_{c}=\varrho_{c}^{-1/d}\) the discreteness length. In the limit \(l_{c}\to 0\), the function \(I^{(d,flat)}(l_{c})\) determines the constant \(a^{(d)}\). In this flat case \(\tilde{V}(x)\) is just the \(d\)-dimensional volume of a solid null cone of height \(x^{0}\). For \(\tilde{V}_{+}(x)\) it was not possible to derive a formula in general dimension \(d\), but it was possible to compute it explicitly in the lower dimensions \(d=2,3\) and \(4\), giving respectively \[d=2:\ \tilde{V}_{+}(x)=\frac{1}{4}(x^{0}-x^{1})^{2}\,\] \[d=3:\ \tilde{V}_{+}(x)=\frac{2}{3}(x^{0})^{3}\tan^{-1}\left(\sqrt{\frac{x^{0}-x^{1}}{x^{0}+x^{1}}}\right)-\frac{1}{9}(2x^{0}-x^{1})(2x^{0}+x^{1})\sqrt{(x^{0}-x^{1})(x^{0}+x^{1})}\,\] \[d=4:\ \tilde{V}_{+}(x)=\frac{\pi}{48}(x^{0}-x^{1})^{3}(5x^{0}+3x^{1})\.\] A direct, but not straightforward, calculation leads to the following limits \[\lim_{l_{c}\to 0}I^{(2,flat)}(l_{c})=a^{(2)}=\frac{1}{3}\,\] \[\lim_{l_{c}\to 0}I^{(3,flat)}(l_{c})=a^{(3)}=\frac{1}{4}\big{(}\frac{3}{\pi}\big{)}^{2/3}\,\] \[\lim_{l_{c}\to 0}I^{(4,flat)}(l_{c})=a^{(4)}=\frac{\sqrt{3}}{10}\.\] Actually the constants \(a^{(d)}\) were given in [20] for arbitrary \(n\). It should be noted here that although Barton et al computed the constants \(a^{(d)}\) in the limit \(l_{c}\to 0\) using Watson's lemma, the function \(I^{(d,flat)}(l_{c})\) is independent of the discreteness length \(l_{c}\) and equals \(a^{(d)}\) at any discreteness scale. In other words we have \[I^{(d,flat)}(l_{c}):=I^{(d,flat)}=a^{(d)}=l_{c}^{-(d+2)}\int_{0}^{\infty}dx^{0}\int_{-x^{0}}^{x^{0}}dx^{1}\tilde{V}_{+}(x)e^{-\varrho_{c}\tilde{V}(x)}\, \tag{5.5}\] and the above particular numerical values for \(a^{(d)}\) are the exact values of the integrals \(I^{(d,flat)}(l_{c})\) for the different \(d\), regardless of the value of \(l_{c}\). There are indeed two ways to see why \(I^{(d,flat)}(l_{c})\) must be independent of the density of the sprinkling \(\varrho_{c}\), or of \(l_{c}\). The first is purely technical and based on a simple dimensional analysis of the integral (5.4): since \(I^{(d,flat)}(l_{c})\) is dimensionless and there is no other length scale which can pair with \(l_{c}\) to form a dimensionless quantity, the result must be a pure number.
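The \(d=2\) and \(d=4\) values are easy to confirm by direct quadrature of (5.4), setting \(\varrho_{c}=l_{c}=1\) and using the listed closed forms for \(\tilde{V}_{+}\) together with the solid-null-cone volumes \(\tilde{V}(x^{0})=(x^{0})^{2}\) in \(d=2\) and \(\tilde{V}(x^{0})=\frac{\pi}{3}(x^{0})^{4}\) in \(d=4\) (standard flat-space volumes). A minimal sketch:

```python
# Quadrature check of the all-flat constants a^(d) from Eq. (5.4),
# with rho_c = l_c = 1. V is the solid null-cone volume of height t,
# Vp the listed closed forms for V_+.
import numpy as np
from scipy.integrate import dblquad

def a_flat(d):
    if d == 2:
        V = lambda t: t**2
        Vp = lambda t, x: 0.25 * (t - x) ** 2
    elif d == 4:
        V = lambda t: (np.pi / 3.0) * t**4
        Vp = lambda t, x: (np.pi / 48.0) * (t - x) ** 3 * (5 * t + 3 * x)
    # inner integral over x1 in (-x0, x0), outer over x0 in (0, inf)
    val, _ = dblquad(lambda x, t: Vp(t, x) * np.exp(-V(t)),
                     0.0, np.inf, lambda t: -t, lambda t: t)
    return val

print(f"d=2: {a_flat(2):.6f}  vs 1/3        = {1/3:.6f}")
print(f"d=4: {a_flat(4):.6f}  vs sqrt(3)/10 = {np.sqrt(3)/10:.6f}")
```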
Another heuristic, but more intuitive, argument for why the derivation of the above area law should be independent of the density of the sprinkling in the all-flat setting is the following. In an all-flat setup, the two regions \(I^{-}(\Sigma)\cap I^{-}(\mathcal{H})\) and \(I^{-}(\Sigma)\cap I^{+}(\mathcal{H})\) are flat and infinite (unbounded). If one randomly sprinkles points into both regions with a given density, \(\varrho_{1}\) say, and then considers another sprinkling with density \(\varrho_{2}\), both sprinklings should give the same result. The situation is just a matter of zooming in and zooming out: what one loses in molecules by decreasing the density of the sprinkling, one gains by moving further to the past (away from the intersection of the horizon and \(\Sigma\)). The density only tells us how far into the past we should go for the value of the constant \(a^{(d)}\) to get effectively saturated, and in the very large density limit the molecules contributing to \(<\mathbf{H}_{1}>\) are the ones located infinitesimally close to \(\Sigma\cap\mathcal{H}\). In other words, if the integral over \(x^{0}\) in (5.4) were cut off at some upper limit \(\tau\gg l_{c}\), the value of the \(l_{c}\to 0\) limit would not be affected: the deviation from the above limiting values tends to zero exponentially fast. This locality property, the fast exponential vanishing of the difference, will be crucial for the discussion of the general curvature case, to which we now turn.

#### 5.1.2 The general curvature case

Our discussion of the general curvature case will be more or less sketchy, omitting technical detail and just highlighting the crucial steps of the calculation of Barton et al. The key elements in proving the limit (5.2) in the general curvature setting were, first, the construction of Florides-Synge Normal Coordinates (FSNCs) based on the co-dimension 2 spacelike submanifold \(\mathcal{J}\), and second, the locality argument. Such a coordinate construction is always possible in a tubular neighborhood about a submanifold of any co-dimension in any Riemannian or pseudo-Riemannian manifold [27]. For \(d>2\), let \(z^{a}=(x^{A},y^{\alpha})\) \((A=0,1,\ \alpha=2,\cdots,d-1)\) denote the FSNCs constructed within a small enough tubular neighborhood \(\mathcal{N}\) about \(\mathcal{J}\); for \(d=2\), FSNCs are just the Riemann Normal Coordinates based on the intersection point \(\mathcal{J}\). The next step is to assume the existence of a length scale \(\tau\) such that \(l_{c}\ll\tau\ll L_{G}\), where \(L_{G}\) is the smallest geometric scale in the setup. This assumption is reasonable, because the continuum approximation of a causal set is only valid when the curvature length scales involved in the problem are much larger than the discreteness scale \(l_{c}\). Consider now the region \(\mathcal{R}_{\tau}\) defined as \[\mathcal{R}_{\tau}:=\{p\in I^{-}(\mathcal{J})\cap\mathcal{N}\ :\ -\tau<x^{0}(p)<0\}\,\] where \(\tau\) is assumed to be small enough that this region lies inside the tubular neighborhood \(\mathcal{N}\). Let \(\bar{\mathcal{R}}_{\tau}:=I^{-}(\mathcal{J})\setminus\mathcal{R}_{\tau}\) denote the complement of \(\mathcal{R}_{\tau}\); the integral (5.1) then naturally splits into a part over \(\mathcal{R}_{\tau}\) and another over \(\bar{\mathcal{R}}_{\tau}\).
Now, in [20] it was argued, using the locality argument, that the integral over \(\bar{\mathcal{R}}_{\tau}\) tends to zero faster than any power of \(l_{c}\) (it is actually exponentially suppressed), and hence its contribution can be ignored. Therefore the surviving part of the expected value can be written as a local integral over \(\mathcal{R}_{\tau}\): \[\varrho_{c}^{\frac{2-d}{d}}<\mathbf{H}_{1}>=\varrho_{c}^{\frac{2-d}{d}+2}\int_{\mathcal{R}_{\tau}}V_{+}(p)e^{-\varrho_{c}V(p)}dV_{p}. \tag{5.6}\] Since the region \(\mathcal{R}_{\tau}\) lies by choice within the tubular neighbourhood, the constructed FSNCs can be used to express the expectation value explicitly as \[\varrho_{c}^{\frac{2-d}{d}}<\mathbf{H}_{1}>=\varrho_{c}^{\frac{2-d}{d}+2}\int_{\mathcal{J}}d^{d-2}y\int_{-\tau}^{0}dx^{0}\int_{x^{0}}^{-x^{0}}dx^{1}\sqrt{-g(x,y)}V_{+}(x,y)e^{-\varrho_{c}V(x,y)}\, \tag{5.7}\] where \(g(x,y)\) is the determinant of the metric. Let \(\sigma_{\alpha\beta}\) denote the induced metric on \(\mathcal{J}\); then (5.7) can be written as \[\varrho_{c}^{\frac{2-d}{d}}<\mathbf{H}_{1}>=\int_{\mathcal{J}}d^{d-2}y\sqrt{-\sigma(y)}I^{(d)}(y;l_{c},\tau)+\cdots\, \tag{5.8}\] where \(I^{(d)}(y;l_{c},\tau)\) is defined by \[I^{(d)}(y;l_{c},\tau):=l_{c}^{-(d+2)}\int_{-\tau}^{0}dx^{0}\int_{x^{0}}^{-x^{0}}dx^{1}\sqrt{\frac{-g(x,y)}{\sigma(y)}}V_{+}(x,y)e^{-\varrho_{c}V(x,y)}\, \tag{5.9}\] with \(\sigma(y)\) the determinant of the induced metric. The factor \(\sqrt{\frac{-g(x,y)}{\sigma(y)}}\) makes \(I^{(d)}(y;l_{c},\tau)\) a scalar on \({\cal J}\), and it can be rewritten in coordinate-free notation as \(I^{(d)}(q;l_{c},\tau)\), \(q\in{\cal J}\). The next crucial step in the calculation of Barton et al is to show that \(I^{(d)}(q;l_{c},\tau)\) admits the following local expansion \[I^{(d)}(q;l_{c},\tau)=a^{(d)}+l_{c}\sum_{i}b^{(d)}_{i}{\cal G}_{i}(q)+O(l_{c}^{2})\, \tag{5.10}\] where \(a^{(d)}\) and \(b^{(d)}_{i}\) are constants that only depend upon the dimension \(d\); for instance \(a^{(d)}\) is the same constant obtained in the flat case. \({\cal G}_{i}(q)\) is the largest set of mutually independent geometric scalars of length dimension \(L^{-1}\), like the extrinsic curvature \(K\) or the null expansion \(\theta\), evaluated at \(q\). Again switching to an order-reversed setup, \(I^{(d)}(q;l_{c},\tau)\) is written as \[I^{(d)}(y;l_{c},\tau)=l_{c}^{-(d+2)}\int_{0}^{\tau}dx^{0}\int_{-x^{0}}^{x^{0}}dx^{1}\sqrt{\frac{-g(x,y)}{\sigma(y)}}V_{+}(x,y)e^{-\varrho_{c}V(x,y)}. \tag{5.11}\] At this stage one is free to choose any coordinates on \({\cal J}\), and a suitable choice is RNCs \(y^{\alpha}\) centered about \(q\in{\cal J}\), with \(y^{\alpha}(q)=0\). As all expressions appearing in (5.11) are evaluated at \(y^{\alpha}(q)=0\), the argument \(y\) will be dropped entirely, to write \[I^{(d)}(q;l_{c},\tau)=l_{c}^{-(d+2)}\int_{0}^{\tau}dx^{0}\int_{-x^{0}}^{x^{0}}dx^{1}\sqrt{-g(x)}V_{+}(x)e^{-\varrho_{c}V(x)}. \tag{5.12}\] Note that \(\sigma(0)=1\) in these RNCs on \({\cal J}\), since \(\sigma_{\alpha\beta}(0)=\delta_{\alpha\beta}\). Now spacetime RNCs \(Z^{a}=(X^{A},Y^{\alpha})\) can be introduced within a neighbourhood \({\cal U}\) about \(q\), such that \(X^{A}=x^{A}\) and such that the coordinate vectors \({\partial\over\partial Y^{\alpha}}={\partial\over\partial y^{\alpha}}\) at \(q\).
With this choice the determinant of the metric, evaluated at \(q\), keeps the same form in terms of the coordinates \(x^{A}\) and \(X^{A}\), and we have \[I^{(d)}(q;l_{c},\tau)=l_{c}^{-(d+2)}\int_{0}^{\tau}dX^{0}\int_{-X^{0}}^{X^{0}}dX^{1}\sqrt{-g(X)}V_{+}(X)e^{-\varrho_{c}V(X)}. \tag{5.13}\] The determinant \(g(X)\) can be expanded in small \(X^{A}\) relative to the curvature scales of spacetime at \(q\): \[\sqrt{-g(X)}=1-{1\over 6}R_{AB}X^{A}X^{B}+O(Z^{3})\, \tag{5.14}\] where \(R_{AB}\) is the Ricci tensor with indices restricted to \(A,B=0,1\). To bring out the role of the different length scales of the problem, the smallest geometric scale \(L_{G}\) is used to define a dimensionless tensor \(\hat{R}_{ab}:=L_{G}^{2}R_{ab}\), and \(\tau\) is used to re-express the above expansion in terms of dimensionless coordinates \(\hat{Z}^{a}:=Z^{a}/\tau\): \[\sqrt{-g(X)}=1-\frac{1}{6}\Big{(}\frac{\tau}{L_{G}}\Big{)}^{2}\hat{R}_{AB}\hat{X}^{A}\hat{X}^{B}+O(Z^{3})=1-\frac{1}{6}\varepsilon^{2}\hat{R}_{AB}\hat{X}^{A}\hat{X}^{B}+O(\varepsilon^{3})\. \tag{5.15}\] Since \(\varepsilon=\tau/L_{G}\ll 1\), and \(L_{G}\) is the smallest geometric scale, the correction \(\frac{1}{6}R_{AB}X^{A}X^{B}\) is of order \(\varepsilon^{2}\). The volumes \(V(X)\) and \(V_{+}(X)\) can similarly be expanded around the flat ones in the neighborhood \(\mathcal{U}\). Using different explicit geometric setups, in particular different choices for the hypersurface \(\Sigma\), Barton et al suggested the following general expansion for the volumes \[V(X_{p})=\tilde{V}(X_{p})\big{[}1+\sum_{i}\mathcal{G}_{i}(q)f_{i}(X_{p})+O(\varepsilon^{2})\big{]}\,\] \[V_{+}(X_{p})=\tilde{V}_{+}(X_{p})\big{[}1+\sum_{i}\mathcal{G}_{i}(q)f_{+,i}(X_{p})+O(\varepsilon^{2})\big{]}\, \tag{5.16}\] where \(\tilde{V}(X_{p})\) and \(\tilde{V}_{+}(X_{p})\) are the volumes from the all-flat case discussed previously, and \(f_{i}(X_{p})\) and \(f_{+,i}(X_{p})\) are functions of length dimension \(L\). Using equations (5.15) and (5.16), it is easy to obtain the following expansion for \(I^{(d)}(q;l_{c},\tau)\): \[I^{(d)}(q;l_{c},\tau)=l_{c}^{-(d+2)}\int_{0}^{\tau}dX^{0}e^{-\varrho_{c}\tilde{V}(X^{0})}\bigg{\{}\int_{-X^{0}}^{X^{0}}dX^{1}\tilde{V}_{+}(X)+\sum_{i}\mathcal{G}_{i}(q)\bigg{[}\int_{-X^{0}}^{X^{0}}dX^{1}\tilde{V}_{+}(X)f_{+,i}(X)-\varrho_{c}\tilde{V}(X^{0})\int_{-X^{0}}^{X^{0}}dX^{1}\tilde{V}_{+}(X)f_{i}(X)\bigg{]}+O(\varepsilon^{2})\bigg{\}}\, \tag{5.17}\] where the fact that the flat cone volume \(\tilde{V}\) only depends on \(X^{0}\) was used, and the subscript \(p\) on the coordinates \(X^{A}\) has been dropped. The first integral in the braces is just the flat contribution \(I^{(d,flat)}(l_{c})\) given by (5.4), up to a difference which vanishes exponentially fast in the limit \(l_{c}\to 0\). By a dimensional argument, and using Watson's lemma again, the expression in the square bracket of (5.17) can be shown to evaluate to a term of the form \(Cl_{c}\), for some constant \(C\), as \(l_{c}\to 0\). Similarly, the \(O(\varepsilon^{2})\) corrections tend to a function of order \(O(l_{c}^{2})\). Therefore the expansion (5.10) follows. The constants \(a^{(d)}\) are given by their flat values, \(I^{(d,flat)}=a^{(d)}\). The explicit form of the constants \(b^{(d)}_{i}\) can be determined once a geometric setup is chosen; for instance, Barton et al have explicitly evaluated these constants for two different geometric setups [20].
### The null hypersurface case

The horizon molecules proposal of Barton et al was specially devised to work when hypersurfaces of spacelike character are considered. However, as we mentioned earlier, there are good reasons for requiring any horizon molecule definition to be valid also in the case of null hypersurfaces. This issue was not raised nor discussed in [20], but a subsequent recent work by Machet and Wang addressed this question and investigated in detail the extension of this definition to encompass null hypersurfaces intersecting the horizon. The goal of this subsection is to give a concise report of the main results and conclusions of Machet and Wang.

Let us first take a general look at the problem to see how the success of the Barton et al proposal is tied to the spacelike nature of the hypersurface. To that end, consider the all-flat case of Figure 9, a Rindler horizon in Minkowski space, with \(\Sigma\) being a straight null plane. As can easily be seen, the region \(I^{-}(\Sigma)\cap I^{+}(\mathcal{H})\cap I^{+}(p_{-})\) is unbounded, with infinite volume for any randomly selected point \(p_{-}\), and the expected number of such horizon molecules is thus directly driven to zero. Therefore, before any sensible calculation of the expected number can be started, one first has to bound this domain, at least in the all-flat case. This can be done either by considering a folded null plane instead of the straight one, or by taking a null hypersurface of a different shape, like a downward light-cone. These two configurations were probed in [21] to compute the expected number of horizon molecules, although the motivation there for bounding the region \(I^{-}(\Sigma)\cap I^{+}(\mathcal{H})\cap I^{+}(p_{-})\) was to avoid any potential IR divergence. Let us note that, as we discussed in subsection 3.6, one cannot approach the null case by invoking a continuity argument like the one given by equation (3.20), continuously deforming the spacelike result to obtain that of a null hypersurface. The issue can therefore only be settled by explicit calculation.

#### 5.2.1 Rindler horizon in Minkowski spacetime

Machet and Wang first applied the Barton et al definition to a Rindler horizon in Minkowski space, regularized as just described. The setup of a folded null plane is depicted in Figure 9. A global coordinate system \((v,u,y^{\alpha})\) is set up; the null hypersurface \(\Sigma\) is the \(u\)-axis, \(v=0\), and the horizon \(\mathcal{H}\) is the \(v\)-axis, \(u=0\). Another null hypersurface \(\Sigma^{\prime}\) is given by \(u=\lambda\), with \(\lambda>0\). The union \(\Sigma^{\prime}\cup\Sigma\) is the folded null plane with respect to which the expected number of horizon molecules is to be counted.
In this setup the volumes \(V_{+}(p)\) and \(V(p)\) can be computed explicitly in arbitrary dimension, and are given by \[V_{f}(u,v,\lambda)=\alpha_{d}(2v(u+\lambda))^{d/2},\] \[V_{+,f}(u,v,\lambda)=\alpha_{d}(2v)^{d/2}\big{(}(u+\lambda)^{d/2}-u^{d/2}\big{)}\,\] where \(\alpha_{d}\) is a constant depending on the dimension. It follows again that the expected number of horizon molecules can be written as \[\varrho_{c}^{\frac{2-d}{d}}<{\bf H}_{1}>=\int_{\cal J}d^{d-2}y\,I_{null}^{(d,flat)}(l_{c},\lambda)\, \tag{5.18}\] where \(I_{null}^{(d,flat)}(l_{c},\lambda)\) is given by \[I_{null}^{(d,flat)}(l_{c},\lambda)=l_{c}^{-(d+2)}\int_{0}^{\infty}dv\int_{0}^{\infty}du\,V_{+,f}(u,v,\lambda)e^{-\varrho_{c}V_{f}}. \tag{5.19}\] Using the explicit formulas for \(V_{+,f}\) and \(V_{f}\) one gets \[I_{null}^{(d,flat)}(l_{c},\lambda)=\frac{\alpha_{d}^{-2/d}}{d}\Gamma[2/d+1]\int_{0}^{\infty}du\,\big{(}(u+\lambda)^{d/2}-u^{d/2}\big{)}(u+\lambda)^{-1-d/2}. \tag{5.20}\] It is noticeable that any dependence on the discreteness scale has disappeared, and therefore, by dimensional analysis, \(I_{null}^{(d,flat)}(l_{c},\lambda)\) should be independent of \(\lambda\). The above integral can be evaluated, and one obtains \[I_{null}^{(d,flat)}(l_{c},\lambda):=I_{null}^{(d,flat)}=a_{null}^{(d,flat)}=\frac{\alpha_{d}^{-2/d}}{d}\Gamma[2/d+1]H_{d/2}\, \tag{5.21}\] where \(H_{k}\) is the \(k^{th}\) harmonic number. Actually, Machet and Wang also carried out the calculation for the \(n\)-molecule and obtained a formula which can be evaluated exactly for each \(n\). It follows then that \[\varrho_{c}^{\frac{2-d}{d}}<{\bf H}_{1}>=a_{null}^{(d,flat)}\int_{\cal J}dV_{\cal J}. \tag{5.22}\]

Figure 9: An all-flat setting: a folded null plane crossing a Rindler horizon.

Some comments on this result are in order. It is first interesting to note that the resulting constants \(a_{null}^{(d,flat)}\) are different from the constants \(a^{(d)}\) obtained in the spacelike case, as can be checked by substituting particular values of \(d\). Moreover, the final result is independent of the position of \(\Sigma^{\prime}\), i.e. of the parameter \(\lambda\). Therefore one may take the IR regulator to infinity without changing the result, hence going back to the null plane case, and this contradicts the fact that if one started with a null plane the expected number of horizon molecules would be identically zero. Again, we see that this horizon molecules counting is sensitive to how certain limits are taken. The independence of \(a_{null}^{(d,flat)}\) from \(l_{c}\) is actually related to its independence of \(\lambda\) and to the flat setup used to do the counting. As was pointed out in [21], because of the Lorentz invariance of the counting, and the fact that one can always boost the system in the \(u\) direction to pull the surface \(\Sigma^{\prime}\) arbitrarily close to \({\cal H}\), the result should not depend on \(\lambda\). If \({\cal J}\) has no geometrical quantity associated to it, e.g. intrinsic curvature, then \(l_{c}\) has no length scale to couple with, and therefore \(a_{null}^{(d,flat)}\) can only be a pure number. As in the spacelike case, the continuum limit plays no role in establishing the area law in the all-flat setup using this counting: equation (5.22) is valid for any finite \(\varrho_{c}\).

Another setup probed in [21] was again a Rindler horizon in Minkowski space, but with a null hypersurface \(\Sigma\) having a different shape, namely a downward light-cone.
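As a quick check, the \(u\)-integral in (5.20) can be verified numerically to equal the (generalized) harmonic number \(H_{d/2}\), independently of \(\lambda\), using \(H_{r}=\psi(r+1)+\gamma\):

```python
# Numerical check that the u-integral in Eq. (5.20) equals H_{d/2}
# independently of lambda. Generalized harmonic numbers via
# H_r = digamma(r + 1) + Euler gamma.
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

def u_integral(d, lam):
    f = lambda u: ((u + lam) ** (d / 2) - u ** (d / 2)) * (u + lam) ** (-1 - d / 2)
    val, _ = quad(f, 0.0, np.inf, limit=400)
    return val

for d in (2, 3, 4):
    H = digamma(d / 2 + 1) + np.euler_gamma
    vals = [u_integral(d, lam) for lam in (0.5, 1.0, 5.0)]
    print(f"d={d}: " + ", ".join(f"{v:.6f}" for v in vals)
          + f"   H_(d/2) = {H:.6f}")
```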
The downward light-cone \(\Sigma\) is defined to be the boundary of the causal past of a point \(q\in{\cal H}\), i.e. \(\dot{I}^{-}(q)\), Figure 10. The volumes \(V_{+}(p)\) and \(V(p)\) are now given by \[V_{+,q}(p):=\mbox{vol}(I(p,q)\cap I^{+}({\cal H})),\ \ \ \ V_{q}(p):=\mbox{vol}(I(p,q))\.\] Machet and Wang could compute these volumes explicitly in \(d=4\), in terms of the null coordinates of \(p\) and the affine distance between the horizon and the point \(q\), and obtained a general formula for \(I_{cone}^{(4,flat)}\) which reduces for \(n=1\) to \[I_{cone}^{(4,flat)}=\frac{1}{12}\sqrt{\frac{3}{2}}\biggl{(}\frac{27}{4}-{}_{2}F_{1}(1,-1,-2;\frac{3}{2})\biggr{)}=a^{(4)}. \tag{5.23}\] We notice that \(I_{cone}^{(4,flat)}\) turns out again to be independent of the discreteness scale, and of the other length scale provided by the affine distance between the horizon and the point \(q\). The explanation of this independence is similar to that for the folded null plane. It is noteworthy that here too the constant \(a^{(4)}\) is different from its counterpart in the spacelike case. If one insists on the necessity that the null and spacelike hypersurfaces must give the same proportionality constant to the horizon area, and takes it as a sanity condition on any horizon molecules proposal, then we see that the horizon molecules definition introduced by Barton et al does not meet this requirement.

A final remark about the flat counting with a null hypersurface is that it is free from any IR divergences. Such IR divergences could have arisen from points \(p_{-}\) arbitrarily close to \(\Sigma\) and in the far past of \({\cal J}\), e.g. with \(v\sim 0\), \(u\to-\infty\) and volume \(V_{f}\) close to zero in the folded plane case, hence not exponentially suppressed. It is actually not difficult to see how this IR divergence is cured within this horizon molecule counting. The requirement that \(p_{-}\) be max-but-\(n\) (at least \(n=1\)) bounds it away from \(\Sigma\); this is realized in the general integral formula for \(<{\bf H}_{1}>\) by the extra volume factor \(V_{+,f}\) (or \(V_{+}^{n}\) for the \(n\)-molecule) multiplying the exponential term \(e^{-\varrho_{c}V}\), a factor which has to vanish before \(V_{f}\) approaches zero. Therefore the term \(V_{+,f}\) accompanying the exponential kills off this IR divergence.

#### 5.2.2 Curved case

To investigate the curved case with a null hypersurface, the authors of [21] took a path similar in spirit to that taken by Barton et al, but now setting up local Gaussian Null Coordinates (GNCs) adapted to the study of the null hypersurface case. For folded null planes, a local coordinate system \((v,u,y^{\alpha})\) is constructed in a tubular neighborhood \({\cal N}\supset{\cal J}\). A region \({\cal R}_{\Lambda}\), analogous to \({\cal R}_{\tau}\) in the spacelike hypersurface case, is defined as follows (in time-reversed coordinates) \[{\cal R}_{\Lambda}:=\{p\in I^{+}({\cal J})\cap{\cal N}\ |\ 0<v(p)<\Lambda,\ 0<u(p)<\Lambda\}\, \tag{5.24}\] where \(\Lambda\) is an intermediate scale between the discreteness and the geometric lengths of the setting, i.e. \(l_{c}\ll\Lambda\ll L_{G}\).

Figure 10: A downward light-cone intersecting a Rindler horizon.
For the argument used in the spacelike hypersurface case to carry over to the null setting, one has to show that the rescaled expected number of horizon molecules can be reduced to a local integral on \({\cal R}_{\Lambda}\): \[\varrho_{c}^{\frac{2-d}{d}}<{\bf H}>=\varrho_{c}^{\frac{2-d}{d}+2}\int_{{\cal R}_{\Lambda}}V_{+}(p,\lambda)e^{-\varrho_{c}V(p,\lambda)}dV_{p}+\cdots\, \tag{5.25}\] where the ellipsis refers to terms decaying exponentially fast in the limit \(l_{c}\to 0\), and \(\lambda\) is the parameter of the folding hypersurface \(\Sigma^{\prime}\). Based on the discussion of the flat setting, it is not difficult to argue that the above local form cannot in general hold. This can be seen by first noticing that the region (in the future-past swapped setting) \(v(p)>\Lambda\) poses no problem for any value of \(u(p)\): there \(\varrho_{c}V\gg 1\), so all contributions from this region are exponentially suppressed in the continuum limit. When \(u(p)>\Lambda\), however, this argument fails: contributions coming from points \(p\) close to the \(u\)-axis, with \(v(p)\sim 0\), are not exponentially suppressed, as the volume \(V(p)\) is then close to zero for values of \(u\) arbitrarily far in the future (or the past, in the original setting) of \({\cal J}\). Thus, contributions coming from far away along the past light-cone of the intersection hypersurface cannot be neglected. It follows that the above local integral cannot in general account for the dominant contributions to the expected number of horizon molecules in the continuum limit. Machet and Wang therefore concluded that this failure is a first indication that the proposal of Barton et al for counting horizon molecules with a null hypersurface is a flawed way to define entropy in the causal set setting.

Machet and Wang further argued that a local integral of the form (5.25), truncated by hand so that contributions from the far past of the intersection are excluded, yields a small-\(l_{c}\) expansion of the following form for \(I^{(d)}(y;l_{c},\lambda,\Lambda)\): \[I^{(d)}(y;l_{c},\lambda,\Lambda)=a^{(d)}+\sum_{i}c_{i}^{(d)}{\cal F}_{i}(y,\lambda,\Lambda)+l_{c}\sum_{i}b_{i}^{(d)}{\cal G}_{i}(y,\lambda,\Lambda)+O(l_{c}^{2})\, \tag{5.26}\] where \(a^{(d)}\), \(b_{i}^{(d)}\) and \(c_{i}^{(d)}\) are dimensionless constants dependent on \(d\). The set \({\cal G}_{i}(y,\lambda,\Lambda)\) is a set of mutually independent geometrical scalars of length dimension \(L^{-1}\) evaluated along a geodesic segment; the extra set \({\cal F}_{i}(y,\lambda,\Lambda)\) is a set of independent dimensionless geometrical scalars, also evaluated along a geodesic segment. The presence of the extra terms \({\cal F}_{i}(y,\lambda,\Lambda)\), carrying geometrical information evaluated along the geodesic segment and which, in contrast to the spacelike hypersurface case, survive in the limit \(l_{c}\to 0\), can be given a heuristic explanation based on the above discussion and dimensional analysis.

To further support their claim, Machet and Wang worked out an explicit geometrical setting in which \(\Sigma\) is a downward light-cone, the past-pointing light-cone of a point \(q\) at affine distance \(\lambda\) away from \({\cal J}\). Within the relevant region, in which \(V\) and \(V_{+}\) tend to a skinny causal interval or diamond, a Null Fermi Normal coordinate system \((v,u,y^{\alpha})\) is set up; Figure 11 gives a sketch of this coordinate system.
Both \(V\) and \(V_{+}\) can be expanded around the flat ones, and in \(d=4\) they admit the expansions \[V^{(4)}(u,v,y;\lambda)=V^{(4)}_{f}(u,v,y;\lambda)\biggl{(}1+F(\lambda,u)+{\cal O}((u+\lambda)^{3},v^{3})\biggr{)}\, \tag{5.27}\] \[V^{(4)}_{+}(u,v,y;\lambda)=V^{(4)}_{+,f}(u,v,y;\lambda)\biggl{(}1+\tilde{F}(\lambda,u)+{\cal O}((u+\lambda)^{3},v^{3})\biggr{)}\, \tag{5.28}\] where \(F(\lambda,u)\) and \(\tilde{F}(\lambda,u)\) are two dimensionless functions involving integrals of the Ricci tensor along the \(u\) direction. The assumption here is of course that \(\lambda\) is small relative to the local curvature scales, and \(u\) is to be cut off at a distance \(\Lambda\) small compared to the local curvature scales. Under these assumptions, along with an extra assumption about the Ricci tensor (generic enough to support the claim), it was possible to show that \(I^{(4)}(y;l_{c},\lambda,\Lambda)\) admits the following continuum limit \[\lim_{l_{c}\to 0}I^{(4)}(y;l_{c},\lambda,\Lambda)=a^{(4)}+{\cal R}(y)c^{(4)}(\lambda,\Lambda)+\cdots. \tag{5.29}\] One can see that the limit is not local to the intersection \({\cal J}\), and the area law is distorted. Machet and Wang then concluded that the horizon molecules proposal of Barton et al does not yield a well-behaved area law when evaluated on a null hypersurface intersecting the horizon.

Figure 11: A sketch of the coordinate system in the relevant skinny diamond setup. \(V_{+}\) is shaded.

Now, whether the above argument is conclusive, or just an artifact of the limitations of the expansion adopted by Machet and Wang, which relies on an ad hoc truncation by hand of the integral \(I^{(4)}\) and certainly neglects relevant contributions, remains in our view an unsettled issue. One cannot, for instance, exclude the possibility that in a realistic BH model the area law might be restored. A hint of this possibility is offered by the links counting in the 2-dimensional reduced Schwarzschild BH discussed in section 3.2, where the area law is established in the null surface case in a quite subtle way, and in the end the dominant contribution turns out to come plainly from the near-horizon links.

## Discussion and outlook

In this survey we have tried to take the reader through the different attempts to identify the right horizon molecules that would give a good kinematical account of BH entropy within the causal set approach. Despite the fact that there have been only a few scattered efforts and practitioners who have devoted their time to this issue, it is undeniable that some progress has been made along different directions. The simplicity of the causal links proposal, its success in two dimensions and its failure in higher dimensions stimulated further investigations and proposals based on triplets, and have recently sparked interest in the subject. The early proposal based on causal links gave some promising results in two dimensions, some of which may seem surprising. Prominent among them is the fact that one finds a universal answer, taking the same value in two quite different black hole backgrounds, the equilibrium and non-equilibrium cases, and that the bulk of the links always resides in close proximity to the horizon, meaning that the result is controlled by the near-horizon geometry. A further, seemingly surprising, result is that this value remains finite even in the continuum limit, where the fundamental length \(l_{c}\) is sent to zero.
In this sense, the replacement of continuous spacetime by a causal set may appear in two dimensions as more of a regularization device than something fundamental. Whether this has any deeper meaning, or whether it might be related to some of the other properties that both quantum field theory and quantum gravity possess in two dimensions [28], remains an open question. Of note is also the fact that the causal set approach to quantum gravity has been unique in attempting to account for the statistical mechanics of the non-equilibrium horizon. It must be added that the above features of the causal links counting are shared by the triplet proposals, i.e. the universality of the result and its finiteness in the continuum limit in two dimensions. Two questions about the links and triplets proposals have so far remained open. The first is to find a way to decide whether the spacelike hypersurface gives the same result as the null one in two dimensions, regardless of the validity of these proposals beyond two dimensions. But as we mentioned earlier, this is a mere mathematical curiosity with little, if any, physical relevance. The second is of importance and concerns the triplet proposal. Although Marr [19] argued that the \(\Lambda\)-triplet would suffer from IR divergences in higher dimensions, the issue is unsettled for the \(l\)- and \(z\)-triplets. Therefore, it would be an interesting exercise to investigate these triplet countings at least in the three-dimensional flat setting. However, as we already mentioned, going beyond two dimensions makes the calculation cumbersome; but if enough time and effort is devoted to this problem, some approximation methods could possibly be devised to extract the leading contributions or settle the divergence issue. Of course, one could also use numerical methods to approach this counting. Unlike the links and triplets attempts, the Barton et al proposal has succeeded in giving an expected number of horizon molecules proportional to the area of the horizon intersecting a spacelike hypersurface, for almost all reasonable geometrical settings. However, this proposal has some drawbacks. Firstly, it seems to be inherently discontinuous as one moves from spacelike hypersurfaces to null ones, giving two different values, i.e. different proportionality constants. Moreover, if one accepts the conclusion of Machet and Wang [21], in a curved geometrical background and for a null hypersurface, the continuum limit of the expected number of Barton et al horizon molecules is not local to the intersection of the horizon and the hypersurface, yielding an ill-behaved area law. Nonetheless, if one sticks with spacelike hypersurfaces, the Barton et al counting provides a good measure for the area of the intersection of the horizon (a null surface) and the spacelike hypersurface, which is a promising aspect of such a horizon molecules proposal. Another weakness of the Barton et al definition, in our view, is that it lacks the intuitive physical and heuristic picture shared by the links and the triplet (and the diamond) proposals. The elements that underpin the Barton et al horizon molecules are points exterior to the black hole and to the past of the hypersurface, with no reference to the future of the hypersurface; hence it would be hard to view such molecules as heuristically producing entanglement during the course of the causal set growth (or time development).
Finally, the common weakness of all proposals discussed in this review is, of course, that they remain at a purely kinematical level, and even if a fully successful kinematical identification of the horizon molecules is achieved, no proposal can be substantiated or refuted before we possess a full quantum dynamics of causets. And despite a very important step made by Rideout and Sorkin in developing classical stochastic dynamics for causets, the _classical sequential growth_ models [29], building a viable quantum sequential growth dynamics has so far remained challenging [24; 30; 31]. **Acknowledgments** While writing this review I have benefited from several discussions with Fay Dowker and Ludovico Machet; I am grateful to them for answering my questions and discussing several issues regarding their works.
2305.04729
2-Drinfel'd double symmetry of the 4d Kitaev model
Following the general theory of categorified quantum groups developed by the author previously (arxiv:2304.07398), we construct the 2-Drinfel'd double associated to a finite group $N=G_0$. For $N=\mathbb{Z}_2$, we explicitly compute the braided 2-categories of 2-representations of a certain version of this 2-Drinfel'd double, and prove that they characterize precisely the 4d toric code and its spin-$\mathbb{Z}_2$ variant. This result relates the two descriptions (categorical vs. field theoretical) of 4d gapped topological phases in the existing literature and, perhaps more strikingly, displays the first ever instances of higher Tannakian duality for braided 2-categories. In particular, we show that particular twists of the underlying 2-Drinfel'd double are responsible for much of the higher-structural properties that arise in 4d topological orders.
Hank Chen
2023-05-08T14:29:07Z
http://arxiv.org/abs/2305.04729v3
# 2-Drinfel'd Double Symmetry of the 4d Kitaev Model ###### Abstract Following the general theory of categorified quantum groups developed by the author previously, we construct the 2-Drinfel'd double associated to a finite group \(N=G_{0}\). For \(N=\mathbb{Z}_{2}\), we explicitly compute the braided 2-categories of 2-representations of a certain version of this 2-Drinfel'd double, and prove that they characterize precisely the 4d toric code and its spin-\(\mathbb{Z}_{2}\) variant. This result relates the two descriptions (categorical vs. field theoretical) of 4d gapped topological phases in the existing literature and, perhaps more strikingly, displays the first ever instances of _higher Tannakian duality_ for braided 2-categories. In particular, we show that particular twists of the underlying 2-Drinfel'd double are responsible for much of the higher-structural properties that arise in 4d topological orders. ###### Contents

* 1 Introduction
* 2 Preliminaries
  * 2.1 A brief overview on 2-groups and 2-gauge theory
    * 2.1.1 Classification of 2-groups
    * 2.1.2 2-groups and categorical groups
    * 2.1.3 2-gauge theory
  * 2.2 A lightning review on 2-bialgebras and 2-Drinfel'd doubles
    * 2.2.1 2-group bialgebras
    * 2.2.2 Weakening the 2-bialgebra structure
  * 2.3 2-representations of the 2-Drinfel'd double
    * 2.3.1 Tensor product and direct sums
    * 2.3.2 Braiding
* 3 The 4d Kitaev model
  * 3.1 Monster 2-BF theory on the 2-Drinfel'd double \(D(BM)\)
    * 3.1.1 \(Z_{\text{Kit}}\) as a topological nonlinear \(\sigma\)-model
    * 3.1.2 Classification of 4d topological phases with a single pointlike \(\mathbb{Z}_{2}\)-charge
  * 3.2 Anomaly-freeness of the 4d spin-Kitaev model
* 4 Excitations in the (invisible) toric code \(Z_{\text{Kit}}^{0}\)
  * 4.1 Fusion structure
    * 4.1.1 2-intertwiners; the 1-morphisms
    * 4.1.2 Cochain homotopies; the 2-morphisms
  * 4.2 The braiding data
* 5 Excitations in the spin-Kitaev model \(Z_{\text{Kit}}^{s}\)
  * 5.1 Fusion structure in the twisted case
  * 5.2 Proof of the main theorem
* 6 Conclusion

## 1 Introduction It is well-known that the charge algebra of excitations in the 3d BF theory (or equivalently 3d Chern-Simons theory [1, 2]) associated to an ordinary Lie group \(G\) is described by its Jimbo-Drinfel'd \(q\)-deformed double \(D_{q}(G)\) [3, 4, 5, 6]. In particular, the representations of the Drinfel'd double are known to form a braided tensor category [7], the algebraic structures of which are of paramount importance in many areas of physics [8, 4, 9]. Of particular interest, and serving as the main motivation for this paper, is the theory of topological order in condensed matter. More precisely, it is well-known that the 2D toric code [10] can be described by an effective BF \(\mathbb{Z}_{2}\)-gauge theory [11], and therefore hosts a Drinfel'd double symmetry \(D(\mathbb{Z}_{2})\). This is a physical manifestation of the fact that the Drinfel'd centre of \(\mathrm{Rep}(\mathbb{Z}_{2})\) coincides with the representation category of \(D(\mathbb{Z}_{2})\), \[Z_{1}(\mathrm{Rep}(\mathbb{Z}_{2}))\simeq\mathrm{Rep}(D(\mathbb{Z}_{2})),\] the former of which describes the 2D toric code [8]. We shall always take the ground field \(k=\mathbb{C}\) in the rest of the paper. We wish to lift the above correspondence between gauge theory and symmetry-protected topological (SPT) phases to 4 dimensions by leveraging the following fact proven in [12].
**Theorem 1.1**.: _The 2-representation 2-category \(2\mathrm{Rep}^{\tau}(\mathcal{G})\) of a weak quasitriangular 2-bialgebra \((\mathcal{G},\mathcal{T},\Delta,\mathcal{R})\) forms a braided monoidal 2-category._ We refer the reader to [13, 14] for the definition of a braided monoidal 2-category. In terms of the following categorical ladder [15, 16], \[\begin{array}{ccccc}\text{trialgebra}&\longrightarrow&\text{Hopf category}&\longrightarrow&\text{monoidal 2-category}\qquad 4D\\ \downarrow&&\downarrow&&\\ \text{Hopf algebra}&\longrightarrow&\text{monoidal category}&&\qquad 3D\\ \downarrow&&&&\\ \text{algebra}&&&&\qquad 2D\end{array}\] our strategy involves studying concrete examples of an explicit model of Hopf categories [17] associated to certain 4d TFTs. For a study of trialgebras, see for instance [18, 19]. The goal of this paper will be accomplished in two steps. First, we specialize to a particular _2-group_ \(B\mathbb{Z}_{2}\) obtained by delooping the finite group \(\mathbb{Z}_{2}\), and form its 2-Drinfel'd double \(D(B\mathbb{Z}_{2})\). We then construct topological field theories based on \(D(B\mathbb{Z}_{2})\) which we collectively call the _4d Kitaev model_; we also briefly mention its relation to previous works [20, 2, 21, 22]. In analogy with the 3-dimensional case, the excitations of the 4d Kitaev model are labeled by the 2-representations of \(D(B\mathbb{Z}_{2})\) in the sense of [12]. The second step is then to explicitly compute the monoidal and braiding structures of the 2-representation 2-category \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\) by utilizing the structures of the underlying 2-Drinfel'd double [12]. Provided \(D(B\mathbb{Z}_{2})\) is equipped with certain twists/group cocycles on \(\mathbb{Z}_{2}\), we demonstrate that \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\) recovers in fact two of the braided monoidal 2-categories \(\mathscr{R},\mathscr{S}\) studied in [23, 24], which model respectively the 4d toric code and its spin counterpart. More precisely, we have **Theorem 1.2**.: _Let \(\bar{c}\in H^{2}(\mathbb{Z}_{2},\mathbb{Z}_{2})\) and \(\bar{e}\in H^{2}(\mathbb{Z}_{2},k^{\times})\) denote group 2-cocycles, and define two 2-group 4-cocycles_ \[\omega_{b}=0+\bar{e},\qquad\omega_{f}=\bar{c}[1]+\bar{e}\in H^{4}(B\mathbb{Z}_{2},k^{\times})\] _that twist the 2-algebra structure on \(D(B\mathbb{Z}_{2})\). We have the following braided equivalences_ \[2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\omega_{b}})\simeq\mathscr{R},\qquad 2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\omega_{f}})\simeq\mathscr{S},\] _where \(\tau\in H^{3}(\mathbb{Z}_{2},\mathbb{Z}_{2})\) is the Postnikov class of \(D(B\mathbb{Z}_{2})\)._ The 4-cocycles \(\omega_{b},\omega_{f}\), when pulled back by a classifying map \(X\to BD(B\mathbb{Z}_{2})\) on a 4-manifold \(X\), give precisely the Dijkgraaf-Witten Lagrangian of the underlying topological NLSM [20]. The meaning of "twisting" shall be made clear in the main text. Moreover, key results in [23] give immediately the following corollary. **Corollary 1.1**.: _There are braided equivalences_ \[2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\omega_{b}})\simeq Z_{1}(\Sigma\,\mathrm{Vect}[\mathbb{Z}_{2}]),\qquad 2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\omega_{f}})\simeq Z_{1}(\Sigma\,\mathrm{sVect}),\] _where \(Z_{1}\) is the Drinfel'd centre and \(\Sigma\) is the condensation completion functor defined in [25]._ This can be understood as an example of _categorified_ Tannakian reconstruction1 [18] in the special case of Drinfel'd centre 2-categories.
This also suggests that one may model the tube algebra in 4 dimensions [27] with the "fusion 2-ring" of the 2-Drinfel'd doubles \(D(BG)\), categorifying an analogous statement in 3d [3]. Footnote 1: This sentence should be stated more precisely by first picking appropriate forgetful \(2\)-functors into \(2\)Vect [24], analogous to the \(1\)-category case [26]. It is important to note that here the \(2\)-representations of \(D(B\mathbb{Z}_{2})\) we shall construct are based on "weak Baez-Crans \(2\)-vector spaces" in the sense of [12], while \(\Sigma\,\mathrm{Vect}[\mathbb{Z}_{2}]\simeq 2\mathrm{Vect}[\mathbb{Z}_{2}]\) gives (\(\mathbb{Z}_{2}\)-graded) \(2\)-vector spaces in the sense of Kapranov-Voevodsky [28]. Briefly, weak Baez-Crans \(2\)-vector spaces \(V\) have \(2\)-term \(A_{\infty}\)-algebras as endomorphisms, and the corresponding \(2\)-representation theory was shown in [12] to be capable of reproducing those in the literature [29, 30, 31, 32] based on the Kapranov-Voevodsky \(2\)-vector space. Nevertheless, we shall reserve the notation \(2\)Vect or \(2\)Rep to refer to the modified Baez-Crans sense in this paper, while keeping the condensation completion functor \(\Sigma\) to refer explicitly to the Kapranov-Voevodsky \(2\)-vector space. The paper is organized as follows. In Section 2, we shall recall some definitions that will be used throughout this paper. We very briefly review the notion of \(2\)-groups, \(2\)-algebras and \(2\)-gauge theory based on the existing literature [33, 34, 35, 36, 2, 21, 20], then move on to recall the construction and key duality structures of a \(2\)-quantum double as given in a previous work [12]. We focus in particular on skeletal models of \(2\)-Drinfel'd doubles based on finite Abelian groups -- as well as the properties of its \(2\)-\(R\)-matrix -- which play a central role in this paper. In Section 3, we define the "monster \(2\)-BF theory" [2] associated to the \(2\)-Drinfel'd double \(D(B\mathbb{Z}_{2})\), and construct its partition function on a \(4\)-manifold. We refer to the resulting theory as the **4d Kitaev model**. Here, we also introduce additional terms in the topological action in accordance with previous work on \(2\)-group Dijkgraaf-Witten theory [21, 20], and provide an interpretation for them as _twists_ on the underlying \(2\)-Drinfel'd double symmetry. With the main tools under our belt, we study the excitations of the \(4\)d Kitaev model (without twists) in Section 4. This is done by explicitly computing all of the braided and monoidal/fusion structures in the \(2\)-representation \(2\)-category \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\). We show that the \(4\)d Kitaev model without any twists _cannot_ describe any non-degenerate gapped topological phase, as the symmetric Müger centre \(Z_{2}(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})))\neq 2\mathrm{Vect}\) is in fact not trivial. This problem is amended in Section 5 by incorporating the twists, whence the fusion rules and braided structures are revisited. We find that, by interpreting these twists naturally as properties of the \(2\)-representations of \(D(B\mathbb{Z}_{2})\), we are able to finally prove **Theorem 1.2** directly. We provide explicit braided equivalences between the \(2\)-categories \(\mathscr{R},\mathscr{S}\) studied in [23, 24] and the \(2\)-representation \(2\)-categories of twisted versions of the \(2\)-Drinfel'd double \(D(B\mathbb{Z}_{2})\).
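Before entering the preliminaries, it may help to keep in mind the decategorified picture one dimension down; the following are standard facts about the 3d toric code, recalled here for orientation rather than taken from the works cited above. The category \(\mathrm{Rep}(D(\mathbb{Z}_{2}))\) has four simple objects \[\{1,\,e,\,m,\,f\},\qquad e\otimes e\cong m\otimes m\cong 1,\qquad f\cong e\otimes m,\] with trivial twists \(\theta_{1}=\theta_{e}=\theta_{m}=1\), fermionic twist \(\theta_{f}=-1\), and full braiding \(b_{me}\circ b_{em}=-1\) between the charge \(e\) and the flux \(m\). Theorem 1.2 is the categorified analogue of this statement: the simple objects become 2-representations, and the braiding phases become braided 2-categorical data.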
## 2 Preliminaries Let us begin with a brief overview of the notion of 2-groups, 2-bialgebras and 2-gauge theory [37, 33, 38, 39, 2, 12]. In this article, we shall mainly consider the presentation of a 2-group \(G\) as a group crossed-module, but there are many equivalent definitions that we shall mention. ### A brief overview on 2-groups and 2-gauge theory **Definition 2.1**.: The crossed-module model of a **2-group** \(G\) is a group homomorphism \(t:G_{-1}\to G_{0}\) between the groups \(G_{0}\) and \(G_{-1}\), together with an action \(\rhd\) of \(G_{0}\) on \(G_{-1}\) such that the following conditions \[t(x\rhd y)=xt(y)x^{-1},\qquad(ty)\rhd y^{\prime}=yy^{\prime}y^{-1} \tag{2.1}\] are satisfied for each \(x\in G_{0}\) and \(y,y^{\prime}\in G_{-1}\). The first and second conditions are known respectively as the **equivariance** and the **Peiffer identity**. Alternatively, a 2-group is equivalent to a 2-term crossed complex of groups. Readers more familiar with category theory may understand 2-groups as a connected 2-groupoid \([G_{-1}\rtimes G_{0},G_{0},*]\) [30, 29], or equivalently its loop 1-groupoid \(G_{-1}\rtimes G_{0}\rightrightarrows G_{0}\) [33]. These are equivalent to the group crossed-module description [33, 30]. #### 2.1.1 Classification of 2-groups As we shall mainly be interested in topological field theories, we only need to work with equivalence classes of 2-groups. More precisely, we consider the 2-group \(G=G_{-1}\to G_{0}\), as a 2-term complex of groups, only up to chain homotopies. **Definition 2.2**.: A cochain map between 2-term group complexes, or simply a **2-group homomorphism**, is a graded map \(\phi=(\phi_{-1},\phi_{0}):G\to G^{\prime}\) such that 1. \(\phi_{0}:G_{0}\to G^{\prime}_{0}\) and \(\phi_{-1}:G_{-1}\to G^{\prime}_{-1}\) are group homomorphisms, 2. \(\phi_{-1}(x\rhd y)=(\phi_{0}x)\rhd^{\prime}(\phi_{-1}y)\) for each \(x\in G_{0},y\in G_{-1}\), 3. \(\phi_{0}t=t^{\prime}\phi_{-1}\). We say that two 2-groups \(G,G^{\prime}\) are elementarily equivalent, or quasi-isomorphic, if there exists a weakly invertible 2-group homomorphism between them. The fundamental classification result [20, 21, 34, 40] is that **Theorem 2.1**.: **(Gerstenhaber, attr. Mac-Lane).** _2-groups are classified up to quasi-isomorphism by a degree-3 group cohomology class \(\tau\in H^{3}(N,M)\), called the_ **Postnikov class** _of \(G\), where \(N=\operatorname{coker}t,M=\ker t\)._ Note \(M\) is Abelian due to the Peiffer identity. The tuple \((N,M,\tau)\) is known as the _Hoang data_ [40, 41], which was proven by Hoang to classify "Gr-categories". Note that the part \((N,M)\) of the Hoang data of \(G\) defines a skeletal 2-group, where the \(t\)-map is trivial. We call this 2-group \(\pi G=M\xrightarrow{0}N\) the **skeleton** of \(G\). Clearly, skeletal 2-groups \(G\) are their own skeletons, and are hence classified by the Hoang data \((G_{0},G_{-1},\tau)\). #### 2.1.2 2-groups and categorical groups In this skeletal case, we can in fact understand \(G=(G_{0},G_{-1},\tau)\) as a **categorical group** with categorical structure [42] \[\operatorname{Obj}_{G}=G_{0},\qquad\operatorname{Hom}(x,x^{\prime})=\delta_{xx^{\prime}}G_{-1},\quad x,x^{\prime}\in G_{0},\] with the Abelian \(G_{0}\)-module \(G_{-1}\) written additively.
Given \(x,x^{\prime}\in G_{0}\) and \(y\in\operatorname{End}(x),y^{\prime}\in\operatorname{End}(x^{\prime})\), we have two monoidal structures \[(y,x)(y^{\prime},x^{\prime})=(y+x\rhd y^{\prime},xx^{\prime}),\qquad(y,x)+(y^{\prime},x^{\prime})=\begin{cases}(y+y^{\prime},x)&;x=x^{\prime}\\ 0&;\text{otherwise}\end{cases}\] which must satisfy the _interchange law_ [43] \[((y_{1},x_{1})+(y^{\prime}_{1},x_{1}))((y_{2},x_{2})+(y^{\prime}_{2},x_{2}))=(y_{1},x_{1})(y_{2},x_{2})+(y^{\prime}_{1},x_{1})(y^{\prime}_{2},x_{2}). \tag{2.2}\] Moreover, we have distinguished associator isomorphisms given by a group 3-cochain \(\tau:G_{0}^{\times 3}\to G_{-1}\), \[\tau(x,x^{\prime},x^{\prime\prime}):(xx^{\prime})x^{\prime\prime}\xrightarrow{}x(x^{\prime}x^{\prime\prime}),\] for which the **pentagon relation** (2.3) implies that it is in fact a 3-cocycle, which represents the Postnikov class \(\tau\in H^{3}(G_{0},G_{-1})\). This yields a categorical understanding of the kind of algebraic data a 2-group encodes; similar categorical realizations can also be given for 2-algebras [35, 34]. If one is more geometrically-minded, one may construct the _classifying space_ \(BG\) of the skeletal 2-group \(G\) through the Postnikov tower [20, 21]. This space is given as a fibration \[B^{2}G_{-1}\to BG\to BG_{0}\to*,\] in which \(BG_{0}\) is the classifying space of \(G_{0}\) and \(B^{2}G_{-1}\) is the second delooping of \(G_{-1}\), namely \(\pi_{2}(B^{2}G_{-1})=G_{-1}\) with all other homotopy groups trivial. The (representative of the) Postnikov class \(\tau\in H^{3}(G_{0},G_{-1})\) then prescribes the way in which the 2-cells of \(B^{2}G_{-1}\) are glued onto the principal path fibration on \(BG_{0}\) [44]. _Remark 2.1_.: More generally, the monoidal structure can be weakened in various ways by introducing distinguished _2-morphisms_ to \(G\), which are invertible elements \(k^{\times}\) of the ground field \(k\) [23]. For instance, (2.2) can be implemented by an interchanger 2-morphism \(h(y_{1},y_{1}^{\prime};y_{2},y_{2}^{\prime})\in k^{\times}\), which is an isomorphism between 1-morphisms over \(xx^{\prime}\in G_{0}\). Together with the associator 1-morphism \(\tau\), it defines an associator 2-morphism [22], \[\tilde{\tau}(y,y^{\prime},y^{\prime\prime}):\tau(x,x^{\prime},x^{\prime\prime})+(yy^{\prime})y^{\prime\prime}\to y(y^{\prime}y^{\prime\prime})+\tau(x,x^{\prime},x^{\prime\prime}),\] for the 1-morphisms \(y,y^{\prime},y^{\prime\prime}\in G_{-1}\) with respective sources \(x,x^{\prime},x^{\prime\prime}\in G_{0}\). The pentagon relation can also be weakened to a _pentagonator_ \(\nu(x_{1},x_{2},x_{3},x_{4})\in k^{\times}\), which is classified by a degree-4 group cohomology class in \(H^{4}(G_{0},k^{\times})\) [20]. In the following, we shall interchangeably understand skeletal 2-groups both in terms of a crossed complex and as a categorical group; in particular, we shall also call the classifying Postnikov class \(\tau\) the "associator". The same will be done for the 2-algebras introduced later. #### 2.1.3 2-gauge theory The notion of 2-groups is the natural setting in which to study a _2-gauge theory_, i.e. a categorification of the usual gauge theory. Recall that the fundamental structure underlying gauge theory is a principal bundle \(P\to X\) with connection [45]. Locally, one may describe the connection in terms of a Lie algebra-valued connection 1-form \(A\).
Similarly, a principal 2-bundle \(\mathcal{P}\to X\) on \(X\) with connection is described locally by a pair of connection forms \[A\in\Omega^{1}(X)\otimes\mathfrak{g}_{0},\qquad\Sigma\in\Omega^{2}(X)\otimes\mathfrak{g}_{-1},\] where \(\mathfrak{g}_{0}\) and \(\mathfrak{g}_{-1}\) are the Lie algebras of respectively \(G_{0}\) and \(G_{-1}\). The 2-form \(\Sigma\) allows us to define a _2-holonomy_ \(2\mathrm{Hol}_{\mathcal{P}}(S)=S\exp\int_{S}\Sigma\) across a 2-dimensional surface \(S\subset X\), where \(S\exp\) is the surface-ordered exponential defined in [46]. The covariant geometric quantities are given by the fake-flatness and the 2-curvature [47, 2] \[\mathcal{F}=F-t\Sigma,\qquad\mathcal{G}=d_{A}\Sigma-\kappa(A),\] where \(\kappa(A)\) is the evaluation of a 3-cocycle representative of the Postnikov class \(\kappa\in H^{3}(\mathfrak{n},V)\) on the 1-connection \(A\). For the details regarding the gauge structure, see [39, 48]. The same construction can be applied to define flat 2-connections valued in the skeleton \(\pi\mathfrak{g}\) of \(\mathfrak{g}\). _Remark 2.2_.: Note that here we have used a Lie algebra crossed-module to model the 2-gauge theory. This is because the gauge structure associated to a weak Lie 2-algebra is problematic: the 2-gauge algebra does not close unless fake-flatness \(\mathcal{F}=0\) is always satisfied [49]. There is no such problem in the crossed-module formulation, as long as we introduce the first descendant \(\zeta(A,\lambda)\) [21] of the Postnikov class \(\kappa\) into the 1-gauge transformation [49]. In the above, we have outlined the continuum description of a 2-gauge theory on \(X\) using Lie 2-algebras, but a similar construction can be applied to yield a discrete 2-gauge theory [50, 21]. In particular, we define a discrete 2-connection \((A,\Sigma)\) as a pair of cochains \[A\in C^{1}(X,G_{0}),\quad\Sigma\in C^{2}(X,G_{-1})\] satisfying the flatness conditions \[(d_{A}A)_{(012)}=t\Sigma_{(012)},\qquad(d_{A}\Sigma)_{(0123)}=\kappa(A_{\rm tree})\] on a simplicial discretization of \(X\), where we denote an oriented \(k\)-simplex by an ordered tuple \((0\cdots k)\) and \(d\) is now the differential on group cochains. Here, \(A_{\rm tree}\) denotes the evaluation of \(A\) on the edges \((01),(02),(13)\) of \((0123)\) [21]. Topological 2-gauge theories and higher-SPT phases. In general, a 2-group homomorphism \(G\to G^{\prime}\) in fact induces a 2-bundle homomorphism \(\mathcal{P}\to\mathcal{P}^{\prime}\) which preserves the covariant curvature quantities \((\mathcal{F},\mathcal{G})\). As such, by the Gerstenhaber classification theorem above, 2-gauge bundles are classified up to 2-bundle maps by the Hoang data \((\pi G,\tau)=(N,M,\tau)\). The topological information encoded in a 2-gauge theory with structure 2-group \(G\) is thus captured up to homotopy by a representative 2-bundle \(\pi\mathcal{P}\), whose structure 2-group is the skeleton \(\pi G\) of \(G\). Indeed, it is known that flat 2-connections are in one-to-one correspondence with homotopy classes of classifying maps \(f:X\to B(\pi G)\), where \(B(\pi G)\) is the classifying space of the skeleton \(\pi G=M\to N\) (via the Postnikov tower construction) [21, 20]. These classifying maps define 2-group homomorphisms \(\Pi_{2}X\to\pi G\) from the homotopy 2-type \(\Pi_{2}X\) of \(X\) to \(\pi G\); such maps have been used to construct 3d topological defects [40].
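To unpack the discrete flatness conditions a bit (an illustrative spelling-out on a single simplex, in one common sign and ordering convention for group cochains, which may differ from that of [21]): on a 2-simplex and a 3-simplex they read \[(d_{A}A)_{(012)}=A_{(01)}A_{(12)}A_{(02)}^{-1}=t\Sigma_{(012)},\] \[(d_{A}\Sigma)_{(0123)}=A_{(01)}\rhd\Sigma_{(123)}-\Sigma_{(023)}+\Sigma_{(013)}-\Sigma_{(012)}=\kappa(A_{\rm tree}),\] with \(\Sigma\) written additively. The first condition says that the 1-holonomy around a triangle is measured through \(t\) by the surface field; the second says that the total 2-flux through the boundary of a tetrahedron is sourced by the Postnikov cocycle.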
The associated 2-gauge theory has also been proposed to describe higher symmetry-protected topological (SPT) phases of matter in (3+1)D [21]. It is typically understood and accepted that the excitations in such 2-gauge theories should form a braided monoidal 2-category that characterizes 4-dimensional topological phases. In this paper, we make this statement precise and rigorous. In the following, we shall recall the 2-Drinfel'd double construction of [12], but specialized to _skeletal_ 2-group algebras \(kG\). We will also describe some basic monoidal properties of its 2-representation 2-category \(2\mathrm{Rep}^{\tau}(D(G))\), where \(\tau\) is the Postnikov class of \(G\). We shall use this structure in order to construct a (3+1)D topological field theory (TFT) and study its excitations as 2-representations of the 2-Drinfel'd double \(D(G)\) of \(G\). All 2-groups in the following shall be understood as skeletal. ### A lightning review on 2-bialgebras and 2-Drinfel'd doubles Let \(k\) denote the ground field in the following. Recall that a bialgebra \((H,\cdot,\Delta)\) is a vector space \(H\) equipped with an associative algebra structure and a coassociative coalgebra structure \(\Delta:H\to H\otimes H\) that are mutually compatible, in the sense that \[\Delta(hh^{\prime})=h_{(1)}h^{\prime}_{(1)}\otimes h_{(2)}h^{\prime}_{(2)},\qquad\Delta(h)=h_{(1)}\otimes h_{(2)}\] for all \(h,h^{\prime}\in H\). Note we have used the conventional Sweedler notation for the coproduct \(\Delta\). Let \((\mathcal{G}_{0},\Delta^{\prime}_{0}),(\mathcal{G}_{-1},\Delta_{-1})\) denote a pair of associative bialgebras, and equip them with the following algebra maps \[\cdot:\mathcal{G}_{0}\otimes\mathcal{G}_{-1}\oplus\mathcal{G}_{-1}\otimes\mathcal{G}_{0}\to\mathcal{G}_{-1},\qquad\Delta_{0}=\Delta^{l}_{0}\oplus\Delta^{r}_{0}:\mathcal{G}_{0}\to\mathcal{G}_{-1}\otimes\mathcal{G}_{0}\oplus\mathcal{G}_{0}\otimes\mathcal{G}_{-1}.\] We say that \(\mathcal{G}_{-1}\) is a \(\mathcal{G}_{0}\)-bimodule iff \[x\cdot(yy^{\prime})=(x\cdot y)y^{\prime},\qquad y(x\cdot y^{\prime})=(y\cdot x)y^{\prime},\qquad(yy^{\prime})\cdot x=y(y^{\prime}\cdot x)\] for all \(y,y^{\prime}\in\mathcal{G}_{-1}\) and \(x\in\mathcal{G}_{0}\), and that \(\mathcal{G}_{0}\) is a \(\mathcal{G}_{-1}\)-cobimodule iff \[(1\otimes\Delta^{\prime}_{0})\circ\Delta^{l}_{0}=(\Delta^{l}_{0}\otimes 1)\circ\Delta^{\prime}_{0},\] \[(\Delta^{l}_{0}\otimes 1)\circ\Delta^{r}_{0}=(1\otimes\Delta^{r}_{0})\circ\Delta^{l}_{0},\] \[(1\otimes\Delta^{l}_{0})\circ\Delta^{\prime}_{0}=(\Delta^{r}_{0}\otimes 1)\circ\Delta^{\prime}_{0}.\] The following is from [12]. **Definition 2.3**.: An **associative 2-bialgebra** \((\mathcal{G},\cdot,\Delta)\) is an algebra/coalgebra homomorphism \(t:\mathcal{G}_{-1}\to\mathcal{G}_{0}\) such that 1. \(\mathcal{G}_{-1}\) is a \(\mathcal{G}_{0}\)-bimodule and \(\mathcal{G}_{0}\) is a \(\mathcal{G}_{-1}\)-cobimodule, 2. \(t\) is two-sided \(\mathcal{G}_{0}\)**-equivariant** and \(\mathcal{G}_{-1}\)**-coequivariant**, \[\begin{cases}t(x\cdot y)=xt(y)\\ t(y\cdot x)=t(y)x\end{cases},\qquad D^{+}_{t}\circ\Delta_{-1}=\Delta_{0}\circ t\] (2.4) for all \(y\in\mathcal{G}_{-1},x\in\mathcal{G}_{0}\), where \(D^{\pm}_{t}=t\otimes 1\pm 1\otimes t\), and 3. the **Peiffer/coPeiffer identities** are satisfied, \[t(y)\cdot y^{\prime}=yy^{\prime}=y\cdot t(y^{\prime}),\qquad D^{-}_{t}\circ\Delta_{0}=0,\] (2.5) where \(y,y^{\prime}\in\mathcal{G}_{-1}\), and 4.
the **2-bialgebra axioms** are satisfied, \[\Delta_{-1}(x\cdot y)=\bar{x}_{(1)}\cdot y_{(1)}\otimes\bar{x}_{(2)}\cdot y_{(2)},\qquad\Delta_{-1}(y\cdot x)=y_{(1)}\cdot\bar{x}_{(1)}\otimes y_{(2)}\cdot\bar{x}_{(2)},\] \[\Delta_{0}^{l}(xx^{\prime})=x_{(1)}^{l}x_{(1)}^{\prime l}\otimes x_{(2)}^{l}x_{(2)}^{\prime l},\qquad\Delta_{0}^{r}(xx^{\prime})=x_{(1)}^{r}x_{(1)}^{\prime r}\otimes x_{(2)}^{r}x_{(2)}^{\prime r},\] (2.6) where \(x,x^{\prime}\in\mathcal{G}_{0},y\in\mathcal{G}_{-1}\) and we have used the conventional Sweedler notation \[\Delta_{0}(x)=x_{(1)}^{l}\otimes x_{(2)}^{l}+x_{(1)}^{r}\otimes x_{(2)}^{r},\qquad\Delta_{0}^{\prime}(x)=\bar{x}_{(1)}\otimes\bar{x}_{(2)}.\] A 2-bialgebra \(\mathcal{G}\) is _unital_ if there exists a unit \(\eta=(\eta_{-1},\eta_{0}):k\to\mathcal{G}\) and a counit \(\epsilon=(\epsilon_{-1},\epsilon_{0}):\mathcal{G}\to k\) such that \[\eta_{-1}y=y\eta_{-1}=y,\quad\eta_{0}x=x\eta_{0}=x,\quad\eta_{0}\cdot y=y\cdot\eta_{0}=y,\] \[\mathrm{id}=(\mathrm{id}\otimes\epsilon_{-1})\circ\Delta_{-1},\quad\mathrm{id}=(\epsilon_{-1}\otimes\mathrm{id})\circ\Delta_{-1},\] \[\mathrm{id}=(\epsilon_{-1}\otimes\mathrm{id})\circ\Delta_{0}^{l},\quad\mathrm{id}=(\mathrm{id}\otimes\epsilon_{-1})\circ\Delta_{0}^{r}.\] Moreover, \(t\) should respect the units and counits, such that \(t\eta_{-1}=\eta_{0}\) and \(\epsilon_{-1}=\epsilon_{0}t\). We note that for non-trivial \(t\neq 0\), the component \(\Delta_{0}^{\prime}\) of the coproduct is constrained to satisfy \[\Delta_{0}^{\prime}=(t\otimes 1)\circ\Delta_{0}^{l}=(1\otimes t)\circ\Delta_{0}^{r}.\] Otherwise (i.e. for _skeletal_ 2-algebras), all of the components of the coproduct are independent. Classification of 2-algebras. Similar to the case of 2-groups, a 2-algebra homomorphism \(f=(f_{-1},f_{0}):\mathcal{G}\to\mathcal{G}^{\prime}\) is a cochain map between 2-algebras, or equivalently a graded pair of algebra homomorphisms that respect the underlying bimodule structure, such that 1. \(f_{0}:\mathcal{G}_{0}\to\mathcal{G}_{0}^{\prime}\) and \(f_{-1}:\mathcal{G}_{-1}\to\mathcal{G}_{-1}^{\prime}\) are algebra homomorphisms, 2. \(f_{-1}(x\cdot y)=(f_{0}x)\cdot^{\prime}(f_{-1}y)\) and \(f_{-1}(y\cdot x)=(f_{-1}y)\cdot^{\prime}(f_{0}x)\) for each \(x\in\mathcal{G}_{0},y\in\mathcal{G}_{-1}\), and 3. \(f_{0}t=t^{\prime}f_{-1}\). **Theorem 2.2**.: **(Gerstenhaber, attr. Wagemann)** _Associative 2-algebras are classified up to cochain homotopy by a third Hochschild cohomology class \(\mathcal{T}\in HH^{3}(\mathcal{N},\mathcal{M})\), where \(\mathcal{N}=\mathrm{coker}\,t\) and \(\mathcal{M}=\ker t\)._ The Peiffer identity implies that \(\mathcal{M}\subset Z(\mathcal{G}_{-1})\) is in the _nucleus_ of \(\mathcal{G}_{-1}\); it is in fact a square-free ideal [34]. #### 2.2.1 2-group bialgebras The main example of 2-bialgebras of interest in this paper comes from finite 2-groups \(G\), in which we take the group algebra functor on each graded component \(kG_{-1},kG_{0}\) [18]. One may try to extend all 2-group structures \(k\)-linearly, but the main difficulty is that \(G_{0}\) acts on \(G_{-1}\) by group automorphisms, not in the manner required by the Peiffer identity of a 2-algebra. As a result, the \(kG_{0}\)-bimodule structure on \(kG_{-1}\) induced this way, namely \[\begin{cases}x\cdot y=x\rhd y\\ y\cdot x=x^{-1}\rhd y\end{cases},\qquad x\in kG_{0},\ y\in kG_{-1},\] does not give a bona fide 2-algebra. There are several ways to amend this.
One is to quotient \(kG_{-1}\) by the ideal generated by \(yy^{\prime}-yy^{\prime}y^{-1}\); this is the method given in [34], and is much too trivial for our purposes. We are looking instead for an association \(G\to\mathcal{G}\) which preserves the equivalence classes. We can describe this construction explicitly in the skeletal case. Let \(G=M\xrightarrow{0}N\) denote a 2-group; we simply extend all structures \(k\)-linearly _except_ for the \(t\)-map. We take, instead, the skeletal 2-algebra \(\mathcal{G}=kG\) with \(t=0:kM\to kN\). This construction has the desirable property that it preserves the equivalence class. _Remark 2.3_.: To see this, we leverage a natural (ring) isomorphism of the Hochschild cohomology \(HH^{*}(kN,kN)\) with the group cohomology \(H^{*}(N,kN)\) [51]. If \(N\) is Abelian, then \[HH^{*}(kN,kN)\cong kN\otimes_{k}H^{*}(N,k).\] Given that \(N,M\) are furthermore isomorphic, then in degree-3 we have an isomorphism of Abelian groups \[HH^{3}(kN,kM)\cong kM\otimes_{k}H^{3}(N,k)\cong H^{3}(N,M)\otimes k. \tag{2.7}\] Given the element \(\tau\in H^{3}(N,M)\) classifying the 2-group \(G=M\xrightarrow{0}N\), we take the corresponding element \(\mathcal{T}\in HH^{3}(kN,kM)\) for the associated 2-algebra \(\mathcal{G}=kG\). Skeletal 2-Drinfel'd double of a finite cyclic Abelian group. Let \(p:kN\xrightarrow{\sim}kM\) denote the linear isomorphism induced from \(N\cong M\); the simplest coproduct \(\Delta\) we can endow on the 2-algebra \(kG=kM\xrightarrow{0}kN\) is the grouplike coproduct: \[\Delta_{-1}(y)=y\otimes y,\qquad\Delta_{0}^{\prime}(x)=x\otimes x,\] \[\Delta_{0}(x)=p(x)\otimes x+x\otimes p(x), \tag{2.8}\] where \(x\in kN,y\in kM\). By definition, \(\Delta\) is coassociative and admits the usual antipode \(S_{0}(x)=x^{-1},S_{-1}(y)=y^{-1}\), together with the unit/counit \[\begin{cases}\eta_{0}=1\in N\\ \eta_{-1}=1\in M\end{cases},\qquad\begin{cases}\epsilon_{0}(x)=\delta_{x\eta_{0}}\in k\\ \epsilon_{-1}(y)=\delta_{y\eta_{-1}}\in k\end{cases}.\] Moreover, this coproduct can very easily be shown to satisfy the 2-bialgebra axioms, \[\Delta_{-1}(x\cdot y)=x\cdot y\otimes x\cdot y,\qquad\Delta_{-1}(y\cdot x)=y\cdot x\otimes y\cdot x,\] \[\Delta_{0}(xx^{\prime})=p(x)p(x^{\prime})\otimes xx^{\prime}+xx^{\prime}\otimes p(x)p(x^{\prime}),\] where we have used the fact that \(p\) is a group homomorphism, \(p(xx^{\prime})=p(x)p(x^{\prime})\). This defines \((kG,\cdot,\Delta,S)\) as a unital 2-Hopf algebra (see Appendix A of [12]), but we require more structure: we require it to define a _2-Drinfel'd double_. The 2-Drinfel'd double is, briefly, a graded bicrossed product \(\mathcal{G}\bowtie\mathcal{H}\) of two mutually paired 2-bialgebras \((\mathcal{G},\mathcal{H})\) [12], and is therefore in particular naturally **self-dual** in the following sense. **Proposition 2.1**.: _Suppose \(\mathcal{G}^{*}=\mathcal{G}_{0}^{*}\xrightarrow{t^{*}}\mathcal{G}_{-1}^{*}\) is the linear dual of a 2-algebra \(\mathcal{G}=\mathcal{G}_{-1}\xrightarrow{t}\mathcal{G}_{0}\) under the duality pairing_ \[\langle(g,f),(y,x)\rangle=\langle f,y\rangle+\langle g,x\rangle \tag{2.9}\] _where \(x\in\mathcal{G}_{0},y\in\mathcal{G}_{-1}\) and \(g\in\mathcal{G}_{0}^{*},f\in\mathcal{G}_{-1}^{*}\). Then \((\mathcal{G},\cdot,\Delta)\) is a (unital) 2-bialgebra iff \((\mathcal{G}^{*},\cdot^{*},\Delta^{*})\) is also a (unital) 2-bialgebra. The coproduct \(\Delta\) (resp.
counit \(\epsilon\)) dualizes to the product \(\cdot^{*}\) (resp. unit \(\eta^{*}\)) of the dual, and vice versa._ To endow our particular skeletal 2-bialgebra \((kG,\cdot,\Delta)\) with the structure of a 2-Drinfel'd double, we take \(M=\widehat{N}\) -- the Pontryagin dual of \(N\) -- such that \[kG=D(BM)=kBM\bowtie kN_{*},\qquad\begin{cases}BM=M\xrightarrow{0}*\\ N_{*}=*\xrightarrow{0}N\end{cases}.\] The map \(p:x\mapsto p(x)=\hat{x}\) is the Pontryagin duality isomorphism (recall both \(N,M\) are cyclic Abelian groups), and \(kM\) is an \(N\)-bimodule canonically through the dual left- and right-actions \[(x\cdot y)(x^{\prime})=y(x^{-1}x^{\prime}),\qquad(y\cdot x)(x^{\prime})=y(x^{\prime}x), \tag{2.10}\] where \(x,x^{\prime}\in N\) and \(y\in\widehat{N}=M\). This two-sided action dualizes to the grouplike coproduct \(\Delta_{0}\) in (2.8), as required by self-duality. We call \((D(BM),\cdot,\Delta,S)\) the (skeletal) 2-Drinfel'd double of \(M\). Another important property of a 2-Drinfel'd double is its **factorizability**. Strictly speaking, this implies that the bicrossed product 2-bialgebra \(kBM\bowtie kN_{*}\) _cannot_ have any non-trivial Hochschild class, as both of its sectors have trivial Hochschild cohomology groups \[HH^{3}(*,kM)=0,\qquad HH^{3}(kN,*)=0.\] As such, we shall understand \(D(BM)\) as a factorizable 2-bialgebra \(\mathcal{K}\) equipped with a span of 2-bialgebra inclusions, \[kBM\hookrightarrow\mathcal{K}\hookleftarrow kN_{*},\] but with a possibly non-trivial Hochschild 3-cocycle \(\mathcal{T}\in HH^{3}(kN,kM)\). By _Remark 2.3_, the structure of \(D(BM)\) allows it to inherit \(\mathcal{T}\) directly from the Postnikov class \(\tau\) of its underlying 2-group, hence we shall denote the Hochschild 3-cocycle \(\mathcal{T}\) by \(\tau\) as well. #### 2.2.2 Weakening the 2-bialgebra structure Going back to **Definition 2.3**, let \((\mathcal{G},\cdot,\Delta)\) denote a general 2-bialgebra. Similar to the case of 2-groups, there is a corresponding notion of weak 2-algebras and weak 2-bialgebras. This monoidal weakening is accomplished by violating associativity/coassociativity in a manner "controlled" by the following maps \[\mathcal{T}:\mathcal{G}_{0}^{3\otimes}\to\mathcal{G}_{-1},\qquad\Delta_{1}:\mathcal{G}_{0}\to\mathcal{G}_{-1}^{3\otimes},\] called appropriately the **associator/coassociator** of the 2-bialgebra \((\mathcal{G},\cdot,\Delta)\), respectively. Provided certain compatibility conditions between these maps hold, the duality result **Proposition 2.1** extends to the weak case. For a proof of this, as well as the general construction of weak 2-bialgebras, we refer the reader to [12]. The coassociator \(\Delta_{1}\) is of particular importance, as it in fact defines the natural associator morphisms \(a\) on \(2\mathrm{Rep}^{\mathcal{T}}(\mathcal{G})\) [12]. This does not come as a surprise, as we know that the tensor product of 2-representations is implemented by precomposing with the coproduct \(\Delta\). We have shown in [12] that the pentagon relations for these associators follow from the compatibility condition2 \[\Delta_{-1}\circ\Delta_{1}=\Delta_{1}\circ\Delta_{0}. \tag{2.11}\] Footnote 2: Here we are using a shorthand for the tensor extensions of the coproduct maps. Now, what is important for us in this paper is the 2-Drinfel'd double \(D(BM)=kBM\bowtie kN_{*}\), in which \(M=\widehat{N}\).
This particular 2-bialgebra has two important features: it is _skeletal_ and _self-dual_. The fact that it is skeletal means that all adjoint equivalences (i.e. the "mixed associator" 2-morphisms such as \(a_{VWk}:a_{VWU}\Rightarrow a_{VWU^{\prime}}\) for a 2-intertwiner \(k:U\to U^{\prime}\)) are isomorphisms, and hence the only non-trivial data endowed by \(\Delta_{1}\) are the associator 1-morphisms \[a_{VWU}:(V\otimes W)\otimes U\to V\otimes(W\otimes U)\] on 2-representations \(V,W,U\in 2\mathrm{Rep}^{\tau}(D(BM))\). Secondly, the fact that \(D(BM)\) is self-dual with respect to (2.9) means that the coassociator \(\Delta_{1}\) is in fact dual to the Hochschild 3-cocycle \(\mathcal{T}\), \[\langle x,\tau(x_{1},x_{2},x_{3})\rangle=\langle\Delta_{1}(x),x_{1}\otimes x_{2}\otimes x_{3}\rangle,\qquad x,x_{1},x_{2},x_{3}\in\big{(}D(BM)\big{)}_{0}=kN_{*}.\] Under this duality, the condition (2.11) is equivalent to the Hochschild 3-cocycle condition for \(\mathcal{T}\). By _Remark 2.3_, such Hochschild 3-cocycles are determined by the classifying Postnikov class of the 2-group underlying the 2-Drinfel'd double \(D(BM)\). In summary, the Postnikov class \(\tau\in H^{3}(N,M)\) (i) determines the Hochschild class \([\mathcal{T}]\) of \(D(BM)\), which in turn (ii) dualizes to the coassociator \(\Delta_{1}\), which in turn (iii) gives rise to the associator 1-morphism \(a\) on \(2\mathrm{Rep}^{\tau}(D(BM))\). In other words, an element in \(H^{3}(N,M)\) determines the associator data \(a\) on \(2\mathrm{Rep}^{\tau}(D(BM))\), consistent with the literature [20, 22, 23]. ### 2-representations of the 2-Drinfel'd double The key result in [12] states that the 2-representations of a 2-Drinfel'd double \(D(BM)\) form a braided monoidal 2-category as defined in [14]. We shall recall how the braiding and fusion structures of the 2-representations are induced by the 2-Hopf algebra structure of \(D(BM)\). We follow the Baez-Crans definition of a 2-vector space, which is equivalently a 2-term cochain complex \(V=V_{-1}\xrightarrow{\partial}V_{0}\) of vector spaces [52] -- namely a nuclear 2-algebra [34] or an Abelian Lie 2-algebra [37, 53]. There is a natural associative 2-algebra \(\mathrm{End}(V)=\mathrm{End}(V)_{-1}\xrightarrow{\delta}\mathrm{End}(V)_{0}\) of linear transformations on \(V\), given by [53] \[\mathrm{End}(V)_{0}=\{(f,g)\in\mathrm{End}(V_{-1})\times\mathrm{End}(V_{0})\mid\partial f=g\partial\},\] \[\mathrm{End}(V)_{-1}=\{\phi\in\mathrm{Hom}(V_{0},V_{-1})\mid(\phi\partial,\partial\phi)\in\mathrm{End}(V_{-1})\times\mathrm{End}(V_{0})\},\] equipped with the 2-algebra structure \[\delta:\phi\mapsto(\phi\partial,\partial\phi),\qquad(f,g)\cdot\phi=f\phi,\qquad\phi\cdot(f,g)=\phi g.\] The associativity of matrix multiplication implies that \(\operatorname{End}(V)_{-1}\) is clearly an \(\operatorname{End}(V)_{0}\)-bimodule. Moreover, \(\delta\) is two-sided equivariant, \[\delta((f,g)\cdot\phi)=(f\phi\partial,\partial f\phi)=(f\phi\partial,g\partial\phi)=(f,g)\delta(\phi),\] \[\delta(\phi\cdot(f,g))=(\phi g\partial,\partial\phi g)=(\phi\partial f,\partial\phi g)=\delta(\phi)(f,g),\] and we also have the Peiffer identity (note \(\phi,\phi^{\prime}\in\operatorname{End}(V)_{-1}\)) \[\phi\cdot\phi^{\prime}=\phi\partial\phi^{\prime}=\delta(\phi)\cdot\phi^{\prime}=\phi\cdot\delta(\phi^{\prime}),\] and hence \(\operatorname{End}(V)\) is a 2-algebra. Note that none of the matrices here are required to be invertible.
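As a quick illustration (our own, for the skeletal situation relevant below): if \(\partial=0\), say \(V=k^{m}\xrightarrow{0}k^{n}\), then the chain-map condition is vacuous and \[\mathrm{End}(V)_{0}=\mathrm{End}(k^{m})\times\mathrm{End}(k^{n}),\qquad\mathrm{End}(V)_{-1}=\mathrm{Hom}(k^{n},k^{m}),\qquad\delta=0,\] with the bimodule actions \((f,g)\cdot\phi=f\phi\) and \(\phi\cdot(f,g)=\phi g\) given by ordinary matrix multiplication.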
We now recall the data of the 2-category \(2\mathrm{Rep}^{\tau}(D(BM))\) of (weak) 2-representations of \(D(BM)\) [35, 54, 12]. Keep in mind that \(D(BM)=kBM\xrightarrow{0}kN_{*}\) is skeletal. **Definition 2.4**.: Let \(\mathfrak{T}\) denote the Hochschild 3-cocycle implementing the associativity of \(\operatorname{End}(V)\) (see [12] and Section 2.2.2). 1. The objects in \(2\mathrm{Rep}^{\tau}(D(BM))\) are **weak 2-representations**, i.e. a 2-algebra homomorphism \(\rho=(\rho_{1},\rho_{0}):D(BM)\to\operatorname{End}(V)\) together with a Hochschild 2-cochain \(\varrho:kN_{*}^{\otimes 2}\to\operatorname{End}(V)_{-1}\) trivializing \(\rho_{1}(\tau)-\mathfrak{T}\circ\rho_{0}\). Namely, \[\mathfrak{T}(\rho_{0}(x_{1}),\rho_{0}(x_{2}),\rho_{0}(x_{3}))-\rho_{1}(\tau(x_{1},x_{2},x_{3}))=\rho_{0}(x_{1})\varrho(x_{2},x_{3})-\varrho(x_{1}x_{2},x_{3})+\varrho(x_{1},x_{2}x_{3})-\varrho(x_{1},x_{2})\rho_{0}(x_{3})\] for each \(x_{1},x_{2},x_{3}\in kN_{*}\). 2. The 1-morphisms in \(2\mathrm{Rep}^{\tau}(D(BM))\) are **weak 2-intertwiners** \((I,i):\rho\to\rho^{\prime}\), consisting of a 2-vector space homomorphism \(i=(i_{1},i_{0}):V\to W\) together with a Hochschild 1-cochain \(I_{\cdot,i}:kN_{*}\to\operatorname{End}(V)_{-1}\) fitting into the commutative diagrams (2.12), such that \(I_{\cdot,i}\) trivializes \(\varrho-\varrho^{\prime}\). Namely, \[\varrho^{\prime}(x,x^{\prime})\cdot\mathrm{id}_{i}-\mathrm{id}_{i}\cdot\varrho(x,x^{\prime})=\mathrm{id}_{\rho_{0}(x)}\cdot I_{x^{\prime},i}-I_{xx^{\prime},i}+I_{x,i}\cdot\mathrm{id}_{\rho_{0}(x^{\prime})}\] for each \(x,x^{\prime}\in kN_{*}\), where \(\mathrm{id}_{i}:i\Rightarrow i\) denotes the identity 2-morphism on the intertwiner \(i\). 3. The 2-morphisms/modifications in \(2\mathrm{Rep}^{\tau}(D(BM))\) are **equivariant cochain homotopies** \(\mu:i\Rightarrow i^{\prime}\) that trivialize \(I_{\cdot,i}-I_{\cdot,i^{\prime}}\), namely \[I_{x,i}-I_{x,i^{\prime}}=\mathrm{id}_{\rho_{0}(x)}\cdot\mu-\mu\] for all \(x\in kN_{*}\). More explicitly, \(\mu\) is a cochain homotopy (2.13) that intertwines between \(\rho_{1}(y),\rho_{1}^{\prime}(y)\) as cochain homotopies for each \(y\in M\).
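Schematically (our compact rewriting of the three conditions above, with \(\delta\) denoting the relevant Hochschild-type differential in each degree; for instance \((\delta\mu)(x)=\mathrm{id}_{\rho_{0}(x)}\cdot\mu-\mu\) in item 3), the weak data form a descending tower of trivializations: \[\mathfrak{T}\circ\rho_{0}^{\otimes 3}-\rho_{1}\circ\tau=\delta\varrho,\qquad\varrho^{\prime}-\varrho=\delta I_{\cdot,i},\qquad I_{\cdot,i}-I_{\cdot,i^{\prime}}=\delta\mu.\] That is, the failure of strictness at each categorical level is exact, controlled by the weak datum one level down.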
#### 2.3.1 Tensor product and direct sums Recall that vector space cochain complexes \(V,W\) come equipped with natural notions of direct sum \[V\oplus W=V_{-1}\oplus W_{-1}\xrightarrow{\partial\oplus\partial^{\prime}}V_{0}\oplus W_{0},\] as well as tensor product \[V\otimes W=V_{-1}\otimes W_{-1}\xrightarrow{D^{+}}V_{-1}\otimes W_{0}\oplus V_{0}\otimes W_{-1}\xrightarrow{D^{-}}V_{0}\otimes W_{0}, \tag{2.14}\] where \(D^{\pm}=\pm 1\otimes\partial^{\prime}+\partial\otimes 1\) is the tensor extension of the differentials \(\partial:V_{-1}\to V_{0}\) and \(\partial^{\prime}:W_{-1}\to W_{0}\). For 2-representations, the direct sum is defined by extending the definition to a direct sum of 2-algebra homomorphisms \[\rho\oplus\rho^{\prime}:\mathcal{G}\oplus\mathcal{G}\to\mathrm{End}(V)\oplus\mathrm{End}(W),\] while the tensor product is accomplished by precomposing with the coproduct. By interpreting chain homotopies \(\rho_{1}(y)\) as an "action" of \(y\in M\) on the 2-intertwiners \(i,j\) of \(2\mathrm{Rep}^{\tau}(D(BM))\), we can put formally \[\rho_{V\otimes W}=(\rho_{0}\otimes\rho_{0})\circ\Delta_{0}^{\prime},\qquad\rho_{i\otimes j}=(\rho_{1}\otimes\rho_{1})\circ\Delta_{-1},\] \[\rho_{V\otimes j}=(\rho_{0}\otimes\rho_{1})\circ\Delta_{0},\qquad\rho_{i\otimes W}=(\rho_{1}\otimes\rho_{0})\circ\Delta_{0}. \tag{2.15}\] The coequivariance and coPeiffer conditions of the graded coproducts \(\Delta_{-1},\Delta_{0},\Delta_{0}^{\prime}\) then imply the naturality of this tensor product, as well as the decomposition [12] \[\rho_{i\otimes j}=\rho_{V\otimes j}\rho_{i\otimes W}=\rho_{i\otimes W^{\prime}}\rho_{V\otimes j}. \tag{2.16}\] Now, crucially, the weak components \(\varrho\) attached to these decompositions (2.16) may differ; their difference gives rise to a possibly non-trivial **interchanger 2-isomorphism** \[\phi_{ij}:(\mathrm{id}_{V^{\prime}}\otimes j)\circ(i\otimes\mathrm{id}_{W})\xrightarrow{\sim}(i\otimes\mathrm{id}_{W^{\prime}})\circ(\mathrm{id}_{V}\otimes j)\] for the 2-intertwiners \(i,j\) in \(2\mathrm{Rep}^{\mathcal{T}}(\mathcal{G})\). This fact will play an important role in Section 5.1. Notice that since the grouplike coproduct (2.8) is cocommutative, the induced tensor product (2.15) is commutative, as we shall see in Sections 4 and 5. The zero 2-representation is the zero complex \(0\xrightarrow{0}0\), while the tensor unit 2-representation is the trivial complex \(\mathbf{1}=k\xrightarrow{1}k\) carrying the action \(\rho(-)\cdot\mathbf{1}=\epsilon(-)\cdot\mathbf{1}\) by scalar multiplication through the counit \(\epsilon:D(BM)\to k\). We briefly note that the tensor product of 2-morphisms is induced by composition, \(\mu\otimes\mu^{\prime}=\mu\cdot\mu^{\prime}\). The reader is referred to [12] for an explicit description of the above structures and formulas.
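For the grouplike coproduct (2.8) these formulas take a familiar concrete form (our unpacking, with the grading conventions of [12] assumed): on objects, \[\rho_{V\otimes W}(x)=\rho_{0}^{V}(x)\otimes\rho_{0}^{W}(x),\qquad x\in kN_{*},\] is just the usual tensor product of \(N\)-representations, while the mixed component \(\Delta_{0}(x)=\hat{x}\otimes x+x\otimes\hat{x}\) yields, e.g., \(\rho_{V\otimes j}(x)=\rho_{0}^{V}(x)\otimes\rho_{1}(\hat{x})\) on 1-morphisms.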
Dual/contragredient 2-representations. Let \(V\in 2\mathrm{Rep}^{\tau}(D(BM))\) be a 2-representation. As the 2-Drinfel'd double \(D(BM)\) is equipped with the (grouplike) antipode \(S\), we can in fact define the **dual 2-representation** \(\tilde{V}\) by \[\rho_{\tilde{V}}=\rho_{V}\circ S.\] Since \(S\) is bijective in this case, all 2-representations and 2-intertwiners of \(D(BM)\) have duals, and hence \(2\mathrm{Rep}^{\tau}(D(BM))\) is _fully-dualizable_. This is a necessary step for **Theorem 1.2**, since tensor 2-categories do indeed have duals [55, 14, 24], but it shall not play an important role in this paper. #### 2.3.2 Braiding In general, 2-Drinfel'd doubles such as \(D(BM)\) come naturally equipped with a **2-\(R\)-matrix** \[\mathcal{R}=\mathcal{R}^{+}+\mathcal{R}^{-},\qquad\begin{cases}\mathcal{R}^{+}\in kBM\otimes kN_{*}\\ \mathcal{R}^{-}\in kN_{*}\otimes kBM\end{cases},\] which satisfies certain algebraic properties [12]. There is also defined a component \(R\in kN_{*}\otimes kN_{*}\), which serves as a universal \(R\)-matrix for \(kN_{*}\). We shall make use of the conventional Sweedler notation for the 2-\(R\)-matrices, \[\mathcal{R}^{\pm}=\mathcal{R}^{\pm}_{(1)}\otimes\mathcal{R}^{\pm}_{(2)},\qquad R=R_{(1)}\otimes R_{(2)}.\] If \(t\neq 0\), this quantity \(R\) is subject to the constraint \[R=\mathcal{R}^{-}_{(1)}\otimes t\mathcal{R}^{-}_{(2)}(\equiv R^{-})=t\mathcal{R}^{+}_{(1)}\otimes\mathcal{R}^{+}_{(2)}(\equiv R^{+}),\] while if \(t=0\), \(R\) and \(\mathcal{R}\) are independent. Now take two 2-representations \(V,W\in 2\mathrm{Rep}^{\tau}(D(BM))\); we define the braiding map between \(V,W\) by \[b_{VW}(V\otimes W)=\mathrm{flip}\circ(\rho_{0}\otimes\rho_{0})(R)(V\otimes W). \tag{2.17}\] We call \(b_{V}=b_{VV}\) the _self-braiding_ of \(V\); provided \(V,W\) can be self-braided, we compute [56] \[b_{V\otimes W}\cong b_{V}\otimes b_{W}\otimes B_{VW}, \tag{2.18}\] where we have defined the _full-braiding_ \[B_{VW}=b_{VW}\circ b_{WV}.\] Now, similar to the tensor product structure, we can form the _mixed braiding_ between 1-morphisms \(i\) and objects \(W\), which is formally given by \[b_{iW}=\operatorname{flip}\circ(\rho_{1}\otimes\rho_{0})(\mathcal{R}^{+}),\qquad b_{Wi}=\operatorname{flip}\circ(\rho_{0}\otimes\rho_{1})(\mathcal{R}^{-}). \tag{2.19}\] See [12] for more explicit formulas. There, we have also proved the following naturality property \[b_{iW}:i\circ b_{VW}\Rightarrow b_{UW}\circ i,\qquad b_{Wi}:i\circ b_{WV}\Rightarrow b_{WU}\circ i,\] where \(i:V\to U\) is a 2-intertwiner in \(2\mathrm{Rep}^{\tau}(D(BM))\). The hexagonator. By weakening the 2-Drinfel'd double via Section 2.2.2, we also introduce a _hexagonator_ 2-morphism \(\Omega_{V|\bullet}\) that implements the hexagon relation for each \(V\in 2\mathrm{Rep}^{\tau}(D(BM))\).
More precisely, given three 2-representations \(V,W,U\), the hexagonator \(\Omega_{V|WU}\) fills a hexagon diagram comparing the braiding \(b_{V,W\otimes U}\) with the composite of \(b_{VW}\) and \(b_{VU}\). (Diagram omitted.)
\left}})\left(\left{\left{{{0}})\left(\left{\left{{}})\left(\left{ {\left})\left(\left{\left{{{}})\left{\left})\left(\left{\left{{\left}} \right})\left(\left(\left{\left{{{}})\left{\left{\left}})\left(\left( \left(\left{{{})})\left(\left(\left(\left{{{})\left{\left})\left(\left(\left{{ \left})\left(\left{\left})\left(\left(\left{{{})})\left(\left(\left{\left {{})})\left(\left(\left(\left\left(\left{{})\left{\left})\left(\left\left(\left{{}) )\left(\left(\left\left\{{{})})\left(\left(\left\left\{{{\left})\left}\left} \right)\right\right\}\right\right\}\right\}\right\}\right\end{\end{\right}\end{\end{\right}\end{\end{\right}\end{\right}\right}\right}\right}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} in the former case, while by \(D(B\mathbb{Z}_{2})^{\rm sgn}\) in the latter case. This then induces a non-trivial grouplike component \(\Delta_{0}(x)=\hat{x}\otimes x+x\otimes\hat{x}\) of the coproduct \(\Delta\) on \(D(B\mathbb{Z}_{2})\) (recall \(\hat{x}=p(x)\) where \(p\) is the Pontrjagyn duality). Now consider the discrete combined \(D(B\mathbb{Z}_{2})\)-connection \((\mathbf{A},\mathbf{\Sigma})=(A+\Sigma,C+B)\) on a 4-manifold \(X\)[2]. These connection forms are given by cochains \[A\in C^{1}(X,\mathbb{Z}_{2}),\qquad B\in C^{2}(X,\widehat{Z}_{2}),\] with the components \(\Sigma=0,C=0\) trivial. Depending on the automorphism \(\mathrm{Aut}(\mathbb{Z}_{2})\) encoded in the 2-Drinfel'd double \(D(B\mathbb{Z}_{2})\), the 1- and 2-curvatures of the field theory are given by \[F=\begin{cases}dA&;\text{in }D(B\mathbb{Z}_{2})^{\rm triv}\\ dA+\frac{1}{2}A\cup A&;\text{in }D(B\mathbb{Z}_{2})^{\rm sgn}\end{cases},\qquad d _{A}B=\begin{cases}dB&;\text{in }D(B\mathbb{Z}_{2})^{\rm triv}\\ dB+A\cup B&;\text{in }D(B\mathbb{Z}_{2})^{\rm sgn}\end{cases},\] where the cup products are implemented through the automorphism \(\mathrm{Aut}(\mathbb{Z}_{2})\) or its dual. The corresponding _monster 2-BF theory_[2] is given by the topological action \[S[A,B]=\int_{X}\langle B\cup F\rangle+\langle\tau(A)\cup A\rangle, \tag{3.1}\] where we recall that \(\tau\in H^{3}(\mathbb{Z}_{2},\widehat{\mathbb{Z}_{2}})\) is the underlying Postnikov class of \(D(B\mathbb{Z}_{2})\). Note that the discrete 1-form gauge fields must be flat, \(F=dA=0\), and terms like \(A^{2}=0\mod 2\) vanish, hence the classical equations of motion (EOMs) are given by \[F=dA=0,\qquad d_{A}B=\tau(A). \tag{3.2}\] We will introduce in the following a non-trivial cohomological term that "mimics" \(\frac{1}{2}A^{2}\). However, it is important to note that these cohomological terms constitute _twists_ on the 2-Drinfel'd double and are not dynamical; they do not alter the EOM (3.2). We define the partition function corresponding to (3.1) on a 4-manifold \(X\) as a formal path integral \[Z_{\rm Kit}(X)=\int dAdBe^{i2\pi S[A,B]}, \tag{3.3}\] which should be appropriately normalized such that \(Z_{\rm Kit}(S^{4})=1\)[20]. We call \(Z_{\rm Kit}\) the **4d Kitaev model**. It should be understood as a collective of two such theories3, Footnote 3: There is a slight misnomer here, where \(Z_{\rm Kit}^{0}\) should really be called the “invisible” toric code; see Remark 4.2 later. \[\text{(Invisible) toric code}:\ Z_{\rm Kit}^{0},\qquad\text{Spin-Siatev}:\ Z_{ \rm Kit}^{s},\] arising respectively from \(D(B\mathbb{Z}_{2})^{\rm triv}\) and \(D(B\mathbb{Z}_{2})^{\rm sgn}\). We shall refer to either of these 2-Drinfel'd doubles collectively as \(D(B\mathbb{Z}_{2})\) in the following. The central idea is then that \(Z_{\rm Kit}\) has a 2-Drinfel'd double symmetry. 
#### 3.1.1 \(Z_{\rm Kit}\) as a topological nonlinear \(\sigma\)-model

There have been proposals to construct (3+1)D topological phases with a higher-gauge field theory [21]. Specifically, [20] constructs a topological non-linear \(\sigma\)-model (NLSM) which corresponds to a higher-Dijkgraaf-Witten theory based on a 2-group, and claims that all (3+1)D topological phases can be described this way. The NLSM they construct is characterized by the following data: (i) a (skeletal) 2-group \(\mathcal{G}=\mathbb{Z}_{2}\to G_{b}\), where \(G_{b}\) is a finite group labeling "stringlike bosonic charges", and \(\mathbb{Z}_{2}\) is either fermion parity \(\mathbb{Z}_{2}^{f}\) or a magnetic \(\pi\)-flux \(\mathbb{Z}_{2}^{m}\), (ii) the first Postnikov class \(\tau\in H^{3}(G_{b},\mathbb{Z}_{2}^{f})\) of \(\mathcal{G}\) and (iii) a Dijkgraaf-Witten class \(\omega\in H^{4}(\mathcal{G},\mathbb{R}/\mathbb{Z})\) [21, 20]. We write the Hoang data [40] of \(\mathcal{G}\) as \((G_{b},\mathbb{Z}_{2}^{f},\tau)\). Our construction of the Kitaev model (3.3) fits nicely into this framework, with the 2-group \(G=\widehat{\mathbb{Z}_{2}}\xrightarrow{0}\mathbb{Z}_{2}\) given by the Hoang data \[(G_{b}=\mathbb{Z}_{2},\mathbb{Z}_{2}^{f}\cong\widehat{\mathbb{Z}_{2}},\tau).\] To construct the Dijkgraaf-Witten cocycle, we begin with the group cohomology ring \(H^{*}(\mathbb{Z}_{2},\mathbb{Z}_{2})\cong\mathbb{Z}_{2}[u]\) with a generator \(u\in H^{1}(\mathbb{Z}_{2},\mathbb{Z}_{2})\) in degree-1 [57]. Considering \(\mathbb{Z}_{2}\) as a trivial \(\mathbb{Z}_{2}\)-module, the sign representation \(\mathrm{sgn}\in\mathrm{Aut}(\mathbb{Z}_{2})\cong\mathrm{Hom}(\mathbb{Z}_{2},\mathbb{Z}_{2})\cong H^{1}(\mathbb{Z}_{2},\mathbb{Z}_{2})\) then serves as a representative of the generator \(u\). Now consider \(D(B\mathbb{Z}_{2})^{\rm sgn}\). The cup product for the term \(\frac{1}{2}A\cup A\) in the curvature \(F\) is implemented by the sign representation \(\mathrm{sgn}=u\in H^{1}(\mathbb{Z}_{2},\mathbb{Z}_{2})\), from which \[\frac{1}{2}A\cup A=\bar{e}(A),\qquad\bar{e}=\frac{1}{2}u\cup u\in H^{2}(\mathbb{Z}_{2},\mathbb{Z}_{2}). \tag{3.4}\] The factor of \(1/2\) is very important as, without it, \(u\cup u=0\mod 2\) is trivial in \(\mathbb{Z}_{2}\)-cohomology [57]. Dualizing the value of \(\bar{e}\) to a class in \(H^{2}(\mathbb{Z}_{2},\widehat{\mathbb{Z}_{2}})\), it lifts the action \(\rhd\) of \(k\mathbb{Z}_{2}\) on \(k\widehat{\mathbb{Z}_{2}}\) to a central extension. The term \(\langle B\cup\bar{e}(A)\rangle\) that appears in (3.1) gives precisely the Dijkgraaf-Witten cocycle \(\omega\in Z^{4}(G,\mathbb{R}/\mathbb{Z})\). Indeed, going on-shell of the EOM (3.2) reduces the spin-Kitaev partition function to \[Z^{\mathfrak{s}}_{\mathrm{Kit}}(X)\sim\sum_{\begin{subarray}{c}dA=0\\ dB=\tau\end{subarray}}e^{i2\pi\int_{X}\langle B\cup\bar{e}(A)\rangle}.\] This gives exactly the NLSM constructed in [20] with \(\omega(B,A)=B\cup\bar{e}(A)\), provided the **anomaly-free condition** \[\tau\cup\bar{e}=0 \tag{3.5}\] is satisfied. This condition ensures that the Dijkgraaf-Witten integrand \(\omega(A,B)=\langle B\cup\bar{e}(A)\rangle\) is a cocycle \(d\omega(A,B)=0\) in light of the EOM \(dB=\tau(A)\).

#### 3.1.2 Classification of 4d topological phases with a single pointlike \(\mathbb{Z}_{2}\)-charge

The above describes the construction of a 4d Dijkgraaf-Witten topological field theory. As we have mentioned, such theories were proposed [20, 21, 58, 59] to describe, in a very general sense, 4d gapped topological phases.
Another approach towards this follows the program of "higher categorical symmetries" [22, 60, 61, 62, 63, 32, 64, 8]. In particular, the 4d toric code has been extensively studied in the literature [65, 66, 23] from this perspective, so we understand its corresponding braided fusion 2-category quite well. By hypothesis, gapped topological phases are characterized by non-degenerate4 braided fusion 2-categories, based on the physical principle of **remote detectability** [60, 61, 62, 63, 8]. In particular, those with a single pointlike \(\mathbb{Z}_{2}\)-charge have been classified in [23, 24]. Footnote 4: Namely, the _Müger centre_ \(Z_{2}\) is trivial. These phases are

1. the 4d toric code \(\mathscr{R}\simeq Z_{1}(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}])\),
2. the 4d spin-\(\mathbb{Z}_{2}\) gauge theory \(\mathscr{S}\simeq Z_{1}(\Sigma\operatorname{sVect})\),
3. the \(w_{2}w_{3}\) gravitational anomaly \(\mathscr{T}\),

where \(Z_{1}\) denotes the Drinfel'd centre and \(\Sigma\) denotes the condensation completion functor [25]. Here, \(\operatorname{Vect}[\mathbb{Z}_{2}]\) denotes the category of \(\mathbb{Z}_{2}\)-graded vector spaces, and \(\operatorname{sVect}\) is the category of supervector spaces.

_Remark 3.1_.: Note \(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}]\simeq\operatorname{2Vect}[\mathbb{Z}_{2}]\) is equivalent to the 2-category of \(\mathbb{Z}_{2}\)-graded 2-vector spaces as defined in the sense of Kapranov-Voevodsky [43, 29, 23]. It is important to keep in mind that this notion of a 2-vector space is distinct from what we are using in this paper, which is in the sense of Baez-Crans [37, 35, 53]. For this reason, we shall always denote \(2\operatorname{Vect}[\mathbb{Z}_{2}]\) in terms of the condensation completion \(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}]\). However, we have proven in [12] that the 2-representation theory developed from **Definition 2.4** coincides with that studied in the literature [43, 29, 31, 32] for finite skeletal 2-groups.

In this paper, we shall mostly focus on the gapped phases \(\mathscr{R},\mathscr{S}\), and leave the study of the gravitational anomaly \(\mathscr{T}\) to a later work; the reason for this shall be given at the end of Section 5. We will find explicit realizations of these phases as 2-representation 2-categories of certain versions of the 2-quantum double \(D(B\mathbb{Z}_{2})\). To do so, we study the excitations in the associated NLSM (3.1).

### 3.2 Anomaly-freeness of the 4d spin-Kitaev model

Recall from the above that the 4d Kitaev model \(Z_{\mathrm{Kit}}\) is well-defined provided the non-trivial Postnikov class \(\tau\) and extension class \(\bar{e}\) of the underlying 2-group satisfy the anomaly-free condition (3.5). Let us here study, from the point of view of the 2-representation 2-category \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\), why the anomaly-free condition (3.5) is necessary. Recall from Section 2.2 that the self-duality of \(D(B\mathbb{Z}_{2})\) as a 2-Drinfel'd double means that the Postnikov class \(\tau\) dualizes to a coassociator \(\Delta_{1}:\mathbb{Z}_{2}\to\widehat{\mathbb{Z}}_{2}^{3\otimes}\) defining the associator 1-morphism \(a_{VWU}\) for the objects \(V,W,U\in 2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\). The key point is that, in general, the pentagon relation for \(a\) follows from the condition (2.11), which in turn follows from the 3-cocycle condition for \(\tau\).
This notion generalizes to the case where \(D(B\mathbb{Z}_{2})\) is twisted by \(\bar{e}\in H^{2}(\mathbb{Z}_{2},\widehat{\mathbb{Z}_{2}})\); that is, the product in \(D(B\mathbb{Z}_{2})_{0}=k\mathbb{Z}_{2}\) is modified such that \[x\times x^{\prime}=\bar{e}(x,x^{\prime})\,(xx^{\prime}),\qquad x,x^{\prime}\in\mathbb{Z}_{2}.\] We shall denote the corresponding 2-representation 2-category by \[2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\bar{e}})=2\mathrm{Rep}_{m}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}}).\] This notation shall be explained later in Section 5. For now, we prove the following.

**Lemma 3.1**.: _The anomaly-free condition (3.5) implies the pentagon relation for \(a\) in \(2\mathrm{Rep}_{m}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\)._

Proof.: In order to see the anomaly-free condition (3.5) manifest on the 2-representations, we begin with the observation that the component \(\Delta_{0}\) of the coproduct on \(D(B\mathbb{Z}_{2})\) satisfies \[\Delta_{0}(x^{2})=\bar{e}(x,x)\hat{x}^{2}\otimes x^{2}=\bar{e}(x,x)\otimes 1, \tag{3.6}\] by (2.6). This means that \(\Delta_{0}\) is an algebra map on \(k\mathbb{Z}_{4}\), not \(k\mathbb{Z}_{2}\). Because of (3.6), evaluating the condition (2.11) on \(1=x^{2}\in k\mathbb{Z}_{2}\) gives \[1^{4\otimes}=\Delta_{-1}\circ\Delta_{1}(1)=(\Delta_{1}\otimes 1)\circ\Delta_{0}(x^{2})=\bar{e}(x,x)\otimes\Delta_{1}(1),\] which violates the pentagon relation unless the right-hand side is also the trivial element \(1^{4\otimes}\). Pairing this equation with arbitrary \(x_{1},\dots,x_{3}\in k\mathbb{Z}_{2}\) gives \[1=\langle\bar{e}(x,x)\otimes\Delta_{1}(1),1\otimes x_{1}\otimes x_{2}\otimes x_{3}\rangle=\langle\bar{e}(x,x)\otimes 1,1\otimes\tau(x_{1},x_{2},x_{3})\rangle=\bar{e}(x,x)\tau(x_{1},x_{2},x_{3}).\] This is nothing but \(\bar{e}\cup\tau=0\). Notice on the other hand that if \(\tau=0\) is trivial, then so is \(\Delta_{1}\), and the coassociativity condition simply implies the group cocycle condition for \(\bar{e}\).

Weakening the anomaly-free condition. There is a way to weaken the anomaly-free condition, by imposing (3.5) only in _cohomology_, \(\tau\cup\bar{e}=0\in H^{5}(\mathbb{Z}_{2},k^{\times})\) [21, 42]. This means that the 4d Kitaev model gains an additional term \(\nu(A)\) that trivializes the coboundary of the Dijkgraaf-Witten 4-cocycle, \[d(\omega_{b}-\nu)=0.\] Algebraically, this 4-cocycle \(\nu\in H^{4}(\mathbb{Z}_{2},k^{\times})\) is known to play the role of a "pentagonator" 2-morphism in the underlying 2-group [22, 20], implementing the pentagon relation (2.3); see _Remark 2.1_. On the other hand, from the 2-representation theory perspective, it was shown [12] that the module pentagonator \(\pi\) [32] on \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\) is given by the Hochschild 3-cocycle \(\mathfrak{T}\), \[\pi_{x_{1}x_{2}x_{3}|V}=\mathfrak{T}(\rho_{0}(x_{1}),\rho_{0}(x_{2}),\rho_{0}(x_{3}))(V),\qquad x_{1},\dots,x_{3}\in k\mathbb{Z}_{2},\] attached to the _weak_ endomorphism 2-algebra \(\mathrm{End}(V)\) (see **Definition 2.4**) on the 2-vector space \(V\). In forming a weak 2-representation \(V\) of \(D(B\mathbb{Z}_{2})\), what is trivialized in Hochschild cohomology is not \(\tau\) but the difference \[\rho_{1}\circ\tau-\mathfrak{T}\circ\rho_{0}^{3\otimes},\] where \((\rho=(\rho_{1},\rho_{0}),\varrho):D(B\mathbb{Z}_{2})\to\mathrm{End}(V)\) is a 2-representation. As such, one should be able to compute the 4-cocycle \(\nu\) in terms of the Hochschild class of \(\mathfrak{T}\). We shall not dwell on this here, however.
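Since the class \(\bar{e}=\frac{1}{2}u\cup u\) does a lot of work in this section, a concrete sanity check may be helpful. The following is a minimal numerical sketch (assuming our own encoding of \(\bar{e}\) as the "carry" cochain on \(\mathbb{Z}_{2}=\{0,1\}\); it is not code from any of the cited references) verifying that \(\bar{e}\) satisfies the group 2-cocycle condition and that the \(\bar{e}\)-twisted product on \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) gives the central extension \(\mathbb{Z}_{4}\):

```python
from itertools import product

def ebar(x1, x2):
    # represents (1/2) u cup u on Z_2 = {0, 1}: equals floor((x1 + x2)/2)
    return x1 * x2

# group 2-cocycle condition (trivial Z_2-action):
for x, y, z in product(range(2), repeat=3):
    lhs = (ebar(y, z) - ebar((x + y) % 2, z) + ebar(x, (y + z) % 2) - ebar(x, y)) % 2
    assert lhs == 0

# ebar-twisted product on Z_2 x Z_2: (a, x)(b, y) = (a + b + ebar(x, y), x + y)
def mult(p, q):
    (a, x), (b, y) = p, q
    return ((a + b + ebar(x, y)) % 2, (x + y) % 2)

g, e = (0, 1), (0, 0)
powers, p = [], g
while True:
    powers.append(p)
    if p == e:
        break
    p = mult(p, g)
assert len(powers) == 4   # the generator has order 4
print("ebar is a 2-cocycle; the twisted Z_2 x Z_2 is cyclic of order 4")
```

The order-4 generator exhibits the twisted product as \(\mathbb{Z}_{4}\), consistent with the role of \(\bar{e}\) in Lemma 3.1 above.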
## 4 Excitations in the (invisible) toric code \(Z_{\mathrm{Kit}}^{0}\)

Excitations are inserted into the theory \(Z_{\mathrm{Kit}}\) with 2-representations \(\rho\) of \(D(B\mathbb{Z}_{2})\). Since \(D(B\mathbb{Z}_{2})\) is skeletal, it suffices to study 2-representations of the underlying 2-group. Let us first focus on the trivial case \(D(B\mathbb{Z}_{2})^{\mathrm{trv}}\). Recall that a 2-representation \(\rho:D(B\mathbb{Z}_{2})^{\mathrm{trv}}\to\mathrm{End}(V)\) on a 2-vector space \(V=V_{-1}\xrightarrow{\partial}V_{0}\) consists of the following data:

1. a pair of \(\mathbb{Z}_{2}\)-representations \[\rho_{0}=\rho_{0}^{1}\oplus\rho_{0}^{0}:\mathbb{Z}_{2}\to\mathrm{End}(V_{0})\oplus\mathrm{End}(V_{-1}),\] such that \(\partial\) is an intertwiner between \(\rho_{0}^{0}\) and \(\rho_{0}^{1}\), and
2. a map \(\rho_{1}:\widehat{\mathbb{Z}}_{2}\to\mathrm{Hom}(V_{0},V_{-1})\).

Since the \(t\)-map for \(D(B\mathbb{Z}_{2})\) is trivial, \(\rho\) must satisfy \(\delta\rho_{1}=(\rho_{1}\circ\partial,\partial\circ\rho_{1})=\rho_{0}t=0\), which means either \(\rho_{1}=0\) or \(\partial=0\). Therefore, for 1-dimensional irreducible representations (irreps) \(V_{0},V_{-1}\cong k\) over the ground field \(k\), the map \(\rho_{1}\) is either trivial or a scalar multiplication. We shall without loss of generality normalize this scalar to \(1\in k^{\times}\), and denote \(\rho_{1}=\hat{1}\).

_Remark 4.1_.: Though \(\rho_{1}\) need _not_ be an intertwiner, we require it to preserve the identity \(\rho_{0}^{0,1}(1)=\rho_{0}^{0,1}(x^{2})=\mathrm{id}\) in the sense that \[\rho_{1}(y)\circ\mathrm{id}_{V_{0}}=\mathrm{id}_{V_{-1}}\circ\rho_{1}(y),\qquad x\in\mathbb{Z}_{2},\ y\in\widehat{\mathbb{Z}_{2}}.\] This condition is vacuous here, but it shall become non-trivial later when we _twist_ \(D(BM)\). Strictly speaking, \(\rho_{1}\) can be trivial as well if \(\partial=0\), but this distinction makes no difference for \(D(B\mathbb{Z}_{2})^{\mathrm{trv}}\).

Now given that \(\rho_{0}^{0},\rho_{0}^{1}\) are irreducible, Schur's lemma implies that \(\partial\) is either trivial or an isomorphism. In other words, if \(\partial\neq 0\), then \(\rho_{0}^{0},\rho_{0}^{1}\) are either both the trivial representation \(1\), or both the sign representation \(\mathrm{sgn}\). We therefore have four inequivalent irreducible 2-representations \[\mathbf{Electric}:\ \mathbf{1}=(1\oplus 1,\partial=1,\rho_{1}=0),\qquad\mathbf{c}=(1\oplus\mathrm{sgn},\partial=0,\rho_{1}=\hat{1}),\] \[\mathbf{Magnetic}:\ \mathbf{1}^{*}=(\mathrm{sgn}\oplus\mathrm{sgn},\partial=1,\rho_{1}=0),\qquad\mathbf{c}^{*}=(\mathrm{sgn}\oplus 1,\partial=0,\rho_{1}=\hat{1}), \tag{4.1}\] which constitute the semisimple objects in \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\). We call the first row the **electric** sector and the second row the **magnetic** sector; this partition will be clear in the following. Note that \(\mathbf{c}\) is _not_ equivalent to \(\mathbf{c}^{*}\), because the map \(\partial\) remembers its domain and codomain.

### 4.1 Fusion structure

We now investigate the monoidal structure of the 2-category \(2\mathrm{Rep}(D(B\mathbb{Z}_{2})^{\mathrm{trv}})\). Since the coproduct \(\Delta\) on \(D(B\mathbb{Z}_{2})\) is grouplike, the tensor product of 2-representations \(\rho,\rho^{\prime}\) is just the usual _graded_ tensor product \(\rho\otimes\rho^{\prime}\). Graded here means (2.14), i.e. equipped with the differential \(\partial\); we demonstrate this through computations below.
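Before turning to those computations, the data of (4.1) can be packaged concretely. Below is a minimal sketch (our own toy encoding, not code from the paper; the placement of \(\mathrm{sgn}\) in degree 0 for \(\mathbf{c}\) is an assumption, consistent with the map \(\rho_{1}:V_{0}=\mathrm{sgn}\to V_{-1}=1\) used later in Section 5.1) checking that the four 2-representations satisfy both constraints derived above:

```python
# Each 1-dimensional graded 2-representation is a tuple (r0, rm1, d, p):
# r0, rm1 = value (+1/-1) of the generator x on V_0, V_{-1};
# d encodes the differential; p encodes rho_1 (0 = trivial, 1 = "1hat").
reps = {
    "1":  (+1, +1, 1, 0),   # electric vacuum line
    "c":  (-1, +1, 0, 1),   # Cheshire string (assumption: sgn in degree 0)
    "1*": (-1, -1, 1, 0),   # magnetic vacuum line
    "c*": (+1, -1, 0, 1),   # magnetic Cheshire
}
for name, (r0, rm1, d, p) in reps.items():
    assert p == 0 or d == 0     # delta(rho_1) = rho_0(t) = 0: rho_1 = 0 or d = 0
    assert d == 0 or r0 == rm1  # Schur: a non-zero d intertwines rho_0^0, rho_0^1
print("all four 2-representations of (4.1) satisfy the constraints")
```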
Let us examine the 2-representations as listed in (4.1). In the electric sector, we use the Morita equivalence \(\mathrm{sgn}^{2\otimes}\simeq 1^{2\otimes}\cong 1\) to obtain \[\mathbf{c}\otimes\mathbf{c}=(1\oplus\mathrm{sgn})\otimes(1\oplus\mathrm{sgn})\cong 1\oplus\mathrm{sgn}\oplus\mathrm{sgn}^{2\otimes}\oplus\mathrm{sgn}\simeq\mathbf{c}\oplus\mathbf{c}, \tag{4.2}\] which tells us that \(\mathbf{c}\) is a **Cheshire string** [23]. We similarly have \[\mathbf{c}^{*}\otimes\mathbf{c}^{*}\simeq\mathbf{c}^{*}\oplus\mathbf{c}^{*},\] hence \(\mathbf{c}^{*}\) satisfies the same fusion rule as \(\mathbf{c}\). Consider the mixed fusion \(\mathbf{1}^{*}\otimes\mathbf{c}\). Here, we need to keep track of the non-trivial maps \(\partial\), \[\mathbf{1}^{*}\otimes\mathbf{c}=(\mathrm{sgn}\oplus\mathrm{sgn})\otimes(1\oplus\mathrm{sgn})\cong(\mathrm{sgn}\oplus 1)\oplus(\mathrm{sgn}\oplus 1).\] Since these maps \(\partial\) are intertwiners (in fact the identity), their domains and codomains are the same. We keep only one copy, so that \[\mathbf{1}^{*}\otimes\mathbf{c}\simeq\mathrm{sgn}\oplus 1=\mathbf{c}^{*}. \tag{4.3}\] Through similar computations, we have \[\mathbf{1}\otimes\mathbf{1}\cong\mathbf{1},\qquad\mathbf{1}\otimes\mathbf{c}\simeq\mathbf{c},\qquad\mathbf{1}^{*}\otimes\mathbf{1}^{*}\simeq\mathbf{1},\] hence \(\mathbf{1},\mathbf{1}^{*}\) are the vacuum lines; in particular, \(\mathbf{1}\) is the indecomposable identity object in \(2\mathrm{Rep}(D(B\mathbb{Z}_{2})^{\mathrm{trv}})\).

#### 4.1.1 2-intertwiners; the 1-morphisms

Recall from **Definition 2.4** that the 1-morphisms in \(2\mathrm{Rep}(D(B\mathbb{Z}_{2})^{\mathrm{trv}})\) are given by \(\mathbb{Z}_{2}\)-equivariant _cochain maps_. From the list (4.1), we clearly have identity self-2-intertwiners, such as \(i[00]=\mathrm{id}:\mathbf{1}\to\mathbf{1}\) and \(i[11]=\mathrm{id}:\mathbf{c}\to\mathbf{c}\). Since for self-2-intertwiners in particular the source and target are the same graded \(\mathbb{Z}_{2}\)-representation, we can find two more. These are given by a swap of grading together with a certain twist \[i^{\prime}[00]:(w,v)\mapsto(v,w),\qquad i^{\prime}[11]:(w,v)\mapsto\mathrm{sgn}\cdot(v,w), \tag{4.4}\] where \((w,v)\in V_{-1}\oplus V_{0}\) denotes elements in \(\mathbf{1}\) or \(\mathbf{c}\). Clearly, the identities \(i[00],i[11]\) admit trivial actions by \(\rho_{1}\), in contrast to the grading swaps \(i^{\prime}[00],i^{\prime}[11]\). Hence from (2.15) and the grouplike coproduct \(\Delta_{-1}\) (2.8) we deduce the following fusion rules \[i[00]\otimes i[00]=i^{\prime}[00]\otimes i^{\prime}[00]=i[00],\qquad i[00]\otimes i^{\prime}[00]=i^{\prime}[00]\otimes i[00]=i^{\prime}[00], \tag{4.5}\] and similarly for \(i[11],i^{\prime}[11]\). The same analysis applies to the dual sector \(i^{*}[00]\in\operatorname{End}\mathbf{1}^{*},i^{*}[11]\in\operatorname{End}\mathbf{c}^{*}\). Now consider a map \(i[01]:\mathbf{1}\to\mathbf{c}\); the commutative diagrams (2.12) respectively enforce that \[i[01]_{0}\circ 1=0\circ i[01]_{1},\qquad i[01]_{1}\circ 0=\hat{1}(1)\circ i[01]_{0},\] where \(\hat{1}(1)\) denotes the isomorphism \(\rho_{1}(y)\in\operatorname{Hom}(V_{0},V_{-1})\). These equations admit a non-trivial solution \(i[01]_{0}=0,i[01]_{1}=1\), hence there is a non-trivial 2-intertwiner \[i[01]=1\oplus 0:\mathbf{1}\to\mathbf{c};\] similar arguments show that we also have a non-trivial 2-intertwiner \[i[10]=0\oplus 1:\mathbf{c}\to\mathbf{1}.\] These are in fact the _only_ possible 2-intertwiners between \(\mathbf{1}\) and \(\mathbf{c}\).
Again, the same analysis applies to the dual sector. Since \(i[01]\) and \(i[10]\) have different domains and codomains, we must employ the decomposition (2.16) in order to find the tensor product between them [67]. However, since the coproduct \(\Delta_{0}=0\) is trivial in \(D(B\mathbb{Z}_{2})^{\operatorname{trv}}\), we find their tensor product \[i[01]\otimes i[10]=i[10]\otimes i[01]\simeq 1=i[00]\] to be trivial as well. We shall see later in Section 5.1 that this will be different once we introduce twists on \(D(B\mathbb{Z}_{2})\). Let us finally come to the 2-intertwiners that map between dual sectors. First, consider maps such as \(\mathbf{1}\to\mathbf{1}^{*}\) or \(\mathbf{c}\to\mathbf{c}^{*}\). Any such maps must intertwine between different \(\mathbb{Z}_{2}\)-representations in both degrees, and the only such map is \(0\). Next, consider a map \(\bar{i}[01]:\mathbf{1}\to\mathbf{c}^{*}\). The commutative diagrams (2.12) enforce \[\bar{i}[01]_{0}\circ 0=1\circ\bar{i}[01]_{1},\qquad\bar{i}[01]_{1}\circ 0=\hat{1}(1)\circ\bar{i}[01]_{0}.\] The first equation says \(\bar{i}[01]_{1}=0\), while the second equation says \(\bar{i}[01]_{0}=0\), hence \(\bar{i}[01]=0\). Similarly, any 2-intertwiner \(\bar{i}[10]:\mathbf{c}^{*}\to\mathbf{1}\) must be trivial, \(\bar{i}[10]=0\).

The above paragraph proves that \(2\mathrm{Rep}(D(B\mathbb{Z}_{2})^{\operatorname{trv}})\) has two connected components made separately of the electric and magnetic objects in (4.1), which have no (invertible) 1-morphisms between them. We denote the **identity component** of \(2\mathrm{Rep}(D(B\mathbb{Z}_{2})^{\operatorname{trv}})\), namely the connected component of the fusion identity \(\mathbf{1}\), by \(\Gamma\), which consists of nothing but the electric sector. Relabeling \(i[00],i[11]=\mathbbm{1}\) and \(i^{\prime}[00],i^{\prime}[11]=\mathfrak{e}\), we achieve from (4.5) the structure (4.6) for \(\Gamma\), which shall become crucial in the following. (The diagram displaying (4.6) is garbled in extraction and omitted here.)

#### 4.1.2 Cochain homotopies; the 2-morphisms

Recall from **Definition 2.4** that the 2-morphisms in \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\) are given by cochain homotopies. Of course, the monoidal structure of the 1-morphisms (e.g. (4.5)) induces a monoidal structure on the modifications \(\mu\otimes\mu^{\prime}:i\otimes j\Rightarrow i^{\prime}\otimes j^{\prime}\). By inspection of (4.6), it is immediate from the definitions that the only modifications possible in \(\Gamma\) are self-modifications \(\mu:i\Rightarrow i\). Indeed, a modification \(\mu:i[01]\Rightarrow i[10]\) must satisfy (from the right triangle of (2.13)) \[i[01]_{0}(v)-i[10]_{0}(v)=-v=\partial^{\prime}\mu(v)=0,\qquad v\in V_{0},\] where \(\partial^{\prime}=0\) for \(\mathbf{c}\), which obviously has no solutions for \(\mu\). Similarly, a modification \(\mu:i[10]\Rightarrow i[01]\) must satisfy (from the left triangle of (2.13)) \[i[10]_{1}(w)-i[01]_{1}(w)=-w=\mu(\partial w)=0,\qquad w\in V_{-1},\] where \(\partial=0\) for \(\mathbf{c}\), which also has no solutions5. Footnote 5: This \(\mu\) must also intertwine between \(V_{0}=\operatorname{sgn}\) and \(W_{-1}=1\), which forces \(\mu\) to be trivial. However, \(\mu=0\) is in fact inconsistent with (2.13). Consider now a modification \(\mu:\mathbbm{1}\Rightarrow\mathfrak{e}\), which must satisfy an affine equation \[i[00]_{0}(v)-i^{\prime}[00]_{0}(v)=v-w=\partial^{\prime}(\mu(v))=\mu(v).\] This is impossible as \(\mu\) must be linear; similarly for \(\mu:\mathfrak{e}\Rightarrow\mathbbm{1}\).
On the other hand, self-modifications have \(i-i^{\prime}=0\), from which a solution for \(\mu\) can always be found, including trivial ones \(\mu=0\) such as the self-modifications on \(i[01]\) and \(i[10]\). For \(\mathbbm{1}=i[11]\) and \(\mathfrak{e}=i^{\prime}[11]\) on \(\mathbf{c}\), however, multiples of the identity map are in fact modifications, which we without loss of generality normalize to \(\pm 1\). These constitute our 2-morphisms in the connected component \(\Gamma\subset 2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\). As \(\mathbbm{1}\) is the tensor unit on \(\Omega\Gamma=\mathrm{End}_{\Gamma}(\mathbbm{1})\), the modification \(\mathbbm{1}\Rightarrow\mathbbm{1}\) must be trivial, but the one on \(\mathfrak{e}\) can be \(\pm 1\). In summary, we have \[1:\mathbbm{1}\Rightarrow\mathbbm{1},\qquad 1,\mu:\mathfrak{e}\Rightarrow\mathfrak{e}, \tag{4.7}\] in which we denote by \(\mu=-1\) the non-trivial modification on \(\mathfrak{e}\in\Gamma\).

**Proposition 4.1**.: _There is a non-monoidal 2-functor between \(Z_{1}(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}])\) and \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{trv}})\)._

Proof.: We use the description of the braided fusion 2-category \(\mathscr{R}\simeq Z_{1}(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}])\) (with trivial associator class) describing the (3+1)D toric code given in [23]. This category has the identity component \(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}]\) with two objects, the trivial \(\mathbb{Z}_{2}\)-algebra \(1=\mathbb{C}\) and the Cheshire string \(c=\mathbb{C}[x]/\langle x^{2}-1\rangle\), where \(\mathbb{Z}_{2}\) acts non-trivially on \(x\). The fusion rules are \[1^{2}=1,\qquad c^{2}\simeq c\oplus c.\] There are two objects \(m,m^{\prime}\) in the other component, satisfying identical fusion rules with \(m^{\prime}\cong m\otimes c\). We define a (unit-preserving) 2-functor \(\mathfrak{F}:\mathscr{R}\to 2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\) first on the objects by \[\mathfrak{F}(1)=\mathbf{1},\qquad\mathfrak{F}(c)=\mathbf{c},\qquad\mathfrak{F}(m)=\mathbf{1}^{*},\qquad\mathfrak{F}(m^{\prime})=\mathbf{c}^{*},\] which can be seen to respect the monoidal structure of the objects by the computations (4.2), (4.3). Let \(\dot{1},e\) denote the objects of the category \(\Omega\mathscr{R}\simeq\operatorname{Vect}[\mathbb{Z}_{2}]\), namely the 1-morphisms on the fusion identity \(1\in\Sigma\operatorname{Vect}[\mathbb{Z}_{2}]\) of \(\mathscr{R}\). By taking \[\mathfrak{F}(\dot{1})=\mathbbm{1},\qquad\mathfrak{F}(e)=\mathfrak{e},\] we see that \(\mathfrak{F}\) is clearly 2-functorial, and defines an equivalence with \(\Gamma\); the 2-category \(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}]\) has precisely the same form as \(\Gamma\) in (4.6) [29]. Let us now compare the monoidal structure of \(\Gamma\) and \(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}]\). Though the fusion rules (4.5) for the 1-morphisms match, \(i[01]\otimes i[10]\simeq\mathbbm{1}\) is trivial in \(\Gamma\) while it is not in \(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}]\). This prevents \(\mathfrak{F}\) from being a monoidal equivalence. The problem is in fact even worse -- we will show in the following that \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{trv}})\) does not even define a gapped topological order. We elaborate in Section 5 on how this problem can be amended by _twisting_ the 2-algebra structure of \(D(B\mathbb{Z}_{2})\).
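As a quick numerical cross-check of the object fusion rules invoked in the proof above, one can decompose the underlying \(\mathbb{Z}_{2}\)-representations by characters. The following is a minimal sketch (our own toy model, which forgets the grading and the maps \(\partial,\rho_{1}\); it only tracks multiplicities of \(\mathbb{Z}_{2}\)-irreps):

```python
# Characters are value tuples on the elements (1, x) of Z_2.
triv, sgn = (1, 1), (1, -1)
c = tuple(t + s for t, s in zip(triv, sgn))      # char(c) = char(1) + char(sgn) = (2, 0)
cc = tuple(u * v for u, v in zip(c, c))          # char(c (x) c) = (4, 0)

def mult_of(chi, irrep):                          # multiplicity via orthogonality
    return sum(u * v for u, v in zip(chi, irrep)) // 2

print([mult_of(cc, r) for r in (triv, sgn)])      # [2, 2]: c (x) c = 2(1 + sgn), i.e. c + c
```

The output \([2,2]\) reproduces the Cheshire fusion (4.2) at the level of ungraded \(\mathbb{Z}_{2}\)-representations.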
### 4.2 The braiding data

Let us now turn to the _braiding_ structure. From the perspective of \(\mathscr{R}\), it is understood [23] in particular that there is the self-braiding \[\beta:m\otimes m\to m\otimes m\] on the magnetic \(m\) line, which can either be trivial \(\dot{1}\) or the electric \(\mathbb{Z}_{2}\)-particle \(e\). An argument was given in [23] that states \(\beta=\dot{1}\) is in fact trivial. We will prove that this is also the case in \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{trv}})\), but there is a major problem.

**Theorem 4.1**.: _All braiding maps on \(2\mathrm{Rep}(D(B\mathbb{Z}_{2})^{\mathrm{trv}})\) are trivial._

Proof.: Recall from (2.17), (2.19) that the braiding structure of \(2\mathrm{Rep}(D(B\mathbb{Z}_{2})^{\mathrm{trv}})\) is induced by a 2-\(R\)-matrix \((\mathcal{R},R)\) on \(D(B\mathbb{Z}_{2})^{\mathrm{trv}}\). However, instead of explicitly solving the 2-Yang-Baxter equations, we are going to _induce_ the 2-\(R\)-matrix from the structure of the 2-quantum "quadruple" \(D(D(B\mathbb{Z}_{2}),D(B\mathbb{Z}_{2}))\). This method is based on the general quantum double construction of Majid [7, 68], in which the universal \(R\)-matrix \(R\) on \(D(\mathbb{Z}_{2},\mathbb{Z}_{2})=k\mathbb{Z}_{2}\otimes k\mathbb{Z}_{2}\) can be reconstructed, \[R=\bar{\Psi}\circ\mathrm{coev}, \tag{4.8}\] from the _braided transposition_ \(\bar{\Psi}:k\mathbb{Z}_{2}\otimes k\mathbb{Z}_{2}\to k\mathbb{Z}_{2}\otimes k\mathbb{Z}_{2}\) of the underlying quantum double (note here \(k\mathbb{Z}_{2}=D(B\mathbb{Z}_{2})_{0}\) is in degree-0), satisfying [7, 68] \[xx^{\prime}=\cdot\circ\bar{\Psi}(x^{\prime}\otimes x),\qquad x,x^{\prime}\in\mathbb{Z}_{2}.\] Here, \(\mathrm{coev}\) is the coevaluation dual to the canonical pairing form on \(\mathbb{Z}_{2}\). Now since \(\mathbb{Z}_{2}\) is Abelian, \(\bar{\Psi}\) is simply the identity and hence (4.8) states that \(R=\mathrm{id}\) is in fact the identity matrix. The braiding maps \(b_{VW}=1\) are thus all trivial.

Now consider the universal 2-\(R\)-matrix \(\mathcal{R}\). The above result was categorified in [12], hence we can play the same game and reconstruct \[\mathcal{R}^{+}=\Psi_{-1}^{l}\circ\mathrm{coev}_{l},\qquad\mathcal{R}^{-}=\Psi_{-1}^{r}\circ\mathrm{coev}_{r} \tag{4.9}\] from the underlying braided transpositions \[\Psi_{-1}^{l}:\widehat{\mathbb{Z}_{2}}\otimes\mathbb{Z}_{2}\to\mathbb{Z}_{2}\otimes\widehat{\mathbb{Z}_{2}},\qquad\Psi_{-1}^{r}:\mathbb{Z}_{2}\otimes\widehat{\mathbb{Z}_{2}}\to\widehat{\mathbb{Z}_{2}}\otimes\mathbb{Z}_{2},\] \[y\cdot f=\cdot\circ\Psi_{-1}^{l}(y\otimes f),\qquad x\cdot g=\cdot\circ\Psi_{-1}^{r}(x\otimes g) \tag{4.10}\] on the 2-Drinfel'd double \(D(D(B\mathbb{Z}_{2}),D(B\mathbb{Z}_{2}))\), where \(x,f\in\mathbb{Z}_{2}\) and \(y,g\in\widehat{\mathbb{Z}_{2}}\), and \(\mathrm{coev}_{l,r}\) is the coevaluation dual to the Pontryagin pairing (2.9). However, in the case of \(D(B\mathbb{Z}_{2})^{\mathrm{trv}}\), the braided transposition is merely the Pontryagin duality, \[\Psi_{-1}^{l}(y\otimes f)=\hat{y}\otimes\hat{f},\qquad\Psi_{-1}^{r}(x\otimes g)=\hat{x}\otimes\hat{g},\] whence (4.9) states that \(\mathcal{R}^{\pm}=p\circ\mathrm{coev}=\mathrm{id}\) is the identity matrix. The mixed braiding maps \(b_{iW},b_{Wi}\) are thus all trivial.
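The key step of this proof, namely that the braided transposition on an abelian group is solved by the identity, can be checked mechanically. A minimal sketch (our own additive encoding of \(\mathbb{Z}_{2}\), not the paper's formalism):

```python
# Defining relation of the braided transposition: x x' = (mult o Psi)(x' (x) x).
G = [0, 1]                       # Z_2, written additively
mult = lambda x, y: (x + y) % 2
Psi = lambda pair: pair          # candidate: the identity transposition
for x in G:
    for xp in G:
        assert mult(x, xp) == mult(*Psi((xp, x)))
print("Psi = id solves the transposition relation, so R = Psi o coev acts trivially")
```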
The fact that all the braiding maps are trivial on \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{trv}})\) also follows from the corresponding topological NLSM \(Z_{\mathrm{Kit}}^{0}\), which has no terms in its action that encode any non-trivial statistics of the charges in the theory [20, 22]. Not only is \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{trv}})\) not (braided) monoidally equivalent to the toric code \(\mathscr{R}\), it is in a sense "too trivial" to even be a gapped topological phase, since it violates the principle of **remote detectability** [62, 14, 23].

_Remark 4.2_.: Remote detectability states that all non-trivial excitations can be detected by braiding. It is part of the definition of a topological order (such as the toric code \(\mathscr{R}\)) and hence calling \(Z_{\mathrm{Kit}}^{0}\) the "4d toric code" is incorrect. In this simple \(\mathbb{Z}_{2}\)-charged case, this principle is encoded by the presence of the term \(\langle B\cup\bar{e}(A)\rangle\) in the Dijkgraaf-Witten 4-cocycle \(\omega\) [20, 23], which is only present for \(Z_{\mathrm{Kit}}^{s}\). Nevertheless, the above computations lay the foundation for our results in the following.

## 5 Excitations in the spin-Kitaev model \(Z_{\mathrm{Kit}}^{s}\)

We now turn to the spin-Kitaev model \(Z_{\mathrm{Kit}}^{s}\) given by the 2-Drinfel'd double \(D(B\mathbb{Z}_{2})^{\mathrm{sgn}}\). Its 2-representations have the same ingredients as those of \(D(B\mathbb{Z}_{2})^{\mathrm{trv}}\), and hence the 2-category \(2\mathrm{Rep}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\) also has four objects, similar to those in (4.1). The difference here is that \(D(B\mathbb{Z}_{2})_{0}^{\mathrm{sgn}}=\mathbb{Z}_{2}\) now acts non-trivially on \(D(B\mathbb{Z}_{2})_{-1}^{\mathrm{sgn}}=\widehat{\mathbb{Z}_{2}}\). This action was obtained by dualizing the non-trivial sign automorphism \(u\in\mathrm{Aut}(\mathbb{Z}_{2})\), which induces via (3.4) the class \(\bar{e}=\frac{1}{2}u^{2}\in H^{2}(\mathbb{Z}_{2},\mathbb{Z}_{2})\) determining the non-trivial central extension of \(\mathbb{Z}_{2}\) by itself. This extension is \(\mathbb{Z}_{4}\), which we interpret as a "semidirect product" \(\mathbb{Z}_{2}\rtimes\mathbb{Z}_{2}\) where the central element \(x^{2}\in\mathbb{Z}_{2}\) acts by \(-1\). As such, the component \(\rho_{0}^{0}(x)^{2}\) "acts" non-trivially on the degree-(-1) component of the graded 2-representation spaces. In other words, provided \(\rho_{0}^{0}\) is non-trivial, the component \(\rho_{0}\) of the 2-representation \(\rho\) furnishes a representation of \(\mathbb{Z}_{2}\rtimes\mathbb{Z}_{2}\), satisfying \[\rho_{0}(x^{2})(w,v)=(\bar{e}(x,x)\cdot(\rho_{0}^{1}(x^{2}))w,\rho_{0}^{0}(x^{2})v)=(-w,v),\qquad x\in\mathbb{Z}_{2},\] where \((w,v)\in V=V_{-1}\xrightarrow{0}V_{0}\). We denote such representations by \(\rho_{0}=\rho_{0}^{1}\oplus_{\pm}\rho_{0}^{0}=(\bar{e}\cdot\rho_{0}^{1},\rho_{0}^{0})\). From (4.1), we thus see that the magnetic vacuum line \(\mathbf{1}^{*}\) and the Cheshire string \(\mathbf{c}\) carry a \(\mathbb{Z}_{4}\)-representation, while the electric vacuum line \(\mathbf{1}\) and the magnetic Cheshire \(\mathbf{c}^{*}\) carry a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\)-representation. Now recall from _Remark 4.1_ that \(\rho\) should preserve the identity, which was a vacuous condition as \(\rho_{0}^{0,1}(x^{2})=1\) are both trivial for \(D(B\mathbb{Z}_{2})^{\mathrm{trv}}\).
However, due to the non-trivial sign coming from \(\rho_{1}(\bar{e}(x,x))=-1\) in the current case, this becomes a non-trivial relation that one must impose, \[-1\cdot\rho_{1}(y)=\rho_{1}(\bar{e}(x,x)\cdot y)\rho_{0}^{1}(x^{2})=\rho_{0}^{0}(x^{2})\rho_{1}(y)=\rho_{1}(y),\qquad y\in\widehat{\mathbb{Z}_{2}}.\] The component \(\rho_{1}\) is thus no longer required in general to preserve the identity. As \(V_{0},V_{-1}\cong k\) are both 1-dimensional vector spaces over the ground field \(k\), we have \[\rho_{1}(y^{-1})\rho_{1}(y)=\rho_{1}(y)^{2}=\rho_{1}(y)^{2}(\rho_{1}(y^{2}))^{-1}\equiv\bar{c}(y,y)=-1 \tag{5.1}\] by considering \(\rho_{1}(y)\in k^{\times}\) as an invertible element. This defines a 2-cocycle \(\bar{c}\in H^{2}(\widehat{\mathbb{Z}_{2}},k^{\times})\) at degree-(-1) carried by 2-representations that have \(\rho_{1}\neq 0\). In other words, the Cheshire strings \(\mathbf{c},\mathbf{c}^{*}\) are capable of carrying \(\bar{c}\), while the vacuum lines \(\mathbf{1},\mathbf{1}^{*}\) do not. By considering \(D(B\mathbb{Z}_{2})\) as a categorical group, \(\bar{c}\) determines an interchanger 2-morphism \(h(y_{1},y_{1}^{\prime};y_{2},y_{2}^{\prime})=\bar{c}(y_{1},y_{2})\) implementing (2.2) [22]; see _Remark 2.1_. Correspondingly, we thus have two versions of the 2-category \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})=2\mathrm{Rep}_{f,m}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\), given by 2-representations that either carry \(\bar{c}\) or do not.

Twisted 2-Drinfel'd double. These 2-cocycles \(\bar{c},\bar{e}\) can alternatively be interpreted as "twists" in the 2-algebra structure of the 2-Drinfel'd double, and they make up precisely the 2-group 4-cocycles6 \[\omega_{f}=\bar{e}[1]+\bar{c},\qquad\omega_{b}=\bar{e}[1]\] over \(k^{\times}\) [70, 20, 21] that have appeared in **Theorem 1.2**. Footnote 6: Note that twists of 2-groupoid algebras by 4-cocycles have also appeared in [27]. This is a categorification of the 3-cocycle twists of an ordinary 1-Drinfel'd double [69]. We take, now with proper naming, \[\begin{array}{ll}\mbox{\bf Spin-Kitaev:}&2\mathrm{Rep}_{f}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})=2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\omega_{f}}),\\ \mbox{\bf Toric code:}&2\mathrm{Rep}_{m}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})=2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\omega_{b}}),\end{array}\] in which the first version is called **fermionic** (\(f\)-subscript) while the second version is **bosonic** (\(m\)-subscript). This notation is suggestive, as it precisely corresponds to whether the degree-(-1) \(\widehat{\mathbb{Z}_{2}}\) of the Dijkgraaf-Witten NLSM associated to \(D(B\mathbb{Z}_{2})^{\mathrm{sgn}}\) is fermion parity \(\mathbb{Z}_{2}^{f}\) or a bosonic \(\pi\)-flux \(\mathbb{Z}_{2}^{m}\) [20, 22]. Strictly speaking, the monster 2-BF theory (3.1) associated to \(2\mathrm{Rep}_{f}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\) should include a term \(\bar{c}(B,B)\) given by the data of the 2-cocycle \(\bar{c}\), whence the partition function (3.3) reads \[Z_{\mathrm{Kit}}^{\mathrm{s}}(X)\sim\sum_{\begin{subarray}{c}dA=0\\ dB=\tau\end{subarray}}e^{i2\pi\int_{X}\langle B\cup\bar{e}(A)\rangle+\bar{c}(B,B)}. \tag{5.2}\] Note that this term \(\bar{c}(B,B)\), being cohomological, does not alter the EOM7 for the fields \((A,B)\). Footnote 7: Indeed, a 2-gauge theory with \(F=B\) as an equation of motion would instead host a trivial 2-group \(\mathbb{Z}_{2}\xrightarrow{1}\mathbb{Z}_{2}\) [2].
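The sign bookkeeping above is simple enough to verify numerically. A minimal sketch (our own toy scalar model of a 1-dimensional 2-representation; the choice \(\rho_{1}(y)=i\) is a hypothetical square root of \(\bar{c}(y,y)=-1\), not data fixed by the paper):

```python
# rho_0(x) acts by scalars (a, b) on (V_{-1}, V_0); the class ebar(x,x) = y
# contributes a sign -1 in degree -1, making rho_0 a Z_4-representation.
ebar_xx = -1                 # action of y = ebar(x,x) on the degree-(-1) line
a, b = 1, -1                 # rho_0(x) on V_{-1} and V_0 for the Cheshire string c
assert (ebar_xx * a * a, b * b) == (-1, 1)   # rho_0(x^2) = (-1, +1): order 4
rho1_y = 1j                  # hypothetical choice with rho1(y)^2 = -1
assert rho1_y ** 2 == -1     # cbar(y,y) = -1, the fermionic 2-cocycle of (5.1)
print("c furnishes a Z_4-representation and carries cbar(y,y) = -1")
```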
The theory \(Z_{\mathrm{Kit}}^{\mathrm{s}}\) has also appeared as part of the NLSM construction in [20], provided we identify \[\bar{e}(A)=\frac{1}{2}\operatorname{Sq}^{1}A,\qquad\bar{c}(B,B)=\frac{1}{2}\operatorname{Sq}^{2}B\] in terms of the \(\mathbb{Z}_{2}\)-cohomology operation \(\operatorname{Sq}^{i}:H^{j}(X,\mathbb{Z}_{2})\to H^{j+i}(X,\mathbb{Z}_{2})\) called the **Steenrod square** [71].

_Remark 5.1_.: In the spin-Kitaev model \(Z_{\mathrm{Kit}}^{\mathrm{s}}\), the coefficient of 1/2 that appears in front of the term \(\operatorname{Sq}^{2}B\) means that the point-like particle in the NLSM is a fermion [20]. If this coefficient were 1/4, then such a term \(\frac{1}{4}\operatorname{Sq}^{2}B=\mathfrak{p}_{2}(B)\) would give a cohomology operation called the _Pontryagin square_ \(\mathfrak{p}_{2}:H^{2}(X,\mathbb{Z}_{2})\to H^{4}(X,\mathbb{Z}_{4})\) [21]. The point particle would then be a _semion_ [20] in this case.

### 5.1 Fusion structure in the twisted case

Due to the presence of the 2-cocycles \(\bar{e}\) and \(\bar{c}\) in \(2\mathrm{Rep}_{f}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\), the corresponding coproduct component \(\Delta_{0}^{\prime}\) governing the tensor product of 2-representations via (2.15) now satisfies a modified version of the condition (2.6), \[\Delta_{0}^{\prime}(x^{2})=(\bar{e}(x,x)\cdot\bar{e}(x,x))\otimes x^{2}=\bar{c}(y,y)\,1\otimes 1, \tag{5.3}\] where we have noted \(y=\bar{e}(x,x)\) and the twisted monoidal structure \(y\cdot y=\bar{c}(y,y)\cdot 1\) for generators \(x\in\mathbb{Z}_{2},y\in\widehat{\mathbb{Z}_{2}}\). The presence of the sign \(\bar{c}(y,y)=-1\) allows us to lift or trivialize certain 2-representations. We demonstrate this with explicit computations.

Forming the tensor product, we see that the fusion rules in \(2\mathrm{Rep}_{f}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\) must be different from those in \(2\mathrm{Rep}(D(B\mathbb{Z}_{2})^{\mathrm{trv}})\). To see this more explicitly, we perform a monoidal computation while keeping track of the data \(\rho_{1}:V_{0}=\mathrm{sgn}\to V_{-1}=1\), \[\mathbf{c}\otimes\mathbf{c}=(\rho_{\mathbf{c}}\otimes\rho_{\mathbf{c}})\circ\Delta_{0}^{\prime}=(\bar{e}\cdot 1\otimes\bar{e}\cdot 1\xrightarrow{\rho_{1}}\mathrm{sgn}\otimes\mathrm{sgn}(\simeq 1))\oplus(\bar{e}\cdot 1\otimes\mathrm{sgn}\xrightarrow{\rho_{1}\otimes\bar{e}}\mathrm{sgn}\otimes\bar{e}\cdot 1)\simeq(1\xleftarrow{\hat{1}}1)\oplus(\bar{e}\cdot 1\otimes\mathrm{sgn}\xrightarrow{\rho_{1}\otimes\bar{e}}\mathrm{sgn}\otimes\bar{e}\cdot 1),\] where we have used the fact that \((\bar{e}\cdot 1)^{2\otimes}\simeq 1\) and \(\rho_{1}\otimes\rho_{1}\simeq\hat{1}\). The first term is simply the trivial representation 1, while we use \(\rho_{1}(y)^{2}=\bar{c}(y,y)=-1\) in the second term to lift "\(\mathrm{sgn}\)" to a sign representation of the subgroup \(\mathbb{Z}_{2}\subset\mathbb{Z}_{4}\). However, together with the factor \(\bar{e}(x,x)\neq 1\), this allows us to degenerate \(\bar{e}\cdot 1\otimes\mathrm{sgn}\simeq 1\) to the trivial representation; this is the effect of the condition (5.3). As such, we have \[\mathbf{c}\otimes\mathbf{c}\simeq(1\xleftarrow{\hat{1}}1)\oplus(1\xleftarrow{\hat{1}}1)\simeq 1\oplus 1=\mathbf{1}, \tag{5.4}\] which is indeed distinct from (4.2).
The magnetic Cheshire \(\mathbf{c}^{*}\), on the other hand, does not carry \(\bar{e}\), so it furnishes a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\)-representation. However, it does carry the 2-cocycle \(\bar{c}\), which lifts the sign representation of \(\mathbb{Z}_{2}\) to the trivial one. Hence we deduce that \(\mathbf{c}^{*}\otimes\mathbf{c}^{*}\simeq\mathbf{1}\) as well. It then follows from the above argument that (5.4) relies crucially on \(\bar{c}\neq 0\). Therefore, if \(\bar{c}=0\) were trivial, then the Cheshire strings \(\mathbf{c},\mathbf{c}^{*}\in 2\text{Rep}_{m}(D(B\mathbb{Z}_{2})^{\text{sgn}})\) must have the same fusion rules (4.2) as those in \(2\text{Rep}(D(B\mathbb{Z}_{2})^{\text{trv}})\). This observation corroborates [23].

Fusion rules for the 2-intertwiners \(i[01],i[10]\). Now in contrast to the previous case of the invisible toric code, the coproduct component \(\Delta_{0}\) is non-trivial for the 2-Drinfel'd double \(D(B\mathbb{Z}_{2})^{\text{sgn}}\). By (2.15), this induces a tensor product between the 2-representations (4.1) and the 2-intertwiners on them. To be concrete and for brevity, we shall concentrate on the connected component \(\Gamma\) of the fusion identity \(\mathbf{1}\in 2\text{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\text{sgn}})\) in the following. The fusion rules for the self-2-intertwiners \(i[00]=i[11]=\mathbbm{1},i^{\prime}[00]=i^{\prime}[11]=\mathfrak{e}\) remain the same as (4.5), hence we shall focus on the fusion rules between \(i[01],i[10]\). For convenience, we relabel these 2-intertwiners as \(v_{\mathbf{1}},v_{\mathbf{c}}\) by their domains, and the goal is to directly compute the tensor product \(v_{\mathbf{1}}\otimes v_{\mathbf{c}}=v_{\mathbf{c}}\otimes v_{\mathbf{1}}\) through the definition (2.15). Given (2.16), it was noted in [12] that, similar to what happens in _Gray categories_ [67, 17], the following two decompositions of \(i\otimes j\), \[(v_{\mathbf{1}}\otimes\mathbf{1})\circ(\mathbf{1}\otimes v_{\mathbf{c}}),\qquad(v_{\mathbf{c}}\otimes\mathbf{c})\circ(\mathbf{c}\otimes v_{\mathbf{1}}),\] differ up to an invertible modification. This 2-isomorphism was computed in [12] to be given by the weak component \(\varrho=\rho_{1}\circ\bar{e}\), which in this case is determined by the 2-cocycle \(\bar{e}\in H^{2}(\mathbb{Z}_{2},\widehat{\mathbb{Z}}_{2})\) (see (5.6) later). We will directly verify this structure. After a bit of a lengthy computation, we find that, for each non-trivial \(x\in\mathbb{Z}_{2}\) (recall the counit \(\epsilon\) defines the trivial 2-representation \(\rho\simeq 1\)), \[\rho_{v_{\mathbf{1}}\otimes\mathbf{1}}\cdot\rho_{\mathbf{1}\otimes v_{\mathbf{c}}}(x)=\epsilon_{-1}\otimes\text{id}\simeq\rho_{\mathbbm{1}},\] \[\rho_{v_{\mathbf{c}}\otimes\mathbf{c}}\cdot\rho_{\mathbf{c}\otimes v_{\mathbf{1}}}(x)=(\epsilon_{-1}\otimes\rho_{0}(x))\cdot(\epsilon_{-1}\otimes\rho_{0}(x))=(\epsilon_{-1}\otimes\rho_{0}(x)^{2}).\] Upon using the extension class \(\bar{e}\), the latter indeed becomes \(\rho_{1}(\bar{e}(x,x))\otimes\rho_{0}(x^{2})=\rho_{1}(y)\otimes\text{id}\simeq\rho_{\mathfrak{e}}\), where \(y\in\widehat{\mathbb{Z}}_{2}\) is the non-trivial generator. These contribute as direct summands to the tensor product, whence \[v_{\mathbf{1}}\otimes v_{\mathbf{c}}(=v_{\mathbf{c}}\otimes v_{\mathbf{1}})\simeq\mathbbm{1}\oplus\mathfrak{e}. \tag{5.5}\] This is required for the following.
**Theorem 5.1**.: _There are monoidal equivalences_ \[\mathfrak{F}_{m}:Z_{1}(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}])\simeq 2\text{Rep}_{m}^{\tau}(D(B\mathbb{Z}_{2})^{\text{sgn}}),\qquad\mathfrak{F}_{f}:Z_{1}(\Sigma\text{sVect})\simeq 2\text{Rep}_{f}^{\tau}(D(B\mathbb{Z}_{2})^{\text{sgn}})\] _of fusion 2-categories._

Proof.: Recall the proof of **Proposition 4.1**. There, the obstacle preventing the 2-functor \(\mathfrak{F}\), when restricted to the connected component \(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}]\), from being a monoidal equivalence is precisely the fusion rule (5.5) [24]. Therefore, the same 2-functor \(\mathfrak{F}_{m}\) adapted to the bosonic 2-category \(2\text{Rep}_{m}^{\tau}(D(B\mathbb{Z}_{2})^{\text{sgn}})\) is in fact a monoidal equivalence \(\mathfrak{F}_{m}:\Sigma\operatorname{Vect}[\mathbb{Z}_{2}]\simeq\Gamma\). By following the same argument for the magnetic sector, this 2-functor extends to a monoidal equivalence \(\mathfrak{F}_{m}:Z_{1}(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}])\simeq\mathscr{R}\simeq 2\text{Rep}_{m}^{\tau}(D(B\mathbb{Z}_{2})^{\text{sgn}})\) as desired.

Now consider the fermionic case. We use the description of the braided fusion 2-category \(\mathscr{S}\) (with trivial associator class) describing the spin-\(\mathbb{Z}_{2}\) gauge theory given in [23]. The 2-category \(\mathscr{S}\) is very similar to \(\mathscr{R}\): its identity component is monoidally equivalent to \(\Sigma\text{sVect}\) and has \(\Omega\mathscr{S}=\text{sVect}\). We can then employ the same strategy as in the proof of the bosonic case. The caveat is that the Cheshire string \(c\in\Sigma\text{sVect}\) is the superalgebra \(\operatorname{Cl}(1)\), i.e. the Clifford algebra with one odd generator. It satisfies the well-known fusion rule \[c\otimes c\simeq 1\] in the ambient 2-category \(\Sigma\text{sVect}\). The rest of the fusion rules are identical to the bosonic case \(\mathscr{R}\), hence we also have \(m^{\prime}=m\otimes c\), which leads to \(m^{\prime}\otimes m^{\prime}=1\). If we define the functor \(\mathfrak{F}_{f}:\mathscr{S}\to 2\mathrm{Rep}_{f}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\) in the same way as in the proof of **Proposition 4.1**, then (5.4), (5.5) show that \(\mathfrak{F}_{f}\) preserves the monoidal structures of the objects. We define \(\Gamma^{\prime}\) as the identity component of \(2\mathrm{Rep}_{f}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\), and consider \(\Omega\Gamma^{\prime}=\mathrm{End}_{2\mathrm{Rep}_{f}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})}(\mathbf{1})\). Now in general \(\Gamma^{\prime}\not\cong\Gamma\) as monoidal 2-categories, as they have different fusion rules for the objects. However, it is not hard to see that \(\Omega\Gamma^{\prime}\simeq\Omega\Gamma\cong\mathrm{Rep}(\mathbb{Z}_{2})\), as the 1-morphisms and the commutative diagrams governing the 2-intertwiners (2.12) are the same. By taking \(\mathfrak{F}_{f}\) such that \(\Omega\mathfrak{F}_{f}=\Omega\mathfrak{F}_{m}\), we then achieve an equivalence \[\mathfrak{F}_{f}:\Gamma^{\prime}\xrightarrow{\sim}\Sigma\mathrm{sVect}\] on the identity component, which extends to a monoidal equivalence \(\mathfrak{F}_{f}:\mathscr{S}\xrightarrow{\sim}2\mathrm{Rep}_{f}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\) as desired. The fact that the identity component of \(\mathscr{S}\) is \(\Sigma\mathrm{sVect}\) [23] proves the theorem.

### 5.2 Proof of the main theorem

Let us now look at the braiding data.
Since now the Cheshire string \(c\) is invertible in \(\mathscr{S}\), there is a self-braiding morphism \[\beta_{c}:c\otimes c\to c\otimes c\] in addition to the magnetic self-braiding \(\beta_{m}:m\otimes m\to m\otimes m\). It was shown in [23] that \(\beta_{c}\simeq e\) is in fact non-trivial. We can once again provide a direct proof of this. In fact, we shall compute the entire braiding structure on \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\) and match it to the topological orders \(\mathscr{R},\mathscr{S}\).

**Theorem 5.2**.: _The 2-functors \(\mathfrak{F}_{m,f}\) in_ **Theorem 5.1** _are_ braided _equivalences._

Proof.: Recall that the braiding structure \(b\) is determined by the 2-\(R\)-matrix \((\mathcal{R},R)\) through (2.17), (2.19), which is in turn determined by the braided transposition \(\Psi\) by (4.9), (4.8). Since the degree-0 \(\mathbb{Z}_{2}\) now acts non-trivially on the degree-(-1) \(\widehat{\mathbb{Z}}_{2}\), the defining relations (4.10) imply that the 2-\(R\)-matrix \(\mathcal{R}\) is non-trivial: \[\mathcal{R}^{+}=y\otimes x,\qquad\mathcal{R}^{-}=x\otimes y,\qquad R=x\otimes x,\] where \(x,y\) are the generators of \(\mathbb{Z}_{2},\widehat{\mathbb{Z}}_{2}\), respectively. We follow the same argument as in **Theorem 4.1**, but the non-trivial 2-cocycle twists \(\bar{e}\in H^{2}(\mathbb{Z}_{2},\mathbb{Z}_{2})\) and \(\bar{c}\in H^{2}(\widehat{\mathbb{Z}}_{2},k^{\times})\) will allow us to develop non-trivial braiding maps on \(D(B\mathbb{Z}_{2})^{\mathrm{sgn}}\). Let us proceed step by step.

**Lemma 5.1**.: _The 2-cocycle \(\bar{e}\in H^{2}(\mathbb{Z}_{2},\mathbb{Z}_{2})\) leads to non-trivial full braiding maps between \(\mathfrak{e}\) and objects \(W\) in the magnetic sector._

Proof.: Recall \(\bar{e}\in H^{2}(\mathbb{Z}_{2},\mathbb{Z}_{2})\) determines the non-trivial central extension \(\mathbb{Z}_{4}\) of \(\mathbb{Z}_{2}\) by itself. Provided that the component \(\rho_{0}^{0}\) is non-trivial, \(\rho_{0}=(\bar{e}\cdot\rho_{0}^{1},\rho_{0}^{0})\) furnishes a \(k\mathbb{Z}_{4}\)-representation. In addition, this 2-cocycle also dualizes to \(\bar{e}\in H^{2}(\mathbb{Z}_{2},\widehat{\mathbb{Z}}_{2})\), which "twists" the algebra structure in \(D(B\mathbb{Z}_{2})^{\mathrm{sgn}}\) in the sense that \[x\cdot(x\cdot y)=\bar{e}(x,x)y\neq x^{2}\cdot y=y,\] where \(x\in\mathbb{Z}_{2}\) and \(y\in k\widehat{\mathbb{Z}}_{2}\). In the 2-representation 2-category \(2\mathrm{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\), this phenomenon can be understood as the presence of the 2-morphism \[\varrho(x_{1},x_{2})=\rho_{1}(\bar{e}(x_{1},x_{2})):\rho_{0}(x_{1})\circ\rho_{0}(x_{2})\Rightarrow\rho_{0}(x_{1}x_{2}),\qquad x_{1},x_{2}\in k\mathbb{Z}_{2} \tag{5.6}\] mentioned in **Definition 2.4**. We will without loss of generality focus on the _full_ mixed braiding map \(B_{W\mathfrak{e}}=B_{\mathfrak{e}W}=b_{W\mathfrak{e}}\cdot b_{\mathfrak{e}W}\) with the non-trivial 1-morphism \(\mathfrak{e}\). Recall (4.4) that \(\mathfrak{e}\) swaps the grading of the 2-representation spaces, and hence \(\bar{e}\) is only present in \(B_{W\mathfrak{e}}\) for those 2-representations \(W\) carrying the non-trivial sign representation _in degree-(-1)_ -- namely the magnetic sector in (4.1). A simple computation then gives \[B_{W\mathfrak{e}}:\rho_{0}^{0}(\mathcal{R}_{(2)}^{+})\rho_{0}^{0}(\mathcal{R}_{(1)}^{-})\Rightarrow\rho_{0}^{0}(\mathcal{R}_{(2)}^{+}\mathcal{R}_{(1)}^{-})=1, \tag{5.7}\] which is precisely \(\varrho(x,x)=\rho_{1}(\bar{e}(x,x))\simeq-1\) by (5.6).
In other words, the \(\mathbb{Z}_{2}\)-particle \(\mathfrak{e}\) braids non-trivially with the magnetic sector \(\mathbf{1}^{*},\mathbf{c}^{*}\), as required by remote detectability (see _Remark 4.2_). Now what of the braiding maps on the objects (4.1)? First of all, the tensor unit \(\mathbf{1}\) carries the trivial 2-representation, and hence must have trivial braiding. The Cheshire strings \(\mathbf{c},\mathbf{c}^{*}\), on the other hand, are not invertible in \(2\mathrm{Rep}_{m}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\) and therefore cannot be self-braided. This is not the case in the fermionic version.

**Lemma 5.2**.: _The 2-cocycle \(\bar{c}\in H^{2}(\widehat{\mathbb{Z}_{2}},k^{\times})\) gives the non-trivial self-braiding \(b_{\mathfrak{e}}=-1\). Moreover, the self-braiding \(b_{\mathbf{c}}\) is non-trivial in \(2\mathrm{Rep}_{f}^{\tau}(D(B\mathbb{Z}_{2})^{\mathrm{sgn}})\), but \(b_{\mathbf{c}^{*}},b_{\mathbf{1}^{*}}\) are trivial._

Proof.: Consider the first statement. By naturality, the braiding maps \(b_{ij}\) on 1-morphisms \(i,j\) can be decomposed into mixed braiding maps, \[b_{ij}=b_{iW}b_{Vj},\qquad\begin{cases}i:V\to U\\ j:W\to T\end{cases}.\] Taking \(i=j=\mathfrak{e}\) with the fact that \(B_{\mathfrak{e}W}=\bar{e}\) from the above lemma, we have from (5.1) that \[b_{\mathfrak{e}}=B_{\mathfrak{e}W}B_{W\mathfrak{e}}=(\rho_{1}(\bar{e}(x,x)))^{2}=\bar{c}(y,y)\cdot\mathrm{id}=-1\cdot\mathrm{id},\] where the extension cocycle \(\bar{e}\) satisfies \(\bar{e}(x,x)=y\) for the non-trivial generators \(x\in\mathbb{Z}_{2},y\in\widehat{\mathbb{Z}_{2}}\). This is consistent with the observation that \(\bar{c}\) implements the fermionic statistics of the \(\mathbb{Z}_{2}\)-charged particle in [22, 20, 23].

Consider the second statement. Since \(\bar{e}\) also determines a central extension of \(D(B\mathbb{Z}_{2})_{0}=\mathbb{Z}_{2}\) by itself, an argument analogous to the previous lemma shows that, provided the 2-representation \(\rho_{0}\) has the non-trivial sign representation at degree-0 (i.e. the Cheshire string \(\mathbf{c}\) or the magnetic vacuum line \(\mathbf{1}^{*}\)), the self-braiding \[b_{V}:\rho_{0}^{0}(R_{(1)})\rho_{0}^{0}(R_{(2)})\Rightarrow\rho_{0}^{0}(R_{(1)}R_{(2)})=1\] can carry the non-trivial 1-morphism \(\rho_{1}(\bar{e}(x,x))\simeq\mathfrak{e}\). In particular, this establishes that \(b_{\mathbf{c}^{*}}\simeq\mathbbm{1}\) is trivial while \(b_{\mathbf{c}}\simeq\mathfrak{e}\) is not. But what about the magnetic vacuum \(\mathbf{1}^{*}\)? The above argument does not force \(b_{\mathbf{1}^{*}}\) to be trivial, but the fusion rule (4.3) (in the form \(\mathbf{c}\otimes\mathbf{c}^{*}\simeq\mathbf{1}^{*}\)) and the ribbon equation (2.18) do. Since the magnetic Cheshire \(\mathbf{c}^{*}\) is bosonic, the _full_ braiding \(B_{\mathbf{c}^{*}\mathbf{c}}\simeq b_{\mathbf{c}}\simeq\mathfrak{e}\) must be non-trivial. Using this along with (4.3) and the previous result then gives \[b_{\mathbf{1}^{*}}=b_{\mathbf{c}^{*}\otimes\mathbf{c}}\cong b_{\mathbf{c}^{*}}\otimes b_{\mathbf{c}}\otimes B_{\mathbf{c}^{*}\mathbf{c}}\simeq 1\otimes\mathfrak{e}\otimes\mathfrak{e}\simeq 1,\] hence the magnetic vacuum \(\mathbf{1}^{*}\) must have trivial self-braiding \(b_{\mathbf{1}^{*}}=\mathbbm{1}\). These lemmas establish the desired braided equivalences (cf. [23, 24]).
To further drive home the point of the main result **Theorem 5.2**, we shall recover the 5-dimensional cobordism invariant associated to the spin-\(\mathbb{Z}_{2}\) gauge theory \(\mathscr{S}\) from the spin-Kitaev model. Recall the expressions of \(\bar{e}(A)=\frac{1}{2}\operatorname{Sq}^{1}A\) and \(\bar{c}(B,B)=\frac{1}{2}\operatorname{Sq}^{2}B\) in terms of the Steenrod square. Starting from the partition function (5.2), \[Z_{\mathrm{Kit}}^{\mathrm{s}}(X)\sim\sum_{\begin{subarray}{c}dA=0\\ dB=\tau\end{subarray}}e^{i2\pi\int_{X}B\cup\frac{1}{2}\operatorname{Sq}^{1}A+\frac{1}{2}\operatorname{Sq}^{2}B},\] we deduce that, given \(W\) is a 5-dimensional manifold with boundary \(X=\partial W\), the bulk partition function takes the form [20] \[Z_{\mathrm{Kit}}^{\mathrm{s}}(X)\sim\exp\left[i2\pi\int_{W}\tau(A)\cup\frac{1}{2}\operatorname{Sq}^{1}A+\frac{1}{2}\operatorname{Sq}^{2}\tau(A)\right]\] on-shell of the EOM \(dA=0,dB=\tau(A)\). By interpreting the on-shell gauge fields \((A,B)\) (i.e. satisfying \(dA=0,dB=\tau(A)\)) as a classifying map \(f=(A,B):W\to BD(B\mathbb{Z}_{2})\) [20, 21], we can introduce group cohomology classes \[E\in H^{3}(\mathbb{Z}_{2},\widehat{\mathbb{Z}_{2}}),\qquad M\in H^{2}(\mathbb{Z}_{2},\mathbb{Z}_{2})\] such that \(f^{*}E=\tau(A)\) and \(f^{*}M=\frac{1}{2}\operatorname{Sq}^{1}A=\bar{e}(A)\). Then, the spin-Kitaev partition function can be written as \[Z_{\mathrm{Kit}}^{\mathrm{s}}(X)\sim\sum_{f\in[W,BD(B\mathbb{Z}_{2})]}\langle[W],f^{*}\alpha\rangle,\] where \([W]\in H_{5}(W,\mathbb{C}^{\times})\) is the fundamental homology class and \(\alpha\) is a degree-5 group cohomology class given by \[\alpha=(-1)^{\operatorname{Sq}^{2}E+E\cup M}\in H^{5}(\mathbb{Z}_{2}[3]\times\mathbb{Z}_{2}[2],\mathbb{C}^{\times}). \tag{5.8}\] This is precisely the anomaly of the fermionic phase \(\mathscr{S}\) [23].

The \(w_{2}w_{3}\) gravitational anomaly. The reader may recall that we have conveniently left out the study of the anomalous version \(\mathscr{T}\) of the fermionic order \(\mathscr{S}\). This order \(\mathscr{T}\) is distinct from \(\mathscr{S}\) as fusion 2-supercategories [23], and in fact hosts the \(w_{2}w_{3}\) gravitational anomaly [72]. Indeed, it was also noted in [23] that \(\mathscr{T}\) is unlikely to admit a description in terms of a Drinfel'd centre, hence it may not be straightforward to construct a corresponding 2-Drinfel'd double. We shall leave the treatment of this anomalous order \(\mathscr{T}\) to a later work. There is a particular topological field theory, called the fermionic quasistrings order [73, 74], that has been proposed to describe the \(w_{2}w_{3}\) gravitational anomaly. Nevertheless, it is known that the two orders \(\mathscr{S},\mathscr{T}\) host distinct anomalies: the one classifying \(\mathscr{T}\) differs from that (5.8) of \(\mathscr{S}\) by a factor \((-1)^{\operatorname{Sq}^{2}\operatorname{Sq}^{1}M}\) [23, 24]. It would therefore be possible to examine how such a term arises as additional data on the 2-Drinfel'd double \(D(B\mathbb{Z}_{2})\). We shall say a bit more about this in the conclusion.

## 6 Conclusion

Following the construction of 2-Drinfel'd doubles, we have applied the structural results proven in [12] to the case of the 4d Kitaev model based on the 2-group associated to \(\mathbb{Z}_{2}\).
We explicitly computed the associated 2-representation 2-category \(2\text{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\) and showed that it formally satisfies the formula \[Z_{1}(\Sigma\operatorname{Vect}[\mathbb{Z}_{2}])\simeq 2\text{Rep}^{\tau}(D(B\mathbb{Z}_{2})), \tag{6.1}\] where \(Z_{1}\) is the Drinfel'd centre; cf. **Theorem 5.1**, 5.2. This directly categorifies the characteristic relation \[Z_{1}(\text{Rep}(G))\simeq\text{Rep}(D(G))\] of the quantum double of Drinfel'd [75] (and, more generally, of Majid [56]), at least in the case \(G=\mathbb{Z}_{2}\). Our results can be concisely summarized in the following table. \begin{tabular}{|c|c|c|c|} \hline Gapped phase & N/A & 4d toric code & spin-\(\mathbb{Z}_{2}\) gauge theory \\ \hline 2-representations & \(2\text{Rep}^{\tau}(D(B\mathbb{Z}_{2})^{\text{trv}})\) & \(2\text{Rep}^{\tau}_{m}(D(B\mathbb{Z}_{2})^{\text{sgn}})\) & \(2\text{Rep}^{\tau}_{f}(D(B\mathbb{Z}_{2})^{\text{sgn}})\) \\ 2-category in [23] & N/A & \(\mathscr{R}\) & \(\mathscr{S}\) \\ DW cocycle in [20] & \(\omega(A,B)=0\) & \(\omega(A,B)=\frac{1}{2}BA^{2}\) & \(\omega(A,B)=\frac{1}{2}BA^{2}+\frac{1}{2}\operatorname{Sq}^{2}B\) \\ \hline \end{tabular} We have also been able to concretely identify the 4d 2-group Dijkgraaf-Witten NLSMs constructed in [20] that host the 2-category \(2\text{Rep}^{\tau}(D(B\mathbb{Z}_{2}))\) as charges. These equivalences provide an explicit and rigorous identification between the 2-categorical and field theoretic descriptions of the associated gapped 4d topological phases [22]. This hints towards a categorified notion of "Tannaka-Krein reconstruction", in which certain braided fusion 2-categories \(\mathcal{C}\) can be equivalently described, \[\mathcal{C}\simeq 2\text{Rep}^{\tau}(\mathcal{G}^{\omega}),\] as the 2-representation 2-category of a (possibly twisted) quasitriangular 2-Hopf algebra/2-bialgebra \(\mathcal{G}^{\omega}\). Such a Tannakian duality is worthwhile to have in this context, as it (i) condenses the data of a (braided) monoidal 2-category \(\mathcal{C}\) to those of the underlying 2-bialgebra \(\mathcal{G}\), and (ii) allows one to directly construct the action of the underlying TFT given the 2-categorical data \(\mathcal{C}\) of a gapped 4d topological phase. Aside from applications to condensed matter theory, we expect that these higher bialgebraic structures would see fruitful applications in high-energy physics, higher-dimensional conformal field theory and integrable systems. _Remark 6.1_.: In fact, we can make an even bolder and more refined conjecture. Suppose \(\mathcal{C}=Z_{1}(\Sigma\mathcal{B})\) is the Drinfel'd centre of \(\Sigma\mathcal{B}\), where \(\mathcal{B}\) is any braided fusion category and \(\Sigma\) is the condensation functor [25, 24]. Let \(H\) denote the 1-Hopf algebra corresponding to the usual Tannakian duality \[\mathcal{B}\simeq\text{Rep}^{\tau}(H),\] then: \(\mathcal{C}=Z_{1}(\Sigma\mathcal{B})\) is braided equivalent to \(2\text{Rep}^{\tau}(D(BH)^{\omega})\), where \(D(BH)^{\omega}\) is a (possibly twisted) skeletal 2-quantum double constructed out of \(H\) in a manner similar to Section 2.2.1. Such a correspondence would provide a purely (higher-)algebraic interpretation of the condensation functor \(\Sigma\). 
It is interesting to note that both of the graded components of \(D(B\mathbb{Z}_{2})\) contribute to determine, at each level, the structures of the 2-representation 2-category -- the objects (point excitations) are not solely determined by the degree-0 piece \(D(G)_{0}\) of the 2-Drinfel'd double, for instance. The underlying graded duality structure of the 2-Drinfel'd double plays a significant role in the categorified Tannakian duality. Indeed, the self-duality of the Drinfel'd double seems to be very closely related to the Morita self-duality of the Drinfel'd centre [76] in general, hence it may be possible to study Morita theory for 2-categories [77] by studying (higher-)module theory of 2-bialgebras. Aside from purely categorical endeavours, our results also pave the way towards the construction of a more complete and refined 4-dimensional topological invariant [16, 78, 29]. It may guide us in the exploration of more intricate higher structures in tensor 2-categories, such as pivotality [29], ribbon structures, or even modularity. We discuss some open questions which we find very interesting and important to tackle in the future. Higher-ribbon structures and modular tensor 2-categories. As mentioned at the end of Section 5.2, the fermionic order \(\mathscr{T}\), which hosts the \(w_{2}w_{3}\) gravitational anomaly, still eludes us in terms of the above 2-algebraic treatment. It is expected that the subtlety lies in the higher modular data of the 2-category \(\mathscr{T}\), which calls for the notion of a _ribbon_ 2-bialgebra. Recall that a modular category is a braided tensor category equipped with a ribbon twist morphism \(\theta_{X}:X\to X\) on each object \(X\), satisfying certain non-degeneracy conditions for its modular \(S\)- and \(T\)-matrices. These modular matrices can be constructed from the twist and braiding operations \(\theta,b\) through the Verlinde formula. It is known that if \((H,\Delta,S,R,\nu)\) is a _ribbon Hopf algebra_, in the sense that the central element \(\mathfrak{v}=\mathfrak{u}S(\mathfrak{u})\in H\), with \(\mathfrak{u}=\cdot\,(S\otimes 1)(R^{T})\in H\), admits a square root \(\nu\in H\), then its representation category \(\operatorname{Rep}(H)\) is a ribbon category, whose twist morphism \(\theta_{V}:v\mapsto\nu\cdot v\) is just given by the action of \(\nu\). Now given the theory of quasitriangular 2-Hopf algebras \((\mathcal{G},\Delta,S,\mathcal{T},\mathcal{R})\), one may seek to equip it with a _higher ribbon element_ \(\nu=(\nu_{-1},\nu_{0})\in\mathcal{G}\), analogous to the 1-Hopf algebra case. It would then be reasonable to expect the existence of twist 1- and 2-morphisms \[\theta_{V}:V\mapsto\nu_{0}\cdot V,\qquad\theta_{i}:i\mapsto\nu_{-1}\cdot i\] to play the role of the ribbon structure for the 2-representation 2-category \(2\mathrm{Rep}^{\tau}(\mathcal{G})\), making it into a ribbon 2-category. We conjecture that the corresponding higher modular data would be able to distinguish the spin-\(\mathbb{Z}_{2}\) gauge theory \(\mathscr{S}\) from the \(w_{2}w_{3}\) gravitational anomaly \(\mathscr{T}\). We shall leave this for future work. Lattice realizations; 2-groupoid algebras and the 4-dimensional tube algebra. The main result of this paper properly pins down the continuum field theory description of the given 4d gapped topological phase. We have said nothing, however, about how one may UV complete and construct a lattice realization (or a class thereof). 
To do so, one must construct the lattice Hamiltonian and cast the excitations encoded in the field theory as (extended) local symmetric operators on the lattice. We know how this is done in 3 dimensions. The 3d lattice theory is given by _string-net condensation_ [70], labeled by an underlying structure group \(G_{0}\). Its Hilbert space is equipped with a "gluing operation", which combines two local string-nets along the lattice edges. This forms the **Ocneanu tube algebra**, which can be modeled by representations of the groupoid algebra \(\mathbb{C}[\Lambda G_{0}]\) of the _inertia groupoid_ \(\Lambda G_{0}\) of \(G_{0}\). However, it is also known that, for finite groups \(G_{0}\), the (twisted) Drinfel'd double \(D(G_{0})\) is isomorphic8 to the (twisted) groupoid algebra \(\mathbb{C}[\Lambda G_{0}]\) [69], and as such the 3d tube algebra is given precisely by the representation theory of \(D(G_{0})\) [3]. Footnote 8: For the twists, the group 3-cocycle on \(D(G_{0})\) is sent to a groupoid 2-cocycle on \(\Lambda G_{0}\) via transgression. As always, we wish to categorify the above results. The construction of a 4d analogue of the tube algebra has in fact already appeared in [27], where _2-groupoid algebras_ and their representation theory were studied. This allows one to construct the lattice Hamiltonian, as well as the local symmetric operators, of a given 4d topological phase; see also [32] from the categorical perspective. The missing link here is of course the correspondence between the 2-Drinfel'd double \(D(G)\) and the 2-groupoid algebra \(\mathbb{C}[\Lambda G]\) of the inertia 2-groupoid \(\Lambda G\). Evidence to suggest such a correspondence stems from the fact that both 2-representations of 2-groupoid algebras and those of the 2-Drinfel'd double describe gapped topological phases in 4d. Moreover, 2-groupoid algebras admit twists by 2-group 4-cocycles [27] just as the 2-Drinfel'd double does, as we have demonstrated in Section 5. If this correspondence can be made explicit, then we can construct lattice realizations of 4d field theories directly from the underlying 2-Drinfel'd double symmetry and its 2-representations.
2303.11914
Stronger EPR-steering criterion based on inferred Schrodinger-Robertson uncertainty relation
Steering is one of the three in-equivalent forms of nonlocal correlations intermediate between Bell nonlocality and entanglement. Schrodinger-Robertson uncertainty relation (SRUR), has been widely used to detect entanglement and steering. However, the steering criterion in earlier works, based on SRUR, did not involve complete inferred-variance uncertainty relation. In this paper, by considering the local hidden state model and Reid formalism, we derive a complete inferred-variance EPR-steering criterion based on SRUR in the bipartite scenario. Furthermore, we check the effectiveness of our steering criterion with discrete variable bipartite two-qubit and two-qutrit isotropic states.
Laxmi Prasad Naik, Rakesh Mohan Das, Prasanta K. Panigrahi
2023-03-21T15:06:11Z
http://arxiv.org/abs/2303.11914v3
# Stronger EPR-steering criterion based on Schrodinger-Robertson uncertainty relation ###### Abstract Steering is one of the three in-equivalent forms of nonlocal correlations, intermediate between Bell nonlocality and entanglement. The Schrodinger-Robertson uncertainty relation (SRUR) has been widely used to detect entanglement and steering. However, the steering criteria in earlier works based on the SRUR did not involve the complete inferred-variance uncertainty relation. In this paper, by considering the local hidden state model and Reid's formalism in the SRUR, we derive a complete inferred-variance steering criterion for bipartite systems in one-sided, two-measurement and two-outcome scenarios. Furthermore, our steering criterion, when applied to the bipartite discrete variable case, provides a stricter range for two-qubit Werner states. _Keywords_: EPR-steering, Schrodinger-Robertson uncertainty relation ## 1 Introduction EPR-steering is a nonlocal correlation intermediate between Bell nonlocality and quantum entanglement [1, 2, 3]. It is the ability to remotely affect or _steer_ a shared entangled quantum state by a single party's (say Alice's) arbitrary choice of local measurements, without violating the no-signalling principle [4]. Wiseman _et al._ gave an operational definition of steering as a task between Alice and Bob: Alice prepares an entangled state and sends one part to Bob; Bob does not trust Alice, and by performing local measurements she has to convince him that the state is entangled [3]. If Bob's steered quantum state cannot be explained by a local hidden state (LHS) model, then the state is said to exhibit steering. In contrast to Bell nonlocality and entanglement, steering demonstrates asymmetric behaviour, in which one party can steer the other but the converse is not always possible [5, 6, 7, 8]. Moreover, not every entangled state exhibits steering, and not every steerable state violates a Bell inequality [3]. EPR-steering has a wide range of applications in many quantum information processing tasks, e.g. in one-sided device-independent quantum key distribution [9, 10, 11], quantum networking tasks [12, 13, 14], subchannel discrimination [15, 16, 17], quantum secret sharing [18, 19], quantum teleportation [20, 21], randomness certification [22, 23, 24], and random number generation [25], to mention a few. Recently it has also been demonstrated to be a useful resource in noisy and lossy quantum network systems [26, 27]. Effective detection of the steering exhibited by quantum states is crucial to realise applications of steerable quantum states. Uncertainty relations (URs) can be experimentally verified because they involve measurements of observables. Assuming that the description of quantum mechanics is correct and that EPR's condition of locality and sufficient condition of reality are satisfied, URs become an important tool for determining steering criteria. Many criteria in this direction have been proposed, e.g., using the Heisenberg uncertainty relation (HUR) [28] and later a broader class of uncertainty relations [29, 30, 31, 32]. Additionally, under different measurement scenarios, more optimal steering criteria [33, 34] were obtained using fine-grained uncertainty relations [35, 36] and sum uncertainty relations [34, 37]. The first criterion for an experimental demonstration of steering, based on inferred variances, was proposed by Reid [28]. Recently, a steering criterion using the Schrodinger-Robertson uncertainty relation (SRUR) was also proposed. 
However, these earlier works did not involve the inferred-means in the lower bound [38]. A recent work involves inferred-variance based product and sum uncertainty relations in the presence of entanglement [39]. We aim to derive a steering criterion based on the SRUR involving inferred-means and inferred-variances for one-sided, two-measurement and two-outcome scenarios, following the analysis in [40]. The steering criterion's efficiency is evaluated for two-qubit Werner states [41], yielding a steering inequality that was previously obtained in one-sided three-measurement as well as two-sided two-measurement settings [36, 42, 43, 44]. In the next section, we briefly discuss steering and the EPR-Reid criterion. In Sec. 3, we derive a steering criterion based on the SRUR. We check the efficiency of the steering criterion in Sec. 4 by applying it to two-qubit Werner states, for which a steering inequality is obtained, and discuss the improvement of our result compared to other measurement scenarios. The paper ends with a conclusion and an appendix. ## 2 Preliminaries ### EPR-Steering Consider a general unfactorizable bipartite pure state shared by two distant parties, Alice and Bob, \[|\Psi\rangle=\sum_{n}c_{n}|u_{n}\rangle|v_{n}\rangle=\sum_{n}d_{n}|\psi_{n}\rangle|\phi_{n}\rangle \tag{1}\] where \(\{|u_{n}\rangle\}\), \(\{|\psi_{n}\rangle\}\) and \(\{|v_{n}\rangle\}\), \(\{|\phi_{n}\rangle\}\) denote two different orthonormal bases in Alice's and Bob's systems, respectively. This property of inseparability is called entanglement, which is one of the most useful resources in quantum information processing and has been studied extensively in the literature [45, 46, 47, 48, 49]. In this scenario, if Alice chooses to measure in the \(\{|u_{n}\rangle\}\) (\(\{|\psi_{n}\rangle\}\)) basis, then Bob's state will be projected onto the \(\{|v_{n}\rangle\}\) (\(\{|\phi_{n}\rangle\}\)) basis. The ability of Alice to influence (steer) Bob's state nonlocally was termed steering by Schrodinger [1, 2, 50]. Consider the following situation: Alice and Bob share an entangled quantum state, described by the density matrix \(\hat{\rho}\). The generalised local measurements of Alice and Bob are denoted by \(\hat{M}_{a|A}\) and \(\hat{M}_{b|B}\) (\(\hat{M}_{a(b)|A(B)}\geq 0\), \(\sum_{a(b)}\hat{M}_{a(b)|A(B)}=1\ \forall A(B)\)) respectively, where \(a\) and \(b\) denote the outcomes corresponding to the measurement operators \(\hat{M}_{a|A}\) and \(\hat{M}_{b|B}\), and \(A\) and \(B\) are Alice's and Bob's measurement settings, respectively. In a steering task, Bob does not trust Alice and wants to verify whether the shared state is entangled. So he asks Alice to perform a measurement \(\hat{M}_{a|A}\) and classically communicate its outcome \(a\) to him. The quantum probability of their joint measurement is given by \[P(a,b)=\mathrm{Tr}[\hat{\rho}(\hat{M}_{a|A}\otimes\hat{M}_{b|B})] \tag{2}\] where \(P(a,b)\) is the joint probability of obtaining outcomes \(a\) and \(b\). The state admits an LHS model if and only if, for all measurements \(\hat{M}_{a(b)|A(B)}\), Eq.(2) can be decomposed as \[P(a,b)=\sum_{\eta}p(\eta)P(a,\eta)P_{Q}(b,\eta) \tag{3}\] where \(\eta\) is a classical random variable with probability distribution \(p(\eta)\), satisfying \(p(\eta)\geq 0\) and \(\sum_{\eta}p(\eta)=1\), and \(P(a,\eta)\) is the joint probability distribution between \(\eta\) and the outcome \(a\). 
The quantum probability distribution \(P_{Q}\) between \(\eta\) and the outcome \(b\) is \(P_{Q}(b,\eta)=\mathrm{Tr}_{B}[\hat{M}_{b|B}\hat{\rho}_{\eta}]\) (\(Q\) stands for quantum), corresponding to a local hidden quantum state described by \(\hat{\rho}_{\eta}\), which is unaffected by Alice's local measurements. The use of an LHS model to explain steering is a direct implication of EPR's condition of locality. Any constraint derived assuming Eq.(3) is called an _EPR-steering criterion_; its violation demonstrates steering. The joint probability distribution, and hence the state, is said to admit an LHS model if Eq.(2) can be decomposed in the form of Eq.(3) for all choices of Alice's and Bob's measurements. In other words, Alice, for any choice of measurement \(\hat{M}_{a|A}\), steers Bob's system into the conditioned state \[\hat{\sigma}_{a|A}=\mathrm{Tr}_{A}[(\hat{M}_{a|A}\otimes I)\hat{\rho}] \tag{4}\] where \(\mathrm{Tr}_{A}\) is the partial trace over Alice's system; if Bob cannot express \(\hat{\sigma}_{a|A}\) in the form \[\hat{\sigma}_{a|A}=\int p(\eta)P(a,\eta)\hat{\rho}_{\eta}\,d\eta \tag{5}\] then the state is said to be steerable. However, Alice cannot affect Bob's unconditioned state \(\mathrm{Tr}_{A}[\hat{\rho}]\), because that would enable superluminal communication, in violation of the no-signalling principle [4]. Since Bob's state corresponds to a local hidden quantum state, uncertainty relations can be applied to Bob's measurements. This was first realized by Reid [28], who proposed an experimental EPR-steering criterion using the HUR in continuous variable systems. We therefore aim to derive an EPR-steering criterion using the SRUR, because it involves the covariance of the observables and thus captures stronger correlations. ### EPR-Reid criterion Reid proposed a modified version of EPR's sufficient condition of reality, which states that if, without in any way disturbing a system, we can predict with some specified uncertainty the value of a physical quantity, then there exists a stochastic element of physical reality which determines this physical quantity with at most that specified uncertainty; this is called _Reid's extension of EPR's sufficient condition of reality_. It is attributed to the intrinsic stochastic nature exhibited in the preparation and detection of quantum states [28, 40]. Consider two parties, Alice and Bob, sharing an entangled state. Alice performs a local measurement \(\hat{Y}\) and, from its outcomes, makes an estimate \(\hat{X}^{est}(\hat{Y})\) of the result of Bob's measurement \(\hat{X}\). The idea of estimation is introduced to incorporate EPR's sufficient condition of reality. Therefore the average inferred-variance of \(\hat{X}\) for an estimate \(\hat{X}^{est}(\hat{Y})\) is given as \[\Delta_{inf}^{2}\hat{X}=\langle(\hat{X}-\hat{X}^{est}(\hat{Y}))^{2}\rangle. \tag{6}\] Alice's estimate for Bob's measurement is given by \(\hat{X}^{est}(\hat{Y})=g\hat{Y}\), where the choice of \(g\) should be such that it gives the minimum error, i.e., \(g=\frac{\langle\hat{X}\hat{Y}\rangle}{\langle\hat{Y}^{2}\rangle}\) gives the optimal inferred-variance. 
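As a simple numerical illustration of this choice of \(g\) (our own toy example with synthetic, classically correlated data rather than the statistics of any quantum state), one can check that \(g=\langle\hat{X}\hat{Y}\rangle/\langle\hat{Y}^{2}\rangle\) indeed minimizes \(\langle(\hat{X}-g\hat{Y})^{2}\rangle\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy correlated data standing in for Bob's outcomes X and Alice's outcomes Y.
y = rng.normal(size=100_000)
x = 0.8 * y + 0.3 * rng.normal(size=100_000)

g_opt = np.mean(x * y) / np.mean(y * y)  # g = <XY>/<Y^2>

def inferred_variance(g: float) -> float:
    """Average inferred variance <(X - gY)^2> for the linear estimate gY."""
    return np.mean((x - g * y) ** 2)

# Scanning g around g_opt confirms the minimum sits at (approximately) g_opt.
gs = np.linspace(g_opt - 0.5, g_opt + 0.5, 101)
g_best = gs[np.argmin([inferred_variance(g) for g in gs])]
print(f"g_opt = {g_opt:.4f}, numerical minimum at g = {g_best:.4f}")
```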
Using EPR's condition of locality, Reid's extension of EPR's sufficient condition of reality, and the completeness of quantum mechanics, a limit on the product of inferred-variances based on the HUR for two noncommuting quadrature phase amplitude observables \(\hat{X}_{1}\) and \(\hat{X}_{2}\) on Bob's side is [28] \[\Delta_{inf}^{2}\hat{X}_{1}\Delta_{inf}^{2}\hat{X}_{2}\geq 1. \tag{7}\] This is known as the _EPR-Reid criterion_. A state will show steering if Eq.(7) is violated, which has also been verified experimentally [40]. ## 3 EPR-steering criterion using Schrodinger-Robertson uncertainty relation Our derivation of the EPR-steering criterion is based on the works [28, 40, 51]. Here, we use a different notation for the measurement outcomes. Consider the outcomes \(A\) and \(B\), corresponding to the observables \(\hat{A}\) and \(\hat{B}\), for Alice's and Bob's measurements respectively. Using the EPR-Reid criterion and the LHS model for \(A\) and \(B\), the inferred-variance is written as \[\Delta_{inf}^{2}B=\langle\big{(}B-B^{est}(A)\big{)}^{2}\rangle. \tag{8}\] The inferred-variance \(\Delta_{inf}^{2}B\) is minimized (optimized) when \(B^{est}(A)=\langle B\rangle_{A}\). The minimized inferred-variance \(\Delta_{min}^{2}B\) is then \[\Delta_{min}^{2}B = \langle(B-\langle B\rangle_{A})^{2}\rangle=\sum_{A,B}P(A,B)(B-\langle B\rangle_{A})^{2} \tag{9}\] \[= \sum_{A}P(A)\sum_{B}P(B|A)(B-\langle B\rangle_{A})^{2}\] \[= \sum_{A}P(A)\Delta^{2}(B|A),\] where \(\Delta^{2}(B|A)\), calculated from the conditional probability distribution \(P(B|A)\), is the conditional variance of Bob's measurement outcome \(B\) given the outcome \(A\) of Alice's measurement. So we have the following condition \[\Delta^{2}_{inf}B\geq\Delta^{2}_{min}B. \tag{10}\] Assuming the LHS model Eq.(3,5), the conditional probability distribution \(P(B|A)\) can be written as \[P(B|A) = \frac{P(A,B)}{P(A)}=\sum_{\eta}\frac{P(\eta)P(A|\eta)}{P(A)}P_{Q}(B|\eta) \tag{11}\] \[= \sum_{\eta}P(\eta|A)P_{Q}(B|\eta).\] Here, \(\eta\) is a classical random variable such that \(P(\eta)\geq 0\) and \(\sum_{\eta}P(\eta)=1\). Moreover, we can observe that the basic essence of adopting the LHS model is statistical independence of probabilities, which is one of the most important prescriptions in the local hidden variable (LHV) theory of Bell [52]. If \(P(u)\) is a classical probability distribution with a convex decomposition \(P(u)=\sum_{v}P(v)P(u|v)\), then the variance \(\Delta^{2}u\) corresponding to \(P(u)\) is bounded from below by the average of the conditional variances \(\Delta^{2}(u|v)\) over \(P(v)\), i.e. \(\Delta^{2}u\geq\sum_{v}P(v)\Delta^{2}(u|v)\). Therefore, applying this bound to Eq.(11), the variance of the conditional measurement outcome \(B|A\) satisfies \[\Delta^{2}(B|A)\geq\sum_{\eta}P(\eta|A)\Delta^{2}_{Q}(B|\eta) \tag{12}\] where the variance \(\Delta^{2}_{Q}(B|\eta)\) is calculated using the conditional quantum probability distribution \(P_{Q}(B|\eta)={\rm Tr}[\hat{B}\hat{\rho}_{\eta}]\). The average of the measurement operator \(\hat{B}\), specified by its outcome \(B\), is calculated with respect to the local hidden quantum state \(\hat{\rho}_{\eta}\). 
Therefore the bound for \(\Delta^{2}_{min}B\), using Eq.(9,12), is given by \[\Delta^{2}_{min}B \geq \sum_{A}P(A)\Delta^{2}(B|A) \tag{13}\] \[\geq \sum_{A}P(A)\sum_{\eta}P(\eta|A)\Delta^{2}_{Q}(B|\eta)\] \[\geq \sum_{A,\eta}P(A,\eta)\Delta^{2}_{Q}(B|\eta)\] \[\geq \sum_{\eta}P(\eta)\Delta^{2}_{Q}(B|\eta).\] Consider Bob's arbitrary local measurement operators \(\hat{B}_{1},\hat{B}_{2}\) with corresponding outcomes \(B_{1},B_{2}\). These operators satisfy the SRUR [53] \[\langle\Delta^{2}\hat{B}_{1}\rangle\langle\Delta^{2}\hat{B}_{2}\rangle \geq \frac{1}{4}|\langle[\hat{B}_{1},\hat{B}_{2}]\rangle|^{2}+ \tag{14}\] \[\frac{1}{4}(\langle\{\hat{B}_{1},\hat{B}_{2}\}\rangle-2\langle\hat{B}_{1}\rangle\langle\hat{B}_{2}\rangle)^{2}\] where \(\{\hat{B}_{1},\hat{B}_{2}\}\) is the anticommutator and \([\hat{B}_{1},\hat{B}_{2}]\) the commutator; \(\langle\Delta^{2}_{Q}\hat{B}_{i}\rangle_{\hat{\rho}}\) is the variance and \(\langle\hat{B}_{i}\rangle_{\hat{\rho}}\) the average calculated for a quantum state. In terms of Bob's outcomes, the above relation reads \[\langle\Delta^{2}_{Q}B_{1}\rangle\langle\Delta^{2}_{Q}B_{2}\rangle \geq \frac{1}{4}|\langle[B_{1},B_{2}]\rangle_{Q}|^{2}+ \tag{15}\] \[\frac{1}{4}(\langle\{B_{1},B_{2}\}\rangle_{Q}-2\langle B_{1}\rangle_{Q}\langle B_{2}\rangle_{Q})^{2}.\] For any two vectors \({\bf u}\) and \({\bf v}\) in a linear vector space, the Cauchy-Schwarz inequality is \[||{\bf u}||^{2}_{2}\,||{\bf v}||^{2}_{2}\geq|\langle{\bf u},{\bf v}\rangle|^{2} \tag{16}\] where \(||\cdot||_{2}\) is the L2 norm, \(\langle\cdot,\cdot\rangle\) the inner product, and \(|\cdot|\) the modulus in the linear vector space. Using Eq.(12,13), the vectors \({\bf u}\) and \({\bf v}\) can be defined as \[{\bf u}\equiv\{\sqrt{P(\eta_{1})}\Delta_{Q}(B_{1}|\eta_{1}),\sqrt{P(\eta_{2})}\Delta_{Q}(B_{1}|\eta_{2}),...\}\] \[{\bf v}\equiv\{\sqrt{P(\eta_{1})}\Delta_{Q}(B_{2}|\eta_{1}),\sqrt{P(\eta_{2})}\Delta_{Q}(B_{2}|\eta_{2}),...\}. \tag{17}\] From Eq.(13) and Eq.(17), we have \(\Delta^{2}_{min}B_{1}\geq||{\bf u}||^{2}_{2}\) and \(\Delta^{2}_{min}B_{2}\geq||{\bf v}||^{2}_{2}\). Hence, employing Eq.(13,14,16) for the two observables \(\hat{B}_{1}\) and \(\hat{B}_{2}\), we obtain a lower bound for the product of the minimum inferred-variances \(\Delta^{2}_{min}B_{1}\) and \(\Delta^{2}_{min}B_{2}\), \[\Delta^{2}_{min}B_{1}\Delta^{2}_{min}B_{2} \geq ||{\bf u}||^{2}_{2}\,||{\bf v}||^{2}_{2}\geq|\langle{\bf u},{\bf v}\rangle|^{2} \tag{18}\] \[\geq \sum_{\eta}P(\eta)\Delta^{2}_{Q}(B_{1}|\eta)\Delta^{2}_{Q}(B_{2}|\eta).\] Using Eq.(15), the above inequality can be written as \[\Delta^{2}_{min}B_{1}\Delta^{2}_{min}B_{2} \geq \frac{1}{4}\sum_{\eta}P(\eta)[|\langle B_{3}\rangle_{\eta}|^{2}+ \tag{19}\] \[(\langle\{B_{1},B_{2}\}\rangle_{\eta}-2\langle B_{1}\rangle_{\eta}\langle B_{2}\rangle_{\eta})^{2}]\] where we write \([\hat{B}_{1},\hat{B}_{2}]=i\hat{B}_{3}\), as is the case for the spin observables considered below, and \(\langle B_{i}\rangle_{\eta}\) is the mean with respect to the probability distribution \(P_{Q}(B_{i}|\eta)\). 
Using the convexity of \(|\alpha|^{2}\) for a random variable \(\alpha\), Jensen's inequality gives \(\sum_{\alpha}P(\alpha)|\alpha|^{2}\geq|\sum_{\alpha}P(\alpha)\alpha|^{2}\) for a given probability distribution \(P(\alpha)\). Hence the RHS of Eq.(19) can be written in terms of inferred-variances and inferred-averages as (see the Appendix for details) \[\Delta^{2}_{min}B_{1}\Delta^{2}_{min}B_{2} \geq \frac{1}{4}|\langle[B_{1},B_{2}]\rangle_{inf}|^{2}\] \[+\frac{1}{4}\left(\langle\{B_{1},B_{2}\}\rangle_{inf}-2\langle B_{1}\rangle_{inf}\langle B_{2}\rangle_{inf}\right)^{2}. \tag{20}\] Using the condition \(\Delta^{2}_{inf}B\geq\Delta^{2}_{min}B\) from Eq.(10) in the above inequality, the EPR-steering criterion based on the SRUR can be written as \[\Delta^{2}_{inf}B_{1}\Delta^{2}_{inf}B_{2} \geq \frac{1}{4}|\langle[B_{1},B_{2}]\rangle_{inf}|^{2}+\] \[\frac{1}{4}\left(\langle\{B_{1},B_{2}\}\rangle_{inf}-2\langle B_{1}\rangle_{inf}\langle B_{2}\rangle_{inf}\right)^{2}. \tag{21}\] ## 4 Results and Discussion We choose spin observables for Bob's measurements, \(\hat{B}_{1}=\hat{S}_{B_{x}}\) and \(\hat{B}_{2}=\hat{S}_{B_{y}}\) with corresponding outcomes \(S_{B_{x}}\) and \(S_{B_{y}}\), and check the effectiveness of our steering criterion Eq.(21). We obtain a steering inequality for the two-qubit Werner state in the one-sided, two-measurement and two-outcome scenario. For spin observables, Eq.(21) can be written as \[\Delta^{2}_{inf}S_{B_{x}}\Delta^{2}_{inf}S_{B_{y}}\geq\frac{1}{4}|\langle S_{B_{z}}\rangle_{inf}|^{2}+ \tag{22}\] \[\frac{1}{4}\left(\langle\{S_{B_{x}},S_{B_{y}}\}\rangle_{inf}-2\langle S_{B_{x}}\rangle_{inf}\langle S_{B_{y}}\rangle_{inf}\right)^{2}\] where \(\Delta^{2}_{inf}S_{B_{i}}=\langle(S_{B_{i}}-S^{est}_{B_{i}}(S_{A_{i}}))^{2}\rangle=\langle(S_{B_{i}}-g_{i}S_{A_{i}})^{2}\rangle\), \(g_{i}=\frac{\langle S_{A_{i}}S_{B_{i}}\rangle}{\langle S_{A_{i}}^{2}\rangle}\), \(i=x,y\), and \(\langle S_{B_{j}}\rangle_{inf}=\sum_{S_{A_{j}}}P(S_{A_{j}})\langle S_{B_{j}}\rangle_{S_{A_{j}}}\), \(j=x,y,z\). Calculation of the inferred-variances and inferred-means for the two-qubit Werner state \(\hat{\rho}_{W}=\eta|\Psi^{-}\rangle\langle\Psi^{-}|+\frac{1-\eta}{4}\hat{I}\), where \(|\Psi^{-}\rangle=\frac{1}{\sqrt{2}}(|01\rangle-|10\rangle)\), gives \(\Delta^{2}_{inf}S_{B_{x}}=\Delta^{2}_{inf}S_{B_{y}}=\frac{1}{4}(1-\eta^{2})\), \(\langle\{S_{B_{x}},S_{B_{y}}\}\rangle_{inf}=0\), and \(\langle S_{B_{x}}\rangle_{inf}=\langle S_{B_{y}}\rangle_{inf}=\langle S_{B_{z}}\rangle_{inf}=\frac{\eta}{2}\). Using these values in Eq.(22), we obtain the condition \[\eta\leq\frac{1}{\sqrt{3}}. \tag{23}\] Violation of Eq.(23), i.e. \(\eta>\frac{1}{\sqrt{3}}\), detects steerable two-qubit Werner states for \(\eta\in[0,1]\). The Werner state was shown to be steerable in theory, with an infinite number of measurements, for \(\eta>\frac{1}{2}\) [29]. This is not achievable in experiments (i.e. for a finite number of measurements). In [30], it was shown that two-qubit Werner states violate a steering inequality for \(\eta>\frac{1}{\sqrt{3}}\) with three measurement settings [42]. However, we obtain this result in a one-sided, two-measurement and two-outcome scenario. 
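The threshold in Eq.(23) can also be verified numerically by substituting the closed-form Werner-state quantities quoted above into the criterion Eq.(22); a minimal sketch using only those values:

```python
import numpy as np

# Closed-form Werner-state quantities quoted in the text:
#   Delta^2_inf S_Bx = Delta^2_inf S_By = (1 - eta^2)/4,
#   <S_Bx>_inf = <S_By>_inf = <S_Bz>_inf = eta/2,
#   <{S_Bx, S_By}>_inf = 0.
def lhs(eta):
    return ((1.0 - eta**2) / 4.0) ** 2          # product of inferred variances

def rhs(eta):
    commutator = 0.25 * (eta / 2.0) ** 2        # (1/4) |<S_Bz>_inf|^2
    anticomm = 0.25 * (0.0 - 2.0 * (eta / 2.0) ** 2) ** 2
    return commutator + anticomm

etas = np.linspace(0.0, 1.0, 1_000_001)
threshold = etas[lhs(etas) < rhs(etas)].min()
print(f"criterion violated for eta > {threshold:.6f}")  # ~0.577350 = 1/sqrt(3)
```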
## 5 Conclusion In summary, utilizing the LHS model and Reid's criterion in the SRUR, we derive an EPR-steering criterion for bipartite systems [1, 2, 28] in one-sided, two-measurement and two-outcome scenarios. Interestingly, this steering condition yields a stricter bound compared to earlier works [31, 34, 38, 40, 43]. Moreover, in the context of discrete variable systems, the steering criterion gives a stronger violation, \(\eta>\frac{1}{\sqrt{3}}\), for two-qubit Werner states, which was earlier obtained only in one-sided three-measurement and four-measurement cases. The steering criterion obtained can also be utilized to derive stricter bounds for higher dimensional states in the discrete variable case. It is also tempting to look for stricter bounds on steering in continuous variable systems. Furthermore, this criterion can be implemented using measurements corresponding to positive operator-valued measures (POVMs). One of the earlier works shows the correspondence between joint measurability and steering [54]. Therefore, it would be interesting to look for the correspondence between steering and uncertainty relations involving not jointly measurable POVMs. ## 6 Acknowledgments We would like to thank Mr Abhinash Kumar Roy, Mr Prabhuda Roy, Mr Sumit Mukherjee, Mr Arman and Mr Rajiuddin for numerous enlightening discussions. PKP acknowledges the support from DST, India, through Grant No. DST/ICPS/QuST/Theme-1/2019/2020-21/01. ## Appendix Consider the inequality developed in Eq.(18). For Bob's two measurement outcomes \(B_{1}\) and \(B_{2}\), corresponding to the operators \(\hat{B}_{1}\) and \(\hat{B}_{2}\), we have the following relation \[\Delta^{2}_{min}B_{1}\Delta^{2}_{min}B_{2}\geq\frac{1}{4}\sum_{\eta}P(\eta)|\langle B_{3}\rangle_{\eta}|^{2}+ \tag{A.1}\] \[\frac{1}{4}\sum_{\eta}P(\eta)\left(\langle\{B_{1},B_{2}\}\rangle_{\eta}-2\langle B_{1}\rangle_{\eta}\langle B_{2}\rangle_{\eta}\right)^{2}.\] The RHS of the above equation is given by \[\sum_{\eta}P(\eta)\left[|\langle B_{3}\rangle_{\eta}|^{2}+\left(\langle\{B_{1},B_{2}\}\rangle_{\eta}-2\langle B_{1}\rangle_{\eta}\langle B_{2}\rangle_{\eta}\right)^{2}\right]=\] \[\sum_{\eta}P(\eta)\left[|\langle B_{3}\rangle_{\eta}|^{2}+\langle\{B_{1},B_{2}\}\rangle_{\eta}^{2}+4\langle B_{1}\rangle_{\eta}^{2}\langle B_{2}\rangle_{\eta}^{2}\right. \tag{A.2}\] \[\left.-4\langle\{B_{1},B_{2}\}\rangle_{\eta}\langle B_{1}\rangle_{\eta}\langle B_{2}\rangle_{\eta}\right].\] For all the measurement outcomes \(A_{i}\), corresponding to the operators \(\hat{A}_{i}\) that exhaust Alice's measurement settings, the RHS of Eq.(A.2) is as follows \[\sum_{A_{3},\eta}P(A_{3},\eta)|\langle B_{3}\rangle_{\eta}|^{2}+\sum_{A_{1}A_{2},\eta}P(A_{1}A_{2},\eta)\langle\{B_{1},B_{2}\}\rangle_{\eta}^{2}\] \[+4\sum_{A_{1},\eta}P(A_{1},\eta)\langle B_{1}\rangle_{\eta}^{2}\sum_{A_{2},\eta}P(A_{2},\eta)\langle B_{2}\rangle_{\eta}^{2}\] \[-4\sum_{A_{1}A_{2},\eta}P(A_{1}A_{2},\eta)\langle\{B_{1},B_{2}\}\rangle_{\eta} \tag{A.3}\] \[\sum_{A_{2},\eta}P(A_{2},\eta)\langle B_{2}\rangle_{\eta}\sum_{A_{1},\eta}P(A_{1},\eta)\langle B_{1}\rangle_{\eta}\] where \(P(A_{i},\eta)\), for \(i=1,2\), is the joint probability distribution of the outcome \(A_{i}\) and the classical random variable \(\eta\); \(P(A_{i}A_{j},\eta)\), for \(i,j=1,2\), is the joint probability distribution between \(\eta\) and the joint outcomes \(A_{1}A_{2}\) of the joint measurement observables \(\hat{A}_{1}\hat{A}_{2}\); and \(P(A_{i})\), for \(i=1,2,3\), is the probability distribution of the outcome \(A_{i}\) corresponding to the measurement observable \(\hat{A}_{i}\). \(|u|^{2}\) and \(u^{2}\) are convex functions of a real variable \(u\). Hence, by Jensen's inequality, \(\sum_{u}P(u)|u|^{2}\geq|\sum_{u}P(u)u|^{2}\). 
Therefore, Eq.(A.3) is bounded from below, i.e., \[\geq\sum_{A_{3}}P(A_{3})\left|\sum_{\eta}P(\eta|A_{3})\langle B_{3}\rangle_{\eta}\right|^{2}+\] (A.4) \[\sum_{A_{1}A_{2}}P(A_{1}A_{2})\left(\sum_{\eta}P(\eta|A_{1}A_{2})\langle\{B_{1},B_{2}\}\rangle_{\eta}\right)^{2}+\] \[4\sum_{A_{1}}P(A_{1})\left(\sum_{\eta}P(\eta|A_{1})\langle B_{1}\rangle_{\eta}\right)^{2}\] \[\sum_{A_{2}}P(A_{2})\left(\sum_{\eta}P(\eta|A_{2})\langle B_{2}\rangle_{\eta}\right)^{2}-\] \[4\sum_{A_{1}}P(A_{1})\left(\sum_{\eta}P(\eta|A_{1})\langle B_{1}\rangle_{\eta}\right)\] \[\sum_{A_{2}}P(A_{2})\left(\sum_{\eta}P(\eta|A_{2})\langle B_{2}\rangle_{\eta}\right)\] \[\sum_{A_{1},A_{2}}P(A_{1}A_{2})\left(\sum_{\eta}P(\eta|A_{1}A_{2})\langle\{B_{1},B_{2}\}\rangle_{\eta}\right).\] Here, \(P(\eta|A_{i})\), with \(i=1,2,3\), is the conditional probability distribution of \(\eta\) given the measurement outcome \(A_{i}\) corresponding to the measurement operator \(\hat{A}_{i}\), and \(P(\eta|A_{1}A_{2})\) is that given the measurement outcomes \(A_{1}A_{2}\) of the joint measurement operators \(\hat{A}_{1}\hat{A}_{2}\). Summing over the probabilities \(P(\eta)\) for all values of \(\eta\), we obtain the following expression, independent of \(\eta\): \[=\sum_{A_{3}}P(A_{3})|\langle B_{3}\rangle_{A_{3}}|^{2}+\sum_{A_{1}A_{2}}P(A_{1}A_{2})\langle\{B_{1},B_{2}\}\rangle^{2}_{A_{1}A_{2}}\] \[+4\sum_{A_{1}}P(A_{1})\langle B_{1}\rangle^{2}_{A_{1}}\sum_{A_{2}}P(A_{2})\langle B_{2}\rangle^{2}_{A_{2}}\] \[-4\sum_{A_{1}A_{2}}P(A_{1}A_{2})\langle\{B_{1},B_{2}\}\rangle_{A_{1}A_{2}}\sum_{A_{2}}P(A_{2})\langle B_{2}\rangle_{A_{2}}\] \[\sum_{A_{1}}P(A_{1})\langle B_{1}\rangle_{A_{1}}.\] (A.5) The averages in (A.5) are called inferred-averages because they are calculated under conditional probability distributions. Hence, (A.5) can be written as \[|\langle B_{3}\rangle_{inf}|^{2}+\langle\{B_{1},B_{2}\}\rangle^{2}_{inf}+4\langle B_{1}\rangle^{2}_{inf}\langle B_{2}\rangle^{2}_{inf}\] (A.6) \[-4\langle\{B_{1},B_{2}\}\rangle_{inf}\langle B_{1}\rangle_{inf}\langle B_{2}\rangle_{inf}\] \[=|\langle B_{3}\rangle_{inf}|^{2}+\left(\langle\{B_{1},B_{2}\}\rangle_{inf}-2\langle B_{1}\rangle_{inf}\langle B_{2}\rangle_{inf}\right)^{2}.\] Using Eq.(A.1), we have the following bound for the product of the minimum inferred-variances: \[\Delta^{2}_{min}B_{1}\Delta^{2}_{min}B_{2} \geq\frac{1}{4}|\langle[B_{1},B_{2}]\rangle_{inf}|^{2}+\] (A.7) \[\frac{1}{4}\left(\langle\{B_{1},B_{2}\}\rangle_{inf}-2\langle B_{1}\rangle_{inf}\langle B_{2}\rangle_{inf}\right)^{2}.\]
2302.07287
Forward Pass: On the Security Implications of Email Forwarding Mechanism and Policy
The critical role played by email has led to a range of extension protocols (e.g., SPF, DKIM, DMARC) designed to protect against the spoofing of email sender domains. These protocols are complex as is, but are further complicated by automated email forwarding -- used by individual users to manage multiple accounts and by mailing lists to redistribute messages. In this paper, we explore how such email forwarding and its implementations can break the implicit assumptions in widely deployed anti-spoofing protocols. Using large-scale empirical measurements of 20 email forwarding services (16 leading email providers and four popular mailing list services), we identify a range of security issues rooted in forwarding behavior and show how they can be combined to reliably evade existing anti-spoofing controls. We further show how these issues allow attackers to not only deliver spoofed email messages to prominent email providers (e.g., Gmail, Microsoft Outlook, and Zoho), but also reliably spoof email on behalf of tens of thousands of popular domains including sensitive domains used by organizations in government (e.g., state.gov), finance (e.g., transunion.com), law (e.g., perkinscoie.com) and news (e.g., washingtonpost.com) among others.
Enze Liu, Gautam Akiwate, Mattijs Jonker, Ariana Mirian, Grant Ho, Geoffrey M. Voelker, Stefan Savage
2023-02-14T19:03:55Z
http://arxiv.org/abs/2302.07287v2
# Forward Pass: On the Security Implications of Email Forwarding Mechanism and Policy ###### Abstract The critical role played by email has led to a range of extension protocols (e.g., SPF, DKIM, DMARC) designed to protect against the spoofing of email sender domains. These protocols are complex as is, but are further complicated by automated email forwarding -- used by individual users to manage multiple accounts and by mailing lists to redistribute messages. In this paper, we explore how such email forwarding and its implementations can break the implicit assumptions in widely deployed anti-spoofing protocols. Using large-scale empirical measurements of 20 email forwarding services (16 leading email providers and four popular mailing list services), we identify a range of security issues rooted in forwarding behavior and show how they can be combined to reliably evade existing anti-spoofing controls. We further show how these issues allow attackers to not only deliver spoofed email messages to prominent email providers (e.g., Gmail, Microsoft Outlook, and Zoho), but also reliably spoof email on behalf of tens of thousands of popular domains including sensitive domains used by organizations in government (e.g., state.gov), finance (e.g., transunion.com), law (e.g., perkinscoie.com) and news (e.g., washingtonpost.com) among others. ## 1 Introduction Email has long been a uniquely popular medium for social engineering attacks.1 While it is widely used for both unsolicited business correspondence as well as person-to-person communications, email provides no intrinsic integrity guarantees. In particular, the baseline SMTP protocol provides no mechanism to establish if the purported sender of an email message (e.g., From: [email protected]) is in fact genuine. Footnote 1: In the 2021 Verizon Data Breach Investigation Report, phishing is implicated in 36% of the more than 4,000 data breaches investigated; and email-based attacks, including Business Email Compromise (BEC), completely dominate the social engineering attack vector [1]. To help address this issue, starting in the early 2000's, the email operations community introduced multiple anti-spoofing protocols, including the Sender Policy Framework (SPF) [2], DomainKeys Identified Mail (DKIM) [3] and Domain-based Message Authentication Reporting and Conformance (DMARC) [4], each designed to tighten controls on which parties can successfully deliver mail purporting to originate from particular domain names. However, these protocols had the disadvantage of being both post-hoc (needing to support existing email deployments and conventions) and piecemeal (each addressing slightly different threats in slightly different ways). As a result, the composition of these protocols is complex and hard to reason about, leading to a structure that Chen et al. recently demonstrated can enable a range of evasion attacks [5]. In this paper, we explore the unique aspects of this problem created as a result of _email forwarding_, which is commonly used by both individuals (i.e., to aggregate mail from multiple accounts) and organizations (i.e., for mailing list distribution). While clearly useful, forwarding introduces a range of new interaction complexities. First, forwarding involves three parties instead of two (the sender, the forwarder, and the receiver), where the "authenticity" of an email message is commonly determined by the party with the weakest security settings. 
Second, the intrinsic nature of email forwarding is to transparently send an existing message to a new address "on behalf" of its original recipient -- a goal very much at odds with the anti-spoofing function of protocols such as SPF and DMARC. For this reason, forwarded email messages can receive special treatment based on various assumptions about how forwarding is used in practice. Finally, there is no single standard implementation of email forwarding. Different providers make different choices and the email ecosystem is forced to accommodate them. Unfortunately, some problematic implementation choices (e.g., permitting "open forwarding") incur no security impact on the implementing party but can jeopardize the security of downstream recipients. This inversion of incentives and capabilities creates additional challenges to mitigating forwarding vulnerabilities. To characterize the nature of these issues, we conduct a large-scale empirical measurement study to infer and characterize the mail forwarding behaviors of 16 leading email providers and four popular mailing list services. From these results, we identify a range of implicit assumptions and vulnerable features in the configuration of senders, receivers, and forwarders. Using a combination of these factors, we then demonstrate a series of distinct evasion attacks that bypass existing anti-spoofing protocols and allow the successful delivery of email with spoofed sender addresses (e.g., From: [email protected]). These attacks affect both leading online email service providers (e.g., Gmail, Microsoft Outlook, iCloud, and Zoho) and mailing list providers/software (e.g., Google Groups and Gaggle). Moreover, some of these issues have extremely broad impact -- affecting the integrity of email sent from tens of thousands of domains, including those representing organizations in the US government (spanning the majority of US cabinet domains, such as state.gov and doe.gov, as well as the domains of security agencies such as odni.gov, cisa.gov, and secretservice.gov), financial services (e.g., transunion.com, mastercard.com, and discover.com), news (e.g., washingtonpost.com, latimes.com, apnews.com, and afp.com), commerce (e.g., unilever.com, dow.com), and law (e.g., perkinscoie.com). Finally, in addition to disclosing these issues to their respective providers, we discuss the complexities involved in identifying, mitigating, and fixing such problems going forward. ## 2 Background In this section, we describe the anatomy of a simple email transmission and the protocols used to authenticate such an email. We also present a high-level overview of how forwarding modifies the email delivery flow as a basis for a detailed description of different forwarding approaches and implementations in Section 3. Finally, we briefly survey related work on email security, particularly those whose insights we have built upon. ### _Simple Mail Transfer Protocol_ The Simple Mail Transfer Protocol (SMTP) governs the addressing and delivery of Internet email [6]. Designed to mimic physical mail, SMTP specifies two distinct sets of headers that declare the sender and recipient(s) of an email message. An outer set of headers, the _SMTP Envelope Headers_ (MAIL FROM and RCPT TO), tell email servers how to route and deliver email. In particular, the RCPT TO header identifies the message's recipient and the MAIL FROM header identifies where to send replies and bounce messages. An inner set of headers, the _Message Headers_ (FROM and TO), are contained in the body of the SMTP message [7]. These correspond to the human-readable names and addresses set by email clients when the sending user creates an email message. These headers are strictly intended for human user-interface purposes (i.e., for populating the "To:" and "From:" fields in email clients) and they are not used for email routing. Figure 1 illustrates an example message with both sets of headers.

Fig. 1: Example SMTP headers in a transmission (inspired by Figure 3 in Chen et al. [5]).

Note that, although the addresses in the Envelope and Message headers frequently match (as they do in our example), they are not required to do so and there are both benign (e.g., email forwarding) and malicious (e.g., phishing) reasons for producing mismatched headers (e.g., where the MAIL FROM address does not match the FROM address).
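To make the distinction concrete, the following minimal Python sketch (with hypothetical addresses, assuming a local test SMTP server listening on port 1025) shows that the envelope sender handed to the SMTP client is set independently of the message's FROM header:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical addresses; point the client at a local test SMTP server
# (assumed to listen on localhost:1025) rather than a real relay.
msg = EmailMessage()
msg["From"] = "[email protected]"      # Message header shown by email clients
msg["To"] = "[email protected]"
msg["Subject"] = "Envelope vs. message headers"
msg.set_content("The envelope MAIL FROM below is set independently.")

with smtplib.SMTP("localhost", 1025) as server:
    # The first two arguments of sendmail() become the SMTP envelope
    # (MAIL FROM / RCPT TO); nothing forces them to match the headers above.
    server.sendmail("[email protected]",
                    ["[email protected]"],
                    msg.as_string())
```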
### _Email Spoofing Protections_ The original SMTP design lacks authentication, which has made email spoofing attacks both possible and common. To mitigate these attacks, the community has proposed multiple mechanisms that focus on authenticating the _domain name_ used by the purported sender.2 Of these mechanisms, we focus on SPF [2], DKIM [3], and DMARC [4] given their wide adoption. Footnote 2: True per-sender authentication has long floundered due to the lack of effective mechanisms for binding user identities with cryptographic credentials at scale. The best known protocol in this space, PGP, has been riddled with security and usability issues and remains, at best, a niche protocol. In this paper, we focus exclusively on domain-level sender authentication. **Sender Policy Framework (SPF)** defines a list of IP addresses permitted to send email on behalf of a domain and a set of actions the recipient should take if they receive an email from an unauthorized IP address.3 Domain owners specify this policy by publishing it in a DNS TXT record. Upon receiving an email message, the receiver fetches the list of authorized sender IP addresses by querying the domain in the email's MAIL FROM header. The recipient then verifies if the IP address of the sending server is included in that list. If the verification fails, the receiver enforces the action (e.g., marking the email as spam) specified by the MAIL FROM domain in their SPF policy. Footnote 3: In addition to lists of raw IP addresses, SPF records can also "include" other SPF records by reference. **DomainKeys Identified Mail (DKIM)** cryptographically binds an email message with its sending email domain. With DKIM, the sender signs an email (or certain elements of an email) and attaches a digital signature via a DKIM-Signature message header for future verification. Receivers later retrieve the signer's public key (in the form of a DNS TXT record) from the domain specified in the DKIM-Signature header and authenticate an email's signature using that key. Sadly, neither SPF nor DKIM verifies that an email's purported sender (i.e., the FROM header) truly wrote and sent it [5]. For example, an attacker could bypass DKIM by spoofing an email's FROM header, but then sign and attach a DKIM signature that uses key pairs linked to their own domain (since DKIM does not compare the signature's domain against the FROM domain). Attacks that exploit this lack of FROM header authentication motivated the creation of DMARC.
**Domain Message Authentication, Reporting, and Conformance (DMARC)** combines and extends SPF and DKIM to mitigate these security issues. Under DMARC, an email's receiver performs an "alignment test": checking if the domain in the FROM header matches the domain name verified by either SPF (the domain in the MAIL FROM header) or DKIM (the domain in the DKIM-Signature header). By default ("relaxed mode"), the alignment test only requires that the registered domains in the headers match (i.e., not the fully qualified domain name (FQDN)). However, domain owners can specify that recipients should follow the strict mode of the alignment test, which requires the FROM header's FQDN to exactly match the domain authenticated by SPF or DKIM.4 Footnote 4: DMARC policy records are also stored as DNS TXT records. If the email passes either SPF or DKIM authentication, and the alignment test also passes, then DMARC considers the email authenticated. Otherwise, the receiver should implement the DMARC policy designated by the domain in the FROM header, selected from one of three options: None, Quarantine, or Reject. A policy of None specifies that an email should be delivered as normal (and thus is often used for monitoring purposes [8, 9]), and Reject specifies that the recipient mail server should drop the email without delivering it to the user. The Quarantine policy is not strictly defined (indicating only that the message should be treated "as suspicious") and allows each email provider considerable latitude in their implementation (e.g., setting a UI indicator or placing the email in a designated spam folder) [4].
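The following simplified sketch illustrates the logic of these checks (our own illustration, not a standards-compliant implementation: real SPF evaluation also resolves include:, a, mx and redirect mechanisms with qualifiers, and real DMARC alignment consults the Public Suffix List to determine registered domains):

```python
import ipaddress

def spf_allows(spf_record: str, sender_ip: str) -> bool:
    """Does sender_ip match an ip4:/ip6: mechanism in the SPF record?"""
    ip = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith(("ip4:", "ip6:")):
            net = ipaddress.ip_network(term.split(":", 1)[1], strict=False)
            if ip.version == net.version and ip in net:
                return True
    return False

def registered_domain(fqdn: str) -> str:
    # Crude two-label approximation of the registered domain.
    return ".".join(fqdn.lower().rstrip(".").split(".")[-2:])

def dmarc_aligned_relaxed(from_domain: str, authenticated_domain: str) -> bool:
    """DMARC relaxed alignment: the registered domains must match."""
    return registered_domain(from_domain) == registered_domain(authenticated_domain)

spf = "v=spf1 ip4:192.0.2.0/24 -all"                             # hypothetical record
print(spf_allows(spf, "192.0.2.10"))                             # True
print(dmarc_aligned_relaxed("news.example.com", "example.com"))  # True under relaxed mode
```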
### _Email Forwarding_ Forwarding is ubiquitous in the email ecosystem and is necessitated by the wide use of mailing lists [10], email filtering services such as ProofPoint [11], and autoforwarding employed by individual users for account aggregation [12], among others. As shown in Figure 2, forwarding alters the standard transmission flow of an email message. Instead of a direct transmission from the sender to the recipient, forwarding relays an email from the sender to an intermediate server and/or account, which then transmits a copy of the email to the final recipient. For simplicity we show a single forwarder in our example, but email can pass through multiple forwarders in common use cases.

Fig. 2: Email flow involving forwarding.

Like normal receivers in direct mail transfer, forwarders are responsible for performing standard authentication checks on each email they receive. However, after authenticating a message, a forwarder often makes _changes_ to the email headers and/or the email body based on the service it provides. The forwarder then sends the modified message to the final receiver (or next forwarder), which also performs authentication checks upon receiving the email. Finally, when a recipient receives and opens an email, the receiver's mail user agent (MUA) parses and displays the message to the user. ### _Related Work_ Email security has been a long-standing problem and a variety of prior research efforts have examined different aspects of it. One line of work focuses on understanding and defending against phishing attacks. This includes papers that design new tools for detecting both traditional phishing and sophisticated spearphishing attacks [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24], study the characteristics of real-world phishing attacks [25, 26, 27, 28], and examine the human aspect of such attacks [29, 30, 13, 31, 32, 33, 34, 35]. Another body of work investigates the security and deployment of email encryption mechanisms, such as PGP [36, 37, 38, 39, 40], DANE [41, 42], and STARTTLS [43, 44, 45, 46, 47]. A third research direction analyzes the security and deployment of anti-spoofing protocols such as SPF, DKIM and DMARC, with efforts from both industry and academia. The blogposts by Ullrich [48] and Haddouche [49] investigated approaches for bypassing DKIM and DMARC using malformed email messages. Other work has empirically measured the efficacy and deployment status of SPF, DKIM, and DMARC [50, 43, 44, 51, 52, 53, 54], as well as qualitatively characterized the factors that drive DMARC policy decisions [55]. The work most related to our own includes Chen et al.'s analysis of the security vulnerabilities introduced by protocol composition in modern email delivery [5], Shen et al.'s analysis [56] of modern sender spoofing attacks, and Wang et al.'s [57] analysis of email security under the experimental Authenticated Received Chain (ARC) protocol [58]. Of these, Chen et al. [5] do not consider forwarding at all and Wang et al. [57] focus on ARC and only consider one specific forwarding implementation as well (REM+MOD in Section 3), leaving many other vulnerable forwarding mechanisms and features unexplored. Shen et al.'s work [56] is the closest in that it also examines open forwarding, but because they only consider one forwarding mechanism (what we label as REM in Section 3), they do not identify the significant scope of this issue. We build on and generalize this work to show, among other attacks, that attackers are able to practically abuse open forwarding to spoof _any_ domain that includes the forwarding domain's SPF record in their own SPF record (a common practice when hosting email via Microsoft's Outlook service for example). In summary, our paper builds on the insights of prior efforts, but focuses exclusively and deeply on the particular security challenges introduced by the design and features of common forwarding mechanisms, and their complex interactions with existing email protocols. Through systematic measurements and analysis, we not only show that prior work largely underestimates the risks of open forwarding, but also reveal new attacks not discovered in prior work. ## 3 Email Forwarding in Practice Despite the ubiquity of email forwarding, there is no single and universally agreed-upon method for how email services should implement forwarding, resulting in several different approaches [59]. This heterogeneity stems in part from the difficulty of balancing compatibility with anti-spoofing protocols and the functional goals of many forwarding use cases: to transparently hide the intermediate forwarder and present the illusion that the recipient receives the email directly from its original sender. Absent a clear standard to depend on, we have used empirical measurements to _infer_ the forwarding behavior deployed by prominent email providers. For each service, we created multiple test accounts, used them to forward email to recipient accounts we controlled, and then analyzed the resulting email headers to identify the forwarding mechanism employed. (Section 4.1 has a more detailed description of our methodology.) We constructed a comprehensive and representative set of forwarding services by building on top of prior literature. In total, we studied 20 distinct, leading email forwarding services. We started by collecting all email providers studied in prior literature [5, 56, 50, 57]. 
We considered an email provider _out of scope_ if it meets any of these four criteria: (a) it is no longer active (e.g., excite.com); (b) it does not accept US customers (e.g., all Chinese providers studied in prior work); (c) it is not open to public registration (e.g., cock.li); or (d) it does not support forwarding (e.g., Protonmail). Using these criteria, we identified 23 email providers. Next, we excluded five email providers that prohibited bulk registration (which prevents us from running large-scale measurements), leaving us with 18 email providers. We then identified and removed duplicate providers that are operated by the same vendor under different names, leading to a total of 14 distinct email providers. Finally, we augmented this set of forwarding services by searching for popular email providers that supported forwarding and widely-used mailing list services (a common use case overlooked in prior literature), adding two additional email providers (Mail2World and GoDaddy) and four mailing lists. Our selection of email forwarding services covers a diverse set of countries and real-world use cases (personal and business email), and represents services used by the general public (used by over 46% of popular Alexa domains and government domains according to Liu et al. [60], ignoring email filtering services). We list all email providers and mailing lists in Table 1. Through our measurements, we confirmed the use of three common approaches that are generally known through public documentation, and identified a fourth uncommon implementation used by Microsoft Outlook (hence referred to as Outlook) and Freemail.hu (hence referred to as Freemail). As summarized in Figure 3, in each approach the forwarder modifies the sender and recipient fields in the SMTP Envelope and Message headers before relaying the email to its recipient. We now describe each of these approaches in detail using two running examples of common email forwarding use cases. In the first case, Alice has configured her university account ([email protected]) to forward to her primary personal account ([email protected]). When her university account receives email (e.g., from [email protected]), forwarding retransmits it to [email protected] in a way that makes it seem like the email comes directly from the sender ([email protected]), rather than from her university account. In the second case, Bob sends an email to a mailing list ([email protected]), which redistributes (forwards) the email to the list's members (e.g., [email protected]). **Plain Message-Forwarding (PMF):** Initially designed for the purpose of "source-routing" [61], PMF was one of the first forwarding mechanisms in wide use. Forwarders that use PMF only change the RCPT TO header from the forwarder's email account ([email protected]) to the final recipient's address ([email protected]), and leave all other fields untouched, as illustrated in Figure 3a. This approach achieves the goal of transparent forwarding. Changing the RCPT TO header will tell mail servers to send the email to the new address's account, and leaving the FROM header intact will cause the recipient's email client to display the initial sender ([email protected]), rather than presenting [email protected] as the sender. **MAIL FROM Equals FROM (MFEF):** Similar to PMF, MFEF (Figure 3b) aims to achieve transparent forwarding by preserving the original sender's identity in the FROM header. 
Unlike the other forwarding approaches described in this section, MFEF is a custom forwarding implementation that appears to be used only by Outlook and Freemail. An MFEF forwarder not only rewrites the RCPT TO header to the final recipient ([email protected]), but it _also_ sets the MAIL FROM header to be the same as the FROM header (rewriting it from its original value to [email protected] in our example).

Figure 3: Four prevalent approaches to email forwarding. Addresses in blue correspond to header values rewritten during the forwarding process.

Email forwarded using PMF and MFEF often breaks SPF validation because the MAIL FROM domain typically does not list the forwarding server's IP address in its SPF allowlist; in our example, hotcrp.com does not list the email servers for univ.edu in its SPF allowlist. This incompatibility has hindered the adoption of SPF and DMARC [55], leading to provider-specific defenses and new anti-spoofing protocols that we describe in Section 4.

**Remailing (REM):** Unlike PMF and MFEF, remailing (aka redistribution) works well with SPF because this approach alters the headers in a way that resembles the forwarder submitting a new message [62]. As shown in Figure 3c, the REM forwarder (univ.edu's mail server) first changes the RCPT TO header to specify the final recipient ([email protected]). Additionally, the forwarder rewrites the MAIL FROM header so that it corresponds to an address in the forwarder's own domain (e.g., [email protected]).5

Footnote 5: The Sender Rewriting Scheme (RFC 5231 [63]) provides a generic framework for how forwarders should rewrite the MAIL FROM header. However, email providers do not strictly follow this scheme, and the exact email address after rewriting varies by implementation.

However, even though REM interoperates with SPF, it can still fail DMARC authentication. Absent a valid DKIM header, email messages forwarded via REM will fail DMARC's alignment test because the FROM domain will not match the SPF-verified MAIL FROM domain.6 This incompatibility has led to the common adoption of weaker DMARC policies, such as None and Quarantine instead of Reject [55].

Footnote 6: Many domains still do not implement DKIM for outbound email, and even those that do can have their users' DKIM signatures invalidated by mailing list software that adds content to a user's post [64].

**Remailing with Modification (REM+MOD):** The final forwarding approach, remailing with modification [64], resolves these compatibility issues by sacrificing the goal of transparent forwarding. Email forwarded using REM+MOD will pass both SPF and DMARC. However, email forwarded with this approach will display the _forwarder_ as the email's sender to the final recipient (hiding the identity of the original sender). Because of this functional change, most major email platforms do not adopt this approach, and it is used primarily by mailing list services such as Gaggle.
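Taken together, the four approaches differ only in which sender-related fields they rewrite. The sketch below restates them as transformations on a message's envelope and header fields using the running example; the specific addresses are illustrative, and REM+MOD's FROM rewrite is detailed in the next paragraph.

```python
# Schematic restatement of Figure 3: each forwarding approach as a pure
# transformation of (MAIL FROM, FROM, RCPT TO). Addresses are illustrative.

FINAL_RCPT = "[email protected]"

def pmf(msg):       # Plain Message-Forwarding: only the recipient changes
    return {**msg, "rcpt_to": FINAL_RCPT}

def mfef(msg):      # MAIL FROM Equals FROM (Outlook/Freemail)
    return {**msg, "rcpt_to": FINAL_RCPT, "mail_from": msg["from"]}

def rem(msg):       # Remailing: envelope sender moved into forwarder's domain
    return {**msg, "rcpt_to": FINAL_RCPT, "mail_from": "[email protected]"}

def rem_mod(msg):   # Remailing with Modification: FROM rewritten as well
    return {**rem(msg), "from": "[email protected]"}

original = {"mail_from": "[email protected]",
            "from": "[email protected]",
            "rcpt_to": "[email protected]"}
print(mfef(original))  # MAIL FROM becomes [email protected]
```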
As shown in Figure 3d, with REM+MOD the forwarder modifies the headers just as it would during REM forwarding: changing the RCPT TO header to the final recipient ([email protected]) and the MAIL FROM header to an address in the forwarder's domain. Additionally, the forwarder rewrites the FROM header to match its own account or an email address within its domain (e.g., [email protected]). Although this forwarding approach produces email messages compatible with DMARC, we found that it also introduces a new set of security concerns and spoofing attacks (§ 5.4). At a high level, because REM+MOD rewrites a forwarded email's headers to always pass SPF and DMARC checks, it enables an attacker to launder a spoofed email through a vulnerable forwarder such that it appears to the recipient to be a legitimate email message.

Table 1 summarizes the default forwarding approach used by each of the email providers and mailing lists in our study.7 The most common forwarding approach is remailing (REM), used by seven email providers (GMX, Gmail, GoDaddy, Inbox.lv, Onet, Pobox, and Zoho) and three mailing lists (Google Groups, Listserv, and Mailman). Seven email providers (Fastmail, Hushmail, iCloud, Mail.ru, Mail2World, Runbox, and Yahoo) use plain message-forwarding (PMF). Outlook and Freemail use their own custom forwarding mechanism (MFEF) and, as described, Gaggle uses remailing with modification (REM+MOD).

Footnote 7: A provider might forward differently when forwarding between internal accounts, and a mailing list might switch to a different forwarding mechanism to avoid issues caused by forwarding email messages from domains with stricter DMARC policies [65]. We do not consider these two cases.

## 4 Assumptions and Vulnerable Features

In this section, we describe a range of email design and implementation weaknesses that lead to forwarding vulnerabilities. We start by exploring four assumptions made by anti-spoofing mechanisms that email forwarding can bypass and violate. We then examine three vulnerable forwarding features in the major forwarding approaches. In each of these cases, we use active measurements -- either of mail services themselves or of the DMARC policies stored in DNS -- to document the prevalence of each issue among prominent domains and providers (summarized in Table 2). In the remainder of this section, we discuss the measurement methodology used to investigate and identify these issues and describe each vulnerability in turn. In the next section, we then show how these vulnerabilities can be combined to create complete and effective spoofing attacks involving a broad array of popular and sensitive domains.

| **Email Provider** | **Forwarding Mechanism** | **Mailing List Service** | **Forwarding Mechanism** |
| --- | --- | --- | --- |
| Fastmail | PMF | Gaggle | REM+MOD |
| Freemail.hu | MFEF | Google Groups | REM |
| GMX/Mail.com | REM | Mailman | REM |
| Gmail | REM | Listserv | REM |
| GoDaddy | REM | | |
| Hushmail | PMF | | |
| iCloud | PMF | | |
| Inbox.lv | REM | | |
| Mail.ru | PMF | | |
| Mail2World | PMF | | |
| Onet.pl/Op.pl | REM | | |
| Outlook/Hotmail/O365 | MFEF | | |
| Pobox | REM | | |
| Runbox | PMF | | |
| Yahoo | PMF | | |
| Zoho | REM | | |

TABLE I: The providers and mailing list services we tested and the forwarding mechanisms they use.
For providers that are operated by the same vendor under different names (e.g., GMX and Mail.com), we merge them into one row. O365 stands for Office 365.

### _Methodology_

For our experiments, we created test _forwarding_ accounts on all 20 forwarding services, test _recipient_ accounts on all 16 major email providers, and mail servers for domains we control as the _sending_ accounts. For Google Groups and Gaggle, we created mailing lists under our university's existing service and at gaggle.email, respectively. The other two mailing list services (Listserv and Mailman) rely upon a third-party backend mail server; we used Postika [66] as the backend with DMARC enforced. We then created mailing lists under new domains we acquired for testing (e.g., [email protected]).

For each combination of forwarding and recipient accounts, we sent email using three different control domains in the FROM headers, each with the same SPF configuration but with distinct DMARC policies: None, Quarantine, and Reject. Some services (e.g., Gmail and Outlook) will mark email messages sent from new domains as spam until there is sufficient user interaction with those messages. To avoid this startup effect, we "warmed up" our domains using a series of legitimate exchanges. In particular, from each domain, we sent legitimate (i.e., unspoofed) email that passed SPF, DKIM and DMARC to our accounts at each provider. Any message that was delivered to the spam folder we manually marked as "not spam". After this warm-up period, we validated that legitimate (i.e., unspoofed) email from our domains was properly delivered to account inboxes in all cases.

Having primed our accounts, we assessed the prevalence of each vulnerability by sending legitimate and spoofed email messages to all pairwise combinations of our forwarding and recipient accounts.8 We analyzed the headers and outcomes of these attempts, and recorded which parties exhibited vulnerable behavior. In particular, we configured all forwarders to forward email messages to all receivers and recorded whether each message was delivered to the inbox, delivered to the spam folder, or rejected without delivery by each receiver. We also noted whether any UI warning was shown in the native web-based MUA.

Footnote 8: Our code for automatically sending these messages is available upon request.

### _Email Security Assumptions_

Anti-spoofing mechanisms define a set of validation procedures which both explicitly and implicitly rely on assumptions about the behavior of domain holders, email providers and users. Here we identify four such assumptions that are crucial to these defenses in the direct, single-hop delivery context, but do not necessarily hold in the presence of email forwarding.

#### 4.2.1 Domains use actionable DMARC policies

DMARC enables recipients to authenticate whether an email truly originates from its purported sending domain. However, when a recipient encounters a spoofed or illegitimate email that fails authentication, DMARC relies on the true domain owner to specify a policy for how to treat such email. This design assumes that domain owners will use DMARC policies that result in protective actions, such as Quarantine or Reject. When a domain owner chooses a weaker policy, mail providers deliver the illegitimate email to a user's inbox even if the DMARC authentication fails, in accordance with email standards (RFC 7489 [4]).
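A domain's published DMARC policy can be read directly from DNS. The sketch below does so with the third-party dnspython package; error handling is abbreviated, and a complete checker must also handle subdomain policies (sp=) and organizational-domain fallback.

```python
# Sketch: fetch and parse a domain's DMARC policy ("p=" tag).
# Requires dnspython (pip install dnspython).
import dns.resolver

def dmarc_policy(domain: str):
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            tags = dict(t.strip().split("=", 1)
                        for t in txt.split(";") if "=" in t)
            return tags.get("p")  # "none", "quarantine", or "reject"
    return None
```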
Unfortunately, prior work has shown that a large number of domains use weak DMARC policies of None [51, 55, 67, 68, 69], with roughly two-thirds of the Alexa Top 1M domains employing such a policy (as of May 2020). While poor security hygiene accounts for some of this outcome, many domains choose a weak DMARC enforcement policy out of deliverability concerns due to incompatibility with forwarding [55].

Cognizant of this reality, several major email providers have decided to take two types of security actions against email that fails DMARC authentication, regardless of the domain owner's specified policy. First, as noted in prior work [50] and confirmed in our own experiments, Outlook quarantines email if it fails DMARC authentication, even when the email's FROM domain has a weak DMARC policy of None. Second, although Gmail, Onet, and Zoho deliver email that fails DMARC authentication to user inboxes, they will display a UI warning to users who read such messages.

These defenses provide protection against attackers who directly send spoofed email to their victims. However, as we will show, email forwarding introduces new complexity that enables attackers to bypass these ad hoc defenses, and thus leverage weak DMARC policies to successfully spoof email from prominent domains.

| | **Security Assumption or Feature** | **Implementation Aspect** | **Prevalence** |
| --- | --- | --- | --- |
| § 4.2.1 | Domains will use actionable DMARC policies | DMARC None | Two-thirds of Alexa Top 1M |
| § 4.2.2 | Each domain uses its own infrastructure | Shared SPF record | All providers |
| § 4.2.3 | Quarantining is sufficient | Quarantine instead of reject | Outlook, Fastmail, GMX, Inbox.lv, Pobox |
| § 4.2.4 | Per-user DMARC overrides are fate-sharing | Domain whitelisting | All providers |
| § 4.3.1 | Users only forward to accounts they control | Open forwarding | Ten providers including Outlook and Fastmail |
| § 4.3.2 | Forwarded email from large providers is benign | Relaxed validation | Gmail, Outlook, Mail.ru |
| § 4.3.3 | Adding DKIM signatures increases deliverability | Unsolicited DKIM signatures | iCloud, Runbox, Hushmail |

TABLE II: Summary of vulnerable security assumptions and forwarding features, the aspect of their implementation that leads to the vulnerability, and the prevalence of the vulnerability.

#### 4.2.2 Each domain uses its own infrastructure

The SPF protocol predates the emergence of large third-party email providers. As a result, SPF implicitly assumes that each organization (domain) maintains its own mailing infrastructure: that the set of authentic server IP addresses specified by a domain's SPF record is not also used by other domains or external users to send email. Unfortunately, as documented by Liu et al. [60] and Holzbauer et al. [70], this assumption is invalid today, as many organizations outsource their email infrastructure to the _same_ third-party providers such as Outlook and Gmail. Hence, all of these domains have delegated the right to send on their behalf to the same third party -- trusting that it will ensure isolation in spite of this blanket authorization.

Concretely, our measurements show that all 16 email providers in our study appear to configure their email infrastructure in this shared fashion. Additionally, at least for email messages forwarded in our experiments, all providers but one (Fastmail) use the same set of servers to send both direct email and forwarded email.
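The breadth of this delegation is visible in public SPF records. The sketch below (again assuming dnspython) walks a domain's `include:` chain to test whether it transitively authorizes a given provider's record; a production checker must also respect RFC 7208's 10-lookup limit and `redirect=` modifiers.

```python
# Sketch: does `domain` transitively include `target` in its SPF record?
import dns.exception
import dns.resolver

def spf_record(domain: str):
    try:
        for rdata in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rdata.strings).decode()
            if txt.lower().startswith("v=spf1"):
                return txt
    except dns.exception.DNSException:
        pass
    return None

def spf_includes(domain: str, target: str, depth: int = 5) -> bool:
    record = spf_record(domain)
    if record is None or depth == 0:
        return False
    includes = [m.split(":", 1)[1] for m in record.split()
                if m.startswith("include:")]
    return target in includes or any(
        spf_includes(inc, target, depth - 1) for inc in includes)

# e.g., spf_includes("state.gov", "spf.protection.outlook.com") is true
# for domains that host their email on Outlook (see Section 5.1).
```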
Since SPF no longer provides isolation in this model, the email providers in our study effectively _simulate_ it by preventing users from setting arbitrary values in their FROM header. Thus, even though each mail provider is empowered to send any email on behalf of all of its mail customers, it prevents customers from taking advantage of this situation by _internally_ restricting the FROM headers of outbound email messages coming from a customer's domain. While this defense is effective in the absence of forwarding, we will show how open forwarding mechanisms bypass this filtering (by generating spoofed FROM headers from an _external_ server controlled by the adversary), exposing the latent conflict between SPF's design and modern mail service use -- ultimately allowing unrestricted email spoofing.

#### 4.2.3 Quarantining is sufficient

RFC 7489 [4] suggests that if an email message falls under the scope of a DMARC Reject policy, then the receiving server should reject and drop it entirely. However, some providers deviate from this advice by marking such email as spam and delivering it to a spam folder, assuming that quarantining a malicious email neutralizes its threat. Our experiments found that five email providers (Outlook, Fastmail, GMX, Inbox.lv and Pobox) adopt this approach. Figure 4 displays an email message from our tests that shows this behavior: it fails DMARC validation and comes from a domain (state.gov) that has a DMARC policy of Reject, but is nonetheless delivered as "spam".

Because these providers quarantine the spoofed email as spam, this design does not appear particularly dangerous.9 However, as we will show, in combination with email forwarding and another vulnerable feature (per-user domain whitelists in Section 4.2.4), attackers can override this protection and exploit the quarantine-over-reject implementation to spoof email from thousands of popular domains despite their strict DMARC Reject policy.

Footnote 9: Some in the mail security industry criticize this weakening of DMARC rules and document attacks that "rescue" such email from the spam folder via social engineering [71, 72].

#### 4.2.4 Per-user DMARC overrides are fate-sharing

Many email providers allow users to override DMARC decisions: users can whitelist domains, and as a result the provider will still deliver or forward email from those domains even if it fails DMARC. Providers offer this flexibility because it can help mitigate errors and improve mail deliverability for the users who need it. However, this feature implicitly assumes that the approach is fate-sharing -- that when a user overrides DMARC decisions, the risks of that choice are localized to the individual user account. While true in the single-hop context, forwarding again undermines this assumption. If adversaries can override DMARC decisions on a forwarding account they control, they can use that capability to launder spoofed mail and successfully deliver it downstream.

Based on our measurements, all mail providers support this functionality in some form. Of particular note, four of the five providers mentioned in Section 4.2.3 (Fastmail, GMX, Inbox.lv, and Pobox) allow users to override any DMARC decision for any domain. The fifth (Outlook) allows users to override DMARC decisions for most domains, except for a small set of frequently-spoofed domains with a DMARC policy of Reject (e.g., aa.com), to which Outlook appears to apply additional, special protection mechanisms.
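Combining the last two observations, the effective delivery decision at such a provider behaves roughly as follows. This is a schematic reconstruction of observed behavior, not any provider's actual code:

```python
# Schematic delivery decision combining quarantine-over-reject (Section 4.2.3)
# with per-user allowlist overrides (Section 4.2.4).

def disposition(dmarc_pass: bool, policy: str, sender_allowlisted: bool) -> str:
    if dmarc_pass:
        return "inbox"
    if sender_allowlisted:
        return "inbox"      # the override wins even against policy Reject
    if policy in ("reject", "quarantine"):
        return "spam"       # Reject is downgraded to quarantine here,
                            # although RFC 7489 calls for outright rejection
    return "inbox"          # policy None: delivered despite the failure
```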
For Gmail, Hushmail, iCloud, Mail.ru, Onet, and Zoho, users can override DMARC decisions for domains with a DMARC policy of None or Quarantine, but not Reject. Finally, for Yahoo, we could only override DMARC decisions for domains with a policy of None.

### _Vulnerable Forwarding Features_

In the absence of forwarding, the assumptions described above are largely benign and allow the effective blocking of many spoofing attacks. However, when combined with three vulnerable forwarding features (open forwarding, relaxed validation, and unsolicited DKIM signatures), the weaknesses in these assumptions permit several opportunities for bypassing DMARC's protections.

#### 4.3.1 Open Forwarding

Many email service providers support a mechanism to automatically forward a user's messages to another account (e.g., to aggregate mail sent to multiple addresses into a single inbox). Because of the prevalence of these common, benign forwarding use cases, many platforms follow a design that we call _open forwarding_ (also referred to as "unauthorized forwarding" in previous work [56]). Services with open forwarding allow users to configure their account to forward messages to any destination email address, _without_ any verification from the destination address. Open forwarding implicitly assumes users will only forward email to accounts that they control or have a benign relationship with (an assumption that fails when an adversary creates or controls an account entirely for the purpose of malicious forwarding).

Figure 4: Example message with a FROM header spoofing a domain with DMARC policy Reject. Outlook delivers it to the spam folder instead of rejecting it.

Our measurements show that open forwarding is still prevalent among providers. Specifically, ten email providers (Outlook, Fastmail, iCloud, Freemail, GoDaddy, Hushmail, Mail2World, Onet, Pobox, and Runbox) allow open forwarding.10 Moreover, as we demonstrate in the three attacks described in Sections 5.1-5.3, when combined with other vulnerabilities, adversaries can exploit open forwarding to attack not only users on the providers that employ this design, but also a broad array of users on other platforms that disallow open forwarding.

Footnote 10: Mail2World and Pobox do notify the destination account via email about the forwarding setup.

#### 4.3.2 Relaxed Validation

Since forwarded email can break SPF and DMARC at times, providers may employ relaxed validation for email forwarded by large email providers, assuming that these large providers will prevent spoofed email messages from being forwarded.11

Footnote 11: Shen et al. [56] also make this observation, but do not document the concrete steps necessary to exploit this vulnerability or demonstrate its practical exploitation.

We infer that three providers, Gmail, Outlook and Mail.ru, apply some form of relaxed validation. Gmail employs two versions of relaxed validation for forwarded email messages that both (1) fail SPF and DMARC checks and (2) are from domains with a DMARC policy of None or Quarantine. First, for email messages forwarded via Gmail or Outlook, Gmail delivers them regardless. Second, for messages forwarded via the other providers in our experiments, Gmail delivers the email if it meets specific conditions (more details in Appendix D). Similarly, our experiments found that Outlook applies relaxed validation for email messages from domains with a DMARC policy of None (as discussed in Section 4.2.1, Outlook usually overrides the policy of None and quarantines messages that fail DMARC).
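The behavior we observed is consistent with receiver logic along the following lines. This is a schematic reconstruction, not provider code; the concrete per-provider conditions follow.

```python
# Schematic relaxed validation (Section 4.3.2): mail that fails DMARC is
# accepted anyway when it arrives via a trusted large forwarder and the
# spoofed domain's policy is weak. The forwarder set is illustrative.

TRUSTED_FORWARDERS = {"gmail.com", "outlook.com"}

def accept_forwarded(dmarc_pass: bool, policy: str, forwarder_domain: str) -> bool:
    if dmarc_pass:
        return True
    return (forwarder_domain in TRUSTED_FORWARDERS
            and policy in ("none", "quarantine"))
```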
Specifically, Outlook accepts email messages forwarded via nine major providers (e.g., Gmail and Fastmail), despite failing SPF and DMARC checks. Finally, Mail.ru accepts email messages forwarded via Gmail that fail DMARC from domains with a DMARC policy of None or Quarantine.

These relaxed validation policies aim to balance the incompatibility of forwarding approaches with anti-spoofing protocols by implicitly trusting high-profile email services. Unfortunately, the complexity introduced by forwarding and its interactions with the diverse set of assumptions we highlight enables attackers to abuse these trust relationships. This is particularly true because all of these providers offer individual consumer accounts. For example, in Section 5.2 we show that an adversary can deliver spoofed email messages from domains that have a DMARC policy of None or Quarantine to any Gmail user without triggering a warning.

#### 4.3.3 Unsolicited DKIM Signatures for Hosted Domains

RFC 6376 [3] and RFC 6377 [73] both recommend that forwarding services apply their own DKIM signatures to forwarded email messages, especially in cases where they modify the message. Shen et al. [56] showed that this configuration can be exploited by a malicious actor via an attack that they called the DKIM Signature Fraud Attack. Specifically, they showed that an adversary can acquire valid DKIM signatures for spoofed email messages if the forwarder naively signs every forwarded email. Such spoofed email messages can successfully pass subsequent DMARC checks if the spoofed sender's domain is the same as the domain used by the forwarding service to sign DKIM signatures. Shen et al. [56] found three providers that had this vulnerable feature: Yahoo, Office365 and Alibaba Cloud.

Through our experiments, we identified three providers (iCloud, Hushmail, and Runbox) whose forwarding implementations contain a variant of this vulnerable feature, which would allow an adversary to mount attacks similar to the DKIM Signature Fraud Attack. Taking iCloud as an example, we find that iCloud adds unsolicited and valid DKIM signatures to spoofed email messages addressed from domains hosted by it. Additionally, iCloud signs the DKIM signature using the same domain as the purported sender's domain in the spoofed email. For instance, iCloud will add a valid DKIM signature signed by the domain peterborgapps.com (a domain hosted by iCloud) to spoofed email messages purporting to be from peterborgapps.com, allowing the spoofed email messages to pass subsequent DMARC checks. We surmise that providers can add valid DKIM signatures on behalf of hosted domains because they manage the DKIM keys for these domains [74, 75].

## 5 Attacks

In this section, we demonstrate how an adversary can combine and exploit the issues described in Section 4 to create attacks that reliably bypass existing anti-spoofing protections. In particular, we consider an attack successful if a spoofed email message is delivered to a victim's inbox (i.e., not the spam folder) and does not produce a warning to the user. Figure 5 shows an example of a successful attack, where a spoofed email purporting to be from [email protected] is delivered to a Gmail user's inbox with no warning indication. We describe four distinct classes of attacks, summarized in Table 3, each of which we have validated empirically using accounts created at the affected providers.
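Each attack begins from the same primitive: a server the attacker controls submits a message whose envelope sender (MAIL FROM) and header sender (FROM) are chosen independently. A minimal sketch using Python's standard library; every address and hostname here is hypothetical:

```python
# Sketch: submitting a message whose envelope sender and header sender
# are chosen freely. All names below are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "[email protected]"              # forged header sender
msg["To"] = "[email protected]"    # attacker's forwarding account
msg["Subject"] = "a spoofed email"
msg.set_content("test body")

with smtplib.SMTP("mail.attacker.example") as smtp:
    # sendmail() takes the envelope MAIL FROM as a separate argument,
    # so it need not match the message's FROM header.
    smtp.sendmail("[email protected]",
                  ["[email protected]"], msg.as_string())
```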
Some of these attacks are quite broad -- allowing an attacker to spoof email to any email recipient purporting to be from tens of thousands of popular and sensitive domains -- while others are more circumscribed in their impact. For each of the attacks described below, we refer to the domain an attacker specifies in their FROM header as the _spoofed domain_. We use the term _spoofed address_ to refer to the full email address appearing in the FROM header and _forwarding domain_ to refer to the domain of the forwarder.

Figure 5: Example of a successful attack. A spoofed email purporting to be from [email protected] is delivered to a Gmail user's inbox with no warning indicators.

**Threat Models:** For the first three attacks, we assume an adversary controls the sender and forwarding accounts: they possess a server capable of sending spoofed email messages (sender) and a personal account with a specific third-party provider that allows _open forwarding_ (forwarder). For the attack described in Section 5.4, we make three assumptions: (a) adversaries control a malicious server that can send spoofed email messages and try to spoof email from a domain that hosts a mailing list with REM forwarding (e.g., Google Groups, Listserv and Mailman, as described in Section 3); (b) the spoofed domain has a DMARC policy of None (all too common); and (c) the sending email address the attacker wishes to impersonate has permission to send to the mailing list.

### _Exploiting SPF Incorporation_

The first attack we describe exploits five discrete issues: three security assumptions (§ 4.2.2, 4.2.3, 4.2.4), the vulnerable _open forwarding_ feature that many providers offer (§ 4.3.1), and the header rewriting performed as part of the PMF and MFEF forwarding approaches (§ 3). Crucially, the rise of large third-party email providers violates SPF's assumption that the set of authorized server IP addresses specified by each domain cannot be used by other domains or external users to send email. For example, the owners of the domain state.gov use Outlook as their email provider. Thus, email messages sent by state.gov's employees will originate from Outlook's mail servers. To ensure reliable delivery, such domains routinely add the server IP addresses of their email provider to their own SPF records.

Although intuitive, this configuration creates an overly broad trust assumption: by adding the provider's IP addresses to their SPF record, such domains (e.g., state.gov) implicitly grant permission for any account hosted by their provider, whether individual or corporate, to send email messages that purportedly come from their domain. This threat is only prevented because large providers like Outlook do not allow users to arbitrarily set or forge their email's FROM header. However, we observe that by combining the header rewriting from PMF and MFEF with the use of _open forwarding_, attackers can overcome this defense and exploit SPF's violated assumption. Specifically, this attack allows an adversary to spoof email from domains that incorporate a third-party provider's SPF information in their own SPF record to any recipient, regardless of the domain's DMARC policy.

**Scope:** This attack works for domains that include the SPF record of any of six large email providers (Outlook, iCloud, Freemail, Hushmail, Mail2World and Runbox) in their own SPF records. Notably, given Outlook's importance as a third-party provider [60], this attack allows an attacker to spoof email on behalf of tens of thousands of popular domains.
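Exposure to this attack is mechanically checkable with the `spf_includes` sketch from Section 4.2.2; a domain is at risk whenever a call like the following returns true (the Outlook record name is the commonly documented one, and the list is illustrative, not exhaustive):

```python
# Hypothetical usage of spf_includes() from the Section 4.2.2 sketch.
OPEN_FORWARDER_SPF = ["spf.protection.outlook.com"]  # other providers vary

def exposed(domain: str) -> bool:
    return any(spf_includes(domain, rec) for rec in OPEN_FORWARDER_SPF)
```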
Indeed, over 12% of the Alexa 100K most popular domains are vulnerable as a result (and almost 8% of the top 1M domains). A cursory examination of this list identified a range of potentially sensitive domains, such as those hosting large news reporting organizations (e.g., washingtonpost.com, latimes.com, and apnews.com), financial services (e.g., mastercard.com, transunion.com, and docusign.com), domain registrars (e.g., godaddy.com), certificate authorities (e.g., sectigo.com and digicert.com) and large law firms (e.g., perkinscoie.com). In addition, 32% of US .gov domains are vulnerable (including 22% of the domains used by Federal agencies). At the Federal level this includes the majority of US cabinet organizations (e.g., state.gov, dhs.gov and doe.gov), a range of security-sensitive agencies (e.g., odni.gov, cisa.gov and secretservice.gov), as well as those charged with public health and safety (such as fema.gov, nih.gov, and cdc.gov). At the state and local level, virtually all primary state government domains (e.g., mass.gov) are vulnerable (including a broad range of congress, judiciary, and law enforcement domains in each state), as are over 40% of all .gov domains used by cities.12

Footnote 12: We have not broadly examined domains representing government offices outside the US, but we note that both gchq.gov.uk and ncsc.gov.uk are also vulnerable.

| | **Send email spoofing** | **Forward via** | **Deliver to** |
| --- | --- | --- | --- |
| § 5.1 | Domains with the forwarding domain's SPF information | Six providers including Outlook and iCloud | Any recipient |
| § 5.2 | Arbitrary domains with DMARC policy None or Quarantine | Outlook | Gmail, Outlook |
| § 5.3 | Arbitrary domains with DMARC policy None | Fastmail | Zoho* |
| § 5.4 | Domains hosting the mailing list and DMARC policy None | Google Groups, Listserv, Mailman, Gaggle | Any recipient |

*We build on the ARC vulnerability identified by Shen et al. [56] to demonstrate an attack that is practical.

TABLE III: Summary of email forwarding attacks (§ 5).

Fig. 6: Example of an SPF Incorporation Attack (§ 5.1) exploiting Outlook's open forwarding to spoof email from domains incorporating Outlook's SPF records (e.g., state.gov) to arbitrary recipients.

**Example:** Figure 6 shows an example of this attack using Outlook as the forwarding service. An attacker starts by creating a personal account for forwarding (e.g., [email protected]), adding the spoofed address (e.g., [email protected]) to the account's "allowlist" (thereby preventing any quarantining by Outlook), and configuring the account to forward all email to the desired target (e.g., [email protected]). In this case, the spoofed domain state.gov includes Outlook's SPF record (spf.protection.outlook.com) in its own SPF record and has a DMARC policy of Reject. Next, the attacker forges an email that purportedly originates from state.gov and sends it to their personal Outlook account. Normally, Outlook would quarantine this email because it fails DMARC validation (§ 4.2.3). However, since the spoofed address is present in the account's allowlist, this configuration overrides the quarantine decision (§ 4.2.4), and as a result, Outlook would forward the spoofed email to the target.
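The sketch below previews why the next hop accepts this message; the prose walkthrough follows. DMARC's relaxed alignment is reduced to an exact domain match for brevity.

```python
# Schematic receiver-side DMARC evaluation for the message in Figure 6.

def dmarc_verdict(from_domain: str, mail_from_domain: str,
                  spf_ip_ok: bool, dkim_domain: str = "") -> str:
    spf_aligned = spf_ip_ok and mail_from_domain == from_domain
    dkim_aligned = bool(dkim_domain) and dkim_domain == from_domain
    return "pass" if (spf_aligned or dkim_aligned) else "fail"

# After MFEF rewriting, MAIL FROM == FROM == "state.gov", and the message
# arrives from Outlook servers that state.gov's SPF record authorizes:
assert dmarc_verdict("state.gov", "state.gov", spf_ip_ok=True) == "pass"
```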
As per Outlook's MFEF forwarding implementation, MAIL FROM is rewritten to match the FROM header ([email protected] in our example). Finally, the recipient's mail server receives the forwarded email and performs authentication checks. From the recipient's perspective, the spoofed email passes SPF validation because the MAIL FROM domain (state.gov) lists Outlook's SPF information in its SPF record, and the forwarding configuration arranged by the attacker ensures that the recipient receives this spoofed email from Outlook's servers. Moreover, this attack also ensures that DMARC's alignment check succeeds because the MAIL FROM and FROM domains are both state.gov.

We validated this attack in practice, consistently sending spoofed email messages such as the example shown in Figure 5 to our own Gmail account, where they were delivered to the inbox without warning.13 In addition to Outlook, this attack also succeeds with iCloud, Freemail, Hushmail, Mail2World and Runbox.

Footnote 13: Note that we did discover some exceptions in our experiments. For a small set of high-profile domains that have a DMARC policy of Reject (e.g., aa.com, foxnews.com and ikea.com), Outlook would quarantine spoofed email regardless of whether users have added the spoofed address to their account's allowlist (Section 4.2.3). We surmise that Outlook applies special protections for a set of high-profile or frequently spoofed domains.

### _Abusing Relaxed Forwarding Validation_

The second attack exploits the fact that many email providers apply relaxed validation policies to forwarded mail (§ 4.3.2), particularly when messages arrive from well-known mail providers. When combined with open forwarding, an attacker can abuse this behavior to spoof email from any domain that has a DMARC policy of Quarantine (or None) to any mail server that applies these relaxed measures (e.g., Gmail and Outlook). Recall that, in the absence of forwarding, attackers cannot spoof email from a domain with a DMARC policy of Quarantine. Provider-specific defenses, such as Outlook quarantining any email that fails DMARC (§ 4.2.1), will also stop such direct, single-hop attacks.

**Scope:** As described earlier in Section 4.3.2, Gmail and Outlook use relaxed validation checks for forwarded email. We find that an adversary can mount this attack against users with Gmail/Outlook email accounts, as well as users who use GSuite and Office 365 for email services.14

Footnote 14: Mail.ru also uses relaxed validation, but since it is only applied to email forwarded via Gmail, which does not allow open forwarding, this attack does not work for Mail.ru.

**Example:** Figure 7 illustrates the steps of this attack using an example where the adversary creates a personal Outlook account to forward spoofed email messages to Gmail recipients. First, the adversary selects a spoofed email address from a domain with a DMARC policy of Quarantine or None (we use alipay.com in this example, a prominent Chinese payment company), adds the address to their forwarding account's allowlist, and configures their forwarding account to send email to the victim (recipient). Like the first attack, the attacker then sends a message from this spoofed address to their forwarding account, which is then forwarded to the recipient.
When the final recipient's mail server receives the email, the server will observe that the email comes from a "well-known" provider, apply its relaxed validation checks, and successfully deliver the email to the recipient's inbox (even though the spoofed email fails normal SPF and DMARC checks).15

Footnote 15: Additionally, we note that Gmail would usually display a UI warning for forwarded email messages. However, no UI warning is displayed for this email due to a bug detailed in Appendix B.2.

Figure 7: Example of a spoofed email attack exploiting open forwarding and relaxed validation for forwarded email from well-known providers (§ 5.2). Note that the spoofed domain, alipay.com, has a DMARC policy of Quarantine, and thus the message should not have been delivered.

### _Targeting ARC Vulnerabilities_

The third attack allows an adversary to deliver spoofed email messages from arbitrary domains to Zoho users. This attack exploits Zoho's vulnerable implementation of the experimental Authenticated Received Chain (ARC) protocol [76], which was first documented by Shen et al. [56]. Due to this bug, Zoho incorrectly reads ARC headers and will deliver arbitrary email messages with ARC headers added by providers such as Gmail and Fastmail to the recipient's inbox without any warning. However, we show that this issue is not limited to interactions between Gmail and Zoho customers. We demonstrate how further issues, including the fact that Zoho trusts and (incorrectly) reads ARC headers added by Fastmail (Appendix A), open forwarding (§ 4.3.1), and several forwarding assumptions (§ 4.2.3, § 4.2.4), can be combined with the underlying ARC vulnerability to allow an adversary to deliver spoofed email messages from arbitrary domains to arbitrary Zoho users.

This attack again highlights the fact that email security protocols are distributed and independently-configured components, where vulnerable decisions by one party incur harm to downstream recipients but not necessarily to that party's own users. Notably, the actions taken by one provider (e.g., Fastmail) can unexpectedly undermine the security of users on another platform (e.g., Zoho).

**Scope:** Our experiments show that this attack can target arbitrary users of Zoho, which is estimated to have more than 10 million users [77].

**Example:** Figure 8 shows the mechanics of this attack in the context of an attacker with a forwarding account on Fastmail who targets a recipient on Zoho. First, the adversary creates a Fastmail account for forwarding, adds their spoofed address (say, [email protected]) to their allowlist, and configures their account to forward all mail to the target user at Zoho (e.g., [email protected]). Second, the adversary crafts and sends spoofed email from their own servers (purporting to be from [email protected]) to their forwarding account at Fastmail. Third, although this email will fail anti-spoofing validation, Fastmail will still faithfully forward it to the target user at Zoho due to the sender's presence on the user's allowlist (exploiting the security assumption discussed in § 4.2.4). As part of the forwarding process, Fastmail will modify the RCPT TO header and add corresponding ARC headers to the spoofed email. Finally, upon receiving the forwarded email, Zoho's mail server will perform DMARC validation. Although the spoofed email will fail SPF and DMARC checks, Zoho's vulnerable ARC implementation will misinterpret the ARC headers that Fastmail attached (Appendix A).
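For reference, the ARC set a forwarder like Fastmail stamps onto a message looks roughly like the following (header names are real; values are abbreviated and schematic). A receiver that keys on the mere presence of a known forwarder's ARC headers, rather than validating the seal chain and honoring the recorded authentication results, can be misled; the check shown is a deliberately buggy illustration, not Zoho's actual code:

```python
# Abbreviated ARC set added by a forwarder. Note the recorded results
# say the message FAILED DMARC before forwarding.
arc_headers = {
    "ARC-Seal": "i=1; cv=none; d=fastmail.com; s=fm1; b=...",
    "ARC-Message-Signature": "i=1; d=fastmail.com; s=fm1; bh=...; b=...",
    "ARC-Authentication-Results":
        "i=1; mx.fastmail.com; spf=fail; dkim=none; dmarc=fail",
}

def naive_arc_check(headers: dict) -> bool:
    # Buggy: treats any ARC set naming a known forwarder as authentication,
    # ignoring both the seal chain and the dmarc=fail recorded above.
    return "d=fastmail.com" in headers.get("ARC-Seal", "")
```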
As a result, Zoho will treat the email as passing DMARC and deliver the spoofed message to the victim's inbox. We end by noting that this attack would not have worked for domains with a DMARC policy of Reject had Fastmail rejected spoofed email messages addressed from such domains (§ 4.2.3).

### _Abusing Mailing Lists_

The final attack allows an adversary to abuse the forwarding process used by mailing lists so that spoofed email, which would otherwise fail DMARC authentication checks, successfully passes both SPF and DMARC validation. This attack targets domains with a weak DMARC policy of None, and exploits the way in which many mailing lists rewrite email headers during their forwarding process. Concretely, this attack allows an adversary to abuse REM header rewriting (§ 3) to launder spoofed email through mailing lists such that the forwarded email appears as if it originated from the legitimate sender, even though the original email fails DMARC authentication.16

Footnote 16: One such attack, used to distribute phishing messages at our institution, was part of the impetus for this study.

**Scope:** Our experiments show that attackers can conduct this attack across all four popular mailing list services: Google Groups, Mailman, Listserv, and Gaggle. This attack only affects organizations that use a mailing list configured under their own domain name, and with a DMARC policy of None for their (sub)domain. While these requirements appear restrictive, prior work has found that many organizations (such as major U.S. universities) have exactly this configuration [55]. Indeed, in querying .edu and .gov domains, roughly 10% of all .edu domains and 5% of all .gov domains are potentially susceptible to this attack.17

Footnote 17: As examples, Yale University operates yale.edu with a DMARC policy of None and hosts multiple mailing lists using Mailman, and the State of Washington operates a range of mailing lists using Listserv while its wa.gov domain also has a DMARC policy of None.

Additionally, for mailing lists like Gaggle that do not enforce DMARC checks before forwarding (Appendix B.1), this attack affects every organization using their services, even when the domain has adopted stronger DMARC policies.

Figure 8: Example attack that exploits Zoho's vulnerable ARC implementation and open forwarding to spoof email from arbitrary domains to any Zoho recipient (§ 5.3).

**Example:** Figure 9 describes an example of this attack using Google Groups. First, an attacker selects a target email address (e.g., [email protected]) to impersonate in their spoofed email, and sends the spoofed message from their malicious server to the organization's mailing list ([email protected]). Although the email fails DMARC validation, the mailing list will still accept the message (because univ.edu has a DMARC policy of None). As part of REM forwarding, the mailing list will rewrite the MAIL FROM header such that its domain matches the mailing list's domain, and then forward the email to the list's members. As a result, when a recipient's server receives the message, it will successfully pass SPF validation, since the domain of the rewritten MAIL FROM (univ.edu) allows the mailing list to send on its behalf.
Moreover, the spoofed message will also pass DMARC alignment checks, since the rewriting performed during REM forwarding ensures that the MAIL FROM and FROM domains are identical.18

Footnote 18: Note that while our examples show this attack using the organization's top-level domain, it is also effective for any of the organization's subdomains (if the subdomains also have a DMARC policy of None) due to DMARC's relaxed alignment policy.

During our experiments, we also observed that some mailing list services, such as Gaggle, do not enforce DMARC policies at all (Appendix B.1). This lack of enforcement allows the attack to succeed regardless of the spoofed domain's DMARC policy. We provide more details in Appendix F.

## 6 Ethics and Disclosure

When sending spoofed email messages in our experiments, we took deliberate steps to avoid impacting any real users. First, we only sent spoofed email messages to accounts that we created ourselves. Second, we initially tested each attack by spoofing domains that we created and controlled for this research. Once we established that our attacks could succeed using these test domains, we ran a small set of experiments that spoofed email from real domains (to validate the absence of any unforeseen protection); however, these email messages were only sent to our test accounts and did not spoof existing, legitimate email addresses from these domains. Finally, all of our email messages contained innocuous text (e.g., "a spoofed email") that would not itself cause harm.

We have disclosed all of the vulnerabilities and attacks to the affected providers. As of the time of publication, we have received affirmative feedback from all affected providers, and we summarize our current understanding of their present state here. Zoho has not only patched the issue with their ARC implementation (also confirmed by Wang et al. [57], who conducted their measurements after the patch) and awarded us a bug bounty, but is also further augmenting the security of its ARC implementation. Microsoft confirmed the vulnerabilities (with severity "Important", the highest severity assigned to email spoofing bugs) and awarded us a bug bounty. They have partially fixed the issues by rejecting spoofed email messages purporting to be from domains that have a DMARC policy of Reject [78]. Gaggle confirmed the issues we flagged and stated that they would start enforcing DMARC. Gmail fixed the issues we reported. iCloud partially fixed the issues we reported by not forwarding email messages that fail DMARC authentication (except for domains with a DMARC policy of None). Hushmail fixed the issues we reported by not forwarding email messages that fail DMARC authentication. Freemail fixed the issues we reported by no longer forwarding spoofed email messages from domains that are their customers. Mail2World attempted to fix the issues by using spam filters and remains vulnerable. Runbox did not view the issues we reported as vulnerabilities; instead, they consider monitoring account activity in response to complaints to be sufficient.

## 7 Discussion and Mitigation

We end by summarizing the root causes of the issues we discovered and discussing potential mitigation strategies.

### _Discussion_

In this work, we examine the complexities that email forwarding introduces to email security. We identify a diverse set of email forwarding mechanisms, assumptions, and features, and demonstrate how they can be combined to perform evasion attacks. These attacks highlight four fundamental issues.
First, as already demonstrated in prior work and further highlighted in our paper, email security involves distributed, optional, and independently-configured components implemented by different parties. In such an architecture, the "authenticity" of an email is commonly determined by the party with the weakest security settings. While traditionally email is sent directly from sender to receiver, forwarding involves three parties instead of two and introduces an extra layer of complexity. As we have shown, a vulnerable forwarder can jeopardize the security of downstream recipients that do not have problematic configurations or implementations. This inversion of incentives and capabilities naturally complicates mitigating forwarding vulnerabilities.

Figure 9: Spoofed email attack that abuses mailing lists like Google Groups (§ 5.4).

A second problem is that email forwarding has never been fully standardized, despite the longevity and popularity of its use. This lack of standardization has led to ad hoc implementation decisions, each making different assumptions. The ad hoc nature of these implementations makes it challenging to perform both manual security analysis (analyzing individual implementation decisions is a non-trivial task even for experts) and automated testing (any such tool needs to account for the specific implementations of each provider). While our large-scale empirical measurements have been able to reveal the assumptions made by providers and their implications, doing so required substantial manual work. This manual process reflects the fact that there exists no unified framework or standard for implementing email forwarding.

A third issue is that email is a large, slowly-evolving ecosystem with a wide range of legacy systems and protocols that need to be accommodated. One example we highlight is the outdated assumption made by SPF (§ 4.2.2). When SPF was first designed in the early 2000s, it was common practice for each domain owner to maintain their own mail infrastructure. However, this assumption is obsolete in the modern era, as many domains outsource their email services to third-party providers such as Outlook and Google [60]. These large providers often share the same email infrastructure across all customers (both business and personal accounts), violating the assumptions made by SPF. To mitigate the risks this reality poses to SPF, providers usually prevent users from setting arbitrary values in their FROM header. However, past literature has shown that this defense is not always implemented correctly [5]. We build on top of this prior work by identifying a new attack that can circumvent existing defenses through forwarding (§ 5.1).

Last but not least, the intrinsic purpose of email forwarding is to transparently send an existing message to a new address "on behalf" of its original recipient -- a goal very much at odds with the anti-spoofing function of protocols such as SPF and DMARC. As such, a range of ad hoc decisions have been made to increase the deliverability of forwarded email messages, such as using the REM+MOD forwarding mechanism (§ 3), treating forwarded messages specially (§ 4.3.2), and adding DKIM signatures to forwarded messages (§ 4.3.3). As we have demonstrated, these decisions can fail to foresee unexpected interactions that lead to vulnerabilities, even with a great deal of deliberation.

### _Mitigation_

The attacks we demonstrate highlight the complicated interactions between email forwarding and existing anti-spoofing mechanisms.
We start by reviewing short-term mitigations that could reduce some of the most significant risks we have uncovered. We then discuss challenges in developing more comprehensive solutions, which would require significant changes in either protocol design or operational practices.

A core issue we highlight in this paper is the ability to forward spoofed email messages to arbitrary recipients, a critical element in each of the first three attacks in Section 5. To mitigate this issue, providers could either block spoofed email messages from being forwarded, or enforce that a forwarder can only forward to accounts under their control by requiring explicit confirmation (similar to the online domain validation used by modern certificate authorities). However, either approach comes with a usability tradeoff, and different providers make choices based on their own considerations. Indeed, providers like Gmail and Mail.ru opted for the former option, while others like iCloud and Hushmail opted for the latter.

We also advocate that providers enforce a domain's DMARC Reject policy when specified, rather than substituting a weaker policy. If Outlook rejected spoofed email messages from such domains, the impact of the first attack exploiting SPF incorporation would narrow substantially. We understand that Outlook has plans to take such action in the future [78]. Unfortunately, all the defenses described above reflect a case of misaligned incentives: the recipients of spoofed email (e.g., spam and phishing) cannot implement these changes, but instead need to rely on the entire ecosystem of providers and forwarding services to adopt such defenses.

Email providers can also mitigate the second attack (§ 5.2) by eliminating relaxed validation policies. This approach would protect their users from receiving spoofed email without relying on changes by other platforms or services. However, to prevent benign forwarding from breaking, providers will likely then need to implement ARC validation (which in turn places ARC implementation requirements on external forwarders).

For the final attack (§ 5.4) that exploits mailing lists, potential mitigations trade usability for security. First, list owners can turn on message moderation and set their mailing lists to be private. While these measures increase the difficulty of performing email spoofing attacks, they do not rule out the attack entirely: a dedicated attacker might nonetheless identify a member of the mailing list and craft an email that fools a list's moderator. Second, some mailing list services, such as Listserv, support confirm-before-send [79], which requests confirmation from the (true) sender address before delivery. While this mechanism would impose significant overhead in general, these costs might be acceptable if the confirmation requirement were limited to incoming email that fails DMARC authentication checks.

In addition to the short-term mitigations mentioned above that are specific to forwarding, others [5, 56] have proposed solutions such as improving UI notifications, building better testing tools, and revising RFC standards, which are also important to consider. Additionally, the newly proposed ARC protocol may help mitigate some of the issues we have uncovered. However, ARC is still in the early stages of development and deployment; its details are yet to be fleshed out, and its effectiveness in practice remains to be seen.
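As one concrete illustration of the confirmation-based defense discussed earlier in this subsection, a forwarder could withhold activation of a new rule until the destination proves control of its address, much as certificate authorities validate domains. A minimal sketch; `send_mail` and the confirmation URL are hypothetical:

```python
# Sketch: confirm-before-forward. A rule takes effect only after the
# destination clicks a single-use token. send_mail() is a hypothetical
# helper for the provider's outbound mail path.
import secrets

pending = {}   # token -> (account, destination)

def request_forwarding(account: str, destination: str) -> None:
    token = secrets.token_urlsafe(32)
    pending[token] = (account, destination)
    send_mail(to=destination, subject="Confirm email forwarding",
              body=f"https://mail.example/confirm-forwarding/{token}")

def confirm(token: str, active_rules: set) -> bool:
    rule = pending.pop(token, None)
    if rule is None:
        return False        # unknown or already-used token
    active_rules.add(rule)  # forwarding becomes active only now
    return True
```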
Lastly, we note that comprehensively fixing email forwarding would require a more fundamental set of changes (e.g., redesigning the entire suite of email security protocols), which would face significant deployment challenges given the current state of the email ecosystem. Chief among these challenges is that any new solution designed to fix forwarding must address backwards compatibility, a task complicated by email's forty-year-old ecosystem of varied protocols, implementations and use cases. Specifically, one must carefully consider how any new approach interacts and interoperates with existing systems (e.g., mail providers and filtering service providers) and protocols (e.g., SPF, DKIM and DMARC). While security might be enhanced by embracing a single standard approach to forwarding (e.g., when a message should be forwarded, what forwarding mechanisms should be used, what information should be added to forwarded messages, and how the receiving account should be verified), any such choice will inevitably align well with certain providers and conflict with those whose existing services have made different choices or who operate under different threat models. Finally, it is not enough to merely standardize new protocols; one must then also incentivize and coordinate their universal deployment and operation. Thus, while such an aspirational goal is worthy of attention, it seems likely that email will continue to benefit from incremental and reactive improvements, such as those discussed earlier, for some time yet.

## 8 Conclusion

Internet-based email has been in use since the early 1970s, and the SMTP protocol since the early 1980s. It is arguably the longest-lived text-based communication system in wide use. Unsurprisingly, its design did not anticipate the range of challenges we face today and, because of its central role, we have been forced to upgrade email protocols slowly and with deference to a wide range of legacy systems and expectations. Perhaps nowhere is this more clear than around the issue of authentication. Email protocols have no widely-used mechanism for establishing the authenticity of sender addresses, and thus we have focused on authenticating the domain portion of the email address (largely motivated by spam and phishing).

In this work, using large-scale empirical measurements of 20 prominent email forwarding services, we identify a diverse set of email forwarding mechanisms, assumptions, and features, and demonstrate how they can be combined to perform four types of evasion attacks. While we are the first academic paper to document these attacks, in retrospectively examining Mailop [80], a prominent mailing list for mail operators, we have also found traces [81] of real-world attacks similar to those we report in this paper.

The attacks we document exploit four kinds of problems. One fundamental issue is that email security protocols are distributed, optional, and independently-configured components. This creates a large and complex attack surface with many possible interactions that cannot be easily anticipated or administered by any single party. A second problem is that email forwarding was never standardized, leading to ad hoc implementation decisions that may be vulnerable. A third problem is that the protocol assumptions behind SPF were grounded at a point in time and have not been updated as practices have changed.
Domains now outsource their mail service to large providers that share mail infrastructure across customers, undermining assumptions made in the design of SPF. Lastly, the intrinsic nature of email forwarding is to transparently send an existing message to a new address "on behalf" of its original recipient. This creates complex chain-of-trust issues that are at odds with the implicit assumption that mail is sent directly from sender to receiver. Indeed, it is this complication that has driven the creation of ARC. While there are certain short-term mitigations (e.g., eliminating the use of open forwarding) that will significantly reduce exposure to the attacks we have described here, ultimately email requires a more solid security footing if it is to effectively resist spoofing attacks going forward.

## Acknowledgments

We thank our anonymous reviewers for their insightful and constructive suggestions and feedback. We thank Nishant Bhaskar, Cindy Moore, and Jennifer Folkestad for their operational support. We thank Stewart Grant for proofreading the paper. We thank Brad Chen for collecting feedback from Google. We thank John Levine and Weihaw Chuang for their comments on the paper. Funding for this work was provided in part by National Science Foundation grants CNS-1705050 and CNS-2152644, the UCSD CSE Postdoctoral Fellows program, the Irwin Mark and Joan Klein Jacobs Chair in Information and Computer Science, and operational support from the UCSD Center for Networked Systems as well as the Twente University Centre for Cybersecurity Research (TUCCR).
2301.00906
Effects of opposite atoms on electronic structure and optical absorption of two-dimensional hexagonal boron nitride
We perform the first-principles many-body GW and Bethe-Salpeter equation (BSE) calculations on the two-dimensional hexagonal boron nitride (2D-hBN) to explore the effects of opposite atoms on the electronic structure and linear one-photon absorption (OPA). Five AA- and AB-stacked bilayer and eight AAB-stacked trilayer structures are considered. The AAB-stacked trilayer hBN (TL-BN) structures are constructed by mixing the AA- and AB-stacked bilayer hBN (BL-BN). We show that the GW approximation gives rise to different types (i.e., indirect or direct) of fundamental band gaps from the independent particle approximation for all structures except those dominated by the B-B opposite. The stacking modes dominated by the B-B opposite have a direct fundamental band gap in both approximations. The OPA spectra are calculated by solving the Bethe-Salpeter equation combined with the GW quasi-particle correction. Strong absorption peaks are found for most structures in the deep-ultraviolet region. The binding energy and Davydov splitting of excitons of TL-BN strongly depend on the opposite atoms and are related to the role of the stacking BL-BN substructure. Finally, taking the six-layer and below AB-stacked structures as examples, we show that the B-B opposite unit is helpful in constructing the turbostratic-phase-like stacking structures with a direct fundamental band gap which are more suitable for optoelectronic applications.
You-Zhao Lan
2023-01-03T00:35:12Z
http://arxiv.org/abs/2301.00906v1
# Effects of opposite atoms on electronic structure and optical absorption of two-dimensional hexagonal boron nitride

## Abstract

We perform the first-principles many-body GW and Bethe-Salpeter equation (BSE) calculations on the two-dimensional hexagonal boron nitride (2D-_h_BN) to explore the effects of opposite atoms on the electronic structure and linear one-photon absorption (OPA). Five AA- and AB-stacked bilayer and eight AAB-stacked trilayer structures are considered. The AAB-stacked trilayer _h_BN (TL-BN) structures are constructed by mixing the AA- and AB-stacked bilayer _h_BN (BL-BN). We show that the GW approximation gives rise to different types (_i.e._, indirect or direct) of fundamental band gaps from the independent particle approximation for all structures except those dominated by the B-B opposite. The stacking modes dominated by the B-B opposite have a direct fundamental band gap in both approximations. The OPA spectra are calculated by solving the Bethe-Salpeter equation combined with the GW quasi-particle correction. Strong absorption peaks are found for most structures in the deep-ultraviolet region. The binding energy and Davydov splitting of excitons of TL-BN strongly depend on the opposite atoms and are related to the role of the stacking BL-BN substructure. Finally, taking the six-layer and below AB-stacked structures as examples, we show that the B-B opposite unit is helpful in constructing the turbostratic-phase-like stacking structures with a direct fundamental band gap, which are more suitable for optoelectronic applications.

## 1 Introduction

The electronic structure and optical properties of the two-dimensional hexagonal boron nitride (2D-_h_BN) have attracted much attention [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13] in recent years. Experimental and theoretical studies have consistently shown that 2D-_h_BN has a wide band gap, but there are contradictions on the size and type of the fundamental band gap (FBG) (_i.e._, indirect or direct), even for the monolayer BN (ML-BN) that has been widely studied [14; 15; 16; 7; 1]. For ML-BN, density functional theory (DFT) calculations predict FBGs in the range of 4.2–4.7 eV between the valence band maximum (VBM) at the K point and the conduction band minimum (CBM) at various points (K, M, or \(\Gamma\) point) [7, 14, 15]. The GW approximation (GWA) calculations dramatically increase the energy gap by \(\sim\) 2.5 eV and also lead to a change of the FBG from direct to indirect [7, 15]. A recent experimental study confirmed a direct FBG of \(\sim\) 6.1 eV for ML-BN by using reflectance and photoluminescence experiments in the deep-ultraviolet region [2]. Similarly, for the bulk \(h\)BN, the energy band structure strongly depends on the stacking mode [12, 17, 18]. The FBG ranges from 3.1 to 4.5 eV based on the DFT/PBE and DFT/LDA calculations [17, 18]. Direct and indirect FBGs are also theoretically found in different stacking modes, similar to the experimental studies [19, 20]. Few-layer \(h\)BN (FL-BN) structures, which have more adjustable parameters, such as stacking mode and layer number, exhibit more tunable electronic structures than ML-BN and bulk \(h\)BN [4, 7, 13, 21, 22, 23]. For example, for the AA\({}^{\prime}\)- and AB-stacked structures, the FBG changes from direct to indirect as the structure changes from ML-BN to FL-BN [13]. The AA\({}^{\prime}\)-stacked bilayer has a distinctly different excitonic response from the AB-stacked one [4].
As the number of layers increases, the emission peaks exhibit a monotonic blue-shift [21]. The measurements of optical second harmonic generation show strong enhancement in the AB-stacked structure relative to the monolayer and the AA\({}^{\prime}\)-stacked bilayer, though similar linear absorption spectra were measured for these three structures [22]. Theoretical calculations [7] on the AA\({}^{\prime}\)-stacked FL-BNs up to five layers show that the excitonic spectra are resolved into surface and inner excitons and exhibit interesting Davydov splitting. Under biaxial strain, the size and type of the FBG of BL-BN are tunable [24], which suggests various possible applications in electronic and optoelectronic devices. Note that most studies focus on the singly ordered stacking modes, such as AA\({}^{\prime}\) and AB stackings, while the randomly stacked and disordered phase, namely the turbostratic (\(t\)-BN) phase, was found in experiments sixty years ago [25, 26, 27]. Recently, Mengle and Kioupakis [23] studied five \(t\)-BN structures containing ten randomly chosen layers and found that the \(t\)-BN structures had a quasi-direct FBG which only allows weakly direct optical transitions. The random stacking breaks the symmetry of the originally ordered FL-BN, which further results in changes in electronic structures and optical properties. Obviously, the random stacking combined with a change of the layer number will lead to a large number of FL-BNs. In particular, there can be a great variety of assignments of opposite atoms in these structures, which leads to new, as-yet-unknown structure-property relations. Meanwhile, as mentioned above, FL-BN can have a direct or indirect FBG depending on the stacking mode. Different stacking modes can lead to different opposite atoms, then to different interlayer interactions, and ultimately to different energy bands with a direct or indirect FBG. It is valuable to identify direct-gap FL-BN structures because materials with a direct FBG have higher optical efficiency in optoelectronic applications, such as light-emitting diodes and semiconductor lasers. In this work, to explore the effect of opposite atoms on properties, we select five bilayer BN structures (_i.e._, two AA and three AB stackings) and eight AAB-stacked trilayer BN (TL-BN) structures formed by mixing AA and AB stacking modes, and calculate their electronic structures and linear optical properties. All possible atomic opposites are considered in these stackings. The electronic structures are calculated within both the independent particle approximation (IPA) and the GWA. The linear one-photon absorption (OPA) spectra are calculated by solving the Bethe-Salpeter equation (BSE) combined with the GWA quasi-particle correction. Based on the results of the BL- and TL-BN structures, we further explore the \(t\)-BN-like randomly AB-stacked six-layer structures to identify the structures with a direct FBG. In Section 2, we describe the computational details including geometry and GW\(+\)BSE calculations. In Section 3, we discuss the calculated electronic structures and OPA and construct the FL-BN structures which have a direct FBG based on the B-B opposite substructure. Conclusions are given in Section 4.

## 2 Computational methods

### 2.1 Geometry

The two-dimensional bilayer (BL-) and trilayer (TL-) hexagonal BN structures with different stacking modes are shown in Fig. 1. We consider AA- and AB-stacked bilayer structures and their mixture to form the trilayer structures.
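Before detailing the stackings, a minimal sketch may help fix the geometry: the Python/ASE snippet below assembles toy AA- and AB-type bilayers. The in-plane lattice constant anticipates our optimized bulk value reported below, while the interlayer spacing, vacuum, and helper functions are illustrative placeholders; the actual structures were built from the bulk and fully relaxed with PWSCF.

```python
import numpy as np
from ase import Atoms

a, d, vac = 2.503, 3.33, 15.0   # Angstrom; d and vac are illustrative guesses
cell = [[a, 0, 0], [-a / 2, a * np.sqrt(3) / 2, 0], [0, 0, 2 * vac + d]]

def layer(order, z, shift=0.0):
    """One hBN layer; `order` picks which species sits at the cell origin,
    and `shift` slides the layer along y in units of a/sqrt(3) (AB registry)."""
    y0 = shift * a / np.sqrt(3)
    return list(order), [(0.0, y0, z), (0.0, y0 + a / np.sqrt(3), z)]

def bilayer(top_order, top_shift=0.0):
    s1, p1 = layer(("B", "N"), vac)                # bottom layer: B at origin
    s2, p2 = layer(top_order, vac + d, top_shift)  # top layer
    return Atoms(symbols=s1 + s2, positions=p1 + p2, cell=cell, pbc=True)

aa_bn = bilayer(("N", "B"))        # AA stacking: every atom faces the other
                                   # species (a B-N opposite)
ab_bn = bilayer(("B", "N"), 1.0)   # AB-type registry: top B over bottom N,
                                   # top N over the hexagon center
```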
The AB-stacked structures mean that two layers overlap with one set of atoms facing each other. In terms of the different sets of opposite atoms, there are three AB stackings, labeled AB-NN, AB-BN, and AB-BB. The AA-stacked structures mean that two layers overlap with all atoms facing each other. We constructed two AA stackings, labeled AA-NN and AA-BN. Note that AA-NN can also be called AA-BB. For the mixture of AA and AB stackings, we constructed eight structures in terms of the different opposite atoms, _i.e._, AAB-BNN, AAB-NBB, AAB-BNB, AAB-BBB, AAB-NNN, AAB-NNB, AAB-BBN, and AAB-NBN. The definition of the labels is given in the figure caption. Hereafter, for simplicity, we refer to the trilayer structures by their opposite atoms only, namely BNN, NBB, BNB, etc. The initial structures were constructed based on the bulk \(h\)BN structure [28]. As a reference, we optimized the bulk \(h\)BN by using the same method and obtained the structure parameters (\(a=b=2.503\) Å, \(c=6.681\) Å), which are in agreement with the experimental parameters [28] (\(a=b=2.498\) Å, \(c=6.636\) Å). All the structures were optimized using DFT within the GGA-PBE approximation combined with the pseudopotential plane wave method, as implemented in the PWSCF code [29]. A k-point mesh of 6\(\times\)6\(\times\)1, a force threshold of 0.01 eV/Å, and a stress threshold of 0.02 GPa were used for the optimizations. The relaxation of the unit cell was included in the optimizations. The optimized lattice parameters are shown in Fig. 1. A vacuum spacing larger than 13 Å was used to ensure negligible interaction between the slabs. The van der Waals interaction was taken into account by using the Tkatchenko-Scheffler (TS) dispersion corrections [30]. The interlayer distances and cell parameters of all optimized structures are given as inset tables in Fig. 1.

Figure 1: _The optimized structures (side and top views) of BL- and TL-BN. The labels of BL-BN are defined by AB-XY and AA-XY, where X and Y are the opposite atoms at positions 1 and 2, respectively. For example, AB-NN and AA-BN are shown explicitly. The labels of TL-BN are similarly defined by AAB-XYZ, where X, Y, and Z are opposite atoms at positions 2, 4, and 6, respectively. For example, AAB-BBB is shown explicitly. The inset tables list the interlayer distances, cell parameters, and atoms at selected positions of all optimized structures._

### 2.2 GW+BSE calculations

Since DFT within the GGA-PBE approximation usually underestimates the energy gap of materials, we performed many-body GWA (one-shot level, or \(G_{0}W_{0}\)) calculations to correct the energy bands. The GWA calculations were based on the plasmon pole approximation as implemented in the Yambo package [31], which reads the band structures and wave functions of the ground state calculated within the IPA as implemented in the PWSCF code [29]. The GGA-PBE combined with the pseudopotential plane wave method was used to calculate the ground state. The convergence tests were performed on the ML-BN, which has been widely studied [3, 32, 33, 34, 11, 35, 36]. Three parameters, namely the k-grid, the response block size in the polarizability matrix, and the number of empty states in the dielectric function, were considered. As shown in table S1, we obtain converged \(G_{0}W_{0}\) energy gaps for both an indirect energy gap of 6.57 eV between the K point and the \(\Gamma\) point and a direct energy gap of 7.24 eV at the K point, in agreement with previous calculations [32, 3, 13].
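The convergence protocol just described can be summarized by a small driver; `run_gw` below is a hypothetical helper standing in for a full PWSCF ground-state run followed by a Yambo \(G_{0}W_{0}\) step, so this is a sketch of the procedure rather than our actual scripts.

```python
def run_gw(kgrid, block_ry, n_empty):
    """Hypothetical helper: runs PWSCF + Yambo G0W0 with the given settings
    and returns the direct gap at K (eV). Wrap your own pipeline here."""
    raise NotImplementedError

def converge(values, run, tol=0.03):
    """Increase one parameter until the gap changes by less than `tol` eV,
    keeping the other parameters fixed."""
    previous = None
    for v in values:
        gap = run(v)
        if previous is not None and abs(gap - previous) < tol:
            return v, gap
        previous = gap
    raise RuntimeError("not converged over the tested range")

# One-parameter-at-a-time scans (illustrative grids):
kgrid, _ = converge([(12, 12, 1), (18, 18, 1), (24, 24, 1), (30, 30, 1)],
                    lambda k: run_gw(k, block_ry=10, n_empty=200))
n_empty, _ = converge([100, 200, 300, 400],
                      lambda n: run_gw(kgrid, block_ry=10, n_empty=n))
```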
The converged values of these three parameters are 30\(\times\)30\(\times\)1, 10 Ry, and 200 empty states, respectively. Finally, for BL-BN and TL-BN, we used 30\(\times\)30\(\times\)1, 10 Ry, and 300 empty states in the \(G_{0}W_{0}\) calculations, which produced the \(G_{0}W_{0}\) gaps within an accuracy of \(\sim\)0.03 eV. We calculated the optical spectra based on the solution of the BSE [35]:

\[\left(E_{ck}-E_{vk}\right)A_{vck}^{S}+\sum_{k^{\prime}v^{\prime}c^{\prime}}\left\langle vck\left|K_{eh}\right|v^{\prime}c^{\prime}k^{\prime}\right\rangle A_{v^{\prime}c^{\prime}k^{\prime}}^{S}=\Omega^{S}A_{vck}^{S}\qquad(1)\]

The excited state \(S\) is given by the linear combination of independent-particle excitations \(|vck\rangle\) (_i.e._, valence band \(|vk\rangle\) to conduction band \(|ck\rangle\)) as

\[\left|S\right\rangle=\sum_{c,v,k}A_{c,v,k}^{S}\left|vck\right\rangle\qquad(2)\]

The interaction kernel \(K_{\rm eh}\) includes the screened Coulomb interaction between electrons and holes, and the exchange interaction, which includes the so-called local-field effect. When \(K_{\rm eh}\) is ignored, Eq. (1) reduces to independent-particle excitations. We used the Coulomb cutoff technique [36, 37, 38], and the corresponding length cutoff was set to a slightly smaller value than the \(c\) lattice parameter (Fig. 1) of the supercell. In this case, we prefer to use the imaginary part of the two-dimensional polarizability to understand the optical absorption of the materials [36]. The two-dimensional polarizability is defined by [37, 39]:

\[\chi^{2D}=L\frac{\varepsilon-1}{4\pi}\qquad(3)\]

where \(L\) is the effective thickness, which is assumed to be the \(c\) lattice parameter (Fig. 1), and \(\varepsilon\) is the dielectric constant. The imaginary part (\(\varepsilon_{2}\)) of \(\varepsilon\) can be understood by [36]:

\[\varepsilon_{2}\left(\omega\right)=\frac{8\pi^{2}}{q^{2}}\lim_{q\to 0}\sum_{S}\left|\sum_{c,v,k}A_{vck}^{S}\left\langle v\left(k-q\right)\left|e^{-i\mathbf{q}\cdot\mathbf{r}}\right|ck\right\rangle\right|^{2}\delta\left(\Omega^{S}-\omega-\eta\right)\qquad(4)\]

where \(\eta\) is the damping factor, set to 0.1 eV. For the BSE calculation, we also carried out convergence tests on the k-grid and the IPA bands used to construct the electron-hole basis (\(eh\)-basis) of the BSE kernel (\(K_{\rm eh}\)). Other parameters (_i.e._, the response block size in the polarizability matrix and the number of empty states in the dielectric function) are the same as those used in the \(G_{0}W_{0}\) calculations. Since the valence and conduction band dispersions based on the IPA are somewhat different from those based on the GWA (see Fig. 2 below), we used a quasi-particle correction for the entire energy band, rather than a simple scissor correction. As an example, convergence tests on k-grids and the _eh_-basis were performed on ML-BN, while convergence tests on the _eh_-basis were performed on AA-BN and BNN, and the corresponding results are shown in Fig. S1. A k-grid of 30\(\times\)30\(\times\)1 leads to good convergence for the first and second absorption peaks. For the fixed 30\(\times\)30\(\times\)1 k-grid, the highest four valence bands and the lowest four conduction bands are enough to obtain the converged first two (see 1-8 of ML-BN and 5-12 of AA-BN) or three (see 9-16 of BNN) absorption peaks. This is because these absorption peaks mainly arise from the transitions between the valence and conduction bands near the Fermi level (see below).
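As a post-processing illustration of Eqs. (3) and (4), the sketch below broadens a set of exciton poles into Lorentzians of width \(\eta\) and converts the result to the imaginary part of the 2D polarizability; the exciton energies and weights are placeholder values, not our computed BSE data.

```python
import numpy as np

L = 20.0    # effective thickness = c lattice parameter of the supercell (Angstrom)
eta = 0.1   # damping factor (eV), as in the text

# Placeholder BSE output: exciton energies Omega_S (eV) and oscillator weights.
omega_S = np.array([5.25, 6.10, 6.45])
weight_S = np.array([1.0, 0.6, 0.3])

w = np.linspace(4.0, 8.0, 801)          # photon energy grid (eV)
# Lorentzian broadening stands in for the delta function in Eq. (4)
eps2 = sum(A * (eta / np.pi) / ((w - O) ** 2 + eta ** 2)
           for O, A in zip(omega_S, weight_S))
im_chi2d = L * eps2 / (4 * np.pi)       # imaginary part of Eq. (3)
```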
Finally, we adopt the 30\(\times\)30\(\times\)1 k-grid for all BSE calculations and the _eh_-basis of 5-12 and 9-16 for BL-BN and TL-BN, respectively.

## 3 Results and discussion

### 3.1 Electronic structures

First, for ML-BN (Fig. 2a), the GWA correction does not change the relative energies of the conduction band bottom (CBB) at the M and K points, but changes the CBB at the \(\Gamma\) point relative to those at the M and K points. Second, for BL-BN, we observe that the stackings with only the B-B (AB-BB) and N-N (AB-NN) opposites have a direct and an indirect FBG, respectively, and that those with B-N (AB-BN and AA-BN) or with both B-B and N-N (AA-BB) have an indirect FBG. The AB-BB (Fig. 2e), with only the B-B opposite, has direct FBGs of 4.14 and 6.32 eV within the IPA and GWA, respectively. In the band structure of AB-BB, the M-K path shows a strong dispersion. The AB-NN, with only the N-N opposite (Fig. 2f), has an indirect FBG within both the IPA and GWA. The AA-BB, with both the B-B and N-N opposites (Fig. 2c), has a direct FBG within the IPA but an indirect one within the GWA. The AB-BN, with only the B-N opposite (Fig. 2d), has an indirect FBG within both the IPA and GWA. Similar to the ML-BN (Fig. 2a), the AA-BB and AB-BN also have a relatively flat M-K path (e.g., see gaps of 4.54 and 4.59 eV in Fig. 2d) and close CBB energies at the K and \(\Gamma\) points.

Figure 2: _Band structures of ML-, BL-, and TL-BN based on the IPA (blue solid line) and GWA (red dash line) calculations. The energy gaps between the conduction band minimum and the valence band maximum are shown by arrows and values. The valence band maximum is set to zero._

For ease of understanding, Fig. 3a shows the scheme of the dependence of the M-K path on the opposite atoms. As shown in Fig. 2, for BL-BN, the highest occupied valence band (HOVB) of all five structures exhibits a similar M-K path with a higher energy at the K point than at the M point, which is illustrated as the bottom line in Fig. 3a. The valence band maximum (VBM) is located at the K point. However, the dispersion of the lowest unoccupied conduction band (LUCB) along the M-K path strongly depends on the opposite atoms. The N-N and B-B opposites lead to dispersions with a higher (red line in Fig. 3a) and a lower (blue line in Fig. 3a) energy at the K point than at the M point, respectively; thus the AB-BB (Fig. 2e) has a direct FBG while the AB-NN (Fig. 2f) has an indirect one. For B-N or B-B \(+\) N-N, the mixture of B-N interactions results in a relatively flat dispersion (yellow and green lines in Fig. 3a). To elucidate the different dispersions of the M-K path, we examine the charge density distributions (Fig. 3b) of the HOVB and LUCB at the M and K points for all five BL-BN structures. As shown in Fig. 3b, the HOVBs of all BL-BNs are derived from the \(p_{\mathrm{z}}\) state of the N atom and hardly from that of the B atom, and thus the HOVBs of all BL-BNs have a similar dispersion (Fig. 2). In contrast, the LUCBs of all BL-BNs are very different and strongly depend on the opposite atoms. The LUCB of AA-BB, with both B-B and N-N opposites, is more like that of AB-BB than that of AB-NN; that is, the LUCBs of AA-BB and AB-BB at the M and K points are derived from the \(p_{\mathrm{z}}\) state of the _directly_ opposite B atoms, while the LUCB of AB-NN at the K point is derived from the \(p_{\mathrm{z}}\) state of the _diagonally_ opposite B atoms. Thus, the M-K path of AA-BB is relatively flat but tends toward that of AB-BB, with a higher energy at the M point than at the K point [see the blue and yellow lines in Fig. 3a and gaps of 4.13 (K-M) and 4.07 (K-K) eV in Fig. 2c]. The LUCB of AA-BN is more like that of AB-NN than that of AB-BB, and thus the M-K path of AA-BN is relatively flat but tends toward that of AB-NN, with a lower energy at the M point than at the K point [see the red and green lines in Fig. 3a and gaps of 4.55 (K-M) and 4.75 (K-K) eV in Fig. 2b].
Finally, the M-K path of AB-BN is relatively flat but tends toward that of AB-NN [see gaps of 4.54 (K-M) and 4.59 (K-K) eV in Fig. 2d], because the lower-layer contribution to the LUCB of AB-BN is almost the same as that of AB-NN.

Figure 3: _(a) Scheme of dependence of M-K path on the opposite atoms. (b) Charge density distributions of HOVB and LUCB at M and K points for BL-BN._

Finally, for TL-BN, the band structures of all eight structures are shown in Figs. 2(g-n). Overall, the band structures of TL-BN have very similar characteristics to those of BL-BN. All the TL-BN structures have similar valence band dispersions, and thus the difference in the FBG depends on the conduction band dispersion. Since the VBM of all TL-BN structures is located at the K point, the FBG is determined by the CBB at the M, K, or \(\Gamma\) point. Meanwhile, the type of FBG of TL-BN also depends dramatically on the opposite atoms. We note the effect of the mixture of AA and AB stackings on the type of FBG: the substacking with the B-B opposite is helpful in forming a direct FBG. For example, NBB, formed by mixing AB-BB and AA-BN, has a direct FBG (Fig. 2g). AB-BB has a direct FBG with the CBM at the K point (Fig. 2e), while AA-BN has an indirect FBG with the CBM at the M point (Fig. 2b), which leads to a direct FBG at the K point for NBB. Again, BNB (Fig. 2n) is formed by mixing AA-BN and AB-BN. Since both substackings have an indirect FBG (see Fig. 2b and Fig. 2d), BNB ultimately has an indirect FBG. The BBB structure, with two B-B opposites, has a direct FBG. Moreover, for structures with the largest number of B-B opposites, the GWA correction does not change the type of FBG (Fig. 2e and Fig. 2h). For other structures, the GWA corrections mostly change the type of FBG or the position of the CBM. In subsection 3.3, based on the B-B opposite substacking, we will further discuss the construction of FL-BN with a direct FBG.

### 3.2 Absorption spectra

Figure 4 shows the OPA spectra along the in-plane direction of ML-BN and the five BL-BN structures. For ML-BN, our calculated spectrum is consistent with previous reports [7] in terms of line shape and peak positions. To understand the spectra, we list in table 1 the transition energies and corresponding optical activities of the first two excitons of ML-BN and BL-BN. We calculated the binding energies (\(E_{\rm b}\)) based on the direct \(G_{0}W_{0}\) gap at the K point because the vertical transition is considered here and the contributions to these excitons mainly come from transitions near the K point [3, 7]. In table 1, we also list the Davydov splitting [40, 41] energy (\(E_{\rm ds}\)) of BL-BN, which is the energy difference between the first and second excitons. For AA-BN, it has been shown [7] that the first and second excitons mainly stem from the first exciton of ML-BN. For the other four bilayer structures, we obtain similar Davydov splitting behaviors. Based on Fig. 4 and table 1, we can first see that the excitons of BL-BN exhibit large binding energies, but significantly lower than that of ML-BN, mainly due to the increased screening in the bilayer structure [7, 42, 43]. The first exciton of ML-BN is located at 5.25 eV and has a binding energy of 2.03 eV, in agreement with previous reports [7, 44]. The binding energies of the first exciton of the five bilayer structures follow the order AB-BB \(<\) AA-BB \(<\) AB-NN \(\approx\) AB-BN \(<\) AA-BN.
In this order, the structures with the B-B opposite (_i.e._, AB-BB and AA-BB) have a relatively small binding energy, while those with the B-N opposite have a relatively large one. Furthermore, as shown in table 1, the binding energy of the first exciton of AA-BN is 1.78 eV, in agreement with Paleari _et al._'s report [7], in which they theoretically investigated the effects of the number of layers on the binding energies of excitonic states based on the same stacking as AA-BN. They showed that the binding energy of the first exciton of the pentalayer structure is reduced to 1.32 eV, which is close to those of the first excitons of AB-BB (1.38 eV) and AA-BB (1.43 eV). This implies that the electronic screening environment of the bilayer structures with the B-B opposite should be comparable to that of pentalayer AA-BN structures. Thus, the B-B opposite may have a stronger electronic screening than the B-N opposite.

Figure 4: _(a and b) Absorption spectra (i.e., imaginary part of polarizability, Im\(\chi\)) along the in-plane direction of ML-BN and BL-BN calculated using the GW+BSE method. The GW direct gaps at K point are indicated by the vertical lines. The transition energy of the first bright exciton is indicated by arrow. (c and d) PDOS per layer of AB-BN and AA-BB._

Second, the absorption spectra are very similar in terms of line shapes and peak positions for BL-BN structures with the same opposite atoms, though they have different electronic energy gaps. For example, the \(G_{0}W_{0}\) gaps of AA-BN and AB-BN (Fig. 4b) are 7.06 and 6.83 eV, respectively. Both of them have strong absorption peaks at \(\sim\)5.30 eV and \(\sim\)6.1 eV. A similar case occurs for AB-NN and AA-BB/NN (Fig. 4a), which share the same N-N opposite atoms. We can also see that the absorption spectra of AB-BB are significantly different from those of the other four bilayer structures in terms of line shapes and peak positions, especially the position of the strongest absorption peak. This absorption peak is located at 4.95 eV, which is distinctly lower than the 5.20 \(\pm\) 0.10 eV of the other four bilayer structures. Thus, AB-BB can be distinguished from the other four bilayer structures by its absorption spectrum. Third, AA-BB has the smallest \(E_{\rm b}\) but the largest \(E_{\rm ds}\) among the five bilayer structures. To understand the large \(E_{\rm ds}\) of AA-BB, we examine the weight (_i.e._, \(|A_{\rm c,v,k}|^{2}\)) of the contributions defined in Eq. (2) for the first and second excitons. Table 1 lists the IPA transitions at the K point with weights larger than 0.02. For example, for ML-BN, the IPA transition from band 4 (HOVB) to band 5 (LUCB) has a major contribution to the first bright exciton, in agreement with a previous report [7]. All the bilayer structures except AA-BB have a small \(E_{\rm ds}\)[12]. For AB-BB, AB-NN, and AA-BN, the small \(E_{\rm ds}\)[12] may be due to the two excitons stemming from the same IPA transitions.
Table 1: _Transition energies (eV) and corresponding optical activities (\(d=\) dark or \(br=\) bright) of the first two excitons of ML-BN and BL-BN. The binding energies (eV) based on the direct \(G_{0}W_{0}\) gap at the K point are given in parentheses. The Davydov splitting \(E_{\rm ds}\)[12] (eV) is the energy difference between the first and second excitons. The bottom rows list the IPA transitions (transition energies in eV in parentheses) that have a major contribution to each exciton._

| | ML-BN (D\(_{3d}\)) | AB-BB (D\(_{3d}\)) | AB-NN (D\(_{3d}\)) | AA-BB (D\(_{3d}\)) | AB-BN (C\(_{3v}\)) | AA-BN (D\(_{3d}\)) |
|---|---|---|---|---|---|---|
| Exciton 1 (\(\times\)2) | 5.25 (2.03, br) | 4.95 (1.38, d) | 5.09 (1.62, d) | 4.78 (1.43, d) | 5.24 (1.59, br) | 5.28 (1.78, d) |
| Exciton 2 (\(\times\)2) | | 4.96 (1.37, br) | 5.15 (1.55, br) | 5.18 (1.03, br) | 5.28 (1.55, br) | 5.31 (1.75, br) |
| \(E_{\rm ds}\)[12] | | 0.01 | 0.06 | 0.40 | 0.04 | 0.03 |
| Transitions, exciton 1 | 4\(\rightarrow\)5 (4.69) | 7\(\rightarrow\)9 (4.14), 8\(\rightarrow\)9 (4.14) | 8\(\rightarrow\)9 (4.47), 8\(\rightarrow\)10 (4.47) | 8\(\rightarrow\)9 (4.07) | 7\(\rightarrow\)9 (4.69) | 7\(\rightarrow\)9 (4.75), 8\(\rightarrow\)9 (4.72) |
| Transitions, exciton 2 | | 7\(\rightarrow\)9 (4.14), 8\(\rightarrow\)9 (4.14) | 8\(\rightarrow\)9 (4.47), 8\(\rightarrow\)10 (4.47) | 7\(\rightarrow\)9 (4.45) | 8\(\rightarrow\)10 (4.73) | 7\(\rightarrow\)9 (4.75), 8\(\rightarrow\)9 (4.72) |

For AB-BN, which also has a small \(E_{\rm ds}\)[12] (0.04 eV), the major IPA transitions are 7\(\rightarrow\)9 and 8\(\rightarrow\)10 for the two excitons, respectively. According to the PDOS (Fig. 4c), we find that these two transitions are intralayer transitions (_i.e._, 7\(\rightarrow\)9 from the B layer, 8\(\rightarrow\)10 from the A layer). Meanwhile, the difference between these two IPA transition energies is 0.04 eV (4.73 \(-\) 4.69), which is equal to the \(E_{\rm ds}\)[12]. We now return to AA-BB, which has the largest \(E_{\rm ds}\)[12]. Its first and second excitons originate mainly from the IPA transitions 8\(\rightarrow\)9 and 7\(\rightarrow\)9, respectively. According to the PDOS (Fig. 4d), each layer makes almost the same contribution to the transition (_i.e._, both layers contribute to bands 7, 8, and 9). Thus, the large \(E_{\rm ds}\)[12] is mainly due to the difference in the IPA transition channels: the energy difference of 0.38 eV between bands 7 and 8 is very close to the \(E_{\rm ds}\)[12] of 0.40 eV.

We now turn to TL-BN. As shown in Fig. 5, all the trilayer structures have a strong absorption peak at about 5 eV and, to some degree, inherit the characteristics of the absorption spectrum of their bilayer substructures. To understand these absorption peaks, we list in table 2 the information for the first three excitons of all the trilayer structures. The first three excitons are bright. Similar to BL-BN, all the first three excitons are doubly degenerate and related to the first exciton of each monolayer. Based on Fig. 5 and table 2, we first observe strong absorption peaks at 4.95 and 5.01 eV for NBB and BBB, respectively, which may be related to the AB-BB substructure with the lowest absorption peak at 4.95 eV (Fig. 4b). The BBN has a strong absorption peak at 5.23 eV because its substructures AA-BB and AB-BN have strong absorption peaks at 5.18 eV (Fig. 4a) and 5.28 eV (Fig. 4b), respectively.
As shown in table 2, the binding energies of the first exciton of the eight trilayer structures follow the order BBB (1.13) \(<\) NNB (1.21) \(<\) NNN (1.24) \(<\) NBB (1.30) \(<\) BBN (1.35) \(<\) BNB (1.52) \(<\) NBN (1.56) \(<\) BNN (1.60). In this order, we again find that the structures with the B-B opposite have a relatively small binding energy, which is most obvious in BBB, formed by the AA-BB and AB-BB substructures. The substructure with the B-N opposite dramatically increases the binding energy owing to the weak electronic screening mentioned above. For example, the binding energy increases by 0.17 eV from BBN to BNB. Between these two trilayer structures, the difference is the AA-stacked substructure, which changes from AA-BB/NN to AA-BN; note that AA-BN has the largest binding energy for the first two excitons (table 1). Second, the magnitude of the two Davydov splitting energies (\(E_{\rm ds}\)[12] and \(E_{\rm ds}\)[23]) of the trilayer structures is closely related to that of the \(E_{\rm ds}\)[12] of the bilayer substructures. As shown in table 1, the \(E_{\rm ds}\)[12] of the five bilayer structures can be ordered as AA-BB (0.40) \(>\) AB-NN (0.06) \(>\) AB-BN (0.04) \(\approx\) AA-BN (0.03) \(>\) AB-BB (0.01).

Table 2: _Transition energies (eV) of the first three excitons (all are bright) of TL-BN. The binding energies (eV) based on the direct gap at the K point are given in parentheses. All the structures have \(C_{3v}\) symmetry. \(E_{\rm ds}\)[12] (eV) is the Davydov splitting between the first and second excitons, and \(E_{\rm ds}\)[23] (eV) is the Davydov splitting between the second and third excitons. The bottom rows list the IPA transitions (transition energies in eV in parentheses) that have a major contribution to each exciton._

| | NNB | NNN | BBN | BBB | BNN | BNB | NBB | NBN |
|---|---|---|---|---|---|---|---|---|
| Exciton 1 (\(\times\)2) | 4.73 (1.21) | 4.69 (1.24) | 4.71 (1.35) | 4.59 (1.13) | 5.01 (1.60) | 5.16 (1.52) | 4.89 (1.30) | 5.22 (1.56) |
| Exciton 2 (\(\times\)2) | 5.18 (0.76) | 5.04 (0.89) | 5.17 (0.89) | 4.84 (0.88) | 5.12 (1.49) | 5.21 (1.47) | 4.99 (1.20) | 5.23 (1.55) |
| Exciton 3 (\(\times\)2) | 5.21 (0.73) | 5.09 (0.84) | 5.23 (0.83) | 5.01 (0.71) | 5.23 (1.38) | 5.25 (1.43) | 5.24 (0.95) | 5.36 (1.42) |
| \(E_{\rm ds}\)[12] | 0.45 | 0.35 | 0.46 | 0.25 | 0.11 | 0.05 | 0.10 | 0.01 |
| \(E_{\rm ds}\)[23] | 0.03 | 0.05 | 0.05 | 0.17 | 0.11 | 0.04 | 0.25 | 0.13 |
| Transitions, exciton 1 | 12\(\rightarrow\)13 (4.05) | 12\(\rightarrow\)13 (3.97) | 12\(\rightarrow\)13 (3.79) | 12\(\rightarrow\)13 (3.79) | 12\(\rightarrow\)13 (4.48) | 11\(\rightarrow\)14 (4.73) | 12\(\rightarrow\)13 (4.14) | 10\(\rightarrow\)13 (4.70), 12\(\rightarrow\)14 (4.75) |
| Transitions, exciton 2 | 10\(\rightarrow\)13 (4.44) | 12\(\rightarrow\)14 (4.40) | 10\(\rightarrow\)13 (4.45) | 11\(\rightarrow\)13 (3.98) | 12\(\rightarrow\)14 (4.48) | 12\(\rightarrow\)15 (4.75) | 10\(\rightarrow\)13 (4.19) | 10\(\rightarrow\)13 (4.70), 12\(\rightarrow\)14 (4.75) |
| Transitions, exciton 3 | 11\(\rightarrow\)13 (4.14) | 11\(\rightarrow\)13 (4.24) | 11\(\rightarrow\)14 (4.70) | 10\(\rightarrow\)13 (4.19) | 11\(\rightarrow\)15 (4.75) | 10\(\rightarrow\)13 (4.74) | 10\(\rightarrow\)14 (4.76) | 11\(\rightarrow\)15 (4.78) |

Figure 5: _Absorption spectra (i.e., imaginary part of polarizability, Im\(\chi\)) along the in-plane direction of TL-BN calculated using the GW\(+\)BSE method. The GW direct gaps at K point are indicated by the vertical lines. The transition energy of the first bright exciton is indicated by arrow._
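As a quick consistency check on table 2, both splittings follow directly from the tabulated exciton energies; the few lines of pandas below (values transcribed from the table) recover them to within the table's 0.01 eV rounding.

```python
import pandas as pd

# First three exciton energies (eV) of TL-BN, transcribed from table 2
E = pd.DataFrame(
    {"NNB": [4.73, 5.18, 5.21], "NNN": [4.69, 5.04, 5.09],
     "BBN": [4.71, 5.17, 5.23], "BBB": [4.59, 4.84, 5.01],
     "BNN": [5.01, 5.12, 5.23], "BNB": [5.16, 5.21, 5.25],
     "NBB": [4.89, 4.99, 5.24], "NBN": [5.22, 5.23, 5.36]},
    index=[1, 2, 3])

eds12 = (E.loc[2] - E.loc[1]).round(2)  # Davydov splitting, excitons 1 and 2
eds23 = (E.loc[3] - E.loc[2]).round(2)  # Davydov splitting, excitons 2 and 3
print(pd.DataFrame({"E_ds[12]": eds12, "E_ds[23]": eds23}).T)
```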
From table 2, we can see that the trilayer structures containing the AA-BB/NN and AB-BN substructures (NNB and BBN) have a relatively large \(E_{\rm ds}\)[12] but a relatively small \(E_{\rm ds}\)[23]. For NNB and BBN, the \(E_{\rm ds}\)[12] is 0.45 and 0.46 eV, respectively, which is even larger than that of AA-BB (0.40 eV in table 1). This is due to the large \(E_{\rm ds}\)[12] (0.40 eV) of AA-BB/NN and the small one (0.04 eV) of AB-BN. A similar case occurs for NNN, which has an \(E_{\rm ds}\)[12] and \(E_{\rm ds}\)[23] of 0.35 and 0.05 eV, respectively. Both BNB and NBN have relatively small \(E_{\rm ds}\)[12] and \(E_{\rm ds}\)[23] because they contain the AA-BN and AB-BN substructures with relatively small \(E_{\rm ds}\)[12] (_i.e._, 0.03 and 0.04, respectively). Interestingly, for BBB, which contains the AA-BB/NN and AB-BB substructures, the AA-BB substructure has the largest \(E_{\rm ds}\)[12] (0.40 eV) among the five bilayer structures while the AB-BB substructure has the smallest \(E_{\rm ds}\)[12] (0.01 eV), which leads to comparable values of \(E_{\rm ds}\)[12] (0.25 eV) and \(E_{\rm ds}\)[23] (0.17 eV). Finally, according to the IP transition contributions to the excitons, we can see that the \(E_{\rm ds}\) values of the excitons of TL-BN also depend strongly on the IP transition energies, similar to those of BL-BN. For example, the large \(E_{\rm ds}\)[12] (0.45 eV) of NNB is related to the large difference between the 10\(\rightarrow\)13 (4.44 eV) and 12\(\rightarrow\)13 (4.05 eV) transition energies. The \(E_{\rm ds}\)[12] and \(E_{\rm ds}\)[23] of BNB have similar values (0.05 and 0.04 eV, respectively) because the IP transitions with major contributions have very similar transition energies (\(\sim\) 4.74 eV).

### 3.3 Direct FBG based on the B-B opposite substructure

As shown above, the B-B opposite makes the multilayer BN structures tend to have a direct FBG (Figs. 2c, 2e, 2g, 2h, and 2i). In particular, for AB-BB (Fig. 2e) and AAB-BBB (Fig. 2h), which have as many B-B opposites as possible, the FBG is direct within both the IPA and GWA. However, the N-N opposite results in an indirect FBG (Figs. 2f, 2k, and 2l). In this section, we explore the effect of the N-N opposite on the FBG of the B-B opposite structures to show that preserving the B-B opposite substructure plays an important role in forming a direct FBG. For this purpose, we construct a series of six-layer structures by inserting the N-N opposite into the six-layer AB-BB structure, keeping at least one B-B opposite in these AB-stacked structures. These structures are labeled by the order of the opposite atoms, similar to those shown in Fig. 1. As an example, the geometry of BNNBBB is given in Fig. 6k. We calculated their energy band structures within the IPA, and the results are shown in Figs. 6(a-j). As shown in Fig. 6a, the six-layer AB-stacked structure with only B-B opposites (BBBBBB) has a direct FBG of 3.64 eV at the K point. By inserting the N-N opposite into BBBBBB, the structure gradually transforms from a direct FBG (Figs. 6a-6c, 6e, and 6f) to an indirect one (Figs. 6g-6j) as the number of N-N opposites increases. The structures with one N-N opposite at different insertion positions have a direct FBG (Figs. 6b, 6c, and 6e) because two or three B-B opposites are retained in the structure. A consecutive B-B opposite unit is essential for the structure to have a direct FBG, as shown in Figs. 6b, 6c, 6e, and 6f, whose corresponding structures contain a BBB unit. On the contrary, consecutive N-N opposites (Figs. 6g and 6i) make the structure exhibit an indirect FBG.
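The empirical rule emerging from Fig. 6 can be condensed into a few lines; this is a heuristic distilled from the trends above, not a substitute for the explicit band-structure calculations, and mixed cases with flat M-K paths still require direct computation.

```python
def fbg_type(opposites: str) -> str:
    """Heuristic from Fig. 6: a consecutive BBB unit favors a direct FBG,
    a consecutive NNN unit an indirect one; anything else is borderline."""
    if "BBB" in opposites:
        return "direct (K point)"
    if "NNN" in opposites:
        return "indirect"
    return "borderline (flat M-K path; compute explicitly)"

for label in ["BBBBBB", "NNBBBB", "BNNBBB", "BBNNBB", "NNBBNN"]:
    print(label, "->", fbg_type(label))
```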
To understand these behaviors, we show in Fig. 7 the PDOS of several selected six-layer structures. The conduction band edges of BNNNBB (Fig. 7d) and NBBBBNN (Fig. 7e) mainly come from the NNN and BBB units, respectively, which leads to an indirect and a direct FBG in the corresponding structures. For NNNBBB (Fig. 7b), the NNN and BBB units contribute competitively to the conduction band edge, and the result is that NNNBBB exhibits a direct FBG determined by the BBB unit. Meanwhile, we can see that the inner layers have more effect on the band edge than the outer layers, because the inner layers interact with neighbors on both sides, which possibly leads to a larger dispersion of the energy band. For example, in both BBNNBB and NNBBNN, the B-N opposites of the inner layers play a major role in determining the type of FBG. As shown in Figs. 2b, 2d, and 3a, the bilayer structures with the B-N opposite have a relatively flat M-K path and a slightly indirect FBG, which leads to a slightly indirect FBG in both BBNNBB (Fig. 6d) and NNBBNN (Fig. 6h). Similarly, for BNNNBB (Fig. 7d) and NBBBBNN (Fig. 7e), the inner NNN and BBB units also determine the type of FBG (the former is indirect and the latter is direct).

Figure 6: _(a–j) Band structures of six-layer BN structures based on the IPA calculations. The labels (e.g., NNBBBB and BNNBBB) are defined by the directly opposite atoms. (k) Geometry of BNNBBB._

Interestingly, our present finding is very applicable to the ten-layer \(t\)-BN reported by Mengle and Kioupakis [23]. Although they only show a representative \(t\)-BN structure (Fig. 3a of Ref. [23]) consisting of ten randomly chosen layers, we can see from this structure that two inner BBB units lead the structure to a direct FBG. As further examples, we show in Fig. S2 the band structures of selected AB-stacked four- and five-layer structures. As expected, BBBNN and NBBBN, with a BBB unit, have a direct FBG, while BNNN and BNNNB, with an NNN unit, have an indirect one. The NBBNN, with not only an inner B-B opposite but also B-N and N-N opposites, exhibits a relatively flat M-K path and has a slightly direct FBG at the K point; a similar case occurs for BNNBB, which has a slightly indirect FBG. The BBNN, with not only the B-B opposite but also N-N or N-B opposites, has a slightly direct FBG at the K point. Finally, BBNBB has an obvious direct FBG at the K point, determined by its two B-B opposites, even though two N-B opposites are located in the inner layers. Thus, the B-B opposite plays a crucial role in determining the type of FBG.

Figure 7: _PDOS of selected six-layer structures based on the \(p_{z}\) orbitals of B and N atoms. L1 represents the first layer, L2 represents the second layer, and so on. The atomic labels in the legends indicate the directly opposite atoms in the six-layer structure._

## 4 Conclusions

We have performed first-principles many-body GW and BSE calculations on the BL-BN and AAB-stacked TL-BN structures. The size and type of the FBG strongly depend on the opposite atoms. Structures dominated by the B-B opposite are expected to have a direct FBG. The B-B opposite gives the stacking a relatively small exciton binding energy, and thus the corresponding structures have stronger electronic screening than those with the B-N and N-N opposites. The Davydov splitting energies of the excitons of TL-BN are closely related to those of the BL-BN substructures, which implies that the Davydov splittings of FL-BN could be understood on the basis of those of their substructures.
All the structures have a similar dispersion of the valence band edge, and thus the intrinsic FBG is mainly determined by the conduction band edge, whose dispersion depends on the type of opposite atoms in the FL-BN. For \(t\)-BN or FL-BN, to obtain a structure with a direct FBG, one should include as many B-B opposites as possible and preferably locate them in the inner layers, because the B-B opposite can make the structure have a direct FBG at the K point. Our findings not only reveal a new structure-property relationship but also provide a useful reference for experimentally designing FL-BNs with a direct FBG, which are more suitable for optoelectronic applications in the deep-ultraviolet region.

## Acknowledgements

We appreciate the financial support from the Natural Science Foundation of China (Project 21303164).
2310.16825
CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images
We assemble a dataset of Creative-Commons-licensed (CC) images, which we use to train a set of open diffusion models that are qualitatively competitive with Stable Diffusion 2 (SD2). This task presents two challenges: (1) high-resolution CC images lack the captions necessary to train text-to-image generative models; (2) CC images are relatively scarce. In turn, to address these challenges, we use an intuitive transfer learning technique to produce a set of high-quality synthetic captions paired with curated CC images. We then develop a data- and compute-efficient training recipe that requires as little as 3% of the LAION-2B data needed to train existing SD2 models, but obtains comparable quality. These results indicate that we have a sufficient number of CC images (~70 million) for training high-quality models. Our training recipe also implements a variety of optimizations that achieve ~3X training speed-ups, enabling rapid model iteration. We leverage this recipe to train several high-quality text-to-image models, which we dub the CommonCanvas family. Our largest model achieves comparable performance to SD2 on a human evaluation, despite being trained on our CC dataset that is significantly smaller than LAION and using synthetic captions for training. We release our models, data, and code at https://github.com/mosaicml/diffusion/blob/main/assets/common-canvas.md
Aaron Gokaslan, A. Feder Cooper, Jasmine Collins, Landan Seguin, Austin Jacobson, Mihir Patel, Jonathan Frankle, Cory Stephenson, Volodymyr Kuleshov
2023-10-25T17:56:07Z
http://arxiv.org/abs/2310.16825v1
# CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images

###### Abstract

We assemble a dataset of Creative-Commons-licensed (CC) images, which we use to train a set of open diffusion models that are qualitatively competitive with Stable Diffusion 2 (SD2). This task presents two challenges: (1) high-resolution CC images lack the captions necessary to train text-to-image generative models; (2) CC images are relatively scarce. In turn, to address these challenges, we use an intuitive transfer learning technique to produce a set of high-quality synthetic captions paired with curated CC images. We then develop a data- and compute-efficient training recipe that requires as little as 3% of the LAION data (i.e., roughly 70 million examples) needed to train existing SD2 models, but obtains the same quality. These results indicate that we have a sufficient number of CC images (also roughly 70 million) for training high-quality models. Our training recipe also implements a variety of optimizations that achieve \(\sim\)3X training speed-ups, and that enable rapid model iteration. We leverage this recipe to train several high-quality text-to-image models, which we dub the _CommonCanvas_ family. Our largest model achieves comparable performance to SD2 on human evaluation, even though we only use a CC dataset that is \(<\)3% the size of LAION and synthetic captions for training. We release our models, data, and code at [https://github.com/mosaicml/diffusion/blob/main/assets/common-canvas.md](https://github.com/mosaicml/diffusion/blob/main/assets/common-canvas.md).

## 1 Introduction

Current methods train high-quality, text-to-image (T2I) models with massive amounts of paired image-caption data. A lack of curated datasets that are large enough for the task has led researchers to turn to web-scraped solutions [29; 30], like LAION-2B [26]. The use of web-scraped data is a very common practice for training generative models; however, US courts have yet to definitively rule if this is permissible under copyright law [1; 13; 15; 20; 21; 60]. In response, recent work has begun to investigate alternative methods of navigating copyright concerns in text generation [39], code completion [16; 51], and image generation [24]. Nevertheless, matching the performance of state-of-the-art models remains a challenge. In this work, we study the following natural question: _Is it possible to efficiently produce a high-quality T2I model by training only on Creative-Commons-licensed data?_

We suggest a possible path forward, training a suite of T2I architectures using _only_ open-licensed, Creative-Commons (CC) images (Figures 1 & 2). This task brings to light two significant challenges. The first problem is data incompleteness: almost all CC images lack the captions necessary to train a high-quality T2I model. The second is data scarcity: there are relatively few high-resolution CC images -- roughly 70 million, compared to LAION-2B's roughly 2 billion [26]. We address the data incompleteness problem by using a pre-trained BLIP-2 model [34], which we use to produce high-quality, synthetic captions for a set of curated, open-licensed CC images. This is an intuitive transfer-learning solution: leveraging powerful pre-trained generative models to produce synthetic labels for an unlabeled dataset, which we can then use to train a different multimodal generative model. We note that this is an increasingly common pattern in the literature, which we shorthand with the name _telephoning_.
To deal with data scarcity, we propose a data- and compute-efficient training recipe that obtains the same quality as SD2, but (perhaps surprisingly) requires as little as 3% of the LAION-2B data (i.e., roughly 70 million examples) originally used to train SD2. We call this model SD2-base. These results indicate that we have a sufficient number of CC images (also roughly 70 million) for training high-quality models. Our training recipe also implements a variety of optimizations that achieve \(\sim\)3X training speed-ups, and that allow for rapid model iteration. The above methods enable us to create _CommonCanvas_, a suite of latent diffusion model (LDM) architectures trained on our curated dataset of CC images and synthetic captions, which we denote _CommonCatalog_. For CommonCanvas-LNC, we swap SD2's U-Net for SDXL's larger one to demonstrate that, even with less data, larger models do not overfit to this smaller dataset. Our largest model achieves performance comparable to SD2-base on human evaluation of Parti Prompts [66], even though our CommonCatalog training dataset is \(<3\%\) the size of LAION and has synthetically generated captions. Figure 1 shows select samples from our CommonCanvas models compared to corresponding samples from SD2-base. Although this is a larger and likely more capable model architecture than SD2, we find it surprising and important that it is possible to train an SD2-quality model at all based on such a limited dataset that was cobbled together in this fashion. This reveals a promising path forward for future research on highly-capable, open T2I models. In summary, we:

* Synthesize a set of high-quality captions for uncaptioned CC images, which we can then use together for training. We note that this type of transfer-learning technique is increasingly common, and we give it the shorthand name _telephoning_ (Section 3).
* Curate _CommonCatalog_, a dataset of roughly 70 million open-licensed CC images, for which we use telephoning to generate accompanying high-quality synthetic captions (Section 4).
* Train and evaluate _CommonCanvas_, a suite of LDM architectures trained on CommonCatalog. We demonstrate that these models produce competitive qualitative and quantitative results compared to the SD2-base baseline (Section 6). To make this analysis tractable, we implement a variety of training optimizations, which achieve \(\sim\)3X speed-ups in training SD2-base (Section 5).
* Release our CommonCatalog dataset of CC images and synthetic captions along with our trained CommonCanvas model at [https://github.com/mosaicml/diffusion/blob/main/assets/common-canvas.md](https://github.com/mosaicml/diffusion/blob/main/assets/common-canvas.md).

Figure 1: Selection of text prompts. Using entirely Creative-Commons images and our synthetic captioning approach, we achieve comparable qualitative performance to Stable Diffusion 2 (SD2-base), as seen in CommonCanvas generations, while only requiring a small fraction (\(<3\%\)) of the amount of training data. We include results for two CommonCanvas architectures, small (S) and large (L) (Section 6), and two CC-image datasets, commercial (C) and non-commercial (NC) (Section 4). We label our results accordingly as CommonCanvas-\(<\)architecture\(>\)-\(<\)dataset\(>\).

## 2 Preliminaries and Motivation

In this section, we present background on training the T2I Stable Diffusion model, originally trained on the web-scraped LAION-2B dataset. We then discuss copyright and reproducibility with respect to LAION datasets.
This discussion motivates the creation of an alternative dataset composed of open-licensed, CC images with synthetic captions, which we introduce in Section 4.

### 2.1 Text-to-image generative models

Text-to-image (T2I) generative models refer to large neural networks trained on paired image-caption data examples. One such family of T2I models is Stable Diffusion (SD) [47]. SD is a latent diffusion model (LDM) that converts images to latent representations and back again using Variational Autoencoders (VAEs) [23]; it uses an iterative sampling procedure [57] and trains an underlying UNet [48]. The architecture also includes a text encoder, such as the Contrastive Language-Image Pre-training (CLIP) model [43] -- either the original CLIP from OpenAI [45] or its open-source counterpart, OpenCLIP [10; 18]. Stable Diffusion 2 (SD2)'s UNet has approximately 865 million trainable parameters; Stable Diffusion XL (SDXL) is larger, with 2.6 billion parameters, and has other advancements involving aspect ratio bucketing, micro-conditioning, and multiple text encoders and tokenizers. In terms of training data, the SD-family of models and OpenCLIP are both trained on subsets of the LAION-5B dataset [3; 53]. The exact training dataset for CLIP is unknown, but it is likely web-scraped data [45].

### 2.2 Copyright and reproducibility in relation to LAION datasets

LAION-5B is a dataset derived from a snapshot of the Common Crawl, a massive corpus of data scraped from the web. From this snapshot, the LAION organization curated pairs of image URLs and their corresponding alt-text captions for the intended use of training T2I and image-to-text (I2T) generative models [3; 53]. In practice, T2I models are typically trained on filtered subsets of the full LAION-5B dataset (e.g., LAION-2B [26]). Training T2I models on this dataset requires visiting the URLs and downloading the associated images. There are two elements of LAION datasets that are relevant to our work:

**Copyright.** The images associated with LAION datasets have unclear _provenance_: it is often not known what the original image sources are [29; 30]. Courts have not yet decided if training on these datasets is "fair use" -- an important exception in copyright [29; 33; 50; 56]. In the interim, there are several copyright lawsuits for the alleged use of LAION-5B subsets to train generative models [1; 15; 20; 61].

Figure 2: When given prompts for concepts related to Disney movies (**a**, **d**), SD2-base generates a recognizable image of Elsa from _Frozen_ (**b**) and a poster-like image with a misshapen Disney logo and characters resembling those from _The Lion King_ (**e**), and CommonCanvas (-SC) does not (**c**, **f**).

**Reproducibility.** Since the datasets only contain the image URLs, and not the images themselves, they are plagued with _link rot_ [27].1 When accessing LAION-5B, there is no guarantee the images still exist at their URLs, making it impossible to fully reproduce the dataset and opening up the possibility of data poisoning attacks [8].

Footnote 1: This also applies to other scraped datasets, such as DataComp [14] and OBELICS [28].

A natural alternative is to not use LAION datasets for training. One could instead independently curate a dataset of CC-licensed images with known provenance that expressly allow for copying, adaptation, and commercial use. As constituent images can be stored and distributed, this would also solve the link rot problem, thereby enabling greater reproducibility.
We defer our discussion of sourcing CC-licensed images to Section 4, where we detail CommonCatalog: our new, open dataset. While CC images are an attractive alternative to LAION-5B, we note that CC images rarely contain the captions necessary to train T2I models. Therefore, we first need a method for captioning CC images, which we describe in the next section.

## 3 Telephoning: A Transfer Learning-based Image-captioning Method

Our solution for handling the lack of captions in CC images is an intuitive type of transfer learning for producing high-quality synthetic labels. We describe this method, and then note that there are various similar methods in prior generative modeling literature. Altogether, these methods indicate that this type of transfer learning to produce synthetic labels (to later serve as inputs to training other generative models) has become an increasingly common pattern. We therefore give this method a name: _telephoning_.

### 3.1 Describing telephoning

Telephoning (Figure 3) takes inputs from a high-dimensional modality (e.g., images), effectively performs a "lossy compression" to a low-dimensional modality (e.g., short-text captions), and then decompresses back to the high-dimensional modality. Because the intermediate compression step is "lossy", the ultimate output often does not remotely resemble the original input, just like a game of telephone [38]. We derive the term telephoning from the above intuition, and employ it as useful shorthand to denote instances of transfer learning that solve data-scarcity problems in multimodal generative modeling.

Figure 3: (**a**) LAION's massive dataset of image-caption pairs is used to train BLIP-2, an image-to-text model. (**b**) We leverage BLIP-2 to produce synthetic captions for our caption-less CC images, and use the resulting synthetic image-caption pairs (the _CommonCatalog_ dataset) to train our open diffusion model, _CommonCanvas_. (**c**) Although BLIP-2 was trained on LAION (e.g., including pictures of characters Snoopy), the captions it produces behave like a "lossy compression" (e.g., a black and white cartoon dog with black ears, which has no mention of Snoopy). When we supply such "lossy" captions to a T2I model, like a game of telephone, it produces outputs that no longer resemble the original images (e.g., we show how CommonCanvas produces an image that matches the caption, but does not look like Snoopy).

In this work, CC images are the high-dimensional inputs, and we use a pre-trained BLIP-2 model [34] for "lossy compression" to short-text captions (Figure 3a). Together, these CC-image-caption pairs comprise the CommonCatalog dataset, which we use to train our CommonCanvas T2I models (Figure 3b). Even though BLIP-2 was pre-trained on LAION-400M [52], CommonCatalog and CommonCanvas never have direct access to LAION-400M or, importantly, anything that is similar to the images that BLIP-2 was trained on. Instead, we only have access to the mapping in the model, which, given an image input, produces lossy output text that inherently does not literally resemble its image counterpart (Figure 3c).2

Footnote 2: We draw on the example of Snoopy from [49]. Figure 3's Snoopy is CC-licensed [54].

We defer to experts about fair use (Section 2.2) -- namely, regarding models like BLIP-2, and LAION-5B's images and alt-text captions. Generally, these experts seem to think that many cases will fall under fair use [29; 32; 50], especially when model outputs do not resemble their inputs, which is the case with BLIP-2.
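To make the "lossy compression" step concrete, here is a minimal captioning sketch in the spirit of our pipeline, using the HuggingFace `transformers` BLIP-2 implementation; the checkpoint name, resizing, and generation settings are plausible stand-ins rather than the exact production configuration (described in Section 4.2).

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16).to("cuda")

def synth_caption(path: str) -> str:
    """Produce a synthetic caption for one (uncaptioned) CC image."""
    image = Image.open(path).convert("RGB")
    image.thumbnail((512, 512))   # cap resolution before captioning (cheaper)
    inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
    out = model.generate(**inputs, max_new_tokens=40)
    return processor.decode(out[0], skip_special_tokens=True).strip()
```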
### 3.2 Related work on telephoning

Our work aligns with the trend of using advanced generative models to address data scarcity. This is evident in various modalities, such as producing audio captions from image-text pairs [64] and text from audio [46]. Similar approaches have also been used to generate instruction-tuning datasets for both text and images [35; 37]. Concurrent work has used visual question answering models such as LLaVA [37] to enhance existing captions, as in DALL-E 3 [4] and Chen et al. [9]. However, ours is one of the first works to train on a dataset without any ground-truth captions, and one of the first to release a synthetic captioning dataset along with a fully trained diffusion model. Furthermore, the caption upsampling approaches described in these works could be used to further improve the captions of CommonCatalog in future work. Captioning models have previously been used to create descriptive captions that guide a diffusion model toward an image visually similar to a specific image. The concurrent work SynthCap [6] generates a synthetic captioning dataset using a diffusion model to generate images from captions, tackling the inverse of our problem statement.

We coin the term telephoning to shorthand processes like these, which include our work and prior work, and which we believe will become more prevalent as generative models progress.

## 4 CommonCatalog: A Dataset of CC Images & Synthetic Captions

In this section, we introduce our open dataset, _CommonCatalog_. First, we describe the collection and curation process for the open-licensed, CC images. This process brings to light two challenges: caption-data incompleteness and image-data scarcity. To address the lack of CC captions, we show concretely how we use telephoning to produce high-quality synthetic captions to accompany our set of curated images. We investigate the topic of data scarcity in the next section, where we also discuss the systems-level training optimizations that enable efficient SD-model iteration.

### 4.1 Sourcing provenanced, licensed images for CommonCatalog

We focus on locating high-resolution Creative-Commons images that have open licenses. We began with the YFCC100M dataset, which consists of 100 million CC-licensed images and multimedia files, as well as Flickr IDs linking to the original data [59]. The images in the dataset associated with the original paper exhibit two issues that make them ill-suited for direct use in training Stable Diffusion: they are low-resolution, and many of them have licenses that do not expressly allow for the distribution of derivative works, which is an area of unsettled copyright law in the context of model training. We therefore re-scraped these images from Flickr, based on the IDs provided in the YFCC100M metadata. Our scraped images are very high resolution (exceeding 4K), which makes them more suitable for T2I training. We exclude images with non-derivative (ND) licenses. The remaining images can be further divided into those that can be used for commercial (C) purposes and those that cannot (non-commercial/NC). As shown in Figure 4, we accordingly construct two datasets, CommonCatalog-C and CommonCatalog-NC. We defer additional details about licenses to Appendix B.1.1, but emphasize that all of the images included have open licenses: individuals are free to use, adapt, and remix the images, so long as they attribute them.
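A sketch of the kind of license partitioning described above; the license identifiers and record fields below are illustrative stand-ins for the Flickr metadata actually used.

```python
# Hypothetical license buckets; CC0/PDM shown for completeness.
COMMERCIAL = {"CC-BY", "CC-BY-SA", "CC0", "PDM"}
NON_COMMERCIAL_EXTRA = {"CC-BY-NC", "CC-BY-NC-SA"}

def partition(records):
    """Split image records into CommonCatalog-C and CommonCatalog-NC."""
    catalog_c, catalog_nc = [], []
    for rec in records:
        lic = rec["license"]          # illustrative field name
        if "ND" in lic:               # non-derivative licenses are excluded
            continue
        if lic in COMMERCIAL:
            catalog_c.append(rec)
        if lic in COMMERCIAL | NON_COMMERCIAL_EXTRA:
            catalog_nc.append(rec)    # -NC is a superset of -C
    return catalog_c, catalog_nc
```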
In total, CommonCatalog contains roughly 70 million NC CC-images, of which a subset of approximately 25 million images can also be used commercially. Directly sourcing CommonCatalog avoids some concerns (Section 2.2); however, it also comes with its own challenges. For one, CC images rarely have the alt-text captions necessary to train a T2I model like Stable Diffusion (Figure 4); those that do have associated text often just include the image title or a URL. For another, we could _only_ find roughly 70 million usable CC images, which pales in comparison to the billions of images in LAION used to train SD2 (Section 5). We take each of these challenges in turn. First, in the next subsection, we show how we instantiate telephoning (Section 3) to produce high-quality, synthetic captions for CC images. ### Synthesizing captions with telephoning We compared several captioning models and, based on qualitative analysis and its state-of-the-art performance on MS COCO, chose to use the pre-trained BLIP-2 OPT2.5B model for synthesizing CommonCatalog's captions [34]. BLIP-2 consists of three components: a pre-trained, frozen (i.e., fixed) visual encoder, a learned transformer network that converts the visual embeddings into a text prompt, and a frozen large language model (LLM) that takes in the prompt. The only trainable variables in the transformers are between the frozen visual encoder and frozen LLM layers. Given a LAION-2B image as input, we found that the resulting BLIP-2 caption is often qualitatively more descriptive than the corresponding LAION-2B ground-truth alt-text caption. LAION-2B captions often contain product names, irrelevant details, or poor grammar and syntax (Figure 5). This finding is corroborated by Nguyen et al. [42], which shows quantitatively (in terms of CLIP Score) that BLIP-2 captions are higher quality than ground-truth captions, at the cost of caption diversity. Based on these preliminary results, we captioned all of the YFCC100M Creative-Commons images, which required about 1,120 GPU A100 hours. To do so, we center-cropped and resized all of the images to a maximum size of 512x512 pixels. We perform these transformations because captioning images at native resolution would be very expensive. At training time of the diffusion model, all images remain in their native resolution. We release our commercial (CommonCatalog-C) and non-commercial (CommonCatalog-NC) CC-image and synthetic-caption datasets on HuggingFace at [REDACTED] with associated data cards. As an evaluation set, we also release the BLIP-2 captions that we produced for the non-derivative (ND) CC images that we did not use for training. ## 5 Training Efficiency Optimizations and Data Scarcity Analysis High-resolution CC images are indeed much less abundant than arbitrary web-scraped ones, but the amount of data necessary to train high-quality SD2 models has not been well-studied. We set out to quantify this amount by training multiple SD2 models on differently-sized subsets of LAION-2B. However, training a single SD2 model, even with hundreds of GPUs, can take several days. To make our data scarcity analysis more tractable, we first implement several efficiency optimizations. Figure 5: Original vs. BLIP-2-generated captions for an image from LAION-2B. BLIP-2 generates a caption that better aligns with what a human would write. See Figure 14 for more examples. Figure 4: CommonCatalog-C contains images licensed only for commercial use; -NC contains -C as well as images licensed for non-commercial use. 
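As a concrete illustration of this captioning step, here is a minimal sketch using the HuggingFace `transformers` BLIP-2 interface. The checkpoint name, generation settings, and resizing strategy are assumptions for illustration, not a record of our exact configuration.

```python
# A sketch of synthetic captioning with a pre-trained BLIP-2 model.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

def caption(image_path: str) -> str:
    image = Image.open(image_path).convert("RGB")
    # Downscale to at most 512x512 before captioning (the paper also
    # center-crops); captioning at native resolution would be expensive.
    image.thumbnail((512, 512))
    inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
    out = model.generate(**inputs, max_new_tokens=40)
    return processor.decode(out[0], skip_special_tokens=True).strip()
```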
### Software and hardware speed-ups Stability AI reports an estimated 200,000 A100 hours to train SD2 [58]. Depending on the available hardware, a single SD2 run could take anywhere from a few weeks to over a month to train. We sought out multiple avenues to reduce this training-time constraint. Ultimately, we were able to achieve a speedup of 2.71X relative to the original SD2 implementation. First, we applied Flash Attention [11] with the xFormers library [31]. We also pre-computed VAE and text encoder latents over the entire training dataset, cast all GroupNorm [63] and LayerNorm [2] to float16 precision, and applied fully-sharded data parallelism (FSDP) to our training run. Finally, we opted to only keep an exponential moving average of the weights for the final 3.5% of training. More detail on each of these improvements can be found in Appendix D. When applying all of the aforementioned strategies together, we are able to achieve a 2.71X speedup in A100 hours over our SD2-baseline implementation. We found that latent pre-computation helped the most at low resolutions, while FSDP also provided significant gains, especially at scale. The other optimizations helped reduce total memory usage, allowing us to increase the microbatch size for better hardware utilization. Figure 6 summarizes each of the proposed methods and the cumulative speedup that results from its application. Equipped with an optimized training setup, we are able to more easily study the effect of varying training-dataset size. ### Investigating data scarcity: Saturating SD2 evaluations with \(<3\%\) of LAION-2B YFCC100M contains 100 million images, about 10% the size of the 1.1B LAION examples we could access, thus about 5% of the original LAION-2B dataset. One interesting question that remains unanswered is how much data is actually needed to train these diffusion models effectively. We ask whether or not it is necessary to train on 1+ billion images to get results that are as good as the original LAION-trained SD2. Our results show, surprisingly, that this is not the case with a slightly larger model (CommonCanvas-L); this model replaces SD2's U-Net with SDXL's [43] larger one. Further, our larger model achieves comparable results to SD2-base on human evaluation, using 33X less training data. We train on increasingly smaller, random subsets of data from our LAION-1.1B dataset and find that we can achieve a similar result on the commonly reported MS COCO numbers, but with \(<\)3% the amount of SD2's training data (Figure 8). In fact, we run experiments down to 1-million LAION-1.1B images, and find that only 10 million images are required for stable training behavior (Appendix, Figure 15). ### Investigating the performance of CC-trained models These findings suggest that SD2 models may be underparameterized. In fact, when we use CommonCanvas-LNC, we achieve competitive performance with SD2 on user preferences, despite training on significantly less data (Section 7). Further, in spite of the drastic reduction in dataset size, we observe that the larger model (CommonCanvas-LNC) outperforms the smaller one (CommonCanvas-SNC), consistent with the notion that these models are still underparameterized. We hypothesize about why this might be the case and how much data is actually necessary to saturate the model in Appendix A.1. Figure 6: Cumulative effect of various speed-ups in our SD2 training pipeline. Throughputs evaluated on 128 A100s. Figure 7: User preference study using Parti prompts. The CommonCanvas-LNC model matches the performance of SD2 despite being trained with \(<3\%\) the amount of data. 
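To give a flavor of the latent pre-computation optimization from Section 5.1, here is a minimal sketch, assuming the HuggingFace `diffusers`/`transformers` SD2 components. The checkpoint path and batching are illustrative, not our exact training code.

```python
# A sketch of pre-computing VAE and text-encoder latents ahead of training,
# so the frozen encoders never run inside the training loop.
import torch
from diffusers import AutoencoderKL
from transformers import CLIPTextModel, CLIPTokenizer

REPO = "stabilityai/stable-diffusion-2-base"
vae = AutoencoderKL.from_pretrained(REPO, subfolder="vae").to("cuda").eval()
tokenizer = CLIPTokenizer.from_pretrained(REPO, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(REPO, subfolder="text_encoder").to("cuda").eval()

@torch.no_grad()
def precompute(image_batch: torch.Tensor, captions: list[str]):
    # Encode images once; cache and reuse these latents every epoch.
    latents = vae.encode(image_batch.to("cuda")).latent_dist.sample()
    latents = latents * vae.config.scaling_factor
    tokens = tokenizer(captions, padding="max_length", truncation=True,
                       return_tensors="pt")
    text_emb = text_encoder(tokens.input_ids.to("cuda"))[0]
    return latents.cpu(), text_emb.cpu()
```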
## 6 Experiments Equipped with commercial (CommonCatalog-C) and non-commercial (CommonCatalog-NC) datasets, we train two different CommonCanvas models. We additionally train a larger variant of CommonCanvas-NC (CommonCanvas-LNC) that, as we note above (Section 5.2), has a significantly larger U-Net. Figure 1 displays qualitative results from each of these model variants. More details on the CommonCanvas-LNC architecture can be found in Appendix A.2. ### Automated quality metrics for model evaluation We measure performance with three automated image quality metrics on the MS COCO dataset [36]: Frechet Inception Distance (FID) [17], Kernel Inception Distance (KID) [5], and CLIP-FID [25]. Additionally, CLIP Score was evaluated to understand the alignment between captions and their respective images. Our model demonstrated performance comparable to the SD2 baseline on the popular MS COCO benchmark. However, like any model, ours has limitations. It underperformed in several categories, including faces, general photography, and paintings. These categories originated from the Conceptual Captions dataset [55], which relies on web-scraped data. These web-sourced captions, while abundant, may not always align with human-generated language nuances. This discrepancy underscores the importance of incorporating large-scale, human-generated caption data. Although transitioning to synthetic captions introduces certain performance challenges, the drop in performance is not as dramatic as one might assume. Moreover, we speculate that it would shrink if users were to supplement training with their own datasets, like FFHQ [22], when they seek to fine-tune models for specific categories. Figure 8: FID, KID, and CLIP-FID vs. CLIP-Score computed on 30K samples from COCO2014 for different SD2 models trained on smaller subsets of LAION (10M, 90M), using either original captions or synthetic BLIP2 captions. Interestingly, increasing the amount of training data from 10M to 90M samples does not lead to improved quantitative metrics across guidance scales 1 to 8. Lower FID is better; higher CLIP score is better. Figure 9: CLIP-FID for different models. We can see domain shift between MS COCO captions and web-scraped conceptual captions. CLIP-FID likely favors SD2, as CLIP is trained on a similar style of text as LAION. This plot only covers the first stage of training at 256x256 resolution. ### Human evaluation While automated quality metrics are useful, given the level of detail and breadth of the distribution that large T2I models are intended to generate, there is no substitute for evaluation by human raters. Human pairwise preference ratings for the three 512x512-resolution CommonCanvas models compared to SD2-base can be seen in Figure 7. In this experiment, human raters were shown a prompt (selected randomly from the PartiPrompts set [66]) along with two generated images in randomized order, one from the reference model (SD2-base) and the other from a CommonCanvas model. Users were asked which generated image they preferred. We report the fraction of the time users selected the image generated by the CommonCanvas model over the corresponding generation from SD2 as the user preference rate for that model. 
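As an illustration of one way such pairwise preference rates can be tested against the 50% null (no preference), here is a minimal sketch. The counts are hypothetical and the paper does not specify its exact statistical test.

```python
# A two-sided binomial test of a pairwise preference rate against 50%.
from scipy.stats import binomtest

wins, total = 195, 400   # hypothetical counts, NOT the paper's raw data
result = binomtest(wins, total, p=0.5, alternative="two-sided")
print(f"preference rate = {wins / total:.1%}, p-value = {result.pvalue:.3f}")
```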
In agreement with our automated quality metrics, we find that the two small CommonCanvas models are less preferred than SD2-base, with preference rates of 37% for CommonCanvas-SC and 38% for CommonCanvas-SNC, which we find surprisingly high considering the smaller and synthetic nature of the dataset. For the largest model, CommonCanvas-LNC, we do not measure a statistically significant difference in user preference between this model and SD2-base. While CommonCanvas-LNC, with its SDXL U-Net, is a significantly larger model, this finding represents an existence result, showing that we are capable of matching the performance of a model trained on orders of magnitude more data. ### Benefits and challenges of synthetic captions Interestingly, we observe that synthetic captions can enhance the alignment of our model. For instance, the CLIP Score for synthetic captions exceeded that of ground-truth captions as seen in Figure 8. We also observed reduced diversity of n-grams in our synthetic captions, a pattern previously noted by Nguyen et al. [42]. This effect can be visualized through the decrease in unique trigrams. Although we train on Creative-Commons images, it is still possible for an adversarial prompt to produce content that, for example, includes iconic characters. In Figure 10, we subject our model to ambiguous prompts that are suggestive of such characters. Examples include visuals closely resembling Elsa from Frozen, Indiana Jones resembling Harrison Ford, and even a likeness to Harry Potter (Figure 10). Qualitatively, our model deviated more from these characters than SD2. Figure 10: We compare CommonCanvas-SNC (Ours) to SD2. Our model is less likely to generate iconic characters given suggestive prompts (drawn from Lee et al. [29]). ## 7 Discussion and Related Work In this paper, we train the family of CommonCanvas text-to-image latent diffusion models on only Creative-Commons images and synthetic captions. We discuss the data incompleteness and scarcity issues associated with CC images, and how we address each of these issues in turn. For data incompleteness, we propose telephoning, an intuitive type of transfer learning (Section 3), which we instantiate with BLIP-2 to produce synthetic captions for CC images -- together, the CommonCatalog dataset (Section 4). With regard to data scarcity, we hypothesize that much less data than what is contained in LAION-2B is necessary to saturate SD2, and that CommonCatalog should be sufficient for training. To make testing this hypothesis more efficient, we implement a variety of ML-systems optimizations, which achieve a 2.7X speed-up over our SD2 baseline. Ultimately, we find that we can train SD2 on \(<\)3% of LAION-2B (Section 5), which encourages us to train on CommonCatalog's commercial (roughly 25 million) and non-commercial (roughly 70 million) examples. Our CommonCanvas models under-perform in some categories, like faces, but CommonCanvas-LNC demonstrates statistically equivalent performance with SD2 on human evaluation (Section 6). We note that several recent works study copyright. These works tend to concern text-to-text training data [39], be primarily theoretical [51, 62], involve ablation studies [24], or only handle verbatim memorization [7] through the use of generation-time content filters [16], which has been shown to be an incomplete solution [19]. To the best of our knowledge, no prior open work attempts to train T2I models on only open-licensed data. 
Most prior work on text-caption-dataset creation has focused on extracting caption data from Common Crawl [12, 14, 28]. We instead focus on synthesizing captions directly by using a pre-trained BLIP-2 model. Nguyen et al. [42] demonstrate that existing caption datasets can be improved by using BLIP-2 to replace low-quality captions in large datasets like DataComp, but do not focus on creating a new dataset of synthetic captions, as we do here. An issue, which we do not address, is that the YFCC100M data is about a decade old; its CC images are not as current as those in LAION-2B. Given the success of our results, in the future, we plan to augment CommonCatalog with Creative-Commons images from other sources, as well as test larger CommonCanvas model architectures. ## Acknowledgements We would like to thank Christopher De Sa for feedback on earlier drafts of this work. A. Feder Cooper is funded by Professor Christopher De Sa's NSF RI-CAREER award 2046760. This work was also sponsored by Volodymyr Kuleshov's CAREER grant: #2145577. We also would like to thank Apolinario Passos for helping us host the data + models and for insightful discussions along the way. Figure 11: Using CommonCanvas-SNC (Ours) to generate celebrities. Our model is worse at synthesizing individual people than SD2, but is capable of generating some noteworthy public figures.
2308.12809
Matrix elements of $SO(3)$ in $sl_3$ representations as bispectral multivariate functions
We compute the matrix elements of $SO(3)$ in any finite-dimensional irreducible representation of $sl_3$. They are expressed in terms of a double sum of products of Krawtchouk and Racah polynomials which generalize the Griffiths-Krawtchouk polynomials. Their recurrence and difference relations are obtained as byproducts of our construction. The proof is based on the decomposition of a general three-dimensional rotation in terms of elementary planar rotations and a transition between two embeddings of $sl_2$ in $sl_3$. The former is related to monovariate Krawtchouk polynomials and the latter, to monovariate Racah polynomials. The appearance of Racah polynomials in this context is algebraically explained by showing that the two $sl_2$ Casimir elements related to the two embeddings of $sl_2$ in $sl_3$ obey the Racah algebra relations. We also show that these two elements generate the centralizer in $U(sl_3)$ of the Cartan subalgebra and its complete algebraic description is given.
Nicolas Crampe, Julien Gaboriaud, Loïc Poulain d'Andecy, Luc Vinet
2023-08-24T14:10:41Z
http://arxiv.org/abs/2308.12809v2
# Matrix elements of \(SO(3)\) in \(sl_{3}\) representations ###### Abstract We compute the matrix elements of \(SO(3)\) in any finite-dimensional irreducible representation of \(sl_{3}\). They are expressed in terms of a double sum of products of Krawtchouk and Racah polynomials which generalize the Griffiths-Krawtchouk polynomials. Their recurrence and difference relations are obtained as byproducts of our construction. The proof is based on the decomposition of a general three-dimensional rotation in terms of elementary planar rotations and a transition between two embeddings of \(sl_{2}\) in \(sl_{3}\). The former is related to monovariate Krawtchouk polynomials and the latter, to monovariate Racah polynomials. The appearance of Racah polynomials in this context is algebraically explained by showing that the two \(sl_{2}\) Casimir elements related to the two embeddings of \(sl_{2}\) in \(sl_{3}\) obey the Racah algebra relations. We also show that these two elements generate the centralizer in \(U(sl_{3})\) of the Cartan subalgebra and its complete algebraic description is given. ## 1 Introduction Consider the particular irreducible representation of the Lie algebra \(sl_{3}\) given by the \(n^{\rm th}\) symmetric power of its three-dimensional defining representation. The group \(SO(3)\) acts naturally on this representation and its matrix elements are given (up to some normalization) by the bivariate Griffiths-Krawtchouk polynomials. This was proven in [1] by using oscillator algebras to realize the \(n^{\rm th}\) symmetric power representation. The goal of this paper is to consider an arbitrary finite-dimensional irreducible representation of \(sl_{3}\). The group \(SO(3)\) still acts naturally on this representation and our main result is an expression for its matrix elements. They are expressed as a novel family of 3-variable functions, enjoying many nice properties: they are bispectral, orthogonal and are given as sums of products of univariate Krawtchouk and Racah polynomials. The defining relations of the Racah algebra read \[\begin{split}[K_{2},[K_{1},K_{2}]]&={K_{2}}^{2}+\{K_{1},K_{2}\}+dK_{2}+e_{1}\,,\\ [[K_{1},K_{2}],K_{1}]&={K_{1}}^{2}+\{K_{1},K_{2}\}+dK_{1}+e_{2}\,,\end{split} \tag{1}\] where \(K_{1}\), \(K_{2}\) are generators of this algebra and \(d\), \(e_{1}\) and \(e_{2}\) are some parameters (or central elements). The Racah relations first appeared in the study of angular momentum recoupling [32] and were connected to the Racah polynomials in [8]. Hence, whenever the Racah algebra appears, the Racah polynomials will be lurking, and vice-versa. Remarkably, even though the Racah algebra is quadratic, it can be studied in detail. Its representation theory is well-known [33, 34], it has been embedded in various algebraic structures [35, 36, 37, 38, 39, 40, 41, 42] and it encapsulates the properties of the eponymous polynomials. It has been connected to physics and in particular plays an important role as the symmetry algebra of the generic superintegrable system on the 2-sphere [43] (see also [44] for a review). Generalizations of this algebra to describe multivariate Racah polynomials lead to the higher rank Racah algebras, which have also been extensively studied [45, 46, 47, 21, 48, 49]. The Racah polynomials will appear here in a new fashion as matrix elements of some particular rotation in \(sl_{3}\) representations. 
From the above, one would expect the existence of a realization of the Racah algebra in \(U(sl_{3})\), and indeed we exhibit explicitly such a realization. ### Outline We first define the algebra \(U(sl_{3})\) and introduce its Gelfand-Tsetlin basis in Section 2. In Section 3, we pose the main problem precisely. Inner automorphisms of \(U(sl_{3})\) corresponding to \(SO(3)\) rotations are introduced and the strategy to compute the matrix elements by decomposing the general rotation as a product of two types of rotations is explained. In Section 4, we look at a first type of rotation, from which we extract Krawtchouk polynomials. A second type of rotation analyzed in Section 5 leads to Racah polynomials. Explicit expressions for the matrix elements associated to a generic rotation are presented in Section 6. Special cases of interest are displayed in Section 7. In Section 8, the realization of the Racah algebra in \(U(sl_{3})\) is presented, its connection with a centralizer is explained and its Hilbert-Poincare series is computed. Closing remarks and perspectives conclude the paper. ## 2 The algebra \(U(sl_{3})\) and its Gelfand-Tsetlin basis Let us first introduce the algebra \(U(sl_{3})\) and its finite-dimensional irreducible representations in the Gelfand-Tsetlin bases. Definition. The enveloping algebra \(U(sl_{3})\) of the Lie algebra \(sl_{3}\) is generated by the elements \(e_{ij}\) with \(1\leq i,j\leq 3\) satisfying the defining relations \[[e_{ij},e_{k\ell}]=\delta_{jk}e_{i\ell}-\delta_{i\ell}e_{kj}\,,\qquad\sum_{i=1}^{3}e_{ii}=0\,. \tag{2}\] The following Casimir elements generate the center of \(U(sl_{3})\): \[C_{2}=\sum_{i,j=1}^{3}e_{ij}e_{ji}\qquad\text{and}\qquad C_{3}=\sum_{i,j,k=1}^{3}e_{ij}e_{jk}e_{ki}\,. \tag{3}\] We shall take \(J\) to be the Casimir element of the \(sl_{2}\) subalgebra generated by \(e_{11}-e_{22}\), \(e_{12}\) and \(e_{21}\): \[J=\frac{(e_{11}-e_{22})^{2}+2(e_{11}-e_{22})}{4}+e_{21}e_{12}\,. \tag{4}\] Finite-dimensional irreducible representations. Finite-dimensional irreducible representations of \(sl_{3}\) are in one-to-one correspondence with 3-tuples of complex numbers \(\lambda=(\lambda_{31},\lambda_{32},\lambda_{33})\), called the highest weight, such that \(\lambda_{31}-\lambda_{32}\in\mathbb{Z}_{\geq 0}\), \(\lambda_{32}-\lambda_{33}\in\mathbb{Z}_{\geq 0}\) and \(\lambda_{31}+\lambda_{32}+\lambda_{33}=0\). The representation associated to the highest weight \(\lambda\) contains a unique (up to normalization) non-zero vector \(\xi\), called the highest weight vector, such that \[e_{ii}\,\xi=\lambda_{3,i}\,\xi\qquad\text{for }1\leq i\leq 3\,,\qquad\text{and}\qquad e_{ij}\,\xi=0\qquad\text{for }1\leq i<j\leq 3\,. \tag{5}\] This representation is also characterized by a two-row Young tableau with \(\lambda_{31}-\lambda_{33}\) boxes in the first row and \(\lambda_{32}-\lambda_{33}\) boxes in the second row. The vectors of this representation can also be described by Gelfand-Tsetlin (GT) patterns, given by \(\Lambda=(\lambda_{11},\lambda_{21},\lambda_{22};\lambda_{31},\lambda_{32},\lambda_{33})\) with the conditions \[\lambda_{31}-\lambda_{21},\quad\lambda_{21}-\lambda_{32},\quad\lambda_{32}-\lambda_{22},\quad\lambda_{22}-\lambda_{33},\quad\lambda_{21}-\lambda_{11},\quad\lambda_{11}-\lambda_{22}\ \in\mathbb{Z}_{\geq 0}\,. \tag{6}\] In the following, the three last numbers in \(\Lambda\) are fixed and we write only \(\Lambda=(\lambda_{11},\lambda_{21},\lambda_{22})\). 
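As a quick sanity check of the defining relations (2) (our illustration, not part of the paper), one can verify them numerically in the 3x3 defining representation, where \(e_{ij}\) acts as the elementary matrix \(E_{ij}\). The check is done for the \(gl_{3}\) elementary matrices; the identity pieces in the traceless combinations cancel in commutators, so (2) follows for the \(sl_{3}\) generators as well.

```python
# Verify [e_ij, e_kl] = delta_jk e_il - delta_il e_kj on elementary matrices.
import numpy as np
from itertools import product

def E(i, j):
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

def comm(a, b):
    return a @ b - b @ a

for i, j, k, l in product(range(1, 4), repeat=4):
    lhs = comm(E(i, j), E(k, l))
    rhs = (j == k) * E(i, l) - (i == l) * E(k, j)
    assert np.allclose(lhs, rhs)
```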
The set of GT patterns for this highest weight \(\lambda\) is denoted \(\mathcal{P}_{\lambda}\). For a GT pattern \(\Lambda\), one associates the representation basis vectors, called GT vectors (see [50] and references therein)1 Footnote 1: For later convenience, we change the normalization of the vectors in comparison with [50]: \[\xi_{\Lambda}=\left|\begin{array}{ccc}\lambda_{31}&&\lambda_{32}&&\lambda_ {33}\\ &\lambda_{21}&&\lambda_{22}&\\ &&\lambda_{11}&&\\ \end{array}\right.\right\rangle. \tag{7}\] The \(sl_{3}\) generators \(e_{ij}\) act as follows on these vectors: \[e_{11}\xi_{\Lambda} =\lambda_{11}\xi_{\Lambda}\,,\qquad e_{22}\xi_{\Lambda}=(\lambda_ {21}+\lambda_{22}-\lambda_{11})\xi_{\Lambda}\,,\qquad e_{33}\xi_{\Lambda}=-( \lambda_{21}+\lambda_{22})\xi_{\Lambda}\,, \tag{8a}\] \[e_{12}\xi_{\Lambda} =(\lambda_{21}-\lambda_{11})(\lambda_{11}-\lambda_{22}+1)\xi_{ \Lambda+\delta^{11}}\,,\qquad e_{21}\xi_{\Lambda}=\xi_{\Lambda-\delta^{11}}\,,\] (8b) \[e_{23}\xi_{\Lambda} =\frac{\lambda_{31}-\lambda_{21}}{\lambda_{21}-\lambda_{22}+1}\xi _{\Lambda+\delta^{21}}+\frac{\lambda_{31}-\lambda_{22}+1}{\lambda_{21}- \lambda_{22}+1}\xi_{\Lambda+\delta^{22}}\,,\] (8c) \[e_{32}\xi_{\Lambda} =\frac{(\lambda_{21}-\lambda_{32})(\lambda_{21}-\lambda_{33}+1)( \lambda_{21}-\lambda_{11})}{\lambda_{21}-\lambda_{22}+1}\xi_{\Lambda-\delta^{ 21}}\] \[\qquad+\frac{(\lambda_{11}-\lambda_{22}+1)(\lambda_{32}-\lambda_{ 22}+1)(\lambda_{22}-\lambda_{33})}{\lambda_{21}-\lambda_{22}+1}\xi_{\Lambda- \delta^{22}}\,, \tag{8d}\] where \(\xi_{\Lambda\pm\delta^{ij}}\) is either the basis element associated to the GT pattern \(\Lambda\pm\delta^{ij}\) where the value of \(\lambda_{ij}\) has become \(\lambda_{ij}\pm 1\), or \(0\) if the resulting pattern is not a valid GT pattern. One can deduce the actions of the remaining generators: \[e_{13}\xi_{\Lambda} =\frac{(\lambda_{11}-\lambda_{22}+1)(\lambda_{31}-\lambda_{21})}{ \lambda_{21}-\lambda_{22}+1}\xi_{\Lambda+\delta^{21}+\delta^{11}}-\frac{( \lambda_{21}-\lambda_{11})(\lambda_{31}-\lambda_{22}+1)}{\lambda_{21}-\lambda_{ 22}+1}\xi_{\Lambda+\delta^{22}+\delta^{11}}\,, \tag{8e}\] \[e_{31}\xi_{\Lambda} =\frac{(\lambda_{21}-\lambda_{32})(\lambda_{21}-\lambda_{33}+1)}{ \lambda_{21}-\lambda_{22}+1}\xi_{\Lambda-\delta^{21}-\delta^{11}}-\frac{( \lambda_{32}-\lambda_{22}+1)(\lambda_{22}-\lambda_{33})}{\lambda_{21}-\lambda_{ 22}+1}\xi_{\Lambda-\delta^{22}-\delta^{11}}\,. \tag{8f}\] The Casimir elements of \(U(sl_{3})\) are proportional to the identity in this representation and, using (8), one gets \[C_{2}\,\xi_{\Lambda} =2(\lambda_{31}^{2}+\lambda_{31}\lambda_{32}+\lambda_{32}^{2}+2 \lambda_{31}+\lambda_{32})\xi_{\Lambda}\,, \tag{9a}\] \[C_{3}\,\xi_{\Lambda} =3\lambda_{31}(1-\lambda_{32})(2+\lambda_{31}+\lambda_{32})\xi_{ \Lambda}\,. \tag{9b}\] The element \(J\) defined by (4) acts diagonally on this basis \[J\,\xi_{\Lambda}=\frac{1}{4}(\lambda_{21}-\lambda_{22})(\lambda_{21}-\lambda_{ 22}+2)\,\xi_{\Lambda}\,. \tag{10}\] In fact, this eigenvalue of the \(sl_{2}\) Casimir element \(J\), with the ones of the hypercharge \(Y=\frac{1}{3}(e_{11}+e_{22}-2e_{33})\) and the Cartan element \(H=e_{11}-e_{22}\): \[Y\,\xi_{\Lambda} =\left(\lambda_{21}+\lambda_{22}\right)\xi_{\Lambda}\,, \tag{11}\] \[H\,\xi_{\Lambda} =\left(2\lambda_{11}-\lambda_{21}-\lambda_{22}\right)\xi_{\Lambda}\,, \tag{12}\] completely characterize the vectors \(\xi_{\Lambda}\) (if the highest weight \(\lambda\) is given). 
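As an illustrative consistency check (again not from the paper), the eigenvalue formula (9a) can be tested in the 3-dimensional defining representation, whose traceless highest weight is \(\lambda=(2/3,-1/3,-1/3)\):

```python
# Check C2 = sum_ij e_ij e_ji against eq. (9a) in the defining representation.
import numpy as np

def E(i, j):
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

I3 = np.eye(3)
# Traceless sl_3 generators: e_ij = E_ij for i != j, e_ii = E_ii - (1/3) Id.
e = {(i, j): E(i, j) - (i == j) * I3 / 3
     for i in range(1, 4) for j in range(1, 4)}

C2 = sum(e[i, j] @ e[j, i] for i in range(1, 4) for j in range(1, 4))

l31, l32 = 2 / 3, -1 / 3   # highest weight of the defining representation
expected = 2 * (l31**2 + l31 * l32 + l32**2 + 2 * l31 + l32)   # eq. (9a)
assert np.allclose(C2, expected * I3)   # both sides equal (8/3) * Id
```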
Let us introduce the normalized vectors \[\zeta_{\Lambda}=\frac{1}{N_{\Lambda}}\xi_{\Lambda}\,, \tag{13}\] with \[\left(N_{\Lambda}\right)^{2} =\frac{1}{\lambda_{21}-\lambda_{22}+1}(\lambda_{21}-\lambda_{11 })!(\lambda_{31}-\lambda_{21})!(\lambda_{31}-\lambda_{22}+1)!(\lambda_{31}- \lambda_{32})!(\lambda_{32}-\lambda_{33})!\] \[\times\frac{(\lambda_{22}-\lambda_{33})!(\lambda_{31}-\lambda_{ 33}+1)!(\lambda_{21}-\lambda_{32})!(\lambda_{21}-\lambda_{33}+1)!}{(\lambda_{ 11}-\lambda_{22})!(\lambda_{32}-\lambda_{22})!}. \tag{14}\] In this basis, the anti-automorphism \(\star\) of \(sl_{3}\) defined by \(\star:e_{ij}\mapsto e_{ji}\) corresponds to the transposition. ## 3 The general problem: rotations and change of basis coefficients In this Section, we introduce the inner automorphisms of \(sl_{3}\) associated to elements of the group \(SO(3)\). The matrix elements of \(SO(3)\) are interpreted as the overlap coefficients between different bases related by these inner automorphisms. We provide the decomposition of a general rotation of \(SO(3)\) into a product of two types of rotations \(R^{z}\) and \(T\), which will be examined separately in the following two Sections. Inner automorphism.Let \(R\in SO(3)\) be a \(3\times 3\)-matrix whose real entries satisfy \[\sum_{j=1}^{3}R_{ij}R_{kj}=\sum_{j=1}^{3}R_{ji}R_{jk}=\delta_{ik}\,. \tag{15}\] The map \(\Psi_{R}\), labeled by \(R\in SO(3)\) and acting on \(g\in sl_{3}\) by \(\Psi_{R}(g)=RgR^{-1}\), provides an automorphism of \(U(sl_{3})\) given explicitly by: \[\Psi_{R}\ :\ e_{ij}\mapsto\sum_{k,\ell=1}^{3}R_{ki}R_{\ell j}e_{k\ell}\,. \tag{16}\] One gets that \(\Psi_{R}\circ\Psi_{R^{\prime}}=\Psi_{RR^{\prime}}\) for any \(R,R^{\prime}\in SO(3)\). From now on, let us fix a highest weight \(\lambda=(\lambda_{31},\lambda_{32},\lambda_{33})\) and let \(\xi_{\Lambda}\) be vectors of the representation space. For any rotation \(R\), we denote \(\rho\) the operator representing \(R^{-1}\). It satisfies: \[\Psi_{R}(g)\cdot\xi_{\Lambda}=\rho^{-1}\cdot g\cdot\rho\cdot\xi_{\Lambda} \qquad\text{ for any }g\in sl_{3}\,, \tag{17}\] where \[\rho\cdot\xi_{\Lambda}=\sum_{\Lambda^{\prime}\in\mathcal{P}_{\lambda}}\rho_{ \Lambda^{\prime},\Lambda}\,\xi_{\Lambda^{\prime}}\,. \tag{18}\] This means that \(\Psi_{R}(g)\) and \(g\) are two equivalent representations and the change of basis is provided by \(\rho\). The coefficients \(\rho_{\Lambda^{\prime},\Lambda}\) are the matrix elements of \(R^{-1}\) in the Gelfand-Tsetlin basis. In the basis of normalized vectors \(\zeta_{\Lambda}\) (13), the matrices corresponding to \(\rho\) are orthogonal. Therefore, due to the change of normalization, one gets the following orthogonality relation \[\sum_{\Lambda^{\prime}\in\mathcal{P}_{\lambda}}\left(N_{\Lambda^{\prime}} \right)^{2}\,\rho_{\Lambda^{\prime},\Lambda}\,\rho_{\Lambda^{\prime},\Lambda^{ \prime\prime}}=\left(N_{\Lambda}\right)^{2}\delta_{\Lambda,\Lambda^{\prime \prime}}\,. \tag{19}\] Change of basis for any rotation.The goal of this paper is to provide an explicit formula for the entries of \(\rho\) for any \(R\) and any representation. When the representation is a symmetric representation of \(U(sl_{3})\) (_i.e._\(\lambda_{32}=\lambda_{33}\)), the entries of \(\rho\) are given in terms of the Griffiths-Krawtchouk polynomials [1, 15]. As explained in the introduction, we generalize this result to any finite-dimensional irreducible representation of \(sl_{3}\). 
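The explicit formula (16) can be checked directly in the defining representation, where \(\Psi_{R}(g)=RgR^{-1}\); the following sketch is our illustration, not part of the paper:

```python
# Check that Psi_R(e_ij) = R e_ij R^{-1} agrees with eq. (16)
# for a random rotation R in SO(3).
import numpy as np
from scipy.stats import special_ortho_group

def E(i, j):
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

R = special_ortho_group.rvs(3, random_state=0)
for i in range(1, 4):
    for j in range(1, 4):
        lhs = R @ E(i, j) @ R.T   # R^{-1} = R^T for a rotation
        rhs = sum(R[k - 1, i - 1] * R[l - 1, j - 1] * E(k, l)
                  for k in range(1, 4) for l in range(1, 4))
        assert np.allclose(lhs, rhs)
```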
Following [1], to compute the change of basis \(\sigma\) for any \(S\in SO(3)\), we decompose \(S\) into three elementary rotations: two rotations around the \(z\) axis and one around the \(y\) axis. \[S=R_{\chi}^{z}R_{\theta}^{y}R_{\phi}^{z}\,, \tag{20}\] with \[R_{\phi}^{z}=\begin{pmatrix}\cos(\phi)&\sin(\phi)&0\\ -\sin(\phi)&\cos(\phi)&0\\ 0&0&1\end{pmatrix},\qquad\qquad R_{\theta}^{y}=\begin{pmatrix}\cos(\theta)&0&-\sin(\theta)\\ 0&1&0\\ \sin(\theta)&0&\cos(\theta)\end{pmatrix}\,. \tag{21}\] The rotations around the \(z\) axis are easy to deal with since they leave the Casimir element \(J\) of \(sl_{2}\) invariant and the calculation is done in Section 4. For the rotation around the \(y\) axis, the computation is more involved but the idea consists in writing this rotation as follows: \[R_{\theta}^{y}=TR_{\theta}^{z}T^{-1}\,, \tag{22}\] where \[T=\begin{pmatrix}1&0&0\\ 0&0&1\\ 0&-1&0\end{pmatrix}\,. \tag{23}\] The change of basis \(\tau\) associated to the transformation \(T\) is computed in Section 5 in terms of Racah polynomials. After the changes of basis \(\rho_{\phi}\) and \(\tau\) associated to \(R_{\phi}^{z}\) and \(T\) have been computed, one can write the whole change of basis \(\sigma=\rho_{\phi}\,\tau^{-1}\,\rho_{\theta}\,\tau\,\rho_{\chi}\) associated to the rotation \(S=R_{\chi}^{z}TR_{\theta}^{z}T^{-1}R_{\phi}^{z}\,,\) since \[\Psi_{S}(g)\cdot\xi_{\Lambda} =\Psi_{R_{\chi}^{z}TR_{\theta}^{z}T^{-1}R_{\phi}^{z}}(g)\cdot\xi_{\Lambda}=\Psi_{R_{\chi}^{z}}\Bigg{(}\Psi_{T}\Big{(}\Psi_{R_{\theta}^{z}}\big{(}\Psi_{T^{-1}}(\Psi_{R_{\phi}^{z}}(g))\big{)}\Big{)}\Bigg{)}\cdot\xi_{\Lambda}\] \[=\left(\rho_{\phi}\,\tau^{-1}\,\rho_{\theta}\,\tau\,\rho_{\chi}\right)^{-1}\cdot g\cdot\left(\rho_{\phi}\,\tau^{-1}\,\rho_{\theta}\,\tau\,\rho_{\chi}\right)\cdot\xi_{\Lambda}\,. \tag{24}\] We shall show that this full transformation is given as a double sum of a product of three Krawtchouk and two Racah polynomials (see Theorem 6.1). ## 4 The change of basis for \(R_{\phi}^{z}\) and the Krawtchouk polynomials In this section, we focus on the case where \(R=R_{\phi}^{z}\) and we denote the change of basis by \(\rho\). Imposing relation (17) for \(g=Y\), \(H\) and \(J\) constrains the operator \(\rho\). Let us remark that \(\Psi_{R}(Y)=Y\) and \(\Psi_{R}(J)=J\), therefore relation (17) reduces to \[Y\cdot\rho\cdot\xi_{\Lambda}=\rho\cdot Y\cdot\xi_{\Lambda}\qquad\text{and}\qquad J\cdot\rho\cdot\xi_{\Lambda}=\rho\cdot J\cdot\xi_{\Lambda}\,. \tag{25}\] Using the explicit expressions of the action of \(sl_{3}\) on the GT basis, one gets \[0 =\rho_{\Lambda^{\prime}\Lambda}(\lambda_{21}^{\prime}+\lambda_{22}^{\prime}-\lambda_{21}-\lambda_{22})\,, \tag{26}\] \[0 =\rho_{\Lambda^{\prime}\Lambda}((\lambda_{21}^{\prime}-\lambda_{22}^{\prime})(\lambda_{21}^{\prime}-\lambda_{22}^{\prime}+2)-(\lambda_{21}-\lambda_{22})(\lambda_{21}-\lambda_{22}+2))\,,\] which imply that \[\rho_{\Lambda^{\prime}\Lambda}=\delta_{\lambda_{21}^{\prime},\lambda_{21}}\delta_{\lambda_{22}^{\prime},\lambda_{22}}\ r(\lambda_{11},\lambda_{21}^{\prime},\lambda_{11}^{\prime},\lambda_{22}^{\prime})\,. \tag{27}\] One also gets \[\Psi_{R}(H)=\left(\cos^{2}(\phi)-\sin^{2}(\phi)\right)H-2\cos(\phi)\sin(\phi)(e_{12}+e_{21})\,. 
\tag{28}\] Relation (17) for \(g=H\) provides the following constraint (and using relation (27)): \[(2\lambda_{11}^{\prime}-\lambda_{21}^{\prime}-\lambda_{22}^{\prime})\rho_{\Lambda^{\prime},\Lambda}= -2\cos(\phi)\sin(\phi)(\lambda_{21}-\lambda_{11})(\lambda_{11}-\lambda_{22}+1)\rho_{\Lambda^{\prime},\Lambda+\delta^{11}}\] \[+(\cos^{2}(\phi)-\sin^{2}(\phi))(2\lambda_{11}-\lambda_{21}-\lambda_{22})\rho_{\Lambda^{\prime},\Lambda}\] \[-2\cos(\phi)\sin(\phi)\rho_{\Lambda^{\prime},\Lambda-\delta^{11}}\,. \tag{29}\] This relation is similar to the recurrence relation of the Krawtchouk polynomials \(K_{n}(x;p,N)\): \[K_{n}(x;p,N)={}_{2}F_{1}\left(\genfrac{}{}{0.0pt}{}{-n\,,-x}{-N}\Big{|}\frac{1}{p}\right), \tag{30}\] where \({}_{2}F_{1}\) is the usual hypergeometric function [51]. We use the convention that \(K_{n}(x;p,N)=0\) if \(n,x<0\) or \(n,x>N\). By direct check, we see that \[r(\lambda_{11},\lambda_{21}^{\prime},\lambda_{11}^{\prime},\lambda_{22}^{\prime})=\frac{\tan^{\lambda_{11}-\lambda_{22}^{\prime}}(\phi)}{(\lambda_{11}-\lambda_{22}^{\prime})!}\,K_{\lambda_{11}-\lambda_{22}^{\prime}}\Big{(}\lambda_{11}^{\prime}-\lambda_{22}^{\prime};\sin^{2}(\phi),\lambda_{21}^{\prime}-\lambda_{22}^{\prime}\Big{)}r(\lambda_{21}^{\prime},\lambda_{11}^{\prime},\lambda_{22}^{\prime}) \tag{31}\] is the solution of (29). Here the indices \(\lambda_{11}-\lambda_{22}^{\prime}\) and variables \(\lambda_{11}^{\prime}-\lambda_{22}^{\prime}\) both go from \(0\) to \(\lambda_{21}^{\prime}-\lambda_{22}^{\prime}\). It remains to determine the function \(r\). By using the orthogonality relation (19) satisfied by \(\rho\), one can show \[r(x,y,z)^{2}=\left(\frac{(x-z)!}{(x-y)!}\tan^{y-z}(\phi)\cos^{x-z}(\phi)\right)^{2} \tag{32}\] by making use of the orthogonality relation of the Krawtchouk polynomials [51] \[\binom{N}{n}\sum_{X=0}^{N}\binom{N}{X}p^{X+n}(1-p)^{N-X-n}K_{m}(X;p,N)K_{n}(X;p,N)=\delta_{n,m}\,. \tag{33}\] The previous results lead to the following proposition. **Proposition 4.1**.: _The change of basis \(\rho\) associated to \(R_{\phi}^{z}\) is given by_ \[\rho_{\Lambda^{\prime}\Lambda}= \delta_{\lambda_{21}^{\prime},\lambda_{21}}\delta_{\lambda_{22}^{\prime},\lambda_{22}}\ (-1)^{\lambda_{11}^{\prime}-\lambda_{22}}\ \frac{(\lambda_{21}-\lambda_{22})!\tan^{\lambda_{11}^{\prime}+\lambda_{11}-2\lambda_{22}}(\phi)\cos^{\lambda_{21}-\lambda_{22}}(\phi)}{(\lambda_{11}-\lambda_{22})!(\lambda_{21}-\lambda_{11}^{\prime})!}\] \[\times K_{\lambda_{11}-\lambda_{22}}\Big{(}\lambda_{11}^{\prime}-\lambda_{22};\sin^{2}(\phi),\lambda_{21}-\lambda_{22}\Big{)}\,. \tag{34}\] Proof.: By combining (31) and taking the square root of (32), we can obtain the change of basis \(\rho\) up to signs. The signs may be fixed by using equation (17) for the other elements of \(sl_{3}\) but it is simpler to remark that the operator \(\rho\) tends continuously to the identity operator when \(R\) tends to the identity (_i.e._\(\phi\to 0\)): \[\rho_{\Lambda^{\prime},\Lambda}\Big{|}_{\phi=0}=\delta_{\lambda_{21}^{\prime},\lambda_{21}}\delta_{\lambda_{11}^{\prime},\lambda_{11}}\delta_{\lambda_{22}^{\prime},\lambda_{22}}\,. \tag{35}\] Using \[p^{(x+n)/2}\binom{N}{n}K_{n}(x;p,N)\Big{|}_{p=0}=(-1)^{x}\delta_{x,n}\,, \tag{36}\] the result (34) follows. ## 5 The change of basis corresponding to \(T\) and the Racah polynomials Under the \(sl_{3}\) automorphism associated to the rotation \(T\), the \(sl_{2}\) subalgebra on indices \(1,2\) is sent to the subalgebra on indices \(1,3\). 
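Both the decomposition (22) and this statement about the two \(sl_{2}\) subalgebras can be checked numerically in the defining representation, where \(\Psi_{T}(g)=TgT^{-1}\). The following sketch is our illustration, not part of the original derivation:

```python
# Check (i) R^y_theta = T R^z_theta T^{-1}, and (ii) that conjugation by T
# maps the sl_2 Casimir J on indices 1,2 (eq. (4)) to the analogous Casimir
# built from e_11 - e_33, e_13 and e_31.
import numpy as np

def E(i, j):
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

def Rz(a):
    return np.array([[np.cos(a), np.sin(a), 0.0],
                     [-np.sin(a), np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

def Ry(a):
    return np.array([[np.cos(a), 0.0, -np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [np.sin(a), 0.0, np.cos(a)]])

T = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -1.0, 0.0]])
Tinv = T.T   # T is orthogonal

theta = 0.73
assert np.allclose(Ry(theta), T @ Rz(theta) @ Tinv)       # eq. (22)

H12 = E(1, 1) - E(2, 2)
J12 = (H12 @ H12 + 2 * H12) / 4 + E(2, 1) @ E(1, 2)       # Casimir, indices 1,2
H13 = E(1, 1) - E(3, 3)
J13 = (H13 @ H13 + 2 * H13) / 4 + E(3, 1) @ E(1, 3)       # Casimir, indices 1,3
assert np.allclose(T @ J12 @ Tinv, J13)
```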
Indeed, after a short calculation, one can show that the \(sl_{2}\) Casimir element \(J\) is mapped to \[\Psi_{T}(J)=\frac{(e_{11}-e_{33})^{2}+2(e_{11}-e_{33})}{4}+e_{31}e_{13}\,, \tag{37}\] which is precisely the Casimir element of the subalgebra \(sl_{2}\) generated by \(e_{11}-e_{33}\), \(e_{13}\) and \(e_{31}\). Relations (17) for \(R=T\) and \(g=H,Y\) provide the following two constraints on the entries of the change of basis \(\tau\): \[0 =(\lambda^{\prime}_{11}-\lambda_{11})\tau_{\Lambda^{\prime} \Lambda}\,, \tag{38a}\] \[0 =(\lambda_{21}+\lambda_{22}-\lambda^{\prime}_{11}+\lambda^{ \prime}_{21}+\lambda^{\prime}_{22})\tau_{\Lambda^{\prime}\Lambda}\,. \tag{38b}\] These constraints lead to the following form for \(\tau\): \[\tau_{\Lambda^{\prime}\Lambda}=\delta_{\lambda^{\prime}_{11},\lambda_{11}} \delta_{\lambda_{21}+\lambda_{22},\lambda^{\prime}_{11}-\lambda^{\prime}_{21 }-\lambda^{\prime}_{22}}\,t(\lambda_{21},\lambda^{\prime}_{21},\lambda^{ \prime}_{11},\lambda^{\prime}_{22})\,. \tag{39}\] Similarly, equation (17) for \(g=J\) provides the following constraint (where we use relations (38) to replace \(\lambda_{11}\) by \(\lambda^{\prime}_{11}\) and \(\lambda_{21}+\lambda_{22}\) by \(\lambda^{\prime}_{11}-\lambda^{\prime}_{21}-\lambda^{\prime}_{22}\)): \[A_{\Lambda}\,\tau_{\Lambda^{\prime},\Lambda-\delta^{21}+\delta^{22}}+(A_{ \Lambda}+C_{\Lambda})\,\tau_{\Lambda^{\prime}\Lambda}+C_{\Lambda}\tau_{ \Lambda^{\prime},\Lambda+\delta^{21}-\delta^{22}}=(\lambda^{\prime}_{21}- \lambda_{31})(\lambda^{\prime}_{22}-\lambda_{31}-1)\tau_{\Lambda^{\prime} \Lambda}\,, \tag{40}\] with \[A_{\Lambda} =\frac{(\lambda_{21}-\lambda_{32})(\lambda_{31}-\lambda_{22}+1)( \lambda_{21}-\lambda_{11})(\lambda_{21}-\lambda_{33}+1)}{(\lambda_{21}- \lambda_{22}+1)(\lambda_{21}-\lambda_{22})}\,, \tag{41a}\] \[C_{\Lambda} =\frac{(\lambda_{11}-\lambda_{22}+1)(\lambda_{22}-\lambda_{33})( \lambda_{32}-\lambda_{22}+1)(\lambda_{31}-\lambda_{21})}{(\lambda_{21}- \lambda_{22}+1)(\lambda_{21}-\lambda_{22}+2)}\,. \tag{41b}\] Relation (40) is the recurrence relation of the Racah polynomial [51]. \[R_{n}(x(x+\gamma+\delta+1);\alpha,\beta,\gamma,\delta)={}_{4}F_{3}\left( \begin{matrix}-\,n\,,n+\alpha+\beta+1\,,-x\,,x+\gamma+\delta+1\cr\alpha+1\,, \beta+\delta+1\,,\gamma+1\end{matrix}\Big{|}1\right). \tag{42}\] To simplify the notation, one defines \(\widetilde{R}_{n}(x;\alpha,\beta,\gamma,\delta)=R_{n}(x(x+\gamma+\delta+1); \alpha,\beta,\gamma,\delta)\). We use also the convention that \(\widetilde{R}_{n}(x;\alpha,\beta,\gamma,\delta)=0\) if \(n,x<0\) or \(n,x>N\) where \(N=min(-\alpha-1,-\beta-\delta-1,-\gamma-1)\). One gets explicitly: \[t(\lambda_{21},\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{ 22})=t(\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22})\ (-1)^{\lambda_{31}-\lambda_{21}}\ \widetilde{R}_{\lambda_{31}-\lambda_{21}}(\lambda_{31}-\lambda^{\prime}_{21} ;\alpha,\beta,\gamma,\delta)\,,\] (43a) with \[\alpha =\lambda_{32}-\lambda_{31}-1\,, \beta =\lambda^{\prime}_{11}-\lambda^{\prime}_{21}-\lambda^{\prime}_{22} +\lambda_{33}-1\,, \tag{43b}\] \[\gamma =\lambda^{\prime}_{11}-\lambda_{31}-1\,, \delta =\lambda^{\prime}_{21}+\lambda^{\prime}_{22}-\lambda^{\prime}_{11 }-\lambda_{31}-1\,. 
\tag{43c}\] The degree \(\lambda_{31}-\lambda_{21}\) of the Racah polynomials is in the set \(\{0,1,\ldots,\min(-\alpha_{\Lambda}-1,-\gamma_{\Lambda}-1)\}\) and its variable \(\lambda_{31}-\lambda^{\prime}_{21}\) is in the set \(\{0,1,\ldots,\min(-\alpha_{\Lambda^{\prime}}-1,-\gamma_{\Lambda^{\prime}}-1)\}\). After the identification (38), both sets are identical. As in the previous section, the factor \(t(\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22})\) is determined by using the orthogonality relation satisfied by \(\tau\) and the one of the Racah polynomials [51]. After a few manipulations on the parameters, one obtains \[t(x,y,z)^{2}=\left((x-z+1)\,\frac{(\lambda_{31}-\lambda_{32})!(\lambda_{31}-\lambda_{33}+1)!(\lambda_{31}-y)!(\lambda_{32}-z)!(y-z)!}{(x-\lambda_{32})!(x-\lambda_{33}+1)!(x-y)!(\lambda_{31}-z+1)!(\lambda_{31}-x)!(z-\lambda_{33})!}\right)^{2}\,. \tag{44}\] The previous results lead to the following proposition. **Proposition 5.1**.: _The matrix elements of the change of basis \(\tau\) corresponding to the rotation \(T\) are given (up to a global undetermined sign) by_ \[\tau_{\Lambda^{\prime}\Lambda}= \delta_{\lambda^{\prime}_{11},\lambda_{11}}\delta_{\lambda_{21}+\lambda_{22},\lambda^{\prime}_{11}-\lambda^{\prime}_{21}-\lambda^{\prime}_{22}}\,t(\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22})(-1)^{\lambda^{\prime}_{22}-\lambda_{21}} \tag{45}\] \[\times \widetilde{R}_{\lambda_{31}-\lambda_{21}}(\lambda_{31}-\lambda^{\prime}_{21};\lambda_{32}-\lambda_{31}-1,\lambda_{21}+\lambda_{22}+\lambda_{33}-1,\lambda_{11}-\lambda_{31}-1,-\lambda_{21}-\lambda_{22}-\lambda_{31}-1)\,,\] _with_ \[t(x,y,z)=(x-z+1)\,\frac{(\lambda_{31}-\lambda_{32})!(\lambda_{31}-\lambda_{33}+1)!(\lambda_{31}-y)!(\lambda_{32}-z)!(y-z)!}{(x-\lambda_{32})!(x-\lambda_{33}+1)!(x-y)!(\lambda_{31}-z+1)!(\lambda_{31}-x)!(z-\lambda_{33})!}\,. \tag{46}\] Proof.: From equations (39), (43), (44), \(\tau_{\Lambda^{\prime},\Lambda}\) is determined up to a sign \(\sigma_{\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22}}\) which could depend on the values of \(\lambda^{\prime}_{21}\), \(\lambda^{\prime}_{11}\), \(\lambda^{\prime}_{22}\): \[\tau_{\Lambda^{\prime}\Lambda}= \delta_{\lambda^{\prime}_{11},\lambda_{11}}\delta_{\lambda_{21}+\lambda_{22},\lambda^{\prime}_{11}-\lambda^{\prime}_{21}-\lambda^{\prime}_{22}}\,t(\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22})(-1)^{\lambda_{31}-\lambda_{21}}\widetilde{R}_{\lambda_{31}-\lambda_{21}}(\lambda_{31}-\lambda^{\prime}_{21};\alpha,\beta,\gamma,\delta)\,,\] (47a) with \[t(\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22})=T(\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22})\sigma_{\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22}}\,,\] (47b) and \[T(x,y,z)=(x-z+1)\,\frac{(\lambda_{31}-\lambda_{32})!(\lambda_{31}-\lambda_{33}+1)!(\lambda_{31}-y)!(\lambda_{32}-z)!(y-z)!}{(x-\lambda_{32})!(x-\lambda_{33}+1)!(x-y)!(\lambda_{31}-z+1)!(\lambda_{31}-x)!(z-\lambda_{33})!}\,. \tag{47c}\] Once the sign function has been determined, one immediately recovers the full expression of \(\tau_{\Lambda^{\prime},\Lambda}\) from equations (39), (43), (47b). The expression for the sign function \(\sigma_{\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22}}\) is obtained from constraints following from (17) for other elements of the algebra \(U(sl_{3})\). 
First, use \(g=e_{12}\) in relation (17) to get \[0 =(\lambda_{21}-\lambda_{11})(\lambda_{11}-\lambda_{22}+1)\tau_{ \Lambda^{\prime},\Lambda+\delta^{11}}-\frac{(\lambda^{\prime}_{11}-\lambda^{ \prime}_{22})(\lambda^{\prime}_{31}-\lambda^{\prime}_{21}+1)}{(\lambda^{\prime }_{21}-\lambda^{\prime}_{22})}\tau_{\Lambda^{\prime}-\delta^{21}-\delta^{11},\Lambda}\] \[+\frac{(\lambda^{\prime}_{21}-\lambda^{\prime}_{11}+1)(\lambda^{ \prime}_{31}-\lambda^{\prime}_{22}+2)}{(\lambda^{\prime}_{21}-\lambda^{\prime}_ {22}+2)}\tau_{\Lambda^{\prime}-\delta^{22}-\delta^{11},\Lambda}\,.\] Substituting (47), and using the following relation between Racah polynomials [52, equation (4.13)] \[(n+\gamma)(n-\gamma+\alpha+\beta+1) \widetilde{R}_{n}(x;\alpha,\beta,\gamma,\delta) \tag{48}\] \[=\frac{\gamma}{2x+\gamma+\delta+1}\Big{[}(x+\alpha+1)(x+\beta+ \delta+1)\widetilde{R}_{n}(x+1;\alpha,\beta,\gamma-1,\delta)\] \[\qquad\qquad\qquad-(x-\alpha+\gamma+\delta)(x-\beta+\gamma) \widetilde{R}_{n}(x;\alpha,\beta,\gamma-1,\delta)\Big{]}\] allows one to fix \[\sigma_{\lambda^{\prime}_{21}-1,\lambda^{\prime}_{11}-1,\lambda^{\prime}_{22}} =(+1)\sigma_{\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22} }\,,\qquad\sigma_{\lambda^{\prime}_{21},\lambda^{\prime}_{11}-1,\lambda^{\prime}_ {22}-1}=(-1)\sigma_{\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{ \prime}_{22}}\,. \tag{49}\] Next, use \(g=e_{23}\) in relation (17). Substituting (47), and using the following relation between Racah polynomials2 Footnote 2: This relation follows from applying the following hypergeometric contiguity relation to simplify the first two terms and the last two terms of (51) \[\frac{\alpha_{1}}{\alpha_{2}-\alpha_{1}}{}_{4}F_{3}\left(\genfrac{}{}{0.0pt}{}{ \alpha_{1}+1,\alpha_{2},\alpha_{3},\alpha_{4}}{\beta_{1},\beta_{2},\beta_{3}} \big{|}z\right)-\frac{\alpha_{2}}{\alpha_{2}-\alpha_{1}}{}_{4}F_{3}\left( \genfrac{}{}{0.0pt}{}{\alpha_{1},\alpha_{2}+1,\alpha_{3},\alpha_{4}}{\beta_{1}, \beta_{2},\beta_{3}}\big{|}z\right)={}_{4}F_{3}\left(\genfrac{}{}{0.0pt}{}{ \alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}}{\beta_{1},\beta_{2},\beta_{3}} \big{|}z\right) \tag{50}\] allows one to fix \[\sigma_{\lambda^{\prime}_{21}+1,\lambda^{\prime}_{11},\lambda^{\prime}_{22}}=(+1) \sigma_{\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22}}\,, \qquad\sigma_{\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22 }+1}=(-1)\sigma_{\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_ {22}}\,. \tag{52}\] Finally, combining (49), (52), one can write (up to a global undetermined sign) \[\sigma_{\lambda^{\prime}_{21},\lambda^{\prime}_{11},\lambda^{\prime}_{22}}=(-1 )^{\lambda^{\prime}_{22}-\lambda_{31}}\,. \tag{53}\] Let us emphasize that the undetermined global sign does not play a role in the upcoming computations since only the product of \(T\) and \(T^{-1}\) is involved. For particular representations, the previous result simplifies. **Corollary 5.2**.: _For the symmetric representation (i.e. \(\lambda_{32}=\lambda_{33}=\lambda_{22}\), \(\lambda_{31}=-2\lambda_{32}\)), \(\tau\) is given (up to a global sign) by_ \[\tau_{\Lambda^{\prime}\Lambda}= \delta_{\lambda^{\prime}_{11},\lambda_{11}}\delta_{\lambda_{21} +2\lambda_{32},\lambda^{\prime}_{11}-\lambda^{\prime}_{21}}\delta_{\lambda^{ \prime}_{22},\lambda_{32}}\delta_{\lambda_{22},\lambda_{32}}(-1)^{\lambda_{3 2}-\lambda_{21}}\frac{(\lambda_{21}-\lambda_{32})!}{(\lambda_{11}-\lambda_{21 }-3\lambda_{32})!}\,. 
\tag{54}\] _For the representation characterized by \(\lambda_{31}=\lambda_{32}=\lambda_{21}\), \(\lambda_{33}=-2\lambda_{31}\), one gets_ \[\tau_{\Lambda^{\prime}\Lambda}= \delta_{\lambda^{\prime}_{11},\lambda_{11}}\delta_{\lambda_{11}-\lambda_{22}-2\lambda_{31},\lambda^{\prime}_{22}}\delta_{\lambda^{\prime}_{21},\lambda_{31}}\delta_{\lambda_{21},\lambda_{31}}(-1)^{\lambda^{\prime}_{22}-\lambda_{31}}\frac{(\lambda_{22}+2\lambda_{31})!}{(\lambda_{11}-\lambda_{22})!}\,. \tag{55}\] Proof.: For the symmetric representation, the Racah polynomial in (45) reduces to \[R_{-2\lambda_{32}-\lambda_{21}}(X)={}_{3}F_{2}\left(\begin{matrix}2\lambda_{32}+\lambda_{21}\,,&3\lambda_{32}-1\,,&\lambda_{11}-\lambda_{21}\\ \lambda_{11}+2\lambda_{32}\,,&3\lambda_{32}\end{matrix}\Big{|}1\right)\,. \tag{56}\] Using Thomae's transformation formula (see [53, relation (3.1.1)] for example) \[{}_{3}F_{2}\left(\begin{matrix}-n\,,&a\,,&b\\ c\,,&d\end{matrix}\Big{|}1\right)=\frac{(d-b)_{n}}{(d)_{n}}{}_{3}F_{2}\left(\begin{matrix}-n\,,&c-a\,,&b\\ c\,,&1+b-d-n\end{matrix}\Big{|}1\right)\,, \tag{57}\] relation (56) becomes \[R_{-2\lambda_{32}-\lambda_{21}}(X) =\frac{(3\lambda_{32}-\lambda_{11}+\lambda_{21})_{-\lambda_{21}-2\lambda_{32}}}{(3\lambda_{32})_{-\lambda_{21}-2\lambda_{32}}}{}_{2}F_{1}\left(\begin{matrix}2\lambda_{32}+\lambda_{21}\,,&\lambda_{11}-\lambda_{21}\\ \lambda_{11}+2\lambda_{32}\end{matrix}\Big{|}1\right)\,, \tag{58}\] \[=\frac{(3\lambda_{32}-\lambda_{11}+\lambda_{21})_{-\lambda_{21}-2\lambda_{32}}}{(3\lambda_{32})_{-\lambda_{21}-2\lambda_{32}}(\lambda_{11}+2\lambda_{32})_{-\lambda_{21}-2\lambda_{32}}}\,. \tag{59}\] This proves relation (54). Relation (55) is proven by remarking that \(R_{0}(X)=1\). ## 6 General matrix elements for \(SO(3)\) rotations in a general \(sl_{3}\) irrep and a new family of bispectral trivariate hybrid functions **Theorem 6.1**.: _The matrix elements of the change of basis \(\sigma\) induced by the rotation \(S\) given by (20) are, for \(\Lambda,\Lambda^{\prime}\) GT patterns,_ \[\sigma_{\Lambda^{\prime},\Lambda}=\sum_{n=\text{max}\{-\lambda_{21},-\lambda^{\prime}_{21}\}}^{\text{min}\{\lambda_{31},n-\lambda_{33}\}}\sum_{\ell=\ell_{min}}^{\text{min}\{\lambda_{31},n-\lambda_{33}\}}\mu_{\Lambda^{\prime},\Lambda}(\ell,n)K_{n+\lambda^{\prime}_{21}}(\lambda^{\prime}_{11}-\lambda^{\prime}_{22};\sin^{2}(\phi),\lambda^{\prime}_{21}-\lambda^{\prime}_{22})\] \[\times \widetilde{R}_{\lambda_{31}-\lambda^{\prime}_{21}}(\lambda_{31}-\ell;\lambda_{32}-\lambda_{31}-1,\lambda^{\prime}_{21}+\lambda^{\prime}_{22}+\lambda_{33}-1,n+\lambda^{\prime}_{21}+\lambda^{\prime}_{22}-\lambda_{31}-1,-\lambda^{\prime}_{21}-\lambda^{\prime}_{22}-\lambda_{31}-1)\] \[\times \widetilde{R}_{\lambda_{31}-\lambda_{21}}(\lambda_{31}-\ell;\lambda_{32}-\lambda_{31}-1,\lambda_{21}+\lambda_{22}+\lambda_{33}-1,n+\lambda_{21}+\lambda_{22}-\lambda_{31}-1,-\lambda_{21}-\lambda_{22}-\lambda_{31}-1)\] \[\times K_{\lambda_{11}-\lambda_{22}}(n+\lambda_{21};\sin^{2}(\chi),\lambda_{21}-\lambda_{22})\,, \tag{60}\] _where \(\ell_{min}=\text{max}\{\lambda_{32},n-\lambda_{32},-\lambda_{21}-\lambda_{22},n+\lambda_{21}+\lambda_{22},-\lambda_{21}^{\prime}-\lambda_{22}^{\prime},n+\lambda_{21}^{\prime}+\lambda_{22}^{\prime}\}\),_ \[\mu_{\Lambda^{\prime},\Lambda}(\ell,n) =(-1)^{\lambda_{11}^{\prime}+\ell-n}t(\lambda_{21}^{\prime},n+\lambda_{21}^{\prime}+\lambda_{22}^{\prime},\lambda_{22}^{\prime})t(\ell,n+\lambda_{21}+\lambda_{22},n-\ell) \tag{61}\] \[\times\frac{(\lambda_{21}^{\prime}-\lambda_{22}^{\prime})!\,(2
\ell-n)!\,(\lambda_{21}-\lambda_{22})!\,\cos^{\lambda_{21}^{\prime}-\lambda_{2 2}^{\prime}}(\phi)\cos^{2\ell-n}(\theta)\cos^{\lambda_{21}-\lambda_{22}}( \chi)}{(n+\lambda_{21}^{\prime})!(\lambda_{21}^{\prime}-\lambda_{11}^{\prime} )!(\ell+\lambda_{21}+\lambda_{22})!(\ell-n-\lambda_{21}^{\prime}-\lambda_{22} ^{\prime})!(\lambda_{11}-\lambda_{22})!(-n-\lambda_{22})!}\] \[\times\tan^{n+\lambda_{11}^{\prime}+\lambda_{21}^{\prime}- \lambda_{22}^{\prime}}(\phi)\tan^{2\ell+\lambda_{21}^{\prime}+\lambda_{22}^{ \prime}+\lambda_{21}+\lambda_{22}}(\theta)\tan^{n+\lambda_{11}+\lambda_{21}- \lambda_{22}}(\chi)\,,\] _and \(t(x,y,z)\) is defined by (46)._ Proof.: From the discussion at the end of Section 2, one gets \[\sigma_{\Lambda^{\prime},\Lambda}=\sum_{\Lambda^{(1)},\Lambda^{(2)},\Lambda^{ (3)},\Lambda^{(4)}\in\mathcal{P}_{\lambda}}(\rho_{\phi})_{\Lambda^{\prime}, \Lambda^{(1)}}\left(\tau^{-1}\right)_{\Lambda^{(1)},\Lambda^{(2)}}(\rho_{ \theta})_{\Lambda^{(2)},\Lambda^{(3)}}\left(\tau\right)_{\Lambda^{(3)},\Lambda^ {(4)}}(\rho_{\chi})_{\Lambda^{(4)},\Lambda} \tag{62}\] The components of \(\rho_{\phi}\), \(\rho_{\theta}\), \(\rho_{\chi}\) are given by (34) and the ones of \(\tau\) by (45). The ones of \(\tau^{-1}\) are computed as follows. Using the orthogonality relation (19), one gets \[(\tau^{-1})_{\Lambda,\Lambda^{\prime}}=\left(\frac{N_{\Lambda^{\prime}}}{N_{ \Lambda}}\right)^{2}\tau_{\Lambda^{\prime},\Lambda} \tag{63}\] which leads to \[(\tau^{-1})_{\Lambda,\Lambda^{\prime}}=\delta_{\lambda_{11}^{ \prime},\lambda_{11}}\delta_{\lambda_{21}+\lambda_{22},\lambda_{11}^{\prime}- \lambda_{21}^{\prime}-\lambda_{22}^{\prime}}t(\lambda_{21},\lambda_{11}, \lambda_{22})(-1)^{\lambda_{22}^{\prime}-\lambda_{21}} \tag{64}\] \[\times\widetilde{R}_{\lambda_{31}-\lambda_{21}}(\lambda_{31}- \lambda_{21}^{\prime};\lambda_{32}-\lambda_{31}-1,\lambda_{21}+\lambda_{22}+ \lambda_{33}-1,\lambda_{11}-\lambda_{31}-1,-\lambda_{21}-\lambda_{22}-\lambda_{ 31}-1)\,.\] The expression (62) can be expressed now as follows \[\sigma_{\Lambda^{\prime},\Lambda}=\sum_{\lambda_{21}^{(1)},\lambda_ {11}^{(1)},\lambda_{22}^{(1)},\lambda_{21}^{(2)},\ldots,\lambda_{22}^{(4)}} \delta_{\lambda_{21}^{\prime},\lambda_{21}^{(1)}}\delta_{\lambda_{22}^{\prime},\lambda_{22}^{(1)}}\overline{\rho}_{\Lambda^{\prime},\Lambda^{(1)}}\ \delta_{\lambda_{11}^{(1)}, \lambda_{11}^{(2)}}\delta_{\lambda_{21}^{(1)}+\lambda_{22}^{(1)},\lambda_{11}^ {(2)}-\lambda_{21}^{(2)}-\lambda_{22}^{(2)}}\overline{\tau}_{\Lambda^{(1)}, \Lambda^{(2)}}\] \[\times\delta_{\lambda_{21}^{(2)},\lambda_{21}^{(3)}}\delta_{\lambda _{22}^{(2)},\lambda_{22}^{(3)}}\overline{\rho}_{\Lambda^{(2)},\Lambda^{(3)}}\ \delta_{\lambda_{11}^{(3)}, \lambda_{11}^{(4)}}\delta_{\lambda_{21}^{(3)}+\lambda_{22}^{(3)},\lambda_{11}^ {(4)}-\lambda_{21}^{(4)}-\lambda_{22}^{(4)}}\overline{\tau}_{\Lambda^{(3)}, \Lambda^{(4)}}\] \[\times\delta_{\lambda_{21}^{(4)},\lambda_{21}}\delta_{\lambda_{22}^ {(4)},\lambda_{22}}^{(4)}\overline{\rho}_{\Lambda^{(4)},\Lambda}\,, \tag{65}\] where \(\overline{\rho}_{\Lambda,\Lambda^{\prime}}\) (resp. \(\overline{\tau}_{\Lambda,\Lambda^{\prime}}\), \(\widetilde{\tau}_{\Lambda,\Lambda^{\prime}}\)) are the expressions following the \(\delta\)'s in (34) (resp. (45), (64)). Combining everything together and setting \(n=\lambda_{11}^{(1)}-\lambda_{21}^{\prime}-\lambda_{22}^{\prime}\), \(\ell=\lambda_{21}^{(2)}\), the theorem is proven. 
The element \(\sigma_{\Lambda^{\prime},\Lambda}\) given in (60) can be seen as a function of the three variables \(\Lambda^{\prime}=(\lambda_{21}^{\prime},\lambda_{11}^{\prime},\lambda_{22}^{ \prime})\). In the following, we shall give a difference relation satisfied by this function. We shall also provide a relation between the functions for different values of \(\Lambda=(\lambda_{21},\lambda_{11},\lambda_{22})\), called recurrence relation, in reference to the similar relations satisfied by the Krawtchouk or Racah polynomials. Recurrence relations.To simplify the notations, we denote by \(S_{ij}\) the entries of the rotation \(S\) given by (20). Relation (17) for this rotation reads as follows, for any \(g\in sl_{3}\), \[\sigma\cdot\Psi_{S}(g)\cdot\xi_{\Lambda}=g\cdot\sigma\cdot\xi_{\Lambda}\,, \tag{66}\] and provides three different recurrence relations for \(\sigma_{\Lambda^{\prime},\Lambda}\) when one chooses either \(g=H,Y,J\). Explicitly, for \(g=H\), it reads: \[\Big{(}(S_{11}S_{11}-S_{12}S_{12})\lambda_{11}+(S_{21}S_{21}-S_{22}S _{22})(\lambda_{21}+\lambda_{22}-\lambda_{11})-(S_{31}S_{31}-S_{32}S_{32})( \lambda_{21}+\lambda_{22})\Big{)}\sigma_{\Lambda^{\prime},\Lambda}\] \[+(S_{11}S_{21}-S_{12}S_{22})(\lambda_{21}-\lambda_{11})(\lambda_ {11}-\lambda_{22}+1)\sigma_{\Lambda^{\prime},\Lambda+\delta^{11}}+(S_{21}S_{11 }-S_{22}S_{12})\sigma_{\Lambda^{\prime},\Lambda-\delta^{11}}\] \[+(S_{21}S_{31}-S_{22}S_{32})\left(\frac{\lambda_{31}-\lambda_{21} }{\lambda_{21}-\lambda_{22}+1}\sigma_{\Lambda^{\prime},\Lambda+\delta^{21}}+ \frac{\lambda_{31}-\lambda_{22}+1}{\lambda_{21}-\lambda_{22}+1}\sigma_{ \Lambda^{\prime},\Lambda+\delta^{22}}\right)\] \[+(S_{31}S_{21}-S_{32}S_{22})\left(\frac{(\lambda_{21}-\lambda_{3 2})(\lambda_{21}-\lambda_{33}+1)(\lambda_{21}-\lambda_{11})}{\lambda_{21}- \lambda_{22}+1}\sigma_{\Lambda^{\prime},\Lambda-\delta^{21}}\right.\] \[\left.+\frac{(\lambda_{11}-\lambda_{22}+1)(\lambda_{32}-\lambda_{ 22}+1)(\lambda_{22}-\lambda_{33})}{\lambda_{21}-\lambda_{22}+1}\sigma_{ \Lambda^{\prime},\Lambda-\delta^{22}}\right)\] \[+(S_{11}S_{31}-S_{12}S_{32})\left(\frac{(\lambda_{11}-\lambda_{22 }+1)(\lambda_{31}-\lambda_{21})}{\lambda_{21}-\lambda_{22}+1}\sigma_{\Lambda^ {\prime},\Lambda+\delta^{11}+\delta^{21}}-\frac{(\lambda_{21}-\lambda_{11})( \lambda_{31}-\lambda_{22}+1)}{\lambda_{21}-\lambda_{22}+1}\sigma_{\Lambda^{ \prime},\Lambda+\delta^{11}+\delta^{22}}\right)\] \[+(S_{31}S_{11}-S_{32}S_{12})\left(\frac{(\lambda_{21}-\lambda_{3 2})(\lambda_{21}-\lambda_{33}+1)}{\lambda_{21}-\lambda_{22}+1}\sigma_{\Lambda ^{\prime},\Lambda-\delta^{11}-\delta^{21}}-\frac{(\lambda_{32}-\lambda_{22}+1) (\lambda_{22}-\lambda_{33})}{\lambda_{21}-\lambda_{22}+1}\sigma_{\Lambda^{ \prime},\Lambda-\delta^{11}-\delta^{22}}\right)\] \[= (2\lambda_{11}^{\prime}-\lambda_{21}^{\prime}-\lambda_{22}^{\prime })\sigma_{\Lambda^{\prime},\Lambda}\,. \tag{67}\] For \(g=Y,J\), the recurrence relations can be computed similarly. The 'hybrid' property of the functions \(\sigma_{\Lambda^{\prime},\Lambda}\) shows up in particular in the fact that the eigenvalue in the R.H.S. is also linear in \(\Lambda^{\prime}\) for \(g=Y\) while it is quadratic for \(g=J\). Difference relations.To get the difference relations, replace \(g\) by \(\Psi_{S^{t}}(g)\) in relation (66) to find \[\sigma\cdot g\cdot\xi_{\Lambda}=\Psi_{S^{t}}(g)\cdot\sigma\cdot\xi_{\Lambda}\,, \tag{68}\] where \(S^{t}\) stands for the transposition of \(S\) and we have used that \(\Psi_{S}(\Psi_{S^{t}}(g))=g\). 
Using this relation for \(g=H,Y,J\), one gets three difference relations. Namely, for \(g=H\), it becomes \[(2\lambda_{11}-\lambda_{21}-\lambda_{22})\sigma_{\Lambda^{\prime}, \Lambda}= \tag{69}\] \[\Big{(}(S_{11}S_{11}-S_{21}S_{21})\lambda_{11}^{\prime}+(S_{12}S_ {12}-S_{22}S_{22})(\lambda_{21}^{\prime}+\lambda_{22}^{\prime}-\lambda_{11}^{ \prime})-(S_{13}S_{13}-S_{23}S_{23})(\lambda_{21}^{\prime}+\lambda_{22}^{ \prime})\Big{)}\sigma_{\Lambda^{\prime},\Lambda}\] \[+(S_{11}S_{12}-S_{21}S_{22})(\lambda_{21}^{\prime}-\lambda_{11}^{ \prime}+1)(\lambda_{11}^{\prime}-\lambda_{22}^{\prime})\sigma_{\Lambda^{ \prime}-\delta^{11},\Lambda}+(S_{12}S_{11}-S_{22}S_{21})\sigma_{\Lambda^{ \prime}+\delta^{11},\Lambda}\] \[+(S_{12}S_{13}-S_{22}S_{23})\left(\frac{\lambda_{31}^{\prime}- \lambda_{21}^{\prime}+1}{\lambda_{21}^{\prime}-\lambda_{22}^{\prime}}\sigma_{ \Lambda^{\prime}-\delta^{21},\Lambda}+\frac{\lambda_{31}^{\prime}-\lambda_{22}^ {\prime}+2}{\lambda_{21}^{\prime}-\lambda_{22}^{\prime}+2}\sigma_{\Lambda^{ \prime}-\delta^{22},\Lambda}\right)\] \[+(S_{13}S_{12}-S_{23}S_{22})\left(\frac{(\lambda_{21}^{\prime}- \lambda_{32}^{\prime}+1)(\lambda_{21}^{\prime}-\lambda_{33}^{\prime}+2)( \lambda_{21}^{\prime}-\lambda_{11}^{\prime}+1)}{\lambda_{21}^{\prime}-\lambda_{2 2}^{\prime}+2}\sigma_{\Lambda^{\prime}+\delta^{21},\Lambda}\right.\] \[\left.+\frac{(\lambda_{11}^{\prime}-\lambda_{22}^{\prime})( \lambda_{32}^{\prime}-\lambda_{22}^{\prime})(\lambda_{22}^{\prime}-\lambda_{3 3}^{\prime}+1)}{\lambda_{21}^{\prime}-\lambda_{22}^{\prime}}\sigma_{\Lambda^{ \prime}-\delta^{11}-\delta^{21},\Lambda}-\frac{(\lambda_{21}^{\prime}- \lambda_{11}^{\prime}+1)(\lambda_{31}^{\prime}-\lambda_{22}^{\prime}+2)}{ \lambda_{21}^{\prime}-\lambda_{22}^{\prime}+2}\sigma_{\Lambda^{\prime}-\delta^{ 11}-\delta^{22},\Lambda}\right)\] \[+(S_{13}S_{11}-S_{23}S_{21})\left(\frac{(\lambda_{21}^{\prime}- \lambda_{32}^{\prime}+1)(\lambda_{21}^{\prime}-\lambda_{33}^{\prime}+2)}{ \lambda_{21}^{\prime}-\lambda_{22}^{\prime}+2}\sigma_{\Lambda^{\prime}+\delta^{11 }+\delta^{21},\Lambda}-\frac{(\lambda_{32}^{\prime}-\lambda_{22}^{\prime})( \lambda_{22}^{\prime}-\lambda_{33}^{\prime}+1)}{\lambda_{21}^{\prime}-\lambda_{2 2}^{\prime}}\sigma_{\Lambda^{\prime}+\delta^{11}+\delta^{22},\Lambda}\right)\,.\] For \(g=Y,J\), similar relations are obtained. Again, the 'hybrid' property of the functions \(\sigma_{\Lambda^{\prime},\Lambda}\) is illustrated by the fact that the eigenvalue in the L.H.S. is also linear in \(\Lambda\) for \(g=Y\) while it is quadratic for \(g=J\). These sets of recurrence relations and difference relations establish the bispectrality of the functions introduced in (60). ## 7 Particular examples ### Bivariate Krawtchouk polynomials of Griffiths type Let us consider the case of symmetric representations of \(sl_{3}\) which has also been looked at in Corollary 5.2. We recall that the GT pattern of the symmetric representations are such that \(\lambda_{32}=\lambda_{33}\), \(\lambda_{31}=-2\lambda_{33}\) which implies that \(\lambda_{22}=\lambda_{32}=\lambda_{33}\). 
**Theorem 7.1**.: _The matrix elements of the change of basis \(\sigma\) induced by the rotation \(S\) given by (20) are, for \(\Lambda,\Lambda^{\prime}\) GT patterns of symmetric representations,_ \[\begin{split}\sigma_{\Lambda,\Lambda^{\prime}}=& \sum_{\ell=max\{0,\lambda_{21}-\lambda_{21}^{\prime}\}}^{\lambda_{21}- \lambda_{33}}f_{\ell}(\lambda_{21},\lambda_{11},\lambda_{21}^{\prime},\lambda_ {11}^{\prime})\ K_{\ell}(\lambda_{11}-\lambda_{33};\sin^{2}\phi,\lambda_{21}- \lambda_{33})\\ &\times K_{\ell+\lambda_{21}^{\prime}-\lambda_{21}}(\ell;\sin^{2 }\theta,\ell-\lambda_{21}-2\lambda_{33})\ K_{\lambda_{11}^{\prime}-\lambda_{ 33}}(\ell+\lambda_{21}^{\prime}-\lambda_{21},\sin^{2}\chi,\lambda_{21}^{\prime }-\lambda_{33})\,,\end{split} \tag{70}\] _with_ \[\begin{split} f_{\ell}(\lambda_{21},\lambda_{11},\lambda_{21}^{ \prime},\lambda_{11}^{\prime})&=(-1)^{\lambda_{11}-2\lambda_{21}+2 \lambda_{33}-\lambda_{22}^{\prime}}(\tan\phi)^{\ell+\lambda_{11}-\lambda_{33} }(\tan\theta)^{2\ell+\lambda_{21}^{\prime}-\lambda_{21}}(\tan\chi)^{\ell+ \lambda_{21}^{\prime}-\lambda_{21}+\lambda_{11}^{\prime}-\lambda_{33}}\\ &\times\frac{(\cos\phi)^{\lambda_{21}-\lambda_{33}}(\cos\theta)^ {\ell-\lambda_{21}-2\lambda_{33}}(\cos\chi)^{\lambda_{21}^{\prime}-\lambda_{33 }}}{\ell!(\ell+\lambda_{21}^{\prime}-\lambda_{21})!(\ell-\lambda_{21}-2 \lambda_{33})!(-\ell+\lambda_{21}-\lambda_{33})!}\\ &\times\frac{[(\lambda_{21}-\lambda_{33})!(\lambda_{21}^{\prime} -\lambda_{33})!]^{2}}{(\lambda_{21}-\lambda_{11})!(\lambda_{11}^{\prime}- \lambda_{33})!(-2\lambda_{33}-\lambda_{21})}\,.\end{split} \tag{71}\] Proof.: The calculation proceeds similarly to what was done in Theorem 6.1, this time using the expressions for the symmetric representation in Corollary 5.2. Combining everything together and setting \(\ell=\lambda_{11}^{(1)}-\lambda_{33}\), the result follows. Making use of the symmetry property of Krawtchouk polynomials \[K_{n}(x;p,N)=\frac{n!}{(N-n)!}(-1)^{x+n}p^{n-x}(1-p)^{x+n-N}K_{N-n}(N-x;p,N), \tag{72}\] one can compare with the expressions found in [1, Equations (10.4)-(10.5)] for the bivariate Griffiths-Krawtchouk polynomials. ### Bivariate hybrid functions in terms of Krawtchouk and Racah polynomials We consider the transformation \(T\) followed by a rotation around the \(z\)-axis: \[S=R_{\eta}^{z}T=\begin{pmatrix}\cos(\eta)&0&\sin(\eta)\\ -\sin(\eta)&0&\cos(\eta)\\ 0&-1&0\end{pmatrix}. \tag{73}\] It can be obtained with the following choice of angles in the parametrization of a general rotation used in the preceding sections: \[\theta=\frac{\pi}{2}\,,\qquad\phi=-\frac{\pi}{2}\,,\qquad\chi=\eta+\frac{\pi} {2}. \tag{74}\] It is simpler to calculate the matrix elements of \(S\) directly rather than putting these parameters in the general formula. Indeed, we have: \[\sigma_{\Lambda^{\prime},\Lambda}=\sum_{\Lambda^{(1)}}\tau_{\Lambda^{\prime}, \Lambda^{(1)}}(\rho_{\eta})_{\Lambda^{(1)},\Lambda}. \tag{75}\] Using the explicit expressions found for the matrix elements \(\tau\) and \(\rho_{\eta}\), we get the following result. 
**Proposition 7.2**.: _For \(\Lambda,\Lambda^{\prime}\) GT patterns, the matrix elements of the change of basis \(\sigma\) corresponding to the rotation \(S\) are given (up to a global undetermined sign) by,_ \[\sigma_{\Lambda^{\prime}\Lambda}= \delta_{\lambda_{21}+\lambda_{22},\lambda^{\prime}_{11}-\lambda^{ \prime}_{21}-\lambda^{\prime}_{22}}\,t(\lambda^{\prime}_{21},\lambda^{\prime}_ {11},\lambda^{\prime}_{22})(-1)^{2\lambda_{21}+\lambda^{\prime}_{21}}\frac{( \lambda_{21}-\lambda_{22})!\tan^{\lambda^{\prime}_{11}+\lambda_{11}-2\lambda_ {22}}(\eta)\cos^{\lambda_{21}-\lambda_{22}}(\eta)}{(\lambda_{11}-\lambda_{22})! (\lambda_{21}-\lambda^{\prime}_{11})!}\] \[\times K_{\lambda_{11}-\lambda_{22}}(\lambda^{\prime}_{11}-\lambda_{ 22};\sin^{2}(\eta),\lambda_{21}-\lambda_{22})\ \widetilde{R}_{\lambda_{31}-\lambda_{21}}(\lambda_{31}-\lambda^{\prime}_{21}; \alpha,\beta,\gamma,\delta)\,, \tag{76}\] _with_ \[\alpha=\lambda_{32}-\lambda_{31}-1\,, \beta=\lambda^{\prime}_{11}-\lambda^{\prime}_{21}-\lambda^{ \prime}_{22}+\lambda_{33}-1\,, \tag{77}\] \[\gamma=\lambda^{\prime}_{11}-\lambda_{31}-1\,, \delta=\lambda^{\prime}_{21}+\lambda^{\prime}_{22}-\lambda^{ \prime}_{11}-\lambda_{31}-1\,.\] In the previous proposition, a product of Krawtchouk and Racah polynomials appears in each sector where \(\lambda_{21}+\lambda_{22}=\lambda^{\prime}_{11}-\lambda^{\prime}_{21}-\lambda^ {\prime}_{22}\). It naturally suggests to consider the following bivariate function: \[P_{n_{1},n_{2}}(x_{1},x_{2})= K_{n_{1}}(x_{1}-n_{2};\sin^{2}(\eta),N-2n_{2})\ \widetilde{R}_{n_{2}}(x_{2};\alpha,\beta,x_{1}-N-1,\delta)\,, \tag{78}\] where \(N\) is a positive integer and \(\alpha,\beta,\delta,\eta\) are parameters. In the formula of the proposition, \(N\) corresponds to \(2\lambda_{31}-\lambda_{21}-\lambda_{22}\) and \(x_{1}\), \(x_{2}\), \(n_{1}\) and \(n_{2}\) to \[x_{1} =\lambda^{\prime}_{22}+\lambda^{\prime}_{21}+\lambda_{31}\,, x_{2} =\lambda_{31}-\lambda^{\prime}_{21}\,, \tag{79a}\] \[n_{1} =\lambda_{11}-\lambda_{22}\,, n_{2} =\lambda_{31}-\lambda_{21}\,. \tag{79b}\] For these choices, the L.H.S. of the recurrence relations of \(P_{n_{1},n_{2}}(x_{1},x_{2})\) obtained by (66) involves only terms of the form \(P_{n^{\prime}_{1},n^{\prime}_{2}}(x_{1},x_{2})\). More precisely, (66) for \(g=e_{11}\) provides a recurrence relation involving only \(P_{n_{1}+\varepsilon,n_{2}}(x_{1},x_{2})\) and (66) for \(g=e_{21}e_{12}\) provides a recurrence relation involving only \(P_{n_{1}+\varepsilon,n_{2}}(x_{1},x_{2})\), \(P_{n_{1},n_{2}+\varepsilon}(x_{1},x_{2})\), \(P_{n_{1}+\varepsilon,n_{2}-\varepsilon}(x_{1},x_{2})\) and \(P_{n_{1}+2\varepsilon,n_{2}-\varepsilon}(x_{1},x_{2})\), with \(\varepsilon\in\{-1,0,1\}\). Similarly, the R.H.S. of the difference relations of \(P_{n_{1},n_{2}}(x_{1},x_{2})\) obtained by (68) involves only \(P_{n_{1},n_{2}}(x^{\prime}_{1},x^{\prime}_{2})\): for \(g=e_{11}\), one obtains \(P_{n_{1},n_{2}}(x_{1}+\varepsilon,x_{2})\) and \(P_{n_{1},n_{2}}(x_{1}+\varepsilon,x_{2}-\varepsilon)\) and for \(g=J\), one obtains \(P_{n_{1},n_{2}}(x_{1},x_{2}+\varepsilon)\), with \(\varepsilon\in\{-1,0,1\}\). Formula (78) resembles very much the Tratnik construction of bivariate orthogonal polynomials, except that here it is an hybrid of two different univariate polynomials. Such hybrid constructions already appeared, see for example [5, 29] for a formula like (78) using dual Hahn polynomials instead of Racah polynomials. 
## 8 The Racah algebra in \(U(sl_{3})\) and the centralizer of the Cartan subalgebra

This section contains some algebraic results concerning the elements \(J\) and \(\Psi_{T}(J)\) introduced previously in Section 5.

**Racah algebra.** The observation that the change of basis associated with \(T\) is related to the Racah polynomials can be understood algebraically by remarking that \(J\) and \(\overline{J}=\Psi_{T}(J)\) satisfy the relations of the Racah algebra (1). Indeed, let us define
\[K:=\big{[}\,J,\overline{J}\,\big{]}=e_{31}e_{12}e_{23}-e_{32}e_{21}e_{13}\,. \tag{80a}\]
Then, by a lengthy but straightforward calculation using the defining relations of \(U(sl_{3})\), one can show that
\[[J,K]=2J^{2}+2\{J,\overline{J}\}-aJ+b^{+}\,, \tag{80b}\]
\[[K,\overline{J}]=2\overline{J}^{2}+2\{J,\overline{J}\}-a\overline{J}+b^{-}\,, \tag{80c}\]
where
\[a=h^{2}+3y^{2}+C_{2}\,, \tag{81a}\]
\[b^{\pm}=(3y\mp h)\left(\frac{(2-y\mp h)C_{2}}{4}-\frac{C_{3}}{3}+\frac{(y-2\pm h)(y+2\pm h)(y\mp h)}{8}\right)\,. \tag{81b}\]
The Casimir elements \(C_{2}\), \(C_{3}\) of \(U(sl_{3})\) are defined by (3) and \(h=\frac{1}{2}(e_{22}-e_{33})\), \(y=\frac{1}{6}(2e_{11}-e_{22}-e_{33})\) are Cartan elements stable under \(T\). Relations (80) are those of the Racah algebra3. This realization interestingly adds to others previously found, namely between the recurrence and the difference operators of the Racah polynomials [44, 54, 55], in terms of the intermediate Casimir elements of \(U(sl_{2})^{\otimes 3}\) [8] or in terms of the elements of \(U(sl_{2})\) [56, 57]. The result here provides a realization of the Racah algebra in terms of the elements of \(U(sl_{3})\) (also see [37, 49]).

Footnote 3: Up to a trivial rescaling of the generators such as \(J=-2K_{2}\), \(\overline{J}=-2K_{1}\), \(a=4d\), \(b^{+}=8e_{1}\), \(b^{-}=8e_{2}\).

It is well-known that there exists a central element in the Racah algebra given by
\[\Gamma=2\{J^{2},\overline{J}\}+2\{J,\overline{J}^{2}\}-K^{2}-4(J+\overline{J})^{2}-a\{J,\overline{J}\}+2(b^{-}+a)J+2(b^{+}+a)\overline{J}\,. \tag{82}\]
In the realization (80)-(81) introduced here, this central element takes a definite value in terms of the other central elements of the Racah algebra:
\[\Gamma=\frac{1}{2}(y^{2}+h^{2})^{2}(1-C_{2})-\frac{1}{8}(h^{2}-y^{2})^{3}+2h^{2}y^{2}-\left(\frac{C_{3}}{3}+2y+yC_{2}\right)\left(\frac{C_{3}}{3}+2y+yC_{2}-C_{2}\right)-\frac{y}{6}(3h^{2}-11y^{2})(3C_{2}-2C_{3})+\frac{C_{2}}{8}(h^{2}+3y^{2})(5h^{2}-y^{2}+4)\,. \tag{83}\]
The quotient of the Racah algebra by the previous relation (83) is called the special Racah algebra [48]. The terminology "special" comes from [58], where a similar quotient, called "special Askey-Wilson algebra", was introduced for the Askey-Wilson algebra. This type of quotient appears in various other situations and, in particular, as the diagonal centralizer of \(U(sl_{2})\) in \(U(sl_{2})^{\otimes 3}\) [48].

**Isomorphism with a centralizer.** In this paragraph, the centralizer \(\mathcal{Z}\) of the Cartan subalgebra in \(U(sl_{3})\) is described using the elements previously introduced. The centralizer is defined as follows:
\[\mathcal{Z}=\{x\in U(sl_{3})\ |\ [x,H_{i}]=0\,,\ i=1,2\}\,, \tag{84}\]
where \(H_{1}=\frac{1}{2}(e_{11}-e_{22})\) and \(H_{2}=\frac{1}{2}(e_{22}-e_{33})\). Assigning degree \(1\) to each generator \(e_{ij}\), the algebra \(U(sl_{3})\) is filtered and so is its subalgebra \(\mathcal{Z}\), which allows one to describe \(\mathcal{Z}\) as in the following proposition. 
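Before turning to that proposition, here is a quick numerical sanity check of the bracket (80a). It works in the 9-dimensional representation \(\mathbb{C}^{3}\otimes\mathbb{C}^{3}\), with \(e_{ij}\) acting as \(E_{ij}\otimes I+I\otimes E_{ij}\), and reconstructs \(J\) and \(\overline{J}\) from the identities (89a)-(89b) established in the proof of Proposition 8.1 below; it is an illustration in one representation, not a proof in \(U(sl_{3})\).

```python
import numpy as np

def E(i, j):
    """Elementary 3x3 matrix with a single 1 in row i, column j (1-based)."""
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

I3 = np.eye(3)
# e_ij acting on C^3 (x) C^3
e = {(i, j): np.kron(E(i, j), I3) + np.kron(I3, E(i, j))
     for i in range(1, 4) for j in range(1, 4)}
Id = np.eye(9)

H1 = (e[1, 1] - e[2, 2]) / 2
H2 = (e[2, 2] - e[3, 3]) / 2
J = e[2, 1] @ e[1, 2] + H1 @ (H1 + Id)                    # from (89a)
Jbar = e[3, 1] @ e[1, 3] + (H1 + H2) @ (H1 + H2 + Id)     # from (89b)

K = J @ Jbar - Jbar @ J                                   # K = [J, Jbar]
rhs = e[3, 1] @ e[1, 2] @ e[2, 3] - e[3, 2] @ e[2, 1] @ e[1, 3]
assert np.allclose(K, rhs)                                # relation (80a)
```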
**Proposition 8.1**.: _The algebra \(\mathcal{Z}\) is generated by_ \[J\,,\overline{J}\,,K\,,H_{1},\,H_{2},\,C_{2},\,C_{3}\,, \tag{85}\] _and its Hilbert-Poincare series is:_ \[F_{\mathcal{Z}}(t)=\frac{1+t^{3}}{(1-t)^{2}(1-t^{2})^{3}(1-t^{3})}\,. \tag{86}\] Proof.: A monomial \(H_{1}^{a}H_{2}^{b}\,e_{21}^{c}\,e_{12}^{d}\,e_{32}^{e}\,e_{23}^{f}\,e_{31}^{g }\,e_{13}^{h}\) commutes with \(H_{1}\) and \(H_{2}\) if and only if: \[\left\{\begin{array}{l}2(d-c)+(e-f)+(h-g)=0\,,\\ (c-d)+2(f-e)+(h-g)=0\,,\end{array}\right.\qquad d-c=f-e=h-g\,.\] Considering either \(d-c\geq 0\) or \(d-c\leq 0\), it follows that \(\mathcal{Z}\) is spanned by the following elements: \[\{H_{1}^{a}H_{2}^{b}\big{(}e_{21}e_{12}\big{)}^{i}\big{(}e_{32}e_{23}\big{)}^{ j}\big{(}e_{31}e_{13}\big{)}^{k}x\}_{a,b,i,j,k\geq 0}\,,\ \ \text{with}\ x\in\{\big{(}e_{12}e_{23}e_{31}\big{)}^{\ell},\big{(}e_{32}e_{21}e_{ 13}\big{)}^{\ell}\}_{\ell\geq 0}\,. \tag{87}\] This shows that \(\mathcal{Z}\) is generated as an algebra by: \[H_{1}\,,\ H_{2}\,,\ e_{21}e_{12}\,,\ e_{32}e_{23}\,,\ e_{31}e_{13}\,,\ e_{31}e_{12}e_{23}\,,\ e_{32}e_{21}e_{13}\,. \tag{88}\] By direct computations, one extracts \[e_{21}e_{12}=J-H_{1}(H_{1}+1)\,, \tag{89a}\] \[e_{31}e_{13}=\overline{J}-(H_{1}+H_{2})(H_{1}+H_{2}+1)\,,\] (89b) \[e_{32}e_{23}=\tfrac{1}{2}C_{2}-J-\overline{J}-H_{2}+\tfrac{1}{3} (2H_{1}H_{2}+2H_{1}^{2}-H_{2}^{2})\,,\] (89c) \[e_{31}e_{12}e_{23}-e_{32}e_{21}e_{13}=K\,,\] (89d) \[e_{31}e_{12}e_{23}+e_{32}e_{21}e_{13}=\tfrac{1}{3}C_{3}-2J(H_{1}+ H_{2}+1)-2\overline{J}H_{1}+\tfrac{1}{3}C_{2}(2H_{1}+H_{2})\] \[\qquad\quad+\tfrac{2}{27}(H_{2}+2H_{1})(11H_{1}^{2}+11H_{1}H_{2}- 4H_{2}^{2})-\tfrac{2}{3}(2H_{1}H_{2}-H_{1}^{2}+2H_{2}^{2}+2H_{2}-H_{1})\,. \tag{89e}\] It follows easily that \(\mathcal{Z}\) is generated by the elements in (85). To calculate the Hilbert-Poincare series, we sum the degrees of the elements in (87), which are linearly independent in \(U(sl_{3})\). Namely, one gets \[F_{\mathcal{Z}}(t)=\sum_{a,b,i,j,k\geq 0}t^{a+b+2(i+j+k)}\left(1+2\sum_{\ell\geq 1 }t^{3\ell}\right)=\frac{1}{(1-t)^{2}(1-t^{2})^{3}}\left(1+\frac{2t^{3}}{(1-t^ {3})}\right)\,, \tag{90}\] which leads to the desired result. Recall that the elements \(H_{1},H_{2},C_{2},C_{3}\) are central in the subalgebra \(\mathcal{Z}\). Moreover, we have found some generators of \(\mathcal{Z}\) together with the relations (80) and (83). These relations allow to write any element of \(\mathcal{Z}\) as a linear combination of \[\{H_{1}^{a}H_{2}^{b}C_{2}^{c}C_{3}^{d}J^{i}\overline{J}^{J}K^{k}\}_{a,b,c,d,i, j\geq 0,\ k\in\{0,1\}}\,. \tag{91}\] This is true since we can reorder any product using (80), the elements \(H_{1},H_{2},C_{2},C_{3}\) are central, and \(K^{2}\) is rewritten using the special relation (83). Thus, the above set is a spanning set of \(\mathcal{Z}\), and comparing with the Hilbert-Poincare series found previously, we arrive at the following corollary. **Corollary 8.2**.: _The centralizer \(\mathcal{Z}\) is isomorphic to the algebra generated by \(J\), \(\overline{J}\), \(K\) and the central elements \(H_{1},H_{2},C_{2},C_{3}\) with the defining relations (80) and (83). A basis is given by the set (91)._ ## 9 Conclusion The realization of the rank one Racah algebra as the centralizer in \(U(sl_{3})\) of the Cartan subalgebra naturally brings to mind its generalization to \(U(sl_{n})\) for any \(n\). This suggests a way of realizing higher rank Racah algebras which is different from the generalization related to tensor products of \(U(sl_{2})\). 
Centralizers of the Cartan subalgebras were studied in [49] but a description of the resulting algebra is still not known for arbitrary \(n\). Similar results for any simple Lie (super)-algebra should also be very interesting. We have so far focused on finite-dimensional representations of the algebra and obtained correspondingly finite families of functions. Infinite-dimensional representations should provide similar results, involving infinite families of functions. Studying the cases associated with quantum groups can also be envisaged. The results of this paper should generalize: the centralizer of the Cartan subalgebra of \(U_{q}(sl_{3})\) should be associated with the Askey-Wilson algebra, the Krawtchouk polynomials should be replaced by certain \(q\)-Krawtchouk polynomials (see [59, 60]) and the Racah polynomials, by the \(q\)-Racah polynomials. We plan on pursuing these questions.

### Acknowledgments

N. Crampé and L. Poulain d'Andecy are partially supported by Agence Nationale de la Recherche Projet AHA ANR-18-CE40-0001 and by the IRP AAPT of CNRS. J. Gaboriaud held an Alexander-Graham-Bell scholarship from the Natural Sciences and Engineering Research Council of Canada (NSERC), received scholarships from the ISM and the Université de Montréal and is now supported by JSPS KAKENHI Grant Numbers 22F21320 and 22KF0189. The research of L. Vinet is funded in part by a Discovery Grant from NSERC.

### Data Availability

Data sharing not applicable - no new data generated.
2310.01381
DiffAR: Denoising Diffusion Autoregressive Model for Raw Speech Waveform Generation
Diffusion models have recently been shown to be relevant for high-quality speech generation. Most work has been focused on generating spectrograms, and as such, they further require a subsequent model to convert the spectrogram to a waveform (i.e., a vocoder). This work proposes a diffusion probabilistic end-to-end model for generating a raw speech waveform. The proposed model is autoregressive, generating overlapping frames sequentially, where each frame is conditioned on a portion of the previously generated one. Hence, our model can effectively synthesize an unlimited speech duration while preserving high-fidelity synthesis and temporal coherence. We implemented the proposed model for unconditional and conditional speech generation, where the latter can be driven by an input sequence of phonemes, amplitudes, and pitch values. Working on the waveform directly has some empirical advantages. Specifically, it allows the creation of local acoustic behaviors, like vocal fry, which makes the overall waveform sound more natural. Furthermore, the proposed diffusion model is stochastic and not deterministic; therefore, each inference generates a slightly different waveform variation, enabling an abundance of valid realizations. Experiments show that the proposed model generates speech with superior quality compared with other state-of-the-art neural speech generation systems.
Roi Benita, Michael Elad, Joseph Keshet
2023-10-02T17:42:22Z
http://arxiv.org/abs/2310.01381v3
# DiffAR: Denoising Diffusion Autoregressive Model for Raw Speech Waveform Generation

###### Abstract

Diffusion models have recently been shown to be relevant for high-quality speech generation. Most work has been focused on generating spectrograms, and as such, they further require a subsequent model to convert the spectrogram to a waveform (i.e., a vocoder). This work proposes a diffusion probabilistic end-to-end model for generating a raw speech waveform. The proposed model is autoregressive, generating overlapping frames sequentially, where each frame is conditioned on a portion of the previously generated one. Hence, our model can effectively synthesize an unlimited speech duration while preserving high-fidelity synthesis and temporal coherence. We implemented the proposed model for unconditional and conditional speech generation, where the latter can be driven by an input sequence of phonemes, amplitudes, and pitch values. Working on the waveform directly has some empirical advantages. Specifically, it allows the creation of local acoustic behaviors, like vocal fry, which makes the overall waveform sound more natural. Furthermore, the proposed diffusion model is stochastic and not deterministic; therefore, each inference generates a slightly different waveform variation, enabling an abundance of valid realizations. Experiments show that the proposed model generates speech with superior quality compared with other state-of-the-art neural speech generation systems.

## 1 Introduction

In the last two decades, impressive progress has been made in speech-based research and technologies. With these advancements, speech applications have become highly significant in communication and human-machine interactions. One aspect of this is generating high-quality, naturally-sounding synthetic speech, namely text-to-speech (TTS). In recent years, substantial research has been devoted to designing a deep-learning-based generative audio model. Such an effective model can be used for speech generation, enhancement, denoising, and manipulation of audio signals.

Many neural-based generative models segment the synthesis process into two distinct components: a _decoder_ and a _vocoder_(Zhang et al., 2023). The _decoder_ takes a reference signal, like the intended text for synthetic production, and transforms it into acoustic features using intermediate representations, such as mel-spectrograms. The specific function of the decoder varies based on the application, which can be text-to-speech, image-to-speech, or speech-to-speech. The _vocoder_, on the other hand, receives these acoustic features and generates the associated waveform (Kong et al., 2020). Although this two-step approach is widely adopted (Ren et al., 2020; Chen et al., 2020), one potential drawback is that focusing solely on the magnitude information (the spectrogram) might neglect certain natural and human perceptual qualities that can be derived from the phase (Oppenheim & Lim, 1981). By contrast, _end-to-end_ frameworks are capable of generating the waveform using a single model without producing the acoustic features explicitly (Weiss et al., 2021; Chen et al., 2021). Such end-to-end models have large variability and a usually simpler training pipeline, but lack explainability (Watanabe, 2023). The generation of long waveforms can be effectively achieved using an autoregressive (AR) approach. This involves the sequential generation of waveform samples during the inference phase (e.g., Oord et al., 2016; Wang et al., 2023). 
While autoregressive models work well for TTS, their inference is slow due to their sequential nature. On the other hand, non-autoregressive models such as (Ren et al., 2020; Chen et al., 2021) struggle to generate extremely long audio clips that correspond to a long text sequence due to the limited GPU memory. Recently, diffusion models have demonstrated impressive generative capabilities in synthesizing images, videos, and speech. A large body of work has used diffusion models for speech synthesis. Numerous studies have suggested using diffusion models as decoders to generate the Mel-Spectrogram representation from a given text: _DiffTTS_(Jeong et al., 2021) and _GradTTS_(Popov et al., 2021). The model _Guided-TTS_(Kim et al., 2022) is a decoder that does not require any transcript of the target speaker using classifier guidance (Dhariwal and Nichol, 2021). On the other hand, _WaveGrad_(Chen et al., 2020) is a vocoder (only) that generates a waveform by conditioning the diffusion process on a corresponding Mel-spectrogram. _DiffWave_(Kong et al., 2020) is an end-to-end model that generates a fixed duration of speech (1 second). The model can learn a manifold of a limited, fixed-length vocabulary (the ten digits) and produce consistent word-level pronunciations. This model cannot generate a whole sentence. Last, _WaveGrad 2_(Chen et al., 2021) is an end-to-end model that consists of (i) Tacotron 2(Elias et al., 2021) as an encoder for extracting an abstract hidden representation from a given phoneme sequence; and (ii) a decoder, which predicts the raw signal by refining the noisy waveform iteratively. This work proposes a novel autoregressive diffusion model for generating raw audio waveforms by sequentially producing short overlapping frames. Our model is called _DiffAR_ - Denoising Diffusion Autoregressive Model. It can operate in an unconditional mode, where no text is provided, or in a conditional mode, where text and other linguistic parameters are used as input. Because our model is autoregressive, it can generate signals of an arbitrary duration, unlike _DiffWave_. This allows the model to preserve coherent temporal dependencies and maintain critical characteristics. _DiffAR_ is an end-to-end model that works without using any intermediate representation such as the Mel-spectrogram. By considering both the amplitude and phase components, it can generate a reliable and human-like voice that contains everyday speech phenomena including _vocal fry_, which refers to a voice quality characterized by irregular glottal opening and low pitch, and often used in American English to mark phrase finality, sociolinguistic factors and affect. We are not the first to introduce autoregressive diffusion models. Ho et al. (2022) proposed a method for video synthesis, and Hoogeboom et al. (2021) extended diffusion models to handle ordered structures while aiming to enhance efficiency in the process. Our model focuses on one-dimensional time-sequential data, particularly unlimited-duration high-quality speech generation. The contributions of the paper are as follows: (i) An autoregressive denoising diffusion model for high-quality speech synthesis; (ii) This model can generate unlimited waveform durations while preserving the computational resources; and (iii) This model generates human-like voice, including vocal fry, with a high speech quality compared to other state-of-the-art models. This paper is organized as follows. 
In Section 2, we formulate the problem and present our autoregressive approach to the diffusion process for speech synthesis. Our model, _DiffAR_, can be conditioned on input text, as described in Section 3. In Section 4, we detail _DiffAR_'s architecture. Next, in Section 5, we present the empirical results, including a comparison to other methods and an ablation study. We conclude the paper in Section 6.

## 2 Proposed model

Our goal is to generate a speech waveform that mimics the human voice and sounds natural. We denote the waveform by \(\mathbf{x}=(x_{1},\ldots,x_{T})\) where each sample \(x_{t}\in[-1,1]\). The number of samples, \(T\), is not fixed and varies between waveforms. To do so, we estimate the joint probability distribution of the speech \(p(\mathbf{x})\) from a training set of speech examples, \(\{\mathbf{x}_{i}\}_{i=1}^{N}\). Each sample from this distribution would generate a new valid waveform. This is the _unconditional_ case.

Our ultimate objective is to generate the speech from a specified text. To convert text into speech, we specify the text using its linguistic and phonetic representation \(\mathbf{y}=(y_{1},\ldots,y_{T})\), where we can consider \(y_{t}\) to be the phoneme at time \(t\), and it may also include the energy, pitch or other temporal linguistic data. In the _conditional_ case, we estimate the conditional distribution \(p(\mathbf{x}|\mathbf{y})\) from the transcribed training set \(\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i=1}^{N}\). Sampling from this distribution generates speech for the input text given by \(\mathbf{y}\).

To generate a waveform of an arbitrary length \(T\), our model operates in frames, each containing a fixed number of \(L\) samples, where \(L\ll T\). Let \(\mathbf{x}^{l}\) denote a vector of samples representing the \(l\)-th frame. To ensure a seamless transition between consecutive frames, we _overlap_ them by shifting the starting position by \(L_{o}\) samples. We propose an autoregressive model wherein the generation of the current frame \(l\) is conditioned on the last \(L_{o}\) samples of the previous frame \(l-1\). See Figure 1. Following these definitions, let the true probability distribution of the \(l\)-th speech frame be denoted by \(p(\mathbf{x}^{l}|\mathbf{x}^{l-1})\), indicating that it is dependent on the preceding frame, \(l-1\), but not conditioned on any input text (unconditional case). Similarly, let \(p(\mathbf{x}^{l}|\mathbf{x}^{l-1},\mathbf{y}^{l})\) be the probability distribution conditioned also on a specified input text. The sequence \(\mathbf{y}^{l}\) stands for the linguistic-phonetic representation of the \(l\)-th frame, which will be discussed in the following section.

Our approach is based on denoising diffusion probabilistic models (DDPM; Ho et al., 2020). A diffusion model is a generative procedure involving latent variables constructed from two stochastic processes: the _forward_ and the _reverse_ processes. Each process is defined as a fixed _Markovian_ chain composed of \(S\) latent instances of the \(l\)-th speech frame \(\mathbf{x}_{1}^{l},...,\mathbf{x}_{S}^{l}\). We denote the source speech frame as the \(0\)-th process element, \(\mathbf{x}_{0}^{l}=\mathbf{x}^{l}\). During the _forward process_, small Gaussian noise is gradually mixed with the original speech frame \(\mathbf{x}_{0}^{l}\) through \(S\) diffusion steps. 
The step sizes are predefined by a variance schedule \(\{\beta_{s}\in(0,1)\}_{s=1}^{S}\), which gradually transforms the original frame \(\mathbf{x}_{0}^{l}\) into the last latent variable \(\mathbf{x}_{S}^{l}\) that follows an isotropic Gaussian distribution \(\mathbf{x}_{S}^{l}\sim\mathcal{N}(0,\mathbf{I}_{L})\). Denote by \(q\) the distribution of the forward process, and taking into account its Markovian nature, we have:
\[q\left(\mathbf{x}_{1:S}^{l}|\mathbf{x}_{0}^{l}\right)=\prod_{s=1}^{S}q(\mathbf{x}_{s}^{l}|\mathbf{x}_{s-1}^{l}). \tag{1}\]
Following Ho et al. (2020), the conditional process distribution \(q\) is parameterized as a Gaussian distribution as follows:
\[q(\mathbf{x}_{s}^{l}|\mathbf{x}_{s-1}^{l})=\mathcal{N}\left(\mathbf{x}_{s}^{l};\sqrt{1-\beta_{s}}\mathbf{x}_{s-1}^{l},\beta_{s}\mathbf{I}_{L}\right). \tag{2}\]
Note that the distribution \(q\) is not directly influenced by the previous frame \(\mathbf{x}^{l-1}\), nor by the input text \(\mathbf{y}^{l}\) in the conditional case. The _reverse process_ aims to recover the original speech frame \(\mathbf{x}_{0}^{l}\) from the corrupted frame \(\mathbf{x}_{S}^{l}\) by progressively denoising it. The probability distribution of the reverse process takes into account the autoregressive property of our overall model, conditioned on the previous frame \(\mathbf{x}^{l-1}\) and the input text if given. The reverse process, also under the Markovian assumption, is defined as the conditional distribution:
\[p_{\theta}\left(\mathbf{x}_{0:S}^{l}\mid\mathbf{x}^{l-1},\mathbf{y}^{l}\right)=p(\mathbf{x}_{S}^{l})\prod_{s=0}^{S-1}p_{\theta}(\mathbf{x}_{s}^{l}\mid\mathbf{x}_{s+1}^{l},\mathbf{x}^{l-1},\mathbf{y}^{l}), \tag{3}\]
where \(p_{\theta}\) is a learned model with parameters \(\theta\), and \(\mathbf{y}^{l}\) is either given in the conditional case or omitted in the unconditional case. To be precise, the learned model uses the overlap portion of the previous frame, namely \(L_{o}\) samples. We use the notation \(\mathbf{H}\mathbf{x}^{l-1}\) to specify the overlapped segment of the previous frame (Figure 1), where \(\mathbf{H}\in\mathbb{R}^{L\times L}\) is an _inpainting_ and _reordering_ matrix, which is defined as follows:
\[\mathbf{H}=\left[\begin{array}{cc}\mathbf{0}&\mathbf{I}_{L_{o}}\\ \mathbf{0}&\mathbf{0}\end{array}\right]. \tag{4}\]

Figure 1: The autoregressive model uses part of the previous frame to generate the current frame.

Beyond the Markovian factorization, as shown above, we further assume that each transition for a time step \(s\) is represented as drawn from a Gaussian distribution:
\[p_{\theta}(\mathbf{x}_{s}^{l}\ |\ \mathbf{x}_{s+1}^{l},\mathbf{x}^{l-1},\mathbf{y}^{l})\ =\ \mathcal{N}\Big{(}\mathbf{x}_{s}^{l};\ \mu_{\theta}\left(\mathbf{x}_{s+1}^{l},\mathbf{H}\mathbf{x}^{l-1},\mathbf{y}^{l},s\right),\Sigma_{\theta}\left(\mathbf{x}_{s+1}^{l},\mathbf{H}\mathbf{x}^{l-1},\mathbf{y}^{l},s\right)\Big{)}. \tag{5}\]
Training is performed by minimizing the variational bound on the negative log-likelihood while using the property that relates \(\mathbf{x}_{s}^{l}\) directly with \(\mathbf{x}_{0}^{l}\) (Ho et al., 2020):
\[\mathbf{x}_{s}^{l}=\sqrt{\bar{\alpha}_{s}}\mathbf{x}_{0}^{l}+\sqrt{1-\bar{\alpha}_{s}}\boldsymbol{\epsilon}_{s}\,,\quad\boldsymbol{\epsilon}_{s}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\,, \tag{6}\]
where \(\alpha_{s}=1-\beta_{s}\), \(\bar{\alpha}_{s}=\prod_{i=1}^{s}\alpha_{i}\). 
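Before turning to the training objective, here is a minimal NumPy sketch of the forward noising (6) for a single frame. The linear shape of the schedule is an assumption; the paper only gives the range \(\beta_{s}\in[1\times 10^{-4},0.02]\) and \(S=200\) in Section 5.

```python
import numpy as np

# Minimal sketch of eq. (6): jump directly from x_0 to x_s for a frame of L samples.
S = 200
betas = np.linspace(1e-4, 0.02, S)   # assumed linear schedule over S steps
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # alpha_bar_s = prod_{i<=s} alpha_i

def q_sample(x0, s, rng=np.random.default_rng(0)):
    """Draw x_s ~ q(x_s | x_0) via (6); s is 1-based."""
    eps = rng.standard_normal(x0.shape)
    x_s = np.sqrt(alpha_bars[s - 1]) * x0 + np.sqrt(1.0 - alpha_bars[s - 1]) * eps
    return x_s, eps                  # eps is the regression target in (7)
```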
Using (6), the training loss reduces to:
\[\mathcal{L}_{s}=\mathbb{E}_{\mathbf{x}_{0}^{l},\boldsymbol{\epsilon}_{s}}\left[\left\|\boldsymbol{\epsilon}_{\theta}\left(\sqrt{\bar{\alpha}_{s}}\mathbf{x}_{0}^{l}+\sqrt{1-\bar{\alpha}_{s}}\boldsymbol{\epsilon}_{s},\mathbf{H}\mathbf{x}^{l-1},\mathbf{y}^{l},s\right)-\boldsymbol{\epsilon}_{s}\right\|^{2}\right]\,, \tag{7}\]
where \(\boldsymbol{\epsilon}_{\theta}\) is an approximation of \(\boldsymbol{\epsilon}_{s}\) from \(\mathbf{x}_{s}\) with parameters \(\theta\), and \(s\) is uniformly taken from the entire set of diffusion time-steps. In summary, our model aims to learn the function \(\boldsymbol{\epsilon}_{\theta}\), which acts as a conditional _denoiser_. This function can be used along with a noisy speech frame to estimate a clean version of it.

The _inference_ procedure is sequential and carried out autoregressively for each frame. Assume we would like to generate the \(l\)-th frame, given the already generated previous frame \(\hat{\mathbf{x}}^{l-1}\). For the new frame generation, we apply the following equation iteratively from \(s\!=\!S\!-\!1\):
\[\mathbf{x}_{s}^{l}=\frac{1}{\sqrt{\alpha_{s}}}\left(\mathbf{x}_{s+1}^{l}-\frac{1-\alpha_{s}}{\sqrt{1-\bar{\alpha}_{s}}}\boldsymbol{\epsilon}_{\theta}\left(\mathbf{x}_{s+1}^{l},\mathbf{H}\hat{\mathbf{x}}^{l-1},\mathbf{y}^{l},s\right)\right)+\sigma_{s}\mathbf{z}_{s}\,, \tag{8}\]
where \(\boldsymbol{\epsilon}_{\theta}\) is the learned model, \(\mathbf{z}_{s}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(\sigma_{s}=\sqrt{\frac{1-\bar{\alpha}_{s-1}}{1-\bar{\alpha}_{s}}\beta_{s}}\). To initiate the generation, we designate the initial frame (\(l\!=\!0\)) as a silent one. In the last iteration, when \(s\!=\!0\), we use \(\mathbf{z}_{0}=\mathbf{0}\).

## 3 Text representation as linguistic and phonological units

Recall that our ultimate goal is to synthesize speech given an input text. Following Ren et al. (2020); Kim et al. (2020); Chen et al. (2021), we use the phonetic representation of the desired text as a conditioned embedding, as it accurately describes how the speech should be produced. Let \(\mathcal{Y}\) represent the set of phonemes, \(|\mathcal{Y}|=72\). Recall that in our setup, we are required to supply the phonetic content for each frame, denoted as \(\mathbf{y}^{l}\). This entails a vector comprising \(L\) values, where each value represents a phoneme from the set \(\mathcal{Y}\) for each respective sample. Note that while the phoneme change rate is much slower than the sampling frequency, we found this notation clearer for our discussion. Since the actual text is given as a sequence of words, during training, we transform the text sequence into phonemes and their corresponding timed phoneme sequence using a phoneme alignment procedure. This process identifies the time span of each phoneme within the waveform (McAuliffe et al., 2017). During inference, we do not have a waveform as our goal is to generate one, and we use a _grapheme-to-phoneme_ (G2P) component to convert the words into phonemes (Park and Kim, 2019) and a _duration predictor_ to estimate the time of each phoneme.

**Duration Predictor.** The duration predictor is a small neural network that gets a phoneme as input and outputs its typical duration. The implementation details are given in Appendix D.1. During inference, the generated frame duration is allowed to deviate from the exact value \(L\), since we restrict the vector \(\mathbf{y}^{l}\) to encompass entire phoneme time-spans, which is easier to manage. 
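To illustrate how the timed phonemes drive the framing, here is a hedged sketch that expands per-phoneme durations (in samples) into the per-sample conditioning vector and chooses frame boundaries covering whole phoneme time-spans; the exact boundary policy and the function names are our guesses from the description above, not the released implementation.

```python
# Hypothetical helpers; `phonemes` are ids and `durations` are lengths in samples.
def expand(phonemes, durations):
    """Per-sample conditioning: repeat each phoneme id for its duration."""
    y = []
    for p, d in zip(phonemes, durations):
        y.extend([p] * d)
    return y

def frame_bounds(durations, L, L_o):
    """Frame (start, end) pairs of length >= L that end on phoneme boundaries."""
    ends, t = [], 0
    for d in durations:
        t += d
        ends.append(t)
    bounds, start = [], 0
    while True:
        target = start + L
        end = ends[-1] if target >= ends[-1] else min(e for e in ends if e >= target)
        bounds.append((start, end))
        if end >= ends[-1]:
            break
        start = end - L_o   # consecutive frames overlap by L_o samples
    return bounds
```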
The speech corresponding to each text can be expressed in various ways, particularly when the conversion is performed directly on the waveform. Utilizing a diffusion process to implement the model further amplifies this variability, owing to the stochastic nature of the process. On the one hand, we aim to retain the diversity generated by the model to facilitate more reliable and nuanced speech. On the other hand, we aspire to steer and regulate the process to achieve natural-sounding speech. Consequently, following the approach outlined in Ren et al. (2020), we allow the incorporation of elements such as energy and pitch predictors into the process. Namely, we enhance the vector \(\mathbf{y}^{l}\) to include other linguistic information rather than just phonemes.

**Energy Predictor.** We gain significant flexibility and control over the resulting waveform by directly conditioning our model on the energy signal. Instead of relying on the estimated output of the energy predictor, we have the autonomy to determine the perceived loudness of each phoneme. Our approach offers guidance to the synthesis process while still being governed by the inherent stochasticity of the diffusion model. Much like the duration predictor, our energy predictor was trained to predict the relative energy associated with each phoneme. Detailed information about the implementation of the energy predictor can be found in Appendix D.2.

**Pitch.** Pitch, or fundamental frequency, is another critical element in the structure of the waveform. To assess its impact on the synthesis process while conditioned on a given pitch contour, we also decided to incorporate it into our model. In this case, we did not build a pitch predictor and used a given sequence of pitch values, estimated using a state-of-the-art method (Segal et al., 2021).

## 4 Model architecture

The architecture of _DiffAR_ is shown in Figure 2. The model's backbone is based on the _DiffWave_ architecture (Kong et al., 2020). Figure 2(a) illustrates the general structure of the network. The network consists of \(N=36\) residual layers, each containing \(C=256\) residual channels. The output from each layer is integrated with the accumulated outputs from previous ones. These combined outputs are fed into a network of two fully connected layers, which leverage ReLU activation functions to generate the final output. The layer dimensions are described in Appendix C.

Figure 2(b) schematically depicts a single channel of the residual layer. This layer employs the bidirectional dilated convolution architecture (Oord et al., 2016), which facilitates parallel inference for each frame through a dilation cycle of \([1,2,\dots,2048]\). To foster an autoregressive progression, the layer is conditioned on \(\mathbf{H}\mathbf{x}^{l-1}\), incorporating essential information from the previous frame. The indication of the diffusion step \(s\) is accomplished by employing a 128-dimensional encoding vector for each \(s\) (Vaswani et al., 2017) as input to the model, similar to the approach used in (Kong et al., 2020). Additionally, _DiffAR_ can be conditioned on optional data, including the targeted phonemes, the desired energy, and the desired pitch. Each conditioned signal passes through a Multi-scaled Residual Block (MRB) and is then summed with the output of the bidirectional convolutional component. The MRBs comprise three convolutional layers with kernels \([3,5,7]\) and use the same dilation pattern as the residual layer. These MRBs are trained concurrently with the model. 
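A hedged PyTorch sketch of one Multi-scaled Residual Block (MRB) as described above: three parallel 1-D convolutions with kernel sizes [3, 5, 7] sharing the residual layer's dilation, whose outputs are summed before being added to the bidirectional convolution output. The channel counts and the summation of branches are assumptions about details not fully specified here.

```python
import torch
import torch.nn as nn

class MRB(nn.Module):
    """Sketch of a Multi-scaled Residual Block for one conditioning signal."""

    def __init__(self, in_ch, out_ch=256, dilation=1):
        super().__init__()
        # "same" padding keeps the frame length for odd kernel sizes
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, dilation=dilation,
                      padding=(k - 1) * dilation // 2)
            for k in (3, 5, 7)
        )

    def forward(self, c):                 # c: (batch, in_ch, L)
        return sum(branch(c) for branch in self.branches)
```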
Figure 2: (a) A general overview of the structure of the residual layers and their interconnections. (b) A detailed overview of a single residual layer.

## 5 Experiments

In this section, we comprehensively evaluate our model through empirical analysis. Initially, we explore unconditional speech generation, wherein a specific text does not constrain the synthesis. Subsequently, we discuss our conditional model, employed when there is a designated text to synthesize. We compare our model with two state-of-the-art TTS models: _WaveGrad 2_(Chen et al., 2021) and _FastSpeech 2_(Ren et al., 2020). We then turn to a short ablation study, comparing different parts of our model. We conclude with the computational limits of our method compared with other methods. In Appendices B and A, we also present the synthesis of vocal fry by our model and its stochasticity and controllability.

All models were trained and evaluated on the LJ-Speech (Ito and Johnson, 2017) dataset, which consists of 13,100 short audio clips (about 24 hours) of a female speaker. The dataset was divided into three subsets: 12,838 samples for the training set, 131 samples for the test set, and an additional 131 samples for the validation set. Throughout the experiments, we maintained the original LJ-Speech data partitioning. In all the experiments, we used relatively long frame durations (e.g., \(L\) = 500 and \(L_{o}\) = 250 milliseconds). We would like to point out that a conventional frame length of 20-30 milliseconds and a shift of 10 milliseconds, often used in speech processing, are established based on the stationary properties of speech. However, this is not a concern in diffusion models, thereby permitting us to employ substantially larger frame sizes. This aids the diffusion process in seamlessly modeling the waveform encompassing three to four consecutive phonemes in the newly generated segment.

### 5.1 Unconditional speech generation

First, we created a model entirely unconditioned by external factors, relying solely on information from the previous frame. The main goal is to assess whether generating a sequence of frames, as outlined in the autoregressive approach in Section 2, results in a continuous signal with seamless transitions. During the training phase, we fixed the frame length settings to \((L,L_{o})=(1000,500)\), utilizing \(S=200\) diffusion steps. We utilize a noise schedule parameterized by \(\beta_{s}\in\left[1\times 10^{-4},0.02\right]\) to control the diffusion process. However, in the synthesis phase, we assessed the model's ability to generalize across different frame lengths, specifically considering \((L,L_{o})\ =\ \{(1000,500)\,,(500,250)\,,(400,200)\}\). Examples can be found in our model's GitHub repository1.

Footnote 1: [https://github.com/RBenita/DIFFAR.git](https://github.com/RBenita/DIFFAR.git)

The generated signals exhibit smooth transitions and connectivity, indicating that the _DiffAR_ architecture has effectively learned local dependencies. However, the model generated non-existent but human language-like words (similar to Oord et al., 2016; Weiss et al., 2021). Additionally, we observed that global dependencies are improved as the frame length increases, utilizing the entire learned receptive field. This result is not unexpected, considering the model does not condition on textual information. Modeling a manifold that generates a large vocabulary and meaningful words without textual guidance is still challenging. 
On the other hand, a simple manifold for only ten digits can be successfully generated (Kong et al., 2020).

### 5.2 Conditional Speech Generation

We conducted a comparative study of our conditional model against other state-of-the-art TTS models. Although there is a plethora of TTS systems available, our objective was to benchmark against the highest-performing and most relevant models, _WaveGrad 2_ and _FastSpeech 2_. We evaluated the synthesized speech using two subjective and two objective metrics. For subjective measurement, we used the mean opinion scores (MOS), where 45 samples from the test set are evaluated for each system, and 10 ratings are collected for each sample. Raters were recruited using the Amazon Mechanical Turk platform, and they were asked to evaluate the quality of the speech on a scale of 1 to 5. Despite their advantages, MOS tests can be challenging to compare between different papers (Kirkland et al., 2023), and they may even exhibit bias within the same study, due to the influence of samples from other systems in the same trial. To mitigate these challenges and provide a more robust evaluation framework, we used another subjective evaluation, the Multiple Stimuli with Hidden Reference and Anchor (MUSHRA) test. We followed the MUSHRA protocol (Series, 2014), using both a hidden reference and a low anchor. For the overall quality test, raters were asked to rate the perceptual quality of the provided samples from 1 to 100. We report average ratings along with a \(95\%\) confidence interval for both metrics.

We randomly selected 60 recordings from the test set for an objective assessment and used their text to re-synthesize waveforms. We evaluated the generated waveforms using state-of-the-art automatic speech recognition (Whisper medium model; Radford et al., 2023) and reported the character error rate (CER) and the word error rate (WER) relative to the original text.

During the training phase, we fixed the frame length settings to \((L,L_{o})=(500,250)\). We utilize a noise schedule \(\beta_{s}\in\left[1\times 10^{-4},0.02\right]\). We trained two models: one with \(S\!=\!200\) steps and one with \(S\!=\!1000\) steps. During inference, the models were conditioned on phonemes (obtained from the G2P unit (Park and Kim, 2019)), the predicted durations, and the predicted energy.

**WaveGrad 2.** We start by describing a comparison of our model to _WaveGrad 2_(Chen et al., 2021), which is an encoder-decoder end-to-end waveform generation system that is based on diffusion models. We used an unofficial implementation2 of it as the original one is unavailable. Results for _WaveGrad 2_ are presented in Table 1. Each row represents a different model, where the first row, denoted _Ground truth_, represents the performance with the original waveforms from the database, and it is given as a reference. For each model, we show the results of MOS, MUSHRA, CER and WER. The column labeled **MOS scaled** indicates the adjusted MOS results, which have been scaled proportionately to align with the MOS values of ground truth and _WaveGrad 2_ reported in Chen et al. (2021).

Footnote 2: [https://github.com/maum-ai/wavegrad2](https://github.com/maum-ai/wavegrad2)

The table illustrates that our model surpasses _WaveGrad 2_ across all evaluated metrics. This can be attributed to the fact that _WaveGrad 2_ uses an architecture that generates the entire utterance in a single instance instead of operating in an autoregressive manner like _DiffAR_. 
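For concreteness, here is a sketch of the objective evaluation loop described above, assuming the openai-whisper and jiwer packages; the text normalization applied before scoring is our assumption, since the paper does not specify it.

```python
import whisper
import jiwer

# Transcribe a synthesized waveform with the Whisper medium model and score it
# against the reference text; lower CER/WER indicate higher intelligibility.
model = whisper.load_model("medium")

def intelligibility(wav_path, reference):
    hypothesis = model.transcribe(wav_path)["text"]
    ref, hyp = reference.lower().strip(), hypothesis.lower().strip()
    return jiwer.cer(ref, hyp), jiwer.wer(ref, hyp)
```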
\begin{table} \begin{tabular}{l c c c c c} **Method** & \(\uparrow\)**MOS** & \(\uparrow\)**MOS scaled** & \(\uparrow\)**MUSHRA** & \(\downarrow\)**CER(\%)** & \(\downarrow\)**WER(\%)** \\ \hline Ground truth & \(3.98\pm 0.08\) & \(4.70\pm 0.09\) & \(71.2\pm 2.0\) & \(0.89\) & \(2.13\) \\ WaveGrad 2 & \(3.61\pm 0.09\) & \(4.26\pm 0.10\) & \(63.8\pm 2.3\) & \(3.47\) & \(5.75\) \\ DiffAR (200 steps) & \(3.75\pm 0.08\) & \(4.43\pm 0.10\) & \(65.7\pm 2.2\) & \(2.67\) & \(6.16\) \\ DiffAR (1000 steps) & \(3.77\pm 0.08\) & \(\textbf{4.45}\pm\textbf{0.09}\) & \(\textbf{66.7}\pm\textbf{2.2}\) & **1.95** & **4.65** \\ \end{tabular} \end{table} Table 1: Comparison to WaveGrad 2 (Chen et al., 2021)

**FastSpeech 2.** We turn now to compare _DiffAR_ with _FastSpeech 2_ (Ren et al., 2020), which is one of the best models implementing the two-stage decoder-vocoder approach. Again, we used an unofficial implementation3 as the original one associated with the paper was not made available. We evaluated two versions of this model: the original _FastSpeech 2_, as described in Ren et al. (2020), and an improved version, which uses an additional _Tacotron-2_ (Shen et al., 2018) style post-net after the decoder, gradient clipping during the training, phoneme-level pitch and energy prediction instead of frame-level prediction, and normalization of the pitch and energy features.4 Both versions were trained on the LJ-speech dataset, with a pre-trained HiFi-GAN (Chu et al., 2017) as a vocoder. The results are given in Table 2. Like the previous table, the rows represent different models, and the columns are the evaluation metrics. It is important to note that the subjective evaluations (MOS and MUSHRA tests) were carried out independently for _WaveGrad 2_ and _FastSpeech 2_ to ensure the results were not influenced by each other. Also note that the column **MOS scaled** in this table was scaled proportionally to the ground-truth and _FastSpeech 2_ MOS values, as reported in Ren et al. (2020).

\begin{table} \begin{tabular}{l c c c c c} **Method** & \(\uparrow\)**MOS** & \(\uparrow\)**MOS scaled** & \(\uparrow\)**MUSHRA** & \(\downarrow\)**CER(\%)** & \(\downarrow\)**WER(\%)** \\ \hline Ground truth & \(3.98\pm 0.05\) & \(4.22\pm 0.05\) & \(68.9\pm 1.4\) & \(0.89\) & \(2.13\) \\ FastSpeech 2 & \(3.54\pm 0.09\) & \(3.75\pm 0.09\) & \(63.4\pm 2.2\) & \(2.15\) & \(4.82\) \\ FastSpeech 2 improved & \(3.75\pm 0.08\) & \(3.98\pm 0.09\) & \(64.2\pm 2.5\) & **1.73** & **4.31** \\ DiffAR (200 steps) & \(3.76\pm 0.05\) & \(3.99\pm 0.06\) & \(66.0\pm 1.5\) & \(2.67\) & \(6.16\) \\ DiffAR (1000 steps) & \(3.82\pm 0.05\) & \(\textbf{4.05}\pm\textbf{0.06}\) & \(\textbf{66.2}\pm\textbf{1.5}\) & \(1.95\) & \(4.65\) \\ \end{tabular} \end{table} Table 2: Comparison to FastSpeech 2 (Ren et al., 2020)

Footnote 4: Ideally, we should have also compared our model to _FastSpeech 2s_ (Ren et al., 2020), which is an end-to-end text-to-waveform system, and to _Wave-Tacotron_ (Weiss et al., 2021), but no implementations have been found for these models.

Based on the MOS and MUSHRA values, it is evident that our model generates speech characterized by higher quality and a more natural sound, compared to _FastSpeech 2_, and in the same ballpark compared to _FastSpeech 2 improved_. By analyzing the CER and WER values, it is evident that our model achieves slightly greater intelligibility than _FastSpeech 2_, yet still falls short of the performance of _FastSpeech 2 improved_. 
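As a usage note, the reported **MOS scaled** columns are consistent with one multiplicative factor per table, anchored on the published ground-truth MOS; the proportional rule below is our reading of "scaled proportionately":

```python
# Table 1 anchor: measured ground-truth MOS 3.98 vs. the published value 4.70.
scale = 4.70 / 3.98
print(round(3.61 * scale, 2))   # WaveGrad 2          -> 4.26
print(round(3.75 * scale, 2))   # DiffAR (200 steps)  -> 4.43
print(round(3.77 * scale, 2))   # DiffAR (1000 steps) -> 4.45
```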
### 5.3 Ablation study

In this section, we introduce an ablation study designed to evaluate the impact of integrating additional components into the model and assess these components' contribution to the observed error rates. We carried out evaluations based on CER and WER metrics to accomplish this. The results are presented in Table 3. The table is structured to present the ablation results initially when the conditioning is based on the ground truth values for linguistic and phonetic content. Subsequently, we showcase the ablation results obtained with predicted values. The _DiffAR-E_ model denotes a variant conditioned on phonemes and their respective durations but not on the energy. In contrast, the _DiffAR_ model is conditioned on phonemes, their durations, and their energy levels. Lastly, the _DiffAR+P_ model represents a version that additionally incorporates pitch conditioning. The number in parentheses indicates the number of diffusion steps each model was trained and tested with. The first set of columns indicates whether the model was conditioned on true or predicted values. The final two columns provide the CER and WER values.

\begin{table} \begin{tabular}{l c c c c c c} **Method** & **Phonemes** & **Durations** & **Energy** & **Pitch** & \(\downarrow\)**CER(\%)** & \(\downarrow\)**WER(\%)** \\ \hline Ground truth & – & – & – & – & \(0.89\) & \(2.13\) \\ DiffAR-E (200) & true & true & – & – & \(2.90\) & \(5.98\) \\ DiffAR (200) & true & true & true & – & \(1.18\) & \(3.96\) \\ DiffAR (1000) & true & true & true & – & \(1.70\) & \(4.25\) \\ DiffAR+P (200) & true & true & true & true & \(1.12\) & \(3.47\) \\ \hline DiffAR-E (200) & true & pred & – & – & \(2.68\) & \(6.09\) \\ DiffAR-E (200) & pred & pred & – & – & \(3.35\) & \(7.41\) \\ DiffAR (200) & true & pred & pred & – & \(1.05\) & \(3.09\) \\ DiffAR (1000) & true & pred & pred & – & \(2.03\) & \(4.34\) \\ DiffAR (200) & pred & pred & pred & – & \(2.67\) & \(6.16\) \\ DiffAR (1000) & pred & pred & pred & – & \(1.95\) & \(4.65\) \\ \end{tabular} \end{table} Table 3: Intelligibility of different configurations of _DiffAR_, where different phonetic and linguistic values are either true or predicted.

It can be seen from the results that as we incorporate more supplementary information into the process, the quality of the results improves. In addition, as the task approaches a more realistic scenario, where the only source of original information is the text itself, we observe an increase in error rates, and the inaccuracies appear to be linked to the prediction components. Nevertheless, it is noteworthy that by increasing the number of diffusion steps in the process, the model seems capable of autonomously learning crucial relationships, resulting in lower error values in the realistic scenario compared to a shorter process. Another notable finding is that when we have access to the original energy and pitch information, we achieve results that closely approximate ground truth. This outcome is expected, as this information plays a significant role in modeling the characteristics of a natural waveform signal. Another noteworthy aspect highlighted in these results is the balance between the inherent stochasticity of the diffusion process and the degree of controllability achieved through conditioning the model with supplementary information. A more detailed demonstration is provided in Appendix A. 
### 5.4 Computational limitations and synthesis time

Existing models face a notable challenge in training and synthesizing extremely long texts due to GPU computational constraints (Ren et al., 2020). However, with its autoregressive architecture, our model might handle this while preserving a consistent signal structure. Figure 3 presents the analysis of the synthesis process of three models: _DiffAR (200)_, _WaveGrad 2_, and _FastSpeech 2_, where each time we doubled the number of words in the text and tested the maximum GPU consumption throughout the process. Each point on this graph was created by executing the corresponding model on an NVIDIA A40 GPU with 48GB of memory. For the last two models, the GPU consumption escalates with an increase in text length up to a certain threshold where it hits a limit and triggers an out-of-memory error. For the _WaveGrad 2_ model, this occurs after processing 512 words; in the case of _FastSpeech 2_, it happens after 1024 words. By contrast, our model maintains a consistent memory consumption level, an order of magnitude lower than the other models, offering controlled efficiency.

A notable limitation of _DiffAR_ is the extended synthesis time associated with the use of diffusion models and the inherent limitation of the autoregressive approach, which is sequential by definition. There are numerous strategies to expedite the synthesis process while still maintaining the autoregressive nature: shortening the diffusion process (e.g., using DDIM (Song et al., 2020), which involves a trade-off between time and quality) or even by developing a parallelized algorithm (Oord et al., 2018). However, addressing this issue goes beyond the scope of this paper.

## 6 Conclusion

In this work, we proposed _DiffAR_, an end-to-end denoising diffusion autoregressive model designed to address audio synthesis tasks, specifically focusing on TTS applications. Our model incorporates a carefully selected set of characteristics, each contributing significantly to its overall performance. The diffusive process enriches synthesis quality and introduces stochasticity, while the autoregressive nature enables the handling of temporal signals without temporal constraints and facilitates effective integration with the diffusive process. Synthesizing the waveform directly, without using any intermediate representations, enhanced the variability and simplified the training procedure. By estimating both the phase and amplitude, _DiffAR_ enables the modeling of phenomena such as vocal fry phonation, resulting in more natural-sounding signals. Furthermore, the architecture of _DiffAR_ offers simplicity and versatility, providing explicit control over the output signal. These characteristics are interconnected, and their synergy contributes to the model's ability to outperform leading models in terms of both intelligibility and audio quality.

Like other autoregressive models, the _DiffAR_ model faces the challenge of long synthesis times. Future work can focus on reducing synthesis time by using fewer diffusion steps (Song et al., 2020) or by exploring methods to expedite the process (Hoogeboom et al., 2021). Another avenue for improvement is conditioning the model with elements like speaker identity and emotions and incorporating classifier-free guidance (Ho and Salimans, 2022) to handle such various conditions effectively. Lastly, ablation studies suggest that enhancing the forced aligner, grapheme-to-phoneme, and prediction components could significantly improve the results. 
## 6 Conclusion In this work, we proposed _DiffAR_, an end-to-end denoising diffusion autoregressive model designed to address audio synthesis tasks, specifically focusing on TTS applications. Our model incorporates a carefully selected set of characteristics, each contributing significantly to its overall performance. The diffusive process enriches synthesis quality and introduces stochasticity, while the autoregressive nature enables the handling of temporal signals without constraints on their duration and facilitates effective integration with the diffusive process. Synthesizing the waveform directly, without using any intermediate representations, enhances the variability and simplifies the training procedure. By estimating both the phase and amplitude, _DiffAR_ enables the modeling of phenomena such as vocal fry phonation, resulting in more natural-sounding signals. Furthermore, the architecture of _DiffAR_ offers simplicity and versatility, providing explicit control over the output signal. These characteristics are interconnected, and their synergy contributes to the model's ability to outperform leading models in terms of both intelligibility and audio quality. Like other autoregressive models, _DiffAR_ faces the challenge of long synthesis times. Future work can focus on reducing synthesis time by using fewer diffusion steps (Song et al., 2020) or by exploring methods to expedite the process (Hoogeboom et al., 2021). Another avenue for improvement is conditioning the model on elements like speaker identity and emotions and incorporating classifier-free guidance (Ho and Salimans, 2022) to handle such varied conditions effectively. Lastly, the ablation studies suggest that enhancing the forced aligner, the grapheme-to-phoneme converter, and the prediction components could significantly improve the results. ## 7 Reproducibility To ensure the work is as reproducible as possible and comparable with future models, we have provided comprehensive descriptions of both the training and the sampling procedures. The main ideas of the method are presented in Section 2. The model architecture is provided in Section 4 and is also presented in a more detailed format in Appendices C and D. In addition, our complete code for training and inference, along with hyperparameter settings to run experiments and examples, can be found in the project's GitHub repository: [https://github.com/RBenita/DIFFAR.git](https://github.com/RBenita/DIFFAR.git).
2302.01190
On the Efficacy of Differentially Private Few-shot Image Classification
There has been significant recent progress in training differentially private (DP) models which achieve accuracy that approaches the best non-private models. These DP models are typically pretrained on large public datasets and then fine-tuned on private downstream datasets that are relatively large and similar in distribution to the pretraining data. However, in many applications including personalization and federated learning, it is crucial to perform well (i) in the few-shot setting, as obtaining large amounts of labeled data may be problematic; and (ii) on datasets from a wide variety of domains for use in various specialist settings. To understand under which conditions few-shot DP can be effective, we perform an exhaustive set of experiments that reveals how the accuracy and vulnerability to attack of few-shot DP image classification models are affected as the number of shots per class, privacy level, model architecture, downstream dataset, and subset of learnable parameters in the model vary. We show that to achieve DP accuracy on par with non-private models, the shots per class must be increased as the privacy level increases. We also show that learning parameter-efficient FiLM adapters under DP is competitive with learning just the final classifier layer or learning all of the network parameters. Finally, we evaluate DP federated learning systems and establish state-of-the-art performance on the challenging FLAIR benchmark.
Marlon Tobaben, Aliaksandra Shysheya, John Bronskill, Andrew Paverd, Shruti Tople, Santiago Zanella-Beguelin, Richard E Turner, Antti Honkela
2023-02-02T16:16:25Z
http://arxiv.org/abs/2302.01190v3
# On the Efficacy of Differentially Private Few-shot Image Classification ###### Abstract There has been significant recent progress in training differentially private (DP) models which achieve accuracy that approaches the best non-private models. These DP models are typically pretrained on large public datasets and then fine-tuned on downstream datasets that are (i) relatively large, and (ii) similar in distribution to the pretraining data. However, in many applications including personalization, it is crucial to perform well in the few-shot setting, as obtaining large amounts of labeled data may be problematic; and on images from a wide variety of domains for use in various specialist settings. To understand under which conditions few-shot DP can be effective, we perform an exhaustive set of experiments that reveals how the accuracy and vulnerability to attack of few-shot DP image classification models are affected as the number of shots per class, privacy level, model architecture, dataset, and subset of learnable parameters in the model vary. We show that to achieve DP accuracy on par with non-private models, the shots per class must be increased as the privacy level increases, by as much as 32\(\times\) for CIFAR-100 at \(\epsilon=1\). We also find that few-shot non-private models are highly susceptible to membership inference attacks. DP provides clear mitigation against the attacks, but a small \(\epsilon\) is required to effectively prevent them. Finally, we evaluate DP federated learning systems and establish state-of-the-art performance on the challenging FLAIR federated learning benchmark. ## 1 Introduction It is well known that neural networks trained without formal privacy guarantees can be attacked to expose a subset of the training data (Carlini et al., 2021; Balle et al., 2022). For applications where training data are sensitive (Abowd, 2018; Cormode et al., 2018), it has become increasingly common to train under Differential Privacy (DP) (Dwork et al., 2006), which is considered to be the gold standard for protecting individual training examples from discovery. Training with DP-SGD (Rajkumar and Agarwal, 2012; Song et al., 2013; Abadi et al., 2016), which adapts SGD to guarantee DP, typically impairs model performance due to gradient clipping and the addition of noise during training in order to mask the contribution of individual examples to model updates. However, there has been significant recent progress in training DP models which achieve accuracy that approaches the best non-private models in both NLP (Li et al., 2022; Yu et al., 2022) and computer vision (Kurakin et al., 2022; De et al., 2022; Mehta et al., 2022; Cattan et al., 2022). The majority of these approaches are based on transfer learning where the models have been pretrained on large public datasets and then fine-tuned (Yosinski et al., 2014) on a downstream dataset, as this approach has been shown to be highly effective on non-private data (Kolesnikov et al., 2019; Shysheya et al., 2022). The subset of model parameters to fine-tune ranges from all model parameters (Kolesnikov et al., 2019) to only the final layer, with the tuning of parameter-efficient adapters (Perez et al., 2018; Houlsby et al., 2019; Karimi Mahabadi et al., 2021) becoming increasingly prevalent. Transfer learning has also proven successful in the DP setting with (Yu et al., 2022) and without (Mehta et al., 2022) adapters.
However, strong DP results have only been demonstrated with relatively large datasets, with no extensive DP few-shot studies performed. The few-shot setting is crucial to any application where obtaining large amounts of labeled data is problematic. It is also especially important in federated learning (where a global model is learned from many distributed users) and personalized federated learning (where a model obtained via federated learning is personalized with a specific user's data), where data contributed by each user may be small and sensitive, including medical images (Sheller et al., 2020), personal photos (Massiceti et al., 2021), or personal data or actions entered on a mobile device (Differential Privacy Team, 2017; Ding et al., 2017). In addition, the strong DP transfer learning results that have recently been reported have largely considered the case where the data distribution of the downstream dataset overlaps significantly with the pretraining data distribution (Tramer et al., 2022). A more demanding test is out-of-domain transfer, where more information needs to be extracted from the downstream dataset, making private learning more challenging. Support for differing data distributions is essential for frequently encountered specialist settings such as medical imaging, Earth imaging, or personalized object recognition. In this work, we answer the question: _Under what conditions is differentially private few-shot image classification effective?_ We provide the first comprehensive study on the efficacy of DP few-shot image classification in both central and federated settings. Our contributions are: * We perform an exhaustive set of experiments that reveals how the accuracy of DP and non-private models is affected as the number of shots per class, privacy level, dataset distribution overlap, model architecture, and the subset of learnable parameters in the model vary. Though high DP accuracy can be achieved with relatively little data when the distribution overlap is high (CIFAR-10 achieves better than \(90\%\) accuracy using only \(2\%\) of the dataset at \(\epsilon=1\)), a key finding is that the number of shots per class must be increased by as much as 32\(\times\) for CIFAR-100 as the privacy level is increased to \(\epsilon=1\) to obtain DP accuracy on par with non-private accuracy. * We establish a new DP baseline for the VTAB-1k (Zhai et al., 2019) transfer learning benchmark to encourage DP researchers to test methods on more challenging datasets. * We assess the vulnerability of DP few-shot models with a strong membership inference attack (MIA) and find the attack to perform close to the theoretical upper bound implied by DP under the _substitute_ adjacency for different values of \(\delta\). The bound is significantly higher than indicated by a naive analysis with \((\epsilon,\delta)\) from the _add/remove_ adjacency commonly used in DP deep learning. * We establish state-of-the-art performance on the challenging FLAIR (Song et al., 2022) few-shot federated learning benchmark in terms of both classification metrics (macro average precision increased from 44.3 to 51.9) and communication efficiency (cost reduced from 11.9M to 0.017M parameters per round) using the same backbone. * Finally, we establish recommended practice guidelines and considerations for training DP few-shot models. ## 2 Background In this section, we provide background information, definitions, and nomenclature required for subsequent sections.
We focus our analysis on few-shot transfer-learning-based image classifiers that rely on large pretrained backbones. **Preliminaries** We denote input images \(\mathbf{x}\) and image labels \(y\in\{1,\dots,C\}\) where \(C\) is the number of image classes indexed by \(c\). Assume that we have access to a model \(f(\mathbf{x})=h_{\mathbf{\phi}}(b_{\mathbf{\theta}}(\mathbf{x}))\) that outputs class-probabilities for an image \(p(y=c|\mathbf{x},\mathbf{\theta},\mathbf{\phi})\) for \(c=1,\dots,C\) and comprises a feature extractor backbone \(b_{\mathbf{\theta}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d_{b}}\) with parameters \(\mathbf{\theta}\) pretrained on a large upstream public dataset such as ImageNet-21K (Russakovsky et al., 2015), where \(d\) is the input image dimension and \(d_{b}\) is the output feature dimension, and a linear-layer classifier or head \(h_{\mathbf{\phi}}:\mathbb{R}^{d_{b}}\rightarrow\mathbb{R}^{C}\) with weights \(\mathbf{\phi}\). Let \(\mathcal{D}=\{(\mathbf{x}_{n},y_{n})\}_{n=1}^{N}\) be the downstream dataset that we wish to fine-tune the model \(f\) to. We denote the number of training examples per class or _shots_ as \(S\). **Learnable Parameters** In all experiments, the head parameters \(\mathbf{\phi}\) are initialized to zero and are always learned when fine-tuning on \(\mathcal{D}\). For the backbone weights \(\mathbf{\theta}\), we consider three options: (i) _Head_: \(\mathbf{\theta}\) are fixed at their pretrained values and do not change during fine-tuning; only the head parameters \(\mathbf{\phi}\) are updated; (ii) _All_: \(\mathbf{\theta}\) are initialized with pretrained values, but can be updated during fine-tuning in addition to the head; and (iii) _FiLM_: using FiLM (Perez et al., 2018) layers. There exists a myriad of adapters for both 2D convolutional and transformer networks, including FiLM, Adapter (Houlsby et al., 2019), LoRA (Hu et al., 2021), VPT (Jia et al., 2022), AdaptFormer (Chen et al., 2022), NOAH (Zhang et al., 2022), Convpass (Jie and Deng, 2022), Model Patch (Mudrakarta et al., 2019), and CaSE (Patacchiola et al., 2022), that enable a pretrained network to adapt to a downstream dataset in a parameter-efficient manner. In this work, we use FiLM due to its simplicity, high performance, and low parameter count (Shysheya et al., 2022), though another adapter could be used. A FiLM layer scales and shifts the activations \(\mathbf{a}_{ij}\) arising from the \(j^{th}\) output of a layer in the \(i^{th}\) block of the backbone as \(\texttt{FiLM}(\mathbf{a}_{ij},\gamma_{ij},\beta_{ij})=\gamma_{ij}\mathbf{a}_{ij}+\beta_{ij}\), where \(\gamma_{ij}\) and \(\beta_{ij}\) are scalars. We implement FiLM by fixing \(\mathbf{\theta}\) at their pretrained values except for a subset of the scale and offset parameters utilized in the backbone normalization layers (e.g. BatchNorm, GroupNorm, or LayerNorm -- see Appendix A.4.1 for details), which can be updated during fine-tuning. For example, in a ResNet50, there are only \(11\,648\) learnable FiLM parameters, which is fewer than 0.05% of \(\mathbf{\theta}\).
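As a concrete illustration, a minimal FiLM layer could be written as follows in PyTorch; this is our sketch rather than the paper's exact implementation, and in the _FiLM_ configuration only such scales/offsets (plus the head) would receive gradients while the backbone stays frozen:

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Per-channel affine modulation: FiLM(a, gamma, beta) = gamma * a + beta."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_channels))   # scales, init to 1
        self.beta = nn.Parameter(torch.zeros(num_channels))   # offsets, init to 0

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        # a: (batch, channels, ...) activations from a backbone block
        shape = (1, -1) + (1,) * (a.dim() - 2)
        return self.gamma.view(shape) * a + self.beta.view(shape)

x = torch.randn(8, 64, 32, 32)   # e.g. a convolutional feature map
print(FiLM(64)(x).shape)         # torch.Size([8, 64, 32, 32])
```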
**Dataset Distribution Overlap (DDO)** The overlap between the distributions of the pretraining data and the downstream dataset is a key determinant of the ease and success of transfer learning. We measure the overlap as the relative difference between the accuracy of the _All_ and _Head_ learnable parameter configurations for a non-private model. If two domains overlap substantially, then only adapting the head of the network is sufficient. If the overlap is small, then the backbone must also be adapted. Table A.1 provides the DDO values for all of the datasets used in the paper. **Differential Privacy (DP)** DP (Dwork et al., 2006) is the gold standard for protecting sensitive data against privacy attacks. A stochastic algorithm is differentially private if it produces similar output distributions on similar datasets. More formally, \((\epsilon,\delta)\)-DP with privacy budget \(\epsilon\geq 0\) (lower means more private) and additive error \(\delta\in[0,1]\) bounds how much the output distribution can diverge on adjacent datasets. We primarily use _add/remove_ adjacency, where two datasets are adjacent if one can be obtained from the other by adding or removing one datapoint; we denote by \((\epsilon,\delta)\) the corresponding privacy parameters. When considering _substitute_ adjacency, where two datasets are adjacent if one can be obtained from the other by substituting one datapoint, we use instead \((\epsilon_{S},\delta_{S})\) (see Appendix A.2 for more details). The additive error is typically chosen such that \(\delta<1/|\mathcal{D}|\). We refer to Dwork and Roth (2014) for a comprehensive introduction to DP. DP-SGD (Rajkumar and Agarwal, 2012; Song et al., 2013; Abadi et al., 2016) adapts stochastic gradient descent (SGD) to guarantee DP. DP-SGD selects mini-batches using Poisson sampling, clips the \(\ell_{2}\) norm of per-example gradients, and adds isotropic Gaussian noise to the sum of mini-batch gradients. The privacy loss in \((\epsilon,\delta)\)-DP is a result of the noise multiplier \(\sigma^{2}\), which scales the variance of the added noise, the number of steps, and the sampling ratio (the Poisson sampling probability, i.e., expected batch size\(/|\mathcal{D}|\)). **Membership Inference Attacks (MIAs)** MIAs aim to determine if a particular example was used in the training set of a model (Shokri et al., 2017). MIAs can be used to audit DP training algorithms, as they test how well the \((\epsilon,\delta)\)-DP guarantee holds for trained models. While there are many types of MIA (Hu et al., 2022), in this work we consider attacks that operate in the black-box mode (i.e. only model outputs can be observed) and can evaluate the privacy loss on particular training examples (Carlini et al., 2022; Ye et al., 2022). In addition to black-box access to the model, we assume that attacks have access to images from the training data distribution and know the training algorithm used and its hyperparameters. To evaluate the effectiveness of a MIA, we examine the Receiver Operating Characteristic (ROC) curve, which plots the attack true positive rate (TPR) against its false positive rate (FPR). We focus on the TPR at low FPR regime, since a MIA is harmful if it can infer membership of even a small number of training examples with high certainty (Carlini et al., 2022). DP implies an upper bound on TPR at a given FPR: \(\mathrm{TPR}\leq\min\{e^{\epsilon_{S}}\,\mathrm{FPR}+\delta_{S},\;1-\frac{1-\delta_{S}-\mathrm{FPR}}{e^{\epsilon_{S}}}\}\). Since MIAs are defined w.r.t. substitute adjacency, this depends on \((\epsilon_{S},\delta_{S})\) rather than \((\epsilon,\delta)\) (see Appendix A.2).
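For concreteness, this bound is straightforward to evaluate numerically; a small sketch (the function name and parameter values are ours):

```python
import numpy as np

def tpr_upper_bound(fpr, eps_s, delta_s):
    """DP upper bound on MIA TPR at a given FPR, under substitute adjacency."""
    fpr = np.asarray(fpr, dtype=float)
    return np.minimum(np.exp(eps_s) * fpr + delta_s,
                      1.0 - (1.0 - delta_s - fpr) / np.exp(eps_s))

fpr = np.logspace(-4, 0, 5)   # FPR grid from 1e-4 to 1
print(tpr_upper_bound(fpr, eps_s=2.0, delta_s=1e-3))
```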
## 3 Related Work **DP Transfer Learning** Section 1 describes various works where DP transfer learning using models pretrained on large public datasets achieves accuracy close to non-private approaches. However, to the best of our knowledge, there are no comprehensive studies on few-shot transfer learning under DP. The closest work to ours is Luo et al. (2021), where the authors evaluate DP fine-tuning of a sparse subset of the parameters of models pretrained on public data on a small number of few-shot downstream datasets. Their work employs a relatively small backbone (ResNet18), pretrained on a small public dataset (miniImageNet), with limited analysis. In contrast, our work utilizes large backbones, a large public pretraining set, and a wider range of privacy levels and downstream datasets, in addition to assessing vulnerability to attacks and the federated learning setting. Tramer et al. (2022) point out that current DP benchmarks rely excessively on downstream datasets with a high level of overlap with the pretraining data. Our work aims at resolving this issue by evaluating DP models on datasets that have a wide range of DDO. **Federated Learning (FL) and Transfer Learning** There has been a recent surge of interest in using large pretrained models as initialization for training decentralized models in both NLP (Lin et al., 2022; Stremmel and Singh, 2021; Weller et al., 2022; Tian et al., 2022) and computer vision (Chen et al., 2022; Tan et al., 2022; Qu et al., 2021; Chen et al., 2022; Nguyen et al., 2022; Liu et al., 2022). Most of these works were able to improve upon state-of-the-art results under different tasks and settings within FL, while also showing that the client data heterogeneity problem often seen in FL can be partially mitigated with pretrained networks. **FL and DP** Even though the server in FL does not have access to raw user data, the privacy of users may still be compromised if i) the server is untrusted (Huang et al., 2021) or ii) a third party has access to the model after training (Geiping et al., 2020; Carlini et al., 2022). Cryptographic techniques like secure aggregation (Goryczka et al., 2013) can protect against the former, while to tackle the latter, DP adaptations of the FL aggregation algorithms are needed (McMahan et al., 2018). Similarly to DP-SGD, DP-FedAvg (McMahan et al., 2018) is an adaptation of the baseline FL algorithm FedAvg (McMahan et al., 2016), which provides user-level DP guarantees by applying the Gaussian mechanism to parameter updates sent to the server. Recently, a few studies have been conducted investigating the use of large pretrained models for FL under DP constraints in NLP (Basu et al., 2021), representation learning (Xu et al., 2022), and image classification (Song et al., 2022). The closest work to ours is the work of Song et al. (2022), who introduce FLAIR, a few-shot federated learning image classification dataset, which they use to perform a relatively small evaluation of pretrained models (only ResNet18 was used) fine-tuned using FL under DP. However, to the best of our knowledge, there are no other studies on how large pretrained models fine-tuned via FL aggregation algorithms behave under DP constraints for the task of transfer-learned image classification. In this work we aim to close this gap and provide experimental evaluation of these methods on real-world datasets. ## 4 Centralized Learning Experiments In our experiments, we endeavor to answer the question: "Under what conditions is differentially private few-shot image classification effective?" We focus on transfer learning approaches that utilize large pretrained backbones.
We do this empirically for both centralized and FL (Section 5) settings by varying the: (i) number of shots \(S\); (ii) set of learnable parameters in \(f\) (_All_, _Head_, _FiLM_); (iii) downstream dataset \(\mathcal{D}\) (with varying DDO); and (iv) network architecture: ResNet18 (R-18) (He et al., 2016) pretrained on ImageNet-1K with 11.2M parameters, BiT-M-R50x1 (R-50) (Kolesnikov et al., 2019) pretrained on ImageNet-21K with 23.5M parameters, Vision Transformer VIT-Base-16 (VIT-B) (Dosovitskiy et al., 2020) pretrained on ImageNet-21K with 85.8M parameters. Source code for all experiments can be found at: [https://github.com/cambridge-mlg/dp-few-shot](https://github.com/cambridge-mlg/dp-few-shot). **Centralized Training Protocol** For all centralized experiments, we first draw \(\mathcal{D}\) of the required size (usually \(|\mathcal{D}|=CS\) or \(|\mathcal{D}|=1000\) in the case of VTAB-1k) from the entire training split of the current dataset under evaluation. For the purposes of hyperparameter tuning, we then split \(\mathcal{D}\) into 70\(\%\) train and 30\(\%\) validation. We then perform 20 iterations of Bayesian optimization based hyperparameter tuning (Bergstra et al., 2011) with Optuna (Akiba et al., 2019) to derive a set of hyperparameters that yield the highest accuracy on the validation split. This set of parameters is subsequently used to train a final model on all of \(\mathcal{D}\). We evaluate the final, tuned model on the entire test split of the current dataset. Details on the set of hyperparameters that are tuned and their ranges can be found in Appendix A.4.2. We assume that any pretraining has been non-private. For DP fine-tuning on \(\mathcal{D}\), we use Opacus (Yousefpour et al., 2021) and compute the required noise multiplier depending on the targeted \((\epsilon,\delta)\). We report the results over three runs. We report \((\epsilon,\delta)\)-DP computed with the RDP accountant (Mironov, 2017) and set \(\delta=1/|\mathcal{D}|\). Similarly to previous work (De et al., 2022; Mehta et al., 2022; Sander et al., 2022) we do not account for privacy loss originating from the tuning of the hyperparameters. See Appendix A.4 for additional training details. ### Effect of Shots and DP We evaluate the performance of transfer learning under DP when varying \(S\) and \(\epsilon\). Results are in Figures 1 to 3, with tabular versions in Tables A.2 to A.7. In addition to the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009), which are commonly used in DP transfer learning, we also evaluate SVHN (Netzer et al., 2011), which has a low DDO and hence requires a greater degree of adaptation of the pretrained backbone. Key observations are: **Shots** Figure 1 shows that accuracy decreases as \(\epsilon\) decreases. For \(S\leq 10\), accuracy is poor under DP. However, if the DDO is high or medium, a moderate number of shots (\(S\approx 100\)) is sufficient to approach the accuracy of the non-private setting. For example, at \(S=100\), the model achieves better than \(90\%\) accuracy on CIFAR-10 using only \(2\%\) of the full training split at \(\epsilon=1\). On the other hand, if DDO is low, learning is more challenging and more shots are required to approach non-private accuracy. For example, for \(S=100\) and \(\epsilon=2\), SVHN achieves just over \(20\%\) accuracy and falls well short of non-private levels even at \(S=500\). 
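To make the mechanics behind these trends concrete, the following is a from-scratch sketch of a single DP-SGD step with per-example clipping and Gaussian noise. The actual experiments use Opacus rather than this hand-rolled loop, and all hyperparameter values here are illustrative:

```python
import torch

def dp_sgd_step(model, loss_fn, xb, yb, lr=0.1, clip=1.0, sigma=1.0):
    """One DP-SGD step: clip each per-example gradient to L2 norm `clip`,
    sum, add Gaussian noise of std sigma*clip, then average and descend."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):                     # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in params))
        scale = min(1.0, clip / (norm.item() + 1e-12))   # clipping factor
        for s, p in zip(summed, params):
            s.add_(p.grad, alpha=scale)
    with torch.no_grad():
        for s, p in zip(summed, params):
            noisy = s + sigma * clip * torch.randn_like(s)
            p.add_(noisy, alpha=-lr / len(xb))   # SGD update on the noised mean
```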
**Learnable Parameters** Referring to Figure 2, _FiLM_ is at least as good as or better than _All_ and _Head_ in terms of accuracy, demonstrating its ability to adapt to differing downstream datasets despite fine-tuning fewer than 0.05\(\%\) of the parameters in the backbone. When the DDO is high, training only the _Head_ is competitive with _FiLM_ and _All_, but when DDO is low, _Head_ falls short as it cannot adapt the backbone to a downstream dataset that has a different data distribution. See Appendix A.3.4 for heat maps showing the advantage of _FiLM_ over _Head_. **Effect of DP** Referring to Figure 3, we see that DP requires significantly more shots than the non-private setting, with the multiple of shots increasing as the privacy level increases (i.e. as \(\epsilon\) decreases). For all datasets, at \(\epsilon=8\), \(S\) must be increased by approximately \(8\times\) to match the \(S=5\) non-private accuracy, and by \(32\times\) at \(\epsilon=1\) for _FiLM_ and VIT-B. In effect, as the privacy level increases, the effective number of shots decreases in an exponential manner. There is evidence that these multipliers are lower for simpler forms of adaptation (e.g. _Head_) than for more complex forms (e.g. _All_); see Figures A.3 and A.4. **Backbone** Referring to Figure A.5, we see that VIT-B performs comparably to or better than R-50, at the expense of having significantly more parameters (see Table A.18). ### VTAB-1k The VTAB-1k benchmark (Zhai et al., 2019) is a low- to medium-shot transfer learning benchmark that consists of 19 datasets grouped into three distinct categories (natural, specialized, and structured). From each dataset, \(1000\) examples are drawn at random from the training split to use for the downstream dataset \(\mathcal{D}\). After fine-tuning, the entire test split is used to evaluate classification performance. Figure 4 shows average classification accuracy over all of the datasets in the VTAB-1k benchmark. Complete tabular results are in Tables A.8 to A.13. Key observations are: * DP classification accuracy decreases significantly as \(\epsilon\) is decreased and always falls short of non-private accuracy. * For non-private settings, the _All_ learnable parameters setting outperforms _FiLM_, which outperforms _Head_. For DP settings, _All_ performs worst; _FiLM_ and _Head_ perform similarly, though _FiLM_ is better in the majority of cases. * At the expense of extra parameters (85.8M vs. 23.5M), the VIT-B backbone outperforms the R-50 backbone. Figure 5 depicts the classification accuracy for VTAB-1k datasets ordered by the number of classes (\(C\)) in each as a function of privacy level for the VIT-B backbone in the _FiLM_ configuration. Note that since the dataset \(\mathcal{D}\) has a fixed size of \(1000\) examples, as \(C\) increases, \(S\) necessarily decreases. The key observation is that as \(S\) decreases, the degradation in accuracy becomes more severe as \(\epsilon\) decreases. Although classifiers for the Retinopathy dataset appear to perform equally well independently of \(\epsilon\), a closer inspection reveals that this dataset is unbalanced and learned classifiers predict the most common class in all settings. A complete set of plots for various backbones and configurations can be found in Figures A.8 and A.9. Figure 6 shows the difference between the accuracy of _FiLM_ and _Head_ for VTAB-1k datasets as a function of \(\epsilon\). The datasets are ordered from high to low DDO (see Table A.1). At \(\epsilon=1\), _Head_ has an advantage over _FiLM_ on several datasets.
_FiLM_ shows a significant advantage when the DDO decreases and as \(\epsilon\) increases.

Figure 1: Classification accuracy as a function of shots and \(\epsilon\) for CIFAR-10, CIFAR-100 and SVHN. DDO (low, medium, high) refers to the data distribution overlap (see Appendix A.1). Backbone is VIT-B and the best performing configuration out of _All_, _FiLM_ and _Head_ is used for each combination of \(\epsilon\) and \(S\), with \(\delta=1/|\mathcal{D}|\). The accuracy is reported over three seeds with the line showing the median and the band reporting the lowest and highest accuracy.

Figure 2: Classification accuracy as a function of shots and learnable parameters (_All_, _FiLM_ and _Head_) on VIT-B for CIFAR-10, CIFAR-100 and SVHN for \(\epsilon\in\{2,\infty\}\) with \(\delta=1/|\mathcal{D}|\). DDO (low, medium, high) refers to the data distribution overlap (see Appendix A.1). The accuracy is reported over three seeds with the line showing the median and the band reporting the lowest and highest accuracy.

Figure 3: Multiplier of shots required to reach the same accuracy as non-private with \(S=5\) for VIT-B with _FiLM_ on CIFAR-10, CIFAR-100 and SVHN with \(\delta=1/|\mathcal{D}|\). The data is obtained using linear interpolation (see details and variations for more configurations in Appendix A.3.2).

### Membership Inference Attacks We use the state-of-the-art Likelihood Ratio Attack (LiRA) (Carlini et al., 2022) to attack models trained on CIFAR-100 with varying \(S\) and privacy level \(\epsilon\). For each setting of \(S\) and \(\epsilon\), we first sample \(2|\mathcal{D}|\) examples (recall \(|\mathcal{D}|=CS=100S\)) from the CIFAR-100 training set, and then train 257 different models (1 target model plus 256 shadow models) where each sample for the training set is randomly selected with 50\(\%\) probability from the \(2|\mathcal{D}|\) examples. This ensures that approximately half of the models are trained on each example and half are not, so that we can create distributions over the losses for each example being in and out of the training set, as described in the LiRA algorithm (Carlini et al., 2022). We use each of the trained models in turn as the target model and then accumulate the attack predictions over all 257 targets to produce the ROC curve for the attack. Due to the extreme computational demand of training a large number of shadow models for each setting of \(S\) and \(\epsilon\), we restrict the attacks to the R-50 backbone and the _Head_ and _FiLM_ parameter configurations. Refer to Appendix A.4.5 for more detail.
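Schematically, the per-example LiRA score is a likelihood ratio between Gaussian fits to the shadow-model losses with and without the example. A simplified sketch follows; the full attack of Carlini et al. (2022) additionally applies logit scaling and other refinements not shown here:

```python
import numpy as np
from scipy.stats import norm

def lira_score(target_loss, in_losses, out_losses):
    """Log-likelihood ratio that an example was IN the target model's training set."""
    mu_in, sd_in = np.mean(in_losses), np.std(in_losses) + 1e-8
    mu_out, sd_out = np.mean(out_losses), np.std(out_losses) + 1e-8
    return (norm.logpdf(target_loss, mu_in, sd_in)
            - norm.logpdf(target_loss, mu_out, sd_out))
```

Sweeping a decision threshold over these scores across all examples and target models yields the ROC curves reported below.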
Excerpts from attack results are shown in Figure 7. The complete set of attack ROC curves is shown in Figures A.10 and A.11, while Table A.14 reports TPR at several low FPR values, the AUC score, and the maximum membership inference advantage (defined as TPR \(-\) FPR by Yeom et al. (2018)) achieved over the curve. Key observations are: * Non-private (\(\epsilon=\infty\)) models are extremely vulnerable to MIAs (see Figure 7, middle). For example, in the case of \(\epsilon=\infty\), \(S=10\), and the _Head_ configuration, 84.5\(\%\) of the examples can be successfully identified with a false positive rate of only 0.1\(\%\). * Vulnerability of non-private (\(\epsilon=\infty\)) models decreases as \(S\) increases. Also, the _FiLM_ configuration is consistently less vulnerable than _Head_ (see Figure 7). We hypothesize that _FiLM_ generalizes better, so training examples do not stand out as much as in the _Head_ configuration. * When \(S\) is fixed, vulnerability to MIAs greatly decreases with decreasing \(\epsilon\) (see Figure 7, right). However, when \(S=10\) with \(\epsilon=1\), 5.1\(\%\) of the examples can be successfully identified with a FPR of 1\(\%\) and 0.8\(\%\) of the examples with 0.1\(\%\) FPR (see Table A.14). * Under DP, there appears to be little or no difference between the vulnerability of the _FiLM_ and _Head_ configurations at the same \(\epsilon\) (see Figure 7, right). * Under DP with small \(\epsilon\), the vulnerability to MIA decreases as \(S\) increases and TPR is close to the theoretical upper bound (see Figure 7, left). For larger \(\epsilon\) there is no trend with \(S\) and the bounds are loose.

Figure 4: Average classification accuracy over all VTAB-1k datasets as a function of backbone, learnable parameters, and privacy level (\(\epsilon\)) at \(\delta=10^{-3}\). Colored columns indicate results under DP, light gray indicates non-private accuracy for the corresponding configuration.

Figure 5: Classification accuracy as a function of VTAB-1k dataset and privacy level (\(\epsilon\)) at \(\delta=10^{-3}\). Backbone is VIT-B and configuration is _FiLM_. The datasets are ordered increasingly by \(C\) (in parenthesis) or equivalently decreasingly by \(S\) as \(|\mathcal{D}|=1000\).

## 5 Federated Learning Experiments In this section, we investigate how imposing DP constraints influences the performance of large pretrained models fine-tuned via federated aggregation. In our evaluation, we use three datasets with different DDO. The first is FLAIR (Song et al., 2022), which is a recently proposed real-world dataset for multi-label image classification. It has more than \(50\)k users with heterogeneous data as well as a long-tailed label distribution, making it particularly appealing for benchmarking federated learning both in non-private and private settings. Comprising mainly natural image data, FLAIR is a low to medium DDO dataset. The second dataset is CIFAR-100, which is medium DDO. For CIFAR-100, we use \(500\) training clients and \(100\) test clients, with each client having \(100\) samples and no clients sharing any data. To introduce more client heterogeneity, the data are distributed using the Pachinko Allocation Method (Li and McCallum, 2006) as in Reddi et al. (2021). The third dataset is Federated EMNIST (Caldas et al., 2018), a dataset of black-and-white handwritten symbols from \(62\) classes grouped according to the writer. EMNIST is a highly out-of-distribution dataset (i.e. low DDO) with respect to the ImageNet-21K pretraining data. As the number of training users in CIFAR-100 (\(500\) users) and Federated EMNIST (\(3400\) users) is relatively low, we need to increase \(\epsilon\) from \(2\) to \(8\), such that the amount of added noise during aggregation is not excessive. \(\delta\) is set to \(N^{-1.1}\), where \(N\) is the number of training clients. We use FedADAM (Reddi et al., 2021) aggregation, which was shown to have better empirical performance than standard FedAvg (McMahan et al., 2016). We do not use Bayesian optimization for hyper-parameter tuning, as each FL run is prohibitively expensive. Instead, we perform a small grid search over the server and client learning rates. The hyper-parameter ranges searched are provided in Appendix A.4.6.
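For orientation, one round of user-level DP aggregation in the style of DP-FedAvg can be sketched as follows; this is our simplified illustration (secure aggregation and client sampling are omitted), where each client's model delta is clipped, averaged, and noised, and the server optimizer (FedADAM in our experiments) then consumes the result as a pseudo-gradient:

```python
import torch

def dp_fedavg_round(client_deltas, clip=1.0, sigma=1.0):
    """Clip each client's flattened update, average, then add Gaussian noise
    calibrated to the per-client sensitivity clip/num_clients."""
    clipped = []
    for delta in client_deltas:
        scale = min(1.0, clip / (delta.norm().item() + 1e-12))
        clipped.append(delta * scale)
    avg = torch.stack(clipped).mean(dim=0)
    noise = (sigma * clip / len(client_deltas)) * torch.randn_like(avg)
    return avg + noise   # consumed by the server optimizer as a pseudo-gradient
```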
Figure 8: Private (colored) and non-private (gray) FL performance on FLAIR (left), CIFAR-100 (middle) and EMNIST (right) as a function of backbone and learnable parameters. We use Macro-AP as the primary metric to report accuracy for FLAIR, and standard accuracy on the other datasets. The R-18 _All_ result on FLAIR is taken from Song et al. (2022). Our FLAIR results set a new state-of-the-art.

Figure 6: Heat map showing the accuracy difference between _FiLM_ and _Head_ for the VTAB-1k datasets as a function of \(\epsilon\). Backbone is VIT-B. Darker red indicates _FiLM_ is better. Darker blue indicates _Head_ is better. Datasets ordered from highest to lowest DDO. \(\delta=10^{-3}\).

Figure 7: ROC curves for LiRA (Carlini et al., 2022) on CIFAR-100 with R-50 backbone for two values of \(\epsilon\) (1 and \(\infty\)) where \(S\) varies and for \(S=50\) where \(\epsilon\) varies. TPR values in legends are measured at FPR=0.001. Complete results in Table A.14 and Figures A.10 and A.11. The dotted red curve on the \(\epsilon=1\) plot indicates the theoretical upper bound on TPR for \(S=10\). \(\delta=1/100S\).

Figure 8 shows the performance of different model configurations on all three datasets with and without DP. For FLAIR, we report macro average precision (Macro-AP) results in the main text of the paper, while other metrics are in Tables A.15 and A.16. For a fair comparison on FLAIR, we fixed all of the training hyperparameters to the values from the original paper (Song et al., 2022), except for the local and server learning rates. For CIFAR-100 and Federated EMNIST, we report standard test classification accuracy. All training details and hyperparameters are in Appendix A.4.6. As communication cost is important in FL, in Figure 9 we report the number of parameters required to be transmitted for each model configuration in one user-server interaction. The results are shown for just FLAIR; the results for the other datasets are of the same magnitude, varying only because the number of classes differs.

Figure 9: FLAIR communication cost – the number of parameters sent at every user-server communication round.

Summarizing Figures 8 and 9, key observations are: * With R-18 used in the original paper, we achieve state-of-the-art performance under DP with _FiLM_, improving Macro-AP from \(44.3\) to \(51.9\). This improvement comes with a reduction in communication cost from \(11.2\)M parameters per user-server interaction to only \(17\)k. * With VIT-B we further improve the state-of-the-art results on FLAIR in both DP and non-private settings. Under DP, we obtain an improvement of \(14.7\%\) in Macro-AP, while for non-private, the Macro-AP increased from \(62.1\) to \(74.7\). * _Head_ is more robust under DP than _All_ or _FiLM_. _Head_ has the smallest relative drop in performance, of around \(10\%\) on FLAIR and \(33.8\%\) on CIFAR-100 using VIT-B. * Under DP, in the case of a large number of training clients as in FLAIR, tuning either _FiLM_ or _Head_ is the most performant in terms of both accuracy and communication cost. _Head_ is preferable in the case of a large output feature dimension (as in R-50 with \(d_{b}=2048\)), while _FiLM_ adaptation is better when the output feature dimension is smaller (as in R-18 with \(d_{b}=512\)). * In the case of a small number of training clients as in CIFAR-100, the performance deteriorates significantly under DP and _Head_ performs the best (\(50.9\%\) for VIT-B). Interestingly, VIT-B achieves far better accuracy than the other backbones. **Federated EMNIST (low DDO):** * Although _All_ has the largest relative drop in accuracy under DP (\(14.1\%\) for R-50), it achieves the best accuracy for \(\epsilon=8\) and \(\epsilon=\infty\).
* _Head_ under-performs regardless of the backbone used, providing empirical evidence that adapting the head only is insufficient for datasets with low DDO. ## 6 Discussion and Recommendations Our work shows that there is still much to be done in order to realize effective transfer learning under DP constraints for few-shot, low DDO datasets. Alternative strategies may include side-stepping privacy costs by leveraging the zero-shot capabilities of large pretrained models such as CLIP (Radford et al., 2021) or utilizing public data in addition to private data in the training process (Golatkar et al., 2022) in order to improve utility. In summary, our experiments show that: * **Shots Per Class (S)** Image classification accuracy decreases as \(\epsilon\), \(S\), or DDO decreases. As a result, one should expect to use roughly 8\(\times\) larger \(S\) for \(\epsilon=8\) and 32\(\times\) larger \(S\) for \(\epsilon=1\) under DP to achieve accuracy comparable to non-private \(S\in\{5,10\}\). * **Membership Inference Attacks (MIAs)** The vulnerability of non-private few-shot models increases as \(S\) decreases. DP significantly mitigates the effectiveness of MIAs; however, we found that DP few-shot models can leak \(5.1\%\) of the examples with a \(1\%\) FPR even when \(\epsilon=1\) (on CIFAR-100 with \(S\)=10 on R-50). The level of leakage violates theoretical DP bounds with add/remove adjacency but is explained by substitute adjacency. More theoretical work is needed to fully understand the relationship between adjacency and MIAs. * **Learnable Parameters** Learning _All_ parameters under few-shot DP is generally outperformed by _FiLM_ and _Head_. _FiLM_ outperforms _Head_ when the DDO is low. Otherwise, _Head_ is sufficient to achieve the best accuracy. * **Federated Learning (FL)** DP few-shot FL can be effective in terms of accuracy and can lower communication cost per round by orders of magnitude using _FiLM_ in the case of low DDO and _Head_ in the case of high DDO. To achieve a high level of privacy (i.e. low \(\epsilon\)) the number of clients needs to be in the tens of thousands in order to minimize the noise added during parameter updates. ## Acknowledgments Marlon Tobaben and Antti Honkela are supported by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence, FCAI; and grant 325573), the Strategic Research Council at the Academy of Finland (Grant 336032) as well as the European Union (Project 101070617). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them. Aliaksandra Shysheya, John Bronskill, and Richard E. Turner are supported by an EPSRC Prosperity Partnership EP/T005386/1 between the EPSRC, Microsoft Research and the University of Cambridge. This work has been performed using resources provided by the CSC - IT Center for Science, Finland, and the Finnish Computing Competence Infrastructure (FCCI), as well as the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service [https://www.hpc.cam.ac.uk](https://www.hpc.cam.ac.uk) funded by EPSRC Tier-2 capital grant EP/P020259/1. We thank Joonas Jalko, Stratis Markou, Massimiliano Patacchiola and Runa Eschenhagen for helpful comments and suggestions.
2310.07928
Towards a lattice-Fokker-Planck-Boltzmann model of thermal fluctuations in non-ideal fluids
Microscopic thermal fluctuations are known to affect the macroscopic and spatio-temporal evolution of a host of physical phenomena central to the study of biological systems, turbulence, and reactive mixtures, among others. In phase-changing fluids metastability and nucleation rates of embryos are known to be non-trivially affected by thermal noise stemming from molecules random velocity fluctuations, which ultimately determine the long-term growth, morphology, and decay of macroscopic bubbles in cavitation and boiling. We herein present the mathematical groundwork for a lattice-based solution of the combined Fokker-Planck and Boltzmann equations that by proxy solve the stochastic Navier-Stokes-Fourier equations and a non-ideal, cubic van der Waals equation of state. We present the derivation of the kinetic lattice-Fokker-Planck-Boltzmann equations facilitated by Gauss-Hermite quadrature, and show by multi-scale asymptotic analysis that the non-equilibrium dynamics in velocity space inherent to the Fokker-Planck equation manifest as stresses. The resulting coarse-grained lattice-Fokker-Planck-Boltzmann method (LFPBM) is attractive as its dynamics are hypothesized to continually evolve thermal fluctuations introduced into the thermo-hydrodynamic variables by initial conditions in a manner that obeys the fundamental fluctuation-dissipation balances. Simulations will be showcased in future publications.
K. J. Petersen, J. R. Brinkerhoff
2023-10-11T22:53:41Z
http://arxiv.org/abs/2310.07928v2
Towards a lattice-Fokker-Planck-Boltzmann method for simulating thermal noise in compressible phase-changing fluids ###### Abstract Molecular fluctuations are known to affect the macroscopic and spatio-temporal evolution of a host of physical phenomena central to the study of biological systems, turbulence, and reactive mixtures, among others. In phase-changing fluids, metastability and nucleation rates of embryos are known to be non-trivially affected by thermal noise stemming from molecules' random velocity fluctuations, which ultimately determine the long-term growth, morphology, and decay of macroscopic bubbles in cavitation and boiling. We herein present the mathematical groundwork for a lattice-based solution of the combined stochastic Fokker-Planck and Boltzmann equations that by proxy solve the compressible fluctuating Navier-Stokes-Fourier equations. We present the derivation of the kinetic lattice-Fokker-Planck-Boltzmann equations facilitated by means of Gauss-Hermite quadrature, and show by multi-scale asymptotic analysis that the non-equilibrium dynamics in velocity space inherent to the Fokker-Planck equation manifest as fluctuating stresses analogous to the conventional viscous-stress tensor. The key implication of this analysis is a direct correlation between the thermal white noise of the Fokker-Planck equation and the effective viscosities \(\mu,\zeta\) that in concert with a cubic equation of state enable simulations of phase-changing fluids perturbed by stochastic thermal fluctuations. **Usage:** Our theoretical development is inspired by the lattice-Fokker-Planck development in [Moroni _et al._ Phys. Rev. E 73, 066707] and employs Particles-on-Demand for kinetic theory presented in [Reyhanian _et al._ Phys. Rev. E 102, 020103(R)] to mitigate numerical instability issues associated with high compressibility levels and temperature gradients. lattice-Fokker-Planck-Boltzmann method (LFPBM), Particles-on-Demand (PonD), Navier-Stokes-Fourier (NSF) equations, kinetic theory of liquids, phase change, thermal noise, metastability ## I Introduction ### Fluctuations, nucleation, and phase transitions Nucleation is responsible for initiating _homogeneous_ as well as _heterogeneous_ phase transitions in fluid systems, and occurs in a myriad of natural and engineered processes. Phase transitions are commonly described in the literature as being either pressure-driven along isotherms (_cavitating_) or thermally-driven along isobars (_boiling_), but in reality they evolve in a blended manner simultaneously along both the \(p\) and \(T\) dimensions [3]. Cavitation dominates in highly advective flows with large pressure gradients causing local pressures to fall below the saturation pressure, whereas boiling occurs in diabatic flows where local temperatures exceed the saturation temperature. The relevance of either of the modes is exemplified by technological applications such as medical drug delivery and acoustic therapies facilitated by cavitation [4], turbomachinery erosion and efficiency loss from cavitation [5], pool boiling in cryogenic spills [3], as well as natural occurrences in skeletal-joint cavitation [6], cavitation in the xylem of drought-stricken trees [7], cavitation as a predation tool in mantis shrimp [8], to mention a few exotic cases. Inherent to cavitation and boiling is the property that phase change may occur either homogeneously or heterogeneously, and that the thermodynamics can be _metastable_. In heterogeneous phase change, the presence of nucleation sites--e.g.
on the surface of suspended solid particles and in undissolved gases in cavitation, or in microscale crevices in the surface topology of a solid boiling substrate--can non-trivially alter the effective liquid tensile strength by an order of magnitude in comparison to the homogeneous counterpart occurring in pure fluids [9; 10; 11], thus increasing the probability of nucleation events occurring. Commonly, phase transitions are thought to occur instantaneously when the pressure falls below the saturation pressure, or conversely when the temperature exceeds the saturation temperature, when in reality they are relaxed processes in time. This means that the thermodynamic state of a stretched or superheated liquid in between its binodal and spinodal curves may remain in its original (liquid) state for a finite period of time until it is perturbed [12; 13], where the phase stability is fundamentally dictated by entropy generation and the entropy-extremum principle [14]. Such perturbations can, for example, be pressure discontinuities in shock waves propagating on the macroscales, and/or thermal noise below and around the microscales, where large ensembles of molecules can produce fluctuations in the density and temperature of a fluid, thence provoking a phase transition [15; 16]. In addition to the above complexity, phase transitions statistically evolve across scales; new thermodynamic phases originate from the molecular scale and develop into the macroscale [9; 17], and the size distributions of impurities span the nm-mm scales. Consequently, the nuclei density (number of nucleation sites/m\({}^{3}\)) and the rate at which vapour phases grow during phase change markedly impact the thermo-hydrodynamics of many flows, including the cloud-shedding dynamics in cavitating hydrofoils [18], or hypothetically the Taylor-Helmholtz instability and conjugate heat transfer in pool boiling [3]. Capturing the heterogeneous and metastable nature of phase-transitioning fluids in numerical simulations is rather difficult, and in computational, macroscopic simulations of industrial relevance it is frequently assumed that phase transitions occur homogeneously and instantaneously, as found in our recent literature reviews [3; 19]. Due to the span of length scales in heterogeneous phase transitions, resolving nucleation sites in macroscale simulations is computationally intractable with contemporary computational resources. Thus, a more viable approach to the problem is to involve sub-grid modelling of unresolved nuclei that can act as nucleation sites. In cavitation and boiling simulations such heterogeneity in the form of undissolved microbubbles and solid surfaces, rigorously coupled with the physics of phase transformations and capillary effects of surface imperfections, is yet to be fully represented in computational fluid dynamics. Nevertheless, there do exist strategies for introducing sub-grid nuclei into simulations. In cavitation, works such as [20] evolved sub-grid microbubbles with a Rayleigh-Plesset equation including phase-change mass transfer. In the case of boiling, nucleation-site density models for pool boiling [21; 22] have proven useful, but still rely on empirically informed activation models.
Otherwise, the computational fluid dynamics community predominantly relies on lower-fidelity, empirical models for predicting latent heat transfer in cavitation and boiling simulations [3], such as those of Kunz _et al._[23], Schnerr and Sauer [24], and Zwart _et al._[25], which account for nucleation sites via a constant nucleation-site volume fraction, but not for non-equilibrium thermodynamic effects nor the impact of micro-scale thermal fluctuations on nucleation. Recently, Menzl _et al._[9] showed that in homogeneously cavitating water, the classical nucleation theory of Volmer and Weber [26], Farkas [27], Becker and Doring [28] combined with a macroscopic, thermal-noise-augmented Rayleigh-Plesset equation was able to reproduce nucleation rates from molecular-dynamics simulations, demonstrating the non-trivial diffusive effects of molecular fluctuations in phase transitions. The concept of including thermal noise in macroscopic simulations is the foundation of fluctuating-hydrodynamics theory. In recent works by Magaletti _et al._[13], Gallo [15], and Gallo _et al._[17; 29], the authors combined the theory with Navier-Stokes simulations of boiling and cavitation. Taking a perspective similar to the authors' Gaussian white-noise-forced viscous-stress and heat-flux components, we aim to investigate the fluid physics of thermal noise and its effect on phase transitions across the micro and macroscales. The Fokker-Planck equation (FPE) by Fokker [30] and Planck [31] has been suggested to be a suitable model of liquids [32], compared to the popularly employed Boltzmann equation in lattice-Boltzmann methods, which arguably is a powerful tool in studying phase-changing flows [19]. This is owed to the FPE being directly derived from the Langevin equation, which models the velocity space of particles including their chaotic fluctuations that arise due to the continuous interaction between the electro-static potentials of neighbouring molecules and other external forces. The FPE thereby _mimics_ the stochastic behaviour of the Langevin equation, inheriting from the underlying molecular dynamics the fluctuations that manifest as thermal noise in the FPE. We are interested in investigating the suitability of the FPE as a versatile model of liquids; as the continuum density, momentum and total energy are recovered from the moments of the primitive variables of the FPE, the inherited Langevin noise directly projects into the macroscopic variables as additional noisy components \(\widetilde{\mathbf{u}},\widetilde{T},\widetilde{p}\), etc. Moreover, entropy generated by thermal noise can be directly derived from the FPE [33], and, as entropy maxima dictate phase stability [14], the FPE framework may present an avenue for rigorously studying the non-equilibrium residence times of fluids in metastable regimes. The infinitesimal oscillations in pressure and temperature fields are presumed to enable the perturbation of (respectively) stretched and superheated metastable fluids, and thereby spontaneous nucleation. The FPE has already enjoyed success in investigating the turbulence-energy cascade and its associated entropy generation [34], modelling rarefied-gas dynamics [35; 36; 37; 38], ion transport in clays [39], homogeneous and heterogeneous condensation [40], among others. Thus, in a series of articles, we wish to assess the potential numerical and theoretical capabilities of the FPE tailored to nucleation and first-order phase transitions.
As such, with this article we initially seek a lattice-based solution framework of the FPE for a simple, pure van der Waals fluid based on a cubic equation of state (EOS). We acknowledge that real noise in fluids may not be white, but attributed with a coloured spectrum, pending further investigation in a dedicated literature review. Nevertheless, prior to exploring coloured noise, we first wish to establish the lattice solution with idealized white noise. Eventually the work can be extended to accommodate more unique spectra for various fluids, more sophisticated EOS, as well as multi-component kinetic models. To understand how nucleation is mediated by thermal noise, §1.2 lays out the Langevin foundation of the FPE and explores its benefits as a kinetic model of liquids [41]. ### The Fokker-Planck equation First-order phase transitions occur when a critically large population of molecules experiences sudden large fluctuations in their momenta that manifest a perturbation in entropy and hence a transition between stable states associated with the concavity of entropy [14]. Since the 1970s, the Fokker-Planck equation has proven a powerful tool for studying problems that involve stochastic fluctuations (noise), and how fluctuations affect systems around critical transition points [42]. In describing the FPE, we start with the 1D linear Langevin equation for the Brownian acceleration of a particle with mass \(m\) and speed \(\xi\) as \[\eta=\partial_{t}\xi=-\gamma\xi+\widetilde{\Gamma}(t), \tag{1}\] in which \(\gamma\doteq\alpha/m=1/\tau\) is a friction factor that correlates with a Stokes-like damping-force coefficient \(\alpha\), and a relaxation time \(\tau\)[42]. For simplicity we only included the Langevin equation for a single velocity component, but it generalizes to a molecular-velocity vector \(\mathbf{v}=[\xi_{x},\xi_{y},\xi_{z}]^{\dagger}\) in three dimensions. In the article we use bold font for vector quantities and for the viscous stress tensor \(\mathbf{\tau}\). The stochasticity in the equation is facilitated by the per-unit-mass Langevin fluctuation force term \(\widetilde{\Gamma}=\widetilde{F}/m\). Assuming that \(\widetilde{\Gamma}(t)\) is Gaussian and \(\delta\)-correlated, a set of key properties arise from the fluctuation-dissipation theorem (FDT); namely that the average over an ensemble is zero, \(\langle\widetilde{\Gamma}(t)\rangle=0\), and that the ensemble average of a product of two Langevin force terms at different times is zero if \(t-t^{\prime}\) is greater than the correlation time \(\tau_{c}\), i.e. \(\langle\widetilde{\Gamma}(t)\widetilde{\Gamma}(t^{\prime})\rangle=0,\;|t-t^{\prime}|\geq\tau_{c}\). Conversely, when we are interested in timescales smaller than the correlation time, the ensemble average is Dirac \(\delta\)-correlated, \[\langle\widetilde{\Gamma}(t)\widetilde{\Gamma}(t^{\prime})\rangle=q\,\delta(t-t^{\prime}), \tag{2}\] where the noise strength can be shown to be \(q=2\gamma k_{B}T/m\), where \(k_{B}\) is Boltzmann's constant and \(T\) is the absolute temperature. In a three-dimensional case, \(\widetilde{\Gamma}(t)\) would be a vector quantity with mutually independent components for each of the spatial dimensions. Conversely, if the noise components were mutually dependent, the strength \(q\) would be a matrix quantity. The noise is "white", characterized by the spectral density of the Langevin equation (1) being independent of frequency.
On the other hand, for non-\(\delta\)-correlated terms, the spectral density is frequency-dependent and hence the noise is said to be "coloured". We here clarify that we are not interested in directly solving the Langevin equation for a finite number of particles, but instead wish to solve an equivalent equation of motion for a distribution \(f(\mathbf{v};\mathbf{x},t)\) of Langevin particles. With this notation of the arguments, which we generally adopt in the article, we imply that \(f\) is inherently dependent on the molecular velocities being the quadrature nodes of the population, whereas the spatio-temporal dependence is indirectly owed to the time-integration of the FPE, as we will detail later. The bridge between the FP and Langevin equations is the Kramers-Moyal (KM) expansion [43; 44] around a single or larger set of fluctuating variables \(\mathbf{v}\in\{\chi\}\), where we specifically consider the molecular velocity vector \(\mathbf{v}\) in one and two spatial dimensions. If the Langevin equation with Gaussian \(\delta\)-correlated noise governs \(\mathbf{v}\), it can be shown [42] that the KM expansion, \[\partial_{t}f(\mathbf{v};\mathbf{x},t)=\sum_{n=1}^{\infty}\left(-\partial_{\mathbf{v}}\right)^{n}\mathscr{D}^{(n)}f(\mathbf{v};\mathbf{x},t), \tag{3}\] reduces to the FPE, with vanishing coefficients \(\mathscr{D}^{(n)}\) for the higher-order terms with \(n\geq 3\). In this case the expansion comprises the deterministic drift coefficient \(\mathscr{D}^{(1)}\) (\(n=1\)), and the diffusion coefficient \(\mathscr{D}^{(2)}\) (\(n=2\)) accounting for the fluctuations in the variable. Truncation to any finite order \(n>2\) results in introducing vanishingly small noise contributions from the Langevin equation, guaranteeing that the process is statistically continuous except for some non-vanishing cases [45] where discrete, anomalous events play a significant role [34]. The drift and diffusion terms read \[\mathscr{D}^{(1)}=-\gamma\mathbf{v}-\mathbf{\eta}, \tag{4}\] \[\mathscr{D}^{(2)}=\frac{1}{2}q=\gamma k_{B}T/m, \tag{5}\] which in concert with (3) for \(f(\mathbf{v};\mathbf{x},t)\) in position and velocity space produces the FP equation [42], \[\big(\partial_{t}+v_{\alpha}\partial_{\alpha}+\eta_{\alpha}\partial_{v_{\alpha}}\big)f=\widetilde{\Omega}^{(\text{FP})}\circ f\doteq\gamma\partial_{v_{\alpha}}\big(v_{\alpha}+v_{T}^{2}\partial_{v_{\alpha}}\big)f, \tag{6}\] where \(v_{T}=\sqrt{k_{B}Tm^{-1}}\) is the thermal velocity. We have further appended a body-forcing term with acceleration \(\mathbf{F}_{B}/m=\mathbf{\eta}\) to the LHS, which is not to be confused with the Langevin acceleration. We employ the notation \(\widetilde{\Omega}^{(\text{FP})}\circ f\) for the FP operator imposed on the populations, and apply the Einstein notation, for which repeated subscripts denote summation over pertinent indices. At this point, we emphasize that the FP operator effectively models liquid states [32; 41], and so we also need to consider a hard-sphere binary collision model for gaseous states.
To that end, the Boltzmann operator \(\Omega^{(\text{B})}\doteq\tau_{c}^{-1}\big{(}f^{(\text{eq})}-f\big{)}\) with the Maxwell-Boltzmann (MB) equilibrium distribution, \[f^{(\text{eq})}=\frac{\rho}{\big{(}2\pi RT\big{)}^{D/2}}\exp\left(-\frac{\big{(}\mathbf{v}-\mathbf{u}\big{)}^{2}}{2RT}\right), \tag{7}\] introduced into the RHS of (6) yields a Fokker-Planck-Boltzmann (FPB) type equation, \[\big{(}\partial_{t}+v_{\alpha}\partial_{\alpha}\big{)}f=-\eta_{\alpha}\partial_{v_{\alpha}}f+\gamma\partial_{v_{\alpha}}(v_{\alpha}+v_{T}^{2}\partial_{v_{\alpha}})f+\frac{1}{\tau_{c}}\big{(}f^{(\text{eq})}-f\big{)}, \tag{8}\] where the FP and Boltzmann operators share the same MB equilibrium [32], and the acceleration term has been moved from the advection side. In the equilibrium (7) \(\rho\) is the mass density, \(R\) the gas constant, \(D\) the number of spatial dimensions, \(\mathbf{v}\) the molecular velocity, and \(\mathbf{u}\) the continuum velocity. We are interested in achieving a unique, transient solution of the FPB equation in seven degrees of freedom, i.e. \((\mathbf{v},\mathbf{x},t)\), and its moment equations enabling simulation of large-scale fluid-dynamics problems with phase transitions. The FP operator is discretized on the lattice following Moroni _et al._ [1] to derive kinetic equations for phase-changing flows of pure van der Waals fluids. The Boltzmann operator is modelled following the conventional lattice-Boltzmann method with the Bhatnagar-Gross-Krook (BGK) single-relaxation-time collision model. As such, this paper revolves around the derivation and nature of such a lattice solution, and is organized as follows: In §II we seek a phase-space discretized form of (8) via a Hermite-series expansion resulting in a set of explicit kinetic lattice equations with a discretized velocity space in accordance with conventional \(DmQn\) lattice models, exemplified by the \(D2Q9\) variant in particular. Then, in §III a Chapman-Enskog multiscale analysis is carried out to prove that the continuum conservation laws are satisfied by the lattice equations. A discussion in §IV revolving around thermodynamic consistency, as well as strengths and weaknesses of the model, is followed by a conclusion in §V giving an outlook on planned applications. ## II Kinetic equations ### Advection The FPBE (8) comprises two processes that need to be treated separately in prospective simulations. Going forward we attribute relaxation due to the BGK operator with "collisions", and that due to the FP operator with "interactions". The LHS advection terms represent the advection of populations prior to particle collisions and interactions, and are responsible for the spatio-temporal discretization of \(f\) in \((\mathbf{x},t)\)-space. It is well known that the stochastic sample path of a Langevin particle can be predicted by integration of the (exemplary 1D) Ito stochastic-differential equation (SDE), \[d\xi(t)=\mathscr{D}^{(1)}\big{(}\xi(t),t\big{)}dt+\sqrt{\mathscr{D}^{(2)}\big{(}t\big{)}}dW(t), \tag{9}\] in which the first term, prefixed by \(\mathscr{D}^{(1)}\), accounts for the deterministic effects of the Langevin equation, and the second term, prefixed by \(\mathscr{D}^{(2)}\), accounts for its stochastic dynamics as prescribed by the Wiener process \(W(t)\) with the difference \(dW(t)=\widetilde{\Gamma}(t)dt\)[46]. The equivalence between the Ito formalism and the FPE is established through the drift and diffusion coefficients that occur in both the SDE and (8).
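As a small illustration of the shared equilibrium in (8), the following sketch evaluates the continuous MB distribution (7) for \(D=2\) and verifies its normalization to \(\rho\) by Monte-Carlo integration; the state values are assumptions for demonstration only.

```python
# Illustrative sketch of the continuous Maxwell-Boltzmann equilibrium (7)
# shared by the FP and Boltzmann operators in the FPB equation (8); D=2.
import numpy as np

def mb_equilibrium(v, rho, u, R, T, D=2):
    """Continuous MB distribution f_eq(v) for molecular velocities v, shape (N, D)."""
    pref = rho / (2.0 * np.pi * R * T) ** (D / 2)
    return pref * np.exp(-np.sum((v - u) ** 2, axis=-1) / (2.0 * R * T))

rho, u, R, T = 1.0, np.array([0.1, 0.0]), 1.0, 0.8   # assumed state values
v = np.random.default_rng(2).normal(0, 2, size=(100_000, 2))

# Importance-sampled check of the normalization: E[f_eq/g] -> integral of f_eq = rho,
# where g is the (2D, std=2) Gaussian sampling density used above.
g = np.exp(-np.sum(v**2, axis=-1) / 8.0) / (8.0 * np.pi)
print("integral of f_eq ≈", np.mean(mb_equilibrium(v, rho, u, R, T) / g))
```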
As the stochastic sample path is inherently realized by the velocity space \(\mathbf{v}(\mathbf{x},t)\) of the FP operator during collisions, we need to account for the associated non-equilibrium effects that arise in the populations \(f(\mathbf{v};\mathbf{x},t)\). We do this by adapting the lattice-discrete velocity space to the local thermo-hydrodynamics by means of the recently proposed Particles-on-Demand method in the form [47], \[\mathbf{v}_{i}=\sqrt{\theta}\mathbf{c}_{i}+\mathbf{u}, \tag{10a}\] such that the inertial reference frame is shifted by the continuum velocity \(\mathbf{u}(\mathbf{x},t)\), rendering \(\mathbf{c}_{i}\) the constant relative velocities as given in the conventional LBM, and the discrete velocities are further scaled by the local thermodynamics through the reduced pressure \(\theta(\mathbf{x},t)\), which is predicted by the FPB populations. The subscript \(i\) corresponds to each of the \(n\) quadrature vectors of the pertinent \(DmQn\) model. We adopt the real-gas formulation, \[\theta=\frac{p}{\rho RT_{L}}, \tag{10b}\] where \(T_{L}\) is the lattice temperature of a corresponding isothermal, ideal-gas realization of the LFPBE, represented by a known constant for every \(DmQn\) lattice model [48; 49]. The recast velocities (10) enable simulation of high-Mach and thermal flows owing to the fact that the Hermite expansions of the equilibrium populations shared by both the FP and Boltzmann equations become exact and velocity-error free [47; 49; 50], \[f_{i}^{(\mathrm{eq})} =\rho w_{i}, \tag{11}\] \[g_{i}^{(\mathrm{eq})} =\rho w_{i}\left(2e-D\frac{p}{\rho}+v_{i}^{2}\right), \tag{12}\] informed by \(\rho(\mathbf{x},t)\), \(D\), the quadrature weights \(w_{i}\), internal energy \(e(\mathbf{x},t)\), and real-gas pressure \(p(\mathbf{x},t)\). \(v_{i}^{2}=\mathbf{v}_{i}\cdot\mathbf{v}_{i}\) is understood to be the dot product. Mitigating velocity errors in a prospective simulation with a lattice-FPBE is especially attractive as previous lattice solutions [32] of the FPE have suffered from significantly reduced stability limits with increasing Reynolds numbers in comparison to conventional lattice-Boltzmann models. The benefits of the scheme are contrasted by the complexity of the required solution methods for the considered lattice equations; in the conventional LBM the discrete velocity sets ensure exact streaming between lattice nodes, but this is not the case with (10), which is more likely to result in populations streaming between off-lattice locations and lattice nodes. Thus, establishing the populations at off-lattice locations demands a predictor-corrector scheme and a reconstruction procedure on the basis of the adjacent known on-lattice populations. Moreover, the off-lattice nature of the discrete velocities requires that spatio-temporal integration of (8) is carried out backwards in time along the characteristic lines.
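A compact sketch of the recast velocities (10) and the exact equilibria (11)-(12) on a \(D2Q9\) lattice is given below; the values of \(\theta\), \(\mathbf{u}\), and the state variables are illustrative assumptions, and the quadrature constants are the standard \(D2Q9\) ones.

```python
# Sketch: Particles-on-Demand recast velocities (10a) and exact equilibria
# (11)-(12) on D2Q9. State values below are assumed for illustration.
import numpy as np

# Standard D2Q9 peculiar velocities c_i and weights w_i
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]], float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def recast_velocities(theta, u):
    """Shift/scale of the lattice velocities, eq. (10a)."""
    return np.sqrt(theta) * c + u

rho, e, p, D = 1.0, 2.5, 0.9, 2
u = np.array([0.05, -0.02])
theta = 0.8                                  # reduced pressure p/(rho*R*T_L), assumed

v = recast_velocities(theta, u)              # (9, 2) quadrature nodes
f_eq = rho * w                               # eq. (11)
g_eq = rho * w * (2*e - D*p/rho + np.sum(v**2, axis=1))   # eq. (12)

print("sum f_eq  =", f_eq.sum())                   # = rho
print("momentum  =", (f_eq[:, None] * v).sum(0))   # = rho*u, since sum_i w_i c_i = 0
```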
We adopt the second-order accurate trapezoidal integration resulting in the kinetic lattice equations, \[f_{i}(\mathbf{x},t)-f_{i}(\mathbf{x}-\mathbf{v}_{i}\delta t,t-\delta t)=\omega^{\rm(FP)}\Omega_{i}^{\rm(FP,\eta)}(f_{i})+\omega^{\rm(B)}\Omega_{i}^{\rm(B)}(f_{i}), \tag{13a}\] \[g_{i}(\mathbf{x},t)-g_{i}(\mathbf{x}-\mathbf{v}_{i}\delta t,t-\delta t)=\omega^{\rm(FP)}\Omega_{i}^{\rm(FP,\eta)}(g_{i})+\omega^{\rm(B)}\Omega_{i}^{\rm(B)}(g_{i}), \tag{13b}\] where the populations \(f\) and \(g\), recovering density and total energy, respectively, are relaxed by the relaxation frequencies \(\omega^{\rm(FP)}\) and \(\omega^{\rm(B)}\) that are imposed on the lattice-discretized FP-interaction \(\Omega_{i}^{\rm(FP)}\) and BGK-collision \(\Omega_{i}^{\rm(B)}\) operators. In the temporal discretization of (13), the streaming operator results in, \[\int_{t-\delta t}^{t}\Omega^{\rm(S)}\circ f(t)\ dt=f^{\prime}-f=\frac{\delta t}{2}\Big{(}\Omega\circ f^{\prime}+\Omega\circ f\Big{)}, \tag{14}\] where we adopt the notation \(f^{\prime}=f(\mathbf{x},t)\) for the advected population and \(f=f(\mathbf{x}-\mathbf{v}\delta t,t-\delta t)\) for the initial one. However, the purpose of the integration is to be able to predict the post-advected state \(f^{\prime}\), and thus it is infeasible to compute \(\Omega\circ f^{\prime}\) unless a transformation is made to avoid the implicitness of the equation. An explicit form can be procured analogously to the BGK model with the population remapping, \[\tilde{f}\doteq f-\frac{\delta t}{2}\Omega\circ f, \tag{15}\] which in the BGK case yields the kinetic equation [51], \[\tilde{f}^{\prime}-\tilde{f}=-\omega^{\rm(B)}\Omega^{\rm(B)}\circ f=-\frac{\tau^{-1}\delta t}{1+\tau^{-1}\delta t/2}\Omega^{\rm(B)}\circ\tilde{f}. \tag{16}\] It was previously shown [52] that the trapezoidal integration of the FP operator yields a similar relaxation frequency, although instead mediated by the friction factor \(\gamma\), \[\omega^{\rm(FP)}=\frac{\gamma\delta t}{1+\gamma\delta t/2}. \tag{17}\] In the BGK case, the zeroth-through-second moments coincide for both \(f,\tilde{f}\), but this is not the case for the FP operator, in which only the zeroth moments coincide. As such, sampling to compute the higher-order moments \(\sum_{i}f_{i}\mathbf{v}_{i},\sum_{i}f_{i}v_{i,\alpha}v_{i,\beta}\) must be done on the basis of \(f_{i}\) (and similarly for \(g_{i}\)). This is relevant as the moments are necessary for carrying out the Hermite-series expansion, as we will present in the following. ### Collision and interaction Following Reyhanian _et al._[50], we adopt the exact equilibria in (11, 12) in the BGK collision model. However, we still need to establish a lattice model for the FP operator, which will be facilitated with a Hermite expansion. In the following we repeat some of the work already documented in the article by Moroni _et al._[1], but do so for the sake of completeness. However, as we will see later, our explicit formulations for the operator are different from those reported by Moroni _et al._, owing to the characteristic eigenvalue property of the operator, which is modified by the discrete velocity set (10) of PonD. We adopt a notation for the operators in which circumflex-accented (\(\ \widehat{}\ \)) operators denote that the operators are to be considered only as the operational expressions excluding the populations. This notation is important in the derivation of the expansions.
The discretization of the operators as well as the populations \(f,g\) is facilitated by a Hermite-polynomial tensor \(\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v})\) basis in \(D\) dimensions. Just as these tensors can be exploited to transform gradient terms in the continuous FPE (6), the continuous population \(f(\mathbf{v};\mathbf{x},t)\) in phase-space can be expanded in terms of its Hermite coefficients \(\mathcal{F}_{\underline{\alpha}}^{(l)}(\mathbf{x},t)\) as [1] \[f(\mathbf{v};\mathbf{x},t)=w(\mathbf{v})\sum_{l=0}^{\infty}\frac{1}{v_{T}^{2l}l!}\mathcal{F}_{\underline{\alpha}}^{(l)}\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v}), \tag{18}\] where the populations are proportional to the Gaussian weight function [1], \[w(\mathbf{v})=\frac{1}{\big{(}2\pi v_{T}^{2}\big{)}^{D/2}}\exp\left[-\frac{(\mathbf{v}-\mathbf{u})^{2}}{2v_{T}^{2}}\right]. \tag{19}\] The subscript \(\underline{\alpha}=\alpha_{1}\cdots\alpha_{l}\) in the Hermite coefficients and polynomials contracts the \(l\) indices of the \(l\)-rank Hermite tensor \(\mathcal{H}_{\underline{\alpha}}^{(l)}\). To that end, the zeroth-through third-rank tensors are [53; 54], \[\mathcal{H}^{(0)}(\mathbf{v}) =1, \tag{20a}\] \[\mathcal{H}_{\alpha}^{(1)}(\mathbf{v}) =v_{\alpha},\] (20b) \[\mathcal{H}_{\alpha\beta}^{(2)}(\mathbf{v}) =v_{\alpha}v_{\beta}-v_{T}^{2}\delta_{\alpha\beta},\] (20c) \[\mathcal{H}_{\alpha\beta\gamma}^{(3)}(\mathbf{v}) =v_{\alpha}v_{\beta}v_{\gamma}-v_{T}^{2}\big{(}v_{\alpha}\delta_{\beta\gamma}+v_{\beta}\delta_{\alpha\gamma}+v_{\gamma}\delta_{\alpha\beta}\big{)}, \tag{20d}\] and further detailed by Moroni _et al._[1]. The Hermite coefficients for the populations are the velocity integrals which occupy \(\mathbb{R}^{D}\), \[\mathcal{F}_{\underline{\alpha}}^{(l)}(\mathbf{x},t)=\int d\mathbf{v}\ f(\mathbf{v};\mathbf{x},t)\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v}). \tag{21}\] To obtain a tenable series expansion, we truncate (18) at an order \(K\) over \(l\) ranks and discard those exceeding \(K\), such that we approximate \(f(\mathbf{v};\mathbf{x},t)\) as [1], \[f(\mathbf{v};\mathbf{x},t)=w(\mathbf{v})\sum_{l=0}^{K}\frac{1}{v_{T}^{2l}l!}\mathcal{F}_{\underline{\alpha}}^{(l)}\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v}). \tag{22}\] By the Gauss-Hermite quadratures, we seek to carry out the velocity integrals at a finite set of velocity quadrature-nodes, or abscissae, such that the continuous integrals become \(Q\) easily-digestible discrete sums over the index \(i=0,...,Q-1\). Crucially, given any polynomial \(p(\mathbf{v})\) with a lower order than \(2K\), the continuous integral of its product with a Gaussian function becomes the quadrature [1], \[\int d\mathbf{v}\ w(\mathbf{v})p(\mathbf{v})=\sum_{i=0}^{Q-1}w_{i}p(\mathbf{v}_{i}), \tag{23}\] accompanied by \(\mathbf{v}_{i}\in\mathbb{R}^{D}\) and \(w_{i}\) identified as the quadrature nodes (10) and weights, respectively. In (22), \(f/w\) can be characterized as a maximum \(K\)-order polynomial, which conveniently can be exploited to define the familiar discrete populations \(f_{i},g_{i}\).
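For concreteness, the tensors (20a)-(20d) can be written out directly; the following sketch does so with numpy einsum contractions for a single \(D\)-dimensional velocity vector.

```python
# Sketch of the Hermite tensors (20a)-(20d) in D dimensions for one velocity v.
import numpy as np

def hermite_tensors(v, vT2):
    """Return the rank-0 through rank-3 Hermite tensors at velocity v."""
    D = v.size
    I = np.eye(D)
    H0 = 1.0                                        # (20a)
    H1 = v.copy()                                   # (20b)
    H2 = np.outer(v, v) - vT2 * I                   # (20c): v_a v_b - vT^2 d_ab
    H3 = (np.einsum('a,b,c->abc', v, v, v)          # (20d)
          - vT2 * (np.einsum('a,bc->abc', v, I)
                   + np.einsum('b,ac->abc', v, I)
                   + np.einsum('c,ab->abc', v, I)))
    return H0, H1, H2, H3

H0, H1, H2, H3 = hermite_tensors(np.array([0.3, -0.1]), vT2=1.0)
print(H2)   # symmetric rank-2 tensor
```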
Starting with the conventional \(m\)-order moments of \(f\), \[M_{\underline{\alpha}}^{(m)}\doteq\int d\mathbf{v}\ f(\mathbf{v};\mathbf{x},t)v_{\alpha_{1}}\cdots v_{\alpha_{m}}, \tag{24}\] these can be rewritten as [1], \[\int d\mathbf{v}\ \frac{w(\mathbf{v})}{w(\mathbf{v})}f(\mathbf{v};\mathbf{x},t)v_{\alpha_{1}}\cdots v_{\alpha_{m}}=\sum_{i=0}^{Q-1}\frac{w_{i}}{w(\mathbf{v}_{i})}f(\mathbf{v}_{i};\mathbf{x},t)v_{i,\alpha_{1}}\cdots v_{i,\alpha_{m}}\doteq\sum_{i=0}^{Q-1}f_{i}v_{i,\alpha_{1}}\cdots v_{i,\alpha_{m}}, \tag{25}\] where we have arrived at the familiar discrete population \(f_{i}\) by the ratio, \[\frac{f_{i}(\mathbf{x},t)}{f(\mathbf{v}_{i};\mathbf{x},t)}=\frac{w_{i}}{w(\mathbf{v}_{i})}. \tag{26}\] \(K\geq 2\) is required to reproduce the correct hydrodynamic behaviour governed by the Navier-Stokes-Fourier (NSF) equations. The series up to these \(K\)-orders require the conventional quadratures of the \(f_{i},g_{i}\) populations which represent mass and total energy, respectively [50; 51], \[\rho \doteq\int d\mathbf{v}\ f(\mathbf{v};\mathbf{x},t)=\sum_{i=0}^{Q-1}f_{i}, \tag{27a}\] \[j_{\alpha} \doteq\int d\mathbf{v}\ f(\mathbf{v};\mathbf{x},t)v_{\alpha}=\sum_{i=0}^{Q-1}f_{i}v_{i,\alpha},\] (27b) \[P_{\alpha\beta} \doteq\int d\mathbf{v}\ f(\mathbf{v};\mathbf{x},t)v_{\alpha}v_{\beta}=\sum_{i=0}^{Q-1}f_{i}v_{i,\alpha}v_{i,\beta},\] (27c) \[Q_{\alpha\beta\gamma} \doteq\int d\mathbf{v}\ f(\mathbf{v};\mathbf{x},t)v_{\alpha}v_{\beta}v_{\gamma}=\sum_{i=0}^{Q-1}f_{i}v_{i,\alpha}v_{i,\beta}v_{i,\gamma}, \tag{27d}\] and \[\mathcal{E} \doteq\int d\mathbf{v}\ g(\mathbf{v};\mathbf{x},t)=\sum_{i=0}^{Q-1}g_{i}, \tag{28a}\] \[q_{\alpha} \doteq\int d\mathbf{v}\ g(\mathbf{v};\mathbf{x},t)v_{\alpha}=\sum_{i=0}^{Q-1}g_{i}v_{i,\alpha},\] (28b) \[R_{\alpha\beta} \doteq\int d\mathbf{v}\ g(\mathbf{v};\mathbf{x},t)v_{\alpha}v_{\beta}=\sum_{i=0}^{Q-1}g_{i}v_{i,\alpha}v_{i,\beta}. \tag{28c}\] Here, we emphasize that \(\mathcal{E}=2\rho E\), and \(\rho,u_{\alpha}=j_{\alpha}/\rho,E=e+u_{\alpha}u_{\alpha}/2\) are the macroscopic observables density, continuum velocity, and total energy, which are of interest in prospective simulations, and recoverable as in any other conventional lattice-Boltzmann model. They also participate in the hydrodynamic limit derived in §III. Furthermore, we can observe that the Hermite coefficients \(\mathcal{F}_{\underline{\alpha}}^{(l)},\mathcal{G}_{\underline{\alpha}}^{(l)}\) are merely linear combinations of these, and read, \[\mathcal{F}^{(0)} =\rho, \tag{29a}\] \[\mathcal{F}_{\alpha}^{(1)} =j_{\alpha},\] (29b) \[\mathcal{F}_{\alpha\beta}^{(2)} =P_{\alpha\beta}-v_{T}^{2}\rho\delta_{\alpha\beta},\] (29c) \[\mathcal{F}_{\alpha\beta\gamma}^{(3)} =Q_{\alpha\beta\gamma}-v_{T}^{2}\big{(}\delta_{\alpha\beta}j_{\gamma}+\delta_{\alpha\gamma}j_{\beta}+\delta_{\beta\gamma}j_{\alpha}\big{)}, \tag{29d}\] for \(f_{i}\) and, \[\mathcal{G}^{(0)} =\mathcal{E}=2\rho E, \tag{30a}\] \[\mathcal{G}_{\alpha}^{(1)} =q_{\alpha},\] (30b) \[\mathcal{G}_{\alpha\beta}^{(2)} =R_{\alpha\beta}-v_{T}^{2}\big{(}2\rho E\big{)}\delta_{\alpha\beta}, \tag{30c}\] for \(g_{i}\). In the following we will establish the Hermite series for \(\Omega_{i}^{(\text{FP})}\) and \(\Omega_{i}^{(\eta)}\). As these reside in the exact same subspace as (22), we can utilize the quadratures of \(\mathcal{F}_{\underline{\alpha}}^{(l)}\) and \(\mathcal{G}_{\underline{\alpha}}^{(l)}\) in the series.
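The quadratures (27) and the coefficient relations (29) translate directly into array contractions. The sketch below, assuming populations \(f_{i}\) of shape \((Q,)\) and nodes \(\mathbf{v}_{i}\) of shape \((Q,D)\), computes the discrete moments and assembles the Hermite coefficients from them.

```python
# Sketch: discrete moments (27) and Hermite coefficients (29) from populations.
import numpy as np

def f_moments(f, v):
    rho = f.sum()                                    # (27a)
    j   = np.einsum('i,ia->a', f, v)                 # (27b)
    P   = np.einsum('i,ia,ib->ab', f, v, v)          # (27c)
    Q3  = np.einsum('i,ia,ib,ic->abc', f, v, v, v)   # (27d)
    return rho, j, P, Q3

def f_hermite_coeffs(f, v, vT2):
    rho, j, P, Q3 = f_moments(f, v)
    D = v.shape[1]
    dlt = np.eye(D)
    F0 = rho                                         # (29a)
    F1 = j                                           # (29b)
    F2 = P - vT2 * rho * dlt                         # (29c)
    F3 = Q3 - vT2 * (np.einsum('ab,c->abc', dlt, j)  # (29d)
                     + np.einsum('ac,b->abc', dlt, j)
                     + np.einsum('bc,a->abc', dlt, j))
    return F0, F1, F2, F3
```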
#### ii.2.1 The lattice Fokker-Planck operator We are now prepared to apply the above methodology on the operators \(\widehat{\Omega}^{(\text{FP})}=\gamma\partial_{v_{\alpha}}\big{(}v_{\alpha}+v_{T}^{2}\partial_{v_{\alpha}}\big{)}\) and \(\widehat{\Omega}^{(\eta)}=-\eta_{\alpha}\partial_{v_{\alpha}}\) in order to derive explicit functionals for \(\Omega_{i}^{(\text{FP})}\big{(}f_{i};g_{i}\big{)}\) and \(\Omega_{i}^{(\eta)}\big{(}f_{i};g_{i}\big{)}\). We start with the FP operator. Extending the function \(\widehat{\Omega}^{(\text{FP})}\circ f\) by \(w_{i}/w(\mathbf{v}_{i})\) enables us to write the action of the Hermite expansion on that function [1], \[\Omega_{i}^{(\text{FP})}=w_{i}\sum_{l=0}^{K}\frac{1}{v_{T}^{2l}l!}\Omega_{\underline{\alpha}}^{(\text{FP})}\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v}_{i}), \tag{31a}\] where the Hermite coefficient for the operator is computed as, \[\Omega_{\underline{\alpha}}^{(\text{FP})}=\int d\mathbf{v}\ \widehat{\Omega}^{(\text{FP})}\circ f\,\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v}). \tag{31b}\] We are interested in finding an operational expression that can be used to rewrite \(\widehat{\Omega}^{(\text{FP})}\) in a more convenient, explicit form, to which end we need to exploit the eigenvalue property of the characteristic Gaussian \(w(\mathbf{v})\). The product \(\widehat{\Omega}^{(\text{FP})}\big{[}w(\mathbf{v})\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v})\big{]}\), which is identifiable in (31) by insertion of (22), can be constructed and will comprise a set of gradient and divergence terms up to the second order, which can be simplified by this eigenvalue property. In App. A we use the property to expand the product, where the velocity gradient of the weight is found to be, \[\partial_{v_{\beta}}w(\mathbf{v})=-\frac{\sqrt{\theta}c_{\beta}}{v_{T}^{2}}w(\mathbf{v}), \tag{32}\] which indeed is an eigenfunction with the eigenvalue \(-\sqrt{\theta}c_{\beta}/v_{T}^{2}\). This result accounts for the recast velocities (10), whereas the corresponding eigenvalue \(-c_{\beta}/v_{T}^{2}\) found in the analysis of Moroni _et al._ [1] considers the conventional peculiar velocity set \(\mathbf{c}_{i}\in\mathbb{R}^{Q}\) used by all LB models with exact streaming. As we demonstrate in App. B, the operational expression derived from \(\widehat{\Omega}^{(\text{FP})}\big{[}w(\mathbf{v})\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v})\big{]}\) can be used to compute the Hermite coefficients \(\Omega_{\underline{\alpha}}^{(\text{FP})}\), combined with the previously found Hermite coefficients for \(f(\mathbf{v};\mathbf{x},t)\) and \(\mathcal{F}_{\underline{\alpha}}^{(l)}\). With the eigenvalue property, we find that the product expands to, \[\widehat{\Omega}^{(\text{FP})}\big{[}w(\mathbf{v})\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v})\big{]}=\gamma\left(1+u_{\beta}\frac{v_{i,\beta}+u_{\beta}}{v_{T}^{2}}-l\right)\times\big{[}w(\mathbf{v})\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v})\big{]}-\gamma\frac{2u_{\beta}}{v_{T}^{2}}\big{[}w(\mathbf{v})\mathcal{H}_{\beta\underline{\alpha}}^{(l+1)}(\mathbf{v})\big{]}, \tag{33}\] whereas Moroni _et al._ [1] reported the result \(\widehat{\Omega}^{(\text{FP})}\Big{[}w(\mathbf{v})\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v})\Big{]}=-\gamma l\Big{[}w(\mathbf{v})\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v})\Big{]}\) with the peculiar velocity set.
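The eigenfunction property (32) is easy to verify numerically before proceeding with (33). The sketch below compares a central finite difference of the weight (19) at a recast node against \(-\sqrt{\theta}c_{\beta}/v_{T}^{2}\,w(\mathbf{v})\); the parameter values are assumptions.

```python
# Numerical sanity check (not from the paper) of the eigenfunction property (32):
# d/dv_beta of the Gaussian weight (19) equals -sqrt(theta)*c_beta/vT^2 * w(v)
# at the recast node v = sqrt(theta)*c + u from eq. (10a).
import numpy as np

def w_gauss(v, u, vT2, D=2):
    return np.exp(-np.sum((v - u)**2) / (2*vT2)) / (2*np.pi*vT2)**(D/2)

u, vT2, theta = np.array([0.1, -0.05]), 1.0, 0.8   # assumed values
c = np.array([1.0, 0.0])                           # a peculiar lattice vector
v = np.sqrt(theta) * c + u                         # recast node, eq. (10a)

h = 1e-6
grad_beta = (w_gauss(v + [h, 0], u, vT2) - w_gauss(v - [h, 0], u, vT2)) / (2*h)
print(grad_beta, -np.sqrt(theta)*c[0]/vT2 * w_gauss(v, u, vT2))   # should match
```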
In (33) \(v_{i,\beta}\) are the \(Q\) recast velocity vectors (10), and \(\mathcal{H}_{\beta\underline{\alpha}}^{(l+1)}\) is the \((l+1)\)-rank tensor relative to the current rank being considered for \(\mathcal{H}_{\underline{\alpha}}^{(l)}\). Thus, the local thermo-hydrodynamics imposed by \(\mathbf{v}_{i}\) directly affect the FP dynamics. As (33) is fully explicit and \(\mathcal{F}_{\underline{\alpha}}^{(l)}\) (29) are merely linear combinations of the population moments (24), we have all the ingredients for computing the Hermite coefficients \(\Omega_{\underline{\alpha}}^{(\text{FP})}\). The first three are computed in Tab. 1 using the recast velocities (10) and the explicit form of the coefficients (29, 30). To compute the integral in the coefficients \(\Omega_{\underline{\alpha}}^{(\text{FP})}\) we have utilized the Hermite orthonormality relation, as is also showcased in App. B. These coefficients, combined with (31a), yield the series expanded up to \(l\leq K=2\), for both \(f_{i},g_{i}\): \[\Omega_{i}^{(\text{FP})}(f_{i})=w_{i}\gamma\Bigg{[}\rho\left(1+u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}}{v_{T}^{2}}\right)+\frac{v_{i,\alpha}}{v_{T}^{2}}\left(u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}}{v_{T}^{2}}-2\right)j_{\alpha}+\frac{v_{i,\alpha}v_{i,\beta}-v_{T}^{2}\delta_{\alpha\beta}}{2v_{T}^{4}}\left(\left\{u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}}{v_{T}^{2}}-1\right\}\left\{P_{\alpha\beta}-v_{T}^{2}\rho\delta_{\alpha\beta}\right\}-2\left\{u_{\alpha}j_{\beta}+u_{\beta}j_{\alpha}\right\}\right)\Bigg{]}, \tag{34a}\] \[\Omega_{i}^{(\text{FP})}(g_{i})=w_{i}\gamma\Bigg{[}\mathcal{E}\left(1+u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}}{v_{T}^{2}}\right)+\frac{v_{i,\alpha}}{v_{T}^{2}}\left(u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}}{v_{T}^{2}}-2\right)q_{\alpha}+\frac{v_{i,\alpha}v_{i,\beta}-v_{T}^{2}\delta_{\alpha\beta}}{2v_{T}^{4}}\left(\left\{u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}}{v_{T}^{2}}-1\right\}\left\{R_{\alpha\beta}-v_{T}^{2}(2\rho E)\delta_{\alpha\beta}\right\}-2\left\{u_{\alpha}q_{\beta}+u_{\beta}q_{\alpha}\right\}\right)\Bigg{]}. \tag{34b}\] A similar analysis can be carried out for \(\widehat{\Omega}^{(\eta)}\) via its corresponding \(\widehat{\Omega}^{(\eta)}\Big{[}w(\mathbf{v})\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v})\Big{]}\) function. #### ii.2.2 The lattice acceleration operator Moroni _et al._ [1] found the operational expression \(\widehat{\Omega}^{(\eta)}\Big{[}w(\mathbf{v})\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v})\Big{]}=\frac{\eta_{\beta}}{v_{T}^{2}}\Big{[}w(\mathbf{v})\mathcal{H}_{\beta\underline{\alpha}}^{(l+1)}(\mathbf{v})\Big{]}\) with the peculiar velocity set. With the recast velocities, we instead report the result, \[\widehat{\Omega}^{(\eta)}\Big{[}w(\mathbf{v})\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v})\Big{]}=\frac{\eta_{\beta}}{v_{T}^{2}}\Big{[}w(\mathbf{v})\big{(}\mathcal{H}_{\beta\underline{\alpha}}^{(l+1)}(\mathbf{v})-u_{\beta}\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v})\big{)}\Big{]}. \tag{35}\]
This result can again be exploited in the same Hermite series (31a) as for the FP operator, recognizing that we are aiming at obtaining the combined lattice operator \(\Omega_{i}^{(\text{FP},\eta)}=\Omega_{i}^{(\text{FP})}+\Omega_{i}^{(\eta)}\), \[\Omega_{i}^{(\eta)}=w_{i}\sum_{l=0}^{K}\frac{1}{v_{T}^{2l}l!}\Omega_{\underline{\alpha}}^{(\eta)}\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v}_{i}), \tag{36a}\] where the Hermite coefficient for the operator is computed as, \[\Omega_{\underline{\alpha}}^{(\eta)}=\int d\mathbf{v}\ \widehat{\Omega}^{(\eta)}\circ f\,\mathcal{H}_{\underline{\alpha}}^{(l)}(\mathbf{v}). \tag{36b}\] Using (35), we get the relation for the Hermite coefficients (B17) derived in App. B, which we use to obtain the first three Hermite coefficients in Tab. 1. These coefficients, in concert with (36a), yield the full (\(l\leq K=2\))-kinetic operators, \[\Omega_{i}^{(\eta)}(f_{i})=w_{i}\gamma\Bigg{[}-\frac{v_{\gamma}^{(\eta)}u_{\gamma}}{v_{T}^{2}}\rho+\frac{v_{i,\alpha}}{v_{T}^{2}}\left(v_{\alpha}^{(\eta)}\rho-\frac{v_{\gamma}^{(\eta)}u_{\gamma}}{v_{T}^{2}}j_{\alpha}\right)+\frac{v_{i,\alpha}v_{i,\beta}-v_{T}^{2}\delta_{\alpha\beta}}{2v_{T}^{4}}\bigg{(}\Big{\{}v_{\alpha}^{(\eta)}j_{\beta}+v_{\beta}^{(\eta)}j_{\alpha}\Big{\}}-\frac{v_{\gamma}^{(\eta)}u_{\gamma}}{v_{T}^{2}}\left\{P_{\alpha\beta}-v_{T}^{2}\rho\delta_{\alpha\beta}\right\}\bigg{)}\Bigg{]}, \tag{37a}\] \[\Omega_{i}^{(\eta)}(g_{i})=w_{i}\gamma\Bigg{[}-\frac{v_{\gamma}^{(\eta)}u_{\gamma}}{v_{T}^{2}}\mathcal{E}+\frac{v_{i,\alpha}}{v_{T}^{2}}\left(v_{\alpha}^{(\eta)}\mathcal{E}-\frac{v_{\gamma}^{(\eta)}u_{\gamma}}{v_{T}^{2}}q_{\alpha}\right)+\frac{v_{i,\alpha}v_{i,\beta}-v_{T}^{2}\delta_{\alpha\beta}}{2v_{T}^{4}}\bigg{(}\Big{\{}v_{\alpha}^{(\eta)}q_{\beta}+v_{\beta}^{(\eta)}q_{\alpha}\Big{\}}-\frac{v_{\gamma}^{(\eta)}u_{\gamma}}{v_{T}^{2}}\left\{R_{\alpha\beta}-v_{T}^{2}\mathcal{E}\delta_{\alpha\beta}\right\}\bigg{)}\Bigg{]}, \tag{37b}\] where we adopted \(v_{\alpha}^{(\eta)}=\eta_{\alpha}\gamma^{-1}\) as a pseudo velocity motivated by the dimensionality \([\gamma]=s^{-1}\). Rewriting the acceleration term this way enables us to factorize terms with corresponding ranks of \(\mathcal{F}_{\underline{\alpha}}^{(l)},\mathcal{G}_{\underline{\alpha}}^{(l)}\) into the same groups, as all coefficients are now prefixed by \(\gamma\).
Having discretized both operators on the lattice we can combine them in more compact, convenient forms, \[\Omega_{i}^{(\mathrm{FP},\eta)}(f_{i})=\Omega_{i}^{(\eta)}+\Omega_{i}^{(\mathrm{FP})}=w_{i}\gamma\left[\overline{\rho}+\frac{v_{i,\alpha}}{v_{T}^{2}}\overline{J}_{\alpha}+\frac{v_{i,\alpha}v_{i,\beta}-v_{T}^{2}\delta_{\alpha\beta}}{2v_{T}^{4}}\overline{P}_{\alpha\beta}\right], \tag{38a}\] \[\Omega_{i}^{(\mathrm{FP},\eta)}(g_{i})=\Omega_{i}^{(\eta)}+\Omega_{i}^{(\mathrm{FP})}=w_{i}\gamma\left[\overline{\mathcal{E}}+\frac{v_{i,\alpha}}{v_{T}^{2}}\overline{q}_{\alpha}+\frac{v_{i,\alpha}v_{i,\beta}-v_{T}^{2}\delta_{\alpha\beta}}{2v_{T}^{4}}\overline{R}_{\alpha\beta}\right], \tag{38b}\] where we recast the \(f\)-moments yielding density, momentum flux, and pressure and higher-order tensors into new functionals, \[\overline{\rho}=\left(1+u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}-v_{\gamma}^{(\eta)}}{v_{T}^{2}}\right)\rho, \tag{38c}\] \[\overline{J}_{\alpha}=j_{\alpha}^{(\eta)}+\left(u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}-v_{\gamma}^{(\eta)}}{v_{T}^{2}}-2\right)j_{\alpha},\] (38d) \[\overline{P}_{\alpha\beta}=\left(v_{\alpha}^{(\eta)}-2u_{\alpha}\right)j_{\beta}+\left(v_{\beta}^{(\eta)}-2u_{\beta}\right)j_{\alpha}+\left(u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}-v_{\gamma}^{(\eta)}}{v_{T}^{2}}-1\right)\left(P_{\alpha\beta}-v_{T}^{2}\rho\delta_{\alpha\beta}\right), \tag{38e}\] where \(j_{\alpha}^{(\eta)}=\rho v_{\alpha}^{(\eta)}\) is a pseudo momentum. We have omitted the quadrature-node index notation in the recast moments and note that their dependence on \(v_{i,\gamma}\) is implied going forward. Evidently, \(v_{\gamma}^{(\eta)}\) originates from the redefinition of \(\eta_{\gamma}\). Similarly, the recast functionals for the \(g_{i}\)-population moments read, \[\overline{\mathcal{E}}=\left(1+u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}-v_{\gamma}^{(\eta)}}{v_{T}^{2}}\right)\mathcal{E}, \tag{38f}\] \[\overline{q}_{\alpha}=u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}-v_{\gamma}^{(\eta)}}{v_{T}^{2}}q_{\alpha}+\Big{(}v_{\alpha}^{(\eta)}-2u_{\alpha}\Big{)}\mathcal{E},\] (38g) \[\overline{R}_{\alpha\beta}=\Big{(}v_{\alpha}^{(\eta)}-2u_{\alpha}\Big{)}q_{\beta}+\Big{(}v_{\beta}^{(\eta)}-2u_{\beta}\Big{)}q_{\alpha}+\Bigg{(}u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}-v_{\gamma}^{(\eta)}}{v_{T}^{2}}-1\Bigg{)}\left(R_{\alpha\beta}-v_{T}^{2}\mathcal{E}\delta_{\alpha\beta}\right). \tag{38h}\] The lattice operators (38a, 38b), together with the recast moments (38c)-(38h), are the final results for the kinetic scheme. They define the "interaction" process and can be implemented after the semi-Lagrangian advection step to successively simulate the spatio-temporal evolution of \(\rho,\mathbf{u},E\) and other macroscopic quantities of interest.
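To illustrate how (38a) with the functionals (38c)-(38e) might be assembled in practice, the following sketch evaluates the combined operator for the \(f\)-populations from given moments. It is an illustrative implementation under assumed array shapes, not the reference code of the paper.

```python
# Sketch of the combined lattice operator (38a) for the f-populations,
# assuming w of shape (Q,), nodes v of shape (Q,D), and given moments.
import numpy as np

def fp_eta_operator_f(w, v, u, v_eta, rho, j, P, vT2, gamma):
    """Omega_i^(FP,eta)(f_i), eq. (38a), built from the functionals (38c)-(38e)."""
    D = v.shape[1]
    eye = np.eye(D)
    # node-wise scalar  u_g (v_{i,g} + u_g - v^eta_g) / vT^2
    s = (v + (u - v_eta)[None, :]) @ u / vT2
    rho_bar = (1.0 + s) * rho                                          # (38c)
    J_bar = rho * v_eta[None, :] + (s - 2.0)[:, None] * j[None, :]     # (38d)
    sym = np.outer(v_eta - 2*u, j)
    sym = sym + sym.T                                                  # symmetrized dyad
    P_bar = sym[None] + (s - 1.0)[:, None, None] * (P - vT2*rho*eye)   # (38e)
    H2 = np.einsum('ia,ib->iab', v, v) - vT2 * eye                     # (20c) at nodes
    return w * gamma * (rho_bar
                        + np.einsum('ia,ia->i', v, J_bar) / vT2
                        + np.einsum('iab,iab->i', H2, P_bar) / (2 * vT2**2))
```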
\begin{table} \begin{tabular}{l l l} \(l\) & \(\Omega_{\underline{\alpha}}^{(\mathrm{FP})}(l)\circ\{f_{i};g_{i}\}\) & \(\Omega_{\underline{\alpha}}^{(\eta)}(l)\circ\{f_{i};g_{i}\}\) \\ \hline 0 & \(\gamma\left[1+u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}}{v_{T}^{2}}\right]\left\{\mathcal{F}^{(0)};\mathcal{G}^{(0)}\right\}\) & \(-\left[\frac{\eta_{\gamma}u_{\gamma}}{v_{T}^{2}}\right]\left\{\mathcal{F}^{(0)};\mathcal{G}^{(0)}\right\}\) \\ 1 & \(\gamma\left[u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}}{v_{T}^{2}}-2\right]\left\{\mathcal{F}_{\alpha}^{(1)};\mathcal{G}_{\alpha}^{(1)}\right\}\) & \(\eta_{\alpha}\left\{\mathcal{F}^{(0)};\mathcal{G}^{(0)}\right\}-\frac{\eta_{\gamma}u_{\gamma}}{v_{T}^{2}}\left\{\mathcal{F}_{\alpha}^{(1)};\mathcal{G}_{\alpha}^{(1)}\right\}\) \\ 2 & \(\gamma\Big{[}\Big{(}u_{\gamma}\frac{v_{i,\gamma}+u_{\gamma}}{v_{T}^{2}}-1\Big{)}\left\{\mathcal{F}_{\alpha\beta}^{(2)};\mathcal{G}_{\alpha\beta}^{(2)}\right\}-2\Big{(}u_{\alpha}\Big{\{}\mathcal{F}_{\beta}^{(1)};\mathcal{G}_{\beta}^{(1)}\Big{\}}+u_{\beta}\Big{\{}\mathcal{F}_{\alpha}^{(1)};\mathcal{G}_{\alpha}^{(1)}\Big{\}}\Big{)}\Big{]}\) & \(\eta_{\alpha}\Big{\{}\mathcal{F}_{\beta}^{(1)};\mathcal{G}_{\beta}^{(1)}\Big{\}}+\eta_{\beta}\Big{\{}\mathcal{F}_{\alpha}^{(1)};\mathcal{G}_{\alpha}^{(1)}\Big{\}}-\frac{\eta_{\gamma}u_{\gamma}}{v_{T}^{2}}\left\{\mathcal{F}_{\alpha\beta}^{(2)};\mathcal{G}_{\alpha\beta}^{(2)}\right\}\) \\ \end{tabular} \end{table} Table 1: Hermite coefficients for the FP and acceleration lattice operators evaluated with (B10, B17), as well as the Hermite coefficients for the populations \(\mathcal{F}_{\underline{\alpha}}^{(l)}(f_{i})\) (29) and \(\mathcal{G}_{\underline{\alpha}}^{(l)}(g_{i})\) (30). The operator coefficients are ultimately used to complete the series expansion yielding \(\Omega_{i}^{(\mathrm{FP})}\) and \(\Omega_{i}^{(\eta)}\). ### Thermodynamics Phase transformations involve non-equilibrium thermodynamics and large variations in viscosity and density residing on a multitude of spatio-temporal scales, which historically have been difficult to treat in the continuum without resorting to the limiting assumptions that appear in many empirical mass-transfer models for multicomponent mixtures, as we have documented in a recent review [3]. Whereas mixture models account for the latent heat transfer between species \(\dot{m}h_{\mathrm{lv}}\), our current single-component implementation evolves \(T,p\) and phase transitions under the saturation curve in tandem with the empirical van der Waals (vdW) equation of state (EOS). Even though the EOS is empirical, its simple non-monotonic topology with a distinct energy minimum enables nucleation in kinetic theories, as was shown in [2] which employs the same EOS. Moreover, the cubic vdW EOS is the simplest EOS that retains the ability to predict spinodals [55]; it is inherently mechanically unstable in the spinodal region and metastable in the binodal region. When a metastable fluid is perturbed, the fluid state will transition towards the binodal curve where the local free energy is minimized [56]. Reyhanian _et al._ [50] presented the governing thermodynamic equations, but for conciseness we reproduce them here for later reference in our multi-scale analysis (§III). We note that the kinetic equations conserve the density (27a), momentum (27b), and total energy (28a), where the latter predicts the internal energy \(e\) via, \[E=e+\frac{u_{\alpha}u_{\alpha}}{2}. \tag{39}\] Thus, given \(e\), the temperature can be computed by reorganizing the EOS, \[e=c_{v}T-a\rho-\frac{p_{0}}{\rho}, \tag{40}\] in which the term \(p_{0}/\rho\) is included to retain positive temperatures in the subcritical region \(T/T_{\mathrm{cr}}<0.84375\)[50].
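Anticipating the pressure (41) and speed of sound (45) defined just below, the vdW thermodynamic closure can be sketched as follows; the constants \(a,b,c_{v},p_{0}\) and the state values are assumptions for illustration.

```python
# Sketch of the van der Waals thermodynamics: internal energy (40),
# pressure (41), and real-gas speed of sound (45). Constants are assumed.
a, b, R, c_v, p0 = 2.0/49.0, 2.0/21.0, 1.0, 1.5, 0.0   # illustrative vdW constants

def T_from_e(e, rho):
    return (e + a*rho + p0/rho) / c_v                  # eq. (40) inverted for T

def p_vdw(rho, T):
    return rho*R*T / (1.0 - b*rho) - a*rho**2          # eq. (41)

def sound_speed_sq(rho, T):
    dp_drho_T = R*T / (1.0 - b*rho)**2 - 2.0*a*rho     # (dp/drho)_T
    dp_dT_v = rho*R / (1.0 - b*rho)                    # (dp/dT)_v
    return dp_drho_T + T / (rho**2 * c_v) * dp_dT_v**2 # eq. (45)

rho, e = 0.5, 1.0
T = T_from_e(e, rho)
print("p =", p_vdw(rho, T), "  c^2 =", sound_speed_sq(rho, T))
```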
We can then proceed with defining the thermodynamic pressure as, \[p=\frac{\rho RT}{1-b\rho}-a\rho^{2}, \tag{41}\] which contributes the non-ideal energy contribution \(T\big{(}\partial p/\partial T\big{)}_{v}\partial_{\alpha}u_{\alpha}\) in the Fourier equation, as we will document later in the Chapman-Enskog multiscale analysis in §III.2.2. This analysis further necessitates deriving the isobaric specific heat and speed of sound to arrive at the thermo-hydrodynamic limit. The former reads, \[c_{p}\doteq\left(\frac{\partial h}{\partial T}\right)_{p}, \tag{42}\] and the real-gas speed of sound, \[\varsigma^{2}\doteq\left(\frac{\partial p}{\partial\rho}\right)_{s}, \tag{43}\] both of which can be rewritten using the cyclic and Maxwell relations [57] as, \[c_{p} =\left(\frac{\partial e}{\partial T}\right)_{p}+p\bigg{(}\frac{\partial v}{\partial T}\bigg{)}_{p}, \tag{44}\] \[\varsigma^{2} =\left(\frac{\partial p}{\partial\rho}\right)_{T}+\frac{T}{\rho^{2}c_{v}}\bigg{(}\frac{\partial p}{\partial T}\bigg{)}_{v}^{2}. \tag{45}\] ## III Chapman-Enskog multiscale analysis As it is our goal to utilize the kinetic model (13) in mesoscale simulations where we recover macroscopic quantities (27, 28) from microscopic populations, it is crucial to investigate how the equations behave across scales. Specifically, we conduct a Chapman-Enskog multiscale analysis [51]. Initially, the post-advection state is approximated by a Taylor series, after which the modified kinetic equations are expanded in terms of perturbation series of all the variables. Subsequently, all terms in the equations can be separated by their representative orders, forming a hierarchy of perturbation relations for the advection and interaction/collision sides of the equations. Then, the zeroth-through-second moments can be taken of these relations to form moment equations, i.e. scale-dependent forms of the continuum conservation laws. This step is crucial as it comprises the moments stemming from non-equilibrium, higher-order contributions of the perturbation series that can be chosen to either be included or excluded with the aim of assessing their mathematical formalism and physical implications in the continuum, thus creating a direct avenue between the microscopic and macroscopic scales. Finally, the moment equations can be recombined by inverting the perturbation series, which results in the Navier-Stokes-Fourier (NSF) equations. By including selected non-equilibrium populations in the moment equations it is possible to elucidate the asymptotic behaviour of different parts of the NSF hierarchy, to which end we initially focus on the shear-stress tensor. ### Perturbation series The mesoscale behaviour, especially that stemming from the fluctuations of the FPE, evolves at different characteristic length and time scales. Analysis of the scale effects as they manifest in the continuum can be treated with multiscale methods in which we seek to separate derivative terms, populations, and macroscopic observables into contributions that scale with a "smallness" parameter \(\epsilon\), where \(\epsilon\ll 1\) is associated with the thermo-hydrodynamic limit [41]. For example, the viscous effects of FP may reside on longer time scales whereas shocks propagate on shorter time scales. Specifically, we decompose our distributions \(f_{i},g_{i}\), spatial and temporal gradients \(\partial_{\alpha},\partial_{t}\), and macroscopic moments into a family of perturbation series around \(\epsilon\) raised to different exponent values.
Firstly, the populations expand to, \[f_{i} =f_{i}^{(0)}+\epsilon f_{i}^{(1)}+\epsilon^{2}f_{i}^{(2)}+\Theta\big{(}\epsilon^{3}\big{)}, \tag{46a}\] \[g_{i} =g_{i}^{(0)}+\epsilon g_{i}^{(1)}+\epsilon^{2}g_{i}^{(2)}+\Theta\big{(}\epsilon^{3}\big{)}, \tag{46b}\] where the zeroth-order contributions are the known equilibrium distributions, such that the higher orders act as unknown, stochastic, fluctuating components. Propagation and diffusion phenomena are captured by decomposing the time derivative into two time variables, \[\partial_{t}=\epsilon\partial_{t}^{(1)}+\epsilon^{2}\partial_{t}^{(2)}+\Theta\big{(}\epsilon^{3}\big{)}. \tag{46c}\] As for the spatial derivatives, these are evolved on the hydrodynamic scale that is appropriately resolved as, \[\partial_{\alpha}=\epsilon\partial_{\alpha}^{(1)}+\Theta\big{(}\epsilon^{2}\big{)}. \tag{46d}\] Moreover, we are interested in the zeroth-through-second-order perturbations in all of the macroscopic observables in order to recover the correct continuum-conservation laws, \[\rho =\rho^{(0)}+\epsilon\rho^{(1)}+\epsilon^{2}\rho^{(2)}, \tag{46e}\] \[\mathcal{E} =\mathcal{E}^{(0)}+\epsilon\mathcal{E}^{(1)}+\epsilon^{2}\mathcal{E}^{(2)},\] (46f) \[j_{\alpha} =j_{\alpha}^{(0)}+\epsilon j_{\alpha}^{(1)}+\epsilon^{2}j_{\alpha}^{(2)},\] (46g) \[q_{\alpha} =q_{\alpha}^{(0)}+\epsilon q_{\alpha}^{(1)}+\epsilon^{2}q_{\alpha}^{(2)},\] (46h) \[P_{\alpha\beta} =P_{\alpha\beta}^{(0)}+\epsilon P_{\alpha\beta}^{(1)}+\epsilon^{2}P_{\alpha\beta}^{(2)},\] (46i) \[R_{\alpha\beta} =R_{\alpha\beta}^{(0)}+\epsilon R_{\alpha\beta}^{(1)}+\epsilon^{2}R_{\alpha\beta}^{(2)}, \tag{46j}\] and similarly for the linear recombinations of the above, \(\overline{\rho}=\overline{\rho}^{(0)}+\epsilon\overline{\rho}^{(1)}+\epsilon^{2}\overline{\rho}^{(2)}\) and so forth. We interpret the zeroth orders as "equilibrium" contributions that dictate the phenomena that evolve on the macroscales, whereas increasing orders represent perturbations on diminishing scales. Consequently, we can imagine that microscopic fluctuations due to thermal noise may be observable on the \(\epsilon^{1},\epsilon^{2}\) scales, where the coarser fluctuations associated with \(\epsilon^{1}\) are likely more relevant to analyze in the context of mesoscopic nucleation. ### Expansions: Taylor series, perturbations, separation, moments, and recombination #### iii.2.1 Perturbation-series expansion We expand the previously derived lattice equations (13) to prove their consistency in the thermohydrodynamic limit. In the following we adopt the notation for the pre-advection populations \(f_{i}=f_{i}(\mathbf{x}-\mathbf{v}_{i}\delta t,t-\delta t)\) and post-advection populations \(f_{i}^{\prime}=f_{i}(\mathbf{x},t)\). First we treat the advection side of the equations, and seek to approximate the advected state \(f_{i}^{\prime}\) by its Taylor expansion around the datum \(f_{i}\), where we exploit the common _ansatz_ that third and higher order terms \(n>2\) are very small and do not significantly affect the macroscopic behaviour [51]. As such, to find the Navier-Stokes-Fourier equations, we only retain the two lowest orders in the _Taylor-series_ expansion, \[f_{i}^{\prime}=f_{i}+\delta t\big{(}\partial_{t}+v_{i,\delta}\partial_{\delta}\big{)}f_{i}+\frac{\delta t^{2}}{2}\big{(}\partial_{t}+v_{i,\delta}\partial_{\delta}\big{)}\big{(}\partial_{t}+v_{i,\epsilon}\partial_{\varepsilon}\big{)}f_{i}+\Theta\big{(}\delta t^{3}\big{)}, \tag{47}\] and similarly for \(g_{i}\).
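The mechanics of the order separation can be mimicked with a scalar toy model: treat the operators as commuting symbols, expand in \(\epsilon\), and collect coefficients. The following sympy sketch reproduces the structure of the separated orders that follow below; it is purely illustrative and not the full kinetic system.

```python
# Toy sympy sketch of the order separation in (46)-(50): expand a scalar
# model of the advection side in powers of epsilon and collect terms.
import sympy as sp

eps, dt = sp.symbols('epsilon delta_t', positive=True)
f0, f1, f2 = sp.symbols('f0 f1 f2')
dt1, dt2, dx1, v = sp.symbols('D_t1 D_t2 D_x1 v')   # stand-ins for the operators

f = f0 + eps*f1 + eps**2*f2                         # (46a)
Dt = eps*dt1 + eps**2*dt2                           # (46c)
Dx = eps*dx1                                        # (46d)
lhs = sp.expand(dt*(Dt + v*Dx)*f + dt**2/2*(Dt + v*Dx)**2*f)

for n in (0, 1, 2):                                 # mirrors (50a)-(50c)
    print(f"epsilon^{n}:", sp.simplify(lhs.coeff(eps, n)))
```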
It follows that the lattice equations (13) can be rewritten to, \[\delta t\big{(}\partial_{t}+v_{i,\delta}\partial_{\delta}\big{)}f_{i}+\frac{\delta t^{2}}{2}\big{(}\partial_{t}+v_{i,\delta}\partial_{\delta}\big{)}\big{(}\partial_{t}+v_{i,\varepsilon}\partial_{\varepsilon}\big{)}f_{i}=\omega^{(\text{FP})}\Omega_{i}^{(\text{FP})}+\omega^{(\text{B})}\Omega_{i}^{(\text{B})}, \tag{48}\] where we have neglected the higher-order \(\Theta\big{(}\delta t^{3}\big{)}\) terms. Before treating the collision side of the equation, we make the initial perturbation analysis on the advection terms and retain them for further analysis. The analysis starts with another common _ansatz_: only the two lowest orders in \(Kn\) are required to obtain the continuum NSEs, where only the coarse non-equilibrium effects in \(f_{i}^{(1)}\) are considered in addition to the equilibrium \(f_{i}^{(0)}\)[51]. Substituting in the pertinent variables from (46) yields the expanded advection side, \[\bigg{\{}\delta t\Big{[}\big{(}\epsilon\partial_{t}^{(1)}+\epsilon^{2}\partial_{t}^{(2)}\big{)}+\epsilon v_{i,\alpha}\partial_{\alpha}^{(1)}\Big{]}+\frac{\delta t^{2}}{2}\Big{[}\big{(}\epsilon\partial_{t}^{(1)}+\epsilon^{2}\partial_{t}^{(2)}\big{)}+\epsilon v_{i,\alpha}\partial_{\alpha}^{(1)}\Big{]}\Big{[}\big{(}\epsilon\partial_{t}^{(1)}+\epsilon^{2}\partial_{t}^{(2)}\big{)}+\epsilon v_{i,\beta}\partial_{\beta}^{(1)}\Big{]}\bigg{\}}\big{(}f_{i}^{(0)}+\epsilon f_{i}^{(1)}+\epsilon^{2}f_{i}^{(2)}\big{)}, \tag{49}\] where contributions _separated_ by orders in \(\epsilon\) are, \[\epsilon^{0}: 0, \tag{50a}\] \[\epsilon^{1}: \delta t\big{(}\partial_{t}^{(1)}+v_{i,\alpha}\partial_{\alpha}^{(1)}\big{)}f_{i}^{(0)},\] (50b) \[\epsilon^{2}: \Big{[}\delta t\partial_{t}^{(2)}+\frac{\delta t^{2}}{2}\big{(}\partial_{t}^{(1)}+v_{i,\alpha}\partial_{\alpha}^{(1)}\big{)}\big{(}\partial_{t}^{(1)}+v_{i,\beta}\partial_{\beta}^{(1)}\big{)}\Big{]}f_{i}^{(0)}+\delta t\Big{(}\partial_{t}^{(1)}+v_{i,\alpha}\partial_{\alpha}^{(1)}\Big{)}f_{i}^{(1)}. \tag{50c}\] These contributions will eventually equate with the corresponding-order contributions from the analogously expanded collision operators. To that end, the BGK operator is expanded with the perturbation series yielding the separated orders, \[\epsilon^{0}: \omega^{\rm(B)}\big{(}f_{i}^{\rm(eq)}-f_{i}^{(0)}\big{)}, \tag{51a}\] \[\epsilon^{1}: -\omega^{\rm(B)}f_{i}^{(1)},\] (51b) \[\epsilon^{2}: -\omega^{\rm(B)}f_{i}^{(2)}. \tag{51c}\] Proceeding with the FP operator, we similarly expand it by parts in \(\Omega_{i}^{\rm(FP)}=\Omega_{i}^{\rm(FP,0)}+\epsilon\Omega_{i}^{\rm(FP,1)}+\epsilon^{2}\Omega_{i}^{\rm(FP,2)}\) (wherein the perturbation order \(n_{\epsilon}\) should not be confused with those of the Hermite series), in which each of the orders is further decomposed by \(n_{\epsilon}=\{0,1,2\}\), \[\Omega_{i}^{\rm(FP,n_{\epsilon})}=w_{i}\left[\overline{\rho}^{(n_{\epsilon})}+\frac{v_{i,\alpha}}{v_{T}^{2}}\overline{J}_{\alpha}^{(n_{\epsilon})}+\frac{v_{i,\alpha}v_{i,\beta}-v_{T}^{2}\delta_{\alpha\beta}}{2v_{T}^{4}}\overline{P}_{\alpha\beta}^{(n_{\epsilon})}\right]. \tag{52}\]
Thus, separation of orders simply yields, \[\epsilon^{0}: \omega^{\rm(FP)}\Omega_{i}^{\rm(FP,0)}, \tag{53a}\] \[\epsilon^{1}: \omega^{\rm(FP)}\Omega_{i}^{\rm(FP,1)},\] (53b) \[\epsilon^{2}: \omega^{\rm(FP)}\Omega_{i}^{\rm(FP,2)}, \tag{53c}\] where the various functionals (38c)-(38h) are linear combinations of the perturbation series of the macroscopic variables (46e)-(46j), such that the corresponding \(\epsilon\)-orders are also linearly retained. Now, the next step is to recombine the different orders of the perturbation series of the advection terms, and the BGK and FP operators, resulting in, \[\epsilon^{0}: 0=\omega^{\rm(B)}\big{(}f_{i}^{\rm(eq)}-f_{i}^{(0)}\big{)}+\omega^{\rm(FP)}w_{i}\left[\overline{\rho}^{(0)}+\frac{v_{i,\alpha}}{v_{T}^{2}}\overline{J}_{\alpha}^{(0)}+\frac{v_{i,\alpha}v_{i,\beta}-v_{T}^{2}\delta_{\alpha\beta}}{2v_{T}^{4}}\overline{P}_{\alpha\beta}^{(0)}\right], \tag{54a}\] \[\epsilon^{1}: \big{(}\partial_{t}^{(1)}+v_{i,\delta}\partial_{\delta}^{(1)}\big{)}f_{i}^{(0)}=-\frac{\omega^{\rm(B)}}{\delta t}f_{i}^{(1)}+\frac{\omega^{\rm(FP)}}{\delta t}w_{i}\left[\overline{\rho}^{(1)}+\frac{v_{i,\alpha}}{v_{T}^{2}}\overline{J}_{\alpha}^{(1)}+\frac{v_{i,\alpha}v_{i,\beta}-v_{T}^{2}\delta_{\alpha\beta}}{2v_{T}^{4}}\overline{P}_{\alpha\beta}^{(1)}\right],\] (54b) \[\epsilon^{2}: \Big{[}\partial_{t}^{(2)}+\frac{\delta t}{2}\big{(}\partial_{t}^{(1)}+v_{i,\delta}\partial_{\delta}^{(1)}\big{)}\big{(}\partial_{t}^{(1)}+v_{i,\epsilon}\partial_{\epsilon}^{(1)}\big{)}\Big{]}f_{i}^{(0)}+\Big{(}\partial_{t}^{(1)}+v_{i,\delta}\partial_{\delta}^{(1)}\Big{)}f_{i}^{(1)}=-\frac{\omega^{\rm(B)}}{\delta t}f_{i}^{(2)}+\frac{\omega^{\rm(FP)}}{\delta t}w_{i}\left[\overline{\rho}^{(2)}+\frac{v_{i,\alpha}}{v_{T}^{2}}\overline{J}_{\alpha}^{(2)}+\frac{v_{i,\alpha}v_{i,\beta}-v_{T}^{2}\delta_{\alpha\beta}}{2v_{T}^{4}}\overline{P}_{\alpha\beta}^{(2)}\right], \tag{54c}\] to which end we make a third _ansatz_: there exist multiple roots in the zeroth-order equality (54a), where \(f_{i}^{(0)}=f_{i}^{\rm(eq)}\) is a candidate, as is conventionally the case in multi-scale analyses of the Boltzmann equation. We can further infer that for the equality to hold, we must require that, \[\overline{\rho}^{(0)} =0, \tag{55}\] \[\overline{J}_{\alpha}^{(0)} =0_{\alpha},\] (56) \[\overline{P}_{\alpha\beta}^{(0)} =0_{\alpha\beta}. \tag{57}\] The zero equalities are the only information we can extract from the zeroth-order relation for now. Prior to taking the moments of the perturbation relations, we can simplify the first and second-order relations. If we factorize \(f_{i}^{(1)}\) out of \(\Omega_{i}^{\rm(FP,1)}\) in (54b) and adopt the notation \(\widehat{\Omega}_{i}^{\rm(FP)}\circ f_{i}^{(1)}\) as the FP operational expression operating on the first-order populations, the material derivative of the equilibrium populations can be written as, \[\epsilon^{1}: D_{t}^{(\mathbf{v},1)}f_{i}^{(0)}=f_{i}^{(1)}\left(\frac{\omega^{\rm(FP)}}{\delta t}\widehat{\Omega}_{i}^{\rm(FP)}-\frac{\omega^{\rm(B)}}{\delta t}\right), \tag{58}\] where we employ the notation \(D_{t}^{(\mathbf{v},1)}=\partial_{t}^{(1)}+v_{i,\delta}\partial_{\delta}^{(1)}\) for conciseness, and recognize that \(\widehat{\Omega}_{i}^{\rm(FP)}\) is in fact independent of the perturbation order.
This result is useful for eliminating the first-order derivatives of \(f_{i}^{(0)}\) in (54c), and combining all prefactors of \(f_{i}^{(1)}\), by substituting in this reformulated \(\epsilon^{1}\) contribution yielding, \[\epsilon^{2}: \partial_{t}^{(2)}f_{i}^{(0)}+D_{t}^{(\mathbf{v},1)}\left(1+\frac{\omega^{\rm(FP)}\widehat{\Omega}_{i}^{\rm(FP)}}{2}-\frac{\omega^{\rm(B)}}{2}\right)f_{i}^{(1)}=-\frac{\omega^{\rm(B)}}{\delta t}f_{i}^{(2)}+\frac{\omega^{\rm(FP)}}{\delta t}\Omega_{i}^{\rm(FP,2)}. \tag{59}\] This allows us, in a reverse manner, to again insert the relation (58) with \(f_{i}^{(1)}\) isolated, such that the second-order contributions now obey, \[\epsilon^{2}: \partial_{t}^{(2)}f_{i}^{(0)}+\Big{(}\partial_{t}^{(1)}+v_{i,\delta}\partial_{\delta}^{(1)}\Big{)}\Big{(}\partial_{t}^{(1)}+v_{i,\varepsilon}\partial_{\varepsilon}^{(1)}\Big{)}\delta t \tag{60}\] \[\times\left(1+\frac{2-\omega^{(\mathrm{B})}}{2\omega^{(\mathrm{FP})}\widehat{\Omega}_{i}^{(\mathrm{FP})}}-\frac{2+\omega^{(\mathrm{FP})}\widehat{\Omega}_{i}^{(\mathrm{FP})}}{2\omega^{(\mathrm{B})}}\right)f_{i}^{(0)}\] \[\qquad=-\frac{\omega^{(\mathrm{B})}}{\delta t}f_{i}^{(2)}+\frac{\omega^{(\mathrm{FP})}}{\delta t}\Omega_{i}^{(\mathrm{FP},2)},\] where we have obtained an equilibrium differential equation with \(f_{i}^{(0)}\) instead of \(f_{i}^{(1)}\), where the non-equilibrium effects are administered through the \(\omega^{(\mathrm{B})},\omega^{(\mathrm{FP})}\) coefficients of \(f_{i}^{(0)}\). The perturbation equations (54a), (58), (60) are now at a stage where we can further the analysis by forming the moment equations. #### iii.2.2 Moment equations Initially we can take the zeroth-through-second moments of (54b), from which the continuity and momentum equations are obtained. The zeroth-moment equation is obtained by summing both sides of the relation over \(i\), the first-moment equation by summing over \(i\) multiplied by \(v_{i,\varepsilon}\), the second-moment equation by \(v_{i,\varepsilon}v_{i,\zeta}\), and so forth. Thereby, we can investigate the effects of perturbations on the different macroscopic observables by including or excluding various moments. In the simplest analysis, we can make the _ansatz_ that the non-equilibrium moments [58], \[\sum_{i=0}^{Q-1}\{f_{i}^{(\mathrm{neq})},g_{i}^{(\mathrm{neq})}\} =\big{\{}\rho^{(\mathrm{neq})},\mathcal{E}^{(\mathrm{neq})}\big{\}}=0, \tag{61a}\] \[\sum_{i=0}^{Q-1}\{f_{i}^{(\mathrm{neq})},g_{i}^{(\mathrm{neq})}\}v_{i,\delta} =\big{\{}j_{\delta}^{(\mathrm{neq})},q_{\delta}^{(\mathrm{neq})}\big{\}}=0_{\delta},\] (61b) \[\sum_{i=0}^{Q-1}\{f_{i}^{(\mathrm{neq})},g_{i}^{(\mathrm{neq})}\}v_{i,\delta}v_{i,\varepsilon} =\big{\{}P_{\delta\varepsilon}^{(\mathrm{neq})},R_{\delta\varepsilon}^{(\mathrm{neq})}\big{\}}=0_{\delta\varepsilon}, \tag{61c}\] are all zero for the non-equilibrium states in \(n_{\varepsilon}\geq 1\). This conventionally renders the continuity and Euler equations, and does not give any insight into non-equilibrium effects. Now, as we are interested in investigating the non-equilibrium behaviour of the FP equation on smaller scales it may be worthwhile to contest the conventional zero-assumptions of (61a-61c) and include the coarse first-order perturbations in the analysis. As we know that the entropy-extremum principle corresponds to a minimization of energy [14], we can include the contributions from \(\mathcal{E}^{(1)}=\sum_{i}g_{i}^{(1)}\).
Additionally, for thermally-driven phase-change processes such as pool boiling, it is apt to consider the fluctuations in the heat flux \(q_{\delta}^{(1)}=\sum_{i}g_{i}^{(1)}v_{i,\delta}\), and for cavitation, which is a stress-induced phenomenon, the momentum flux \(P_{\delta\varepsilon}^{(1)}=\sum_{i}f_{i}^{(1)}v_{i,\delta}v_{i,\varepsilon}\). As the inclusion of \(P_{\delta\varepsilon}^{(1)}\) in the second-moment equation of the \(\epsilon\)-relation based on the Boltzmann equation results in the viscous-stress tensor \(\tau_{\delta\varepsilon}\) with the dynamic viscosity \(\mu=\big{(}1/\omega^{(\mathrm{B})}-1/2\big{)}p\delta t\)[2], one could suspect that \(\omega^{(\mathrm{FP})}\) (17) could be attributed to a pseudo-viscosity from the \(\gamma\)-damping of thermal fluctuations, analogous to the eddy-viscosity in turbulence models. Thus, we are interested in the same non-equilibrium moments and how they are affected by the FP dynamics, and as such _assume_ that the values and correlations of all non-equilibrium moments, except for the _unknown_ and _non-zero_ \(P_{\delta\varepsilon}^{(1)}\) moment, are zero, \[\overline{\rho}^{(2)} \doteq 0, \tag{62a}\] \[j_{\alpha}^{(1)}=j_{\alpha}^{(2)}=\overline{J}_{\alpha}^{(1)}=\overline{J}_{\alpha}^{(2)} \doteq 0_{\alpha},\] (62b) \[P_{\alpha\beta}^{(2)}=\overline{P}_{\alpha\beta}^{(2)} \doteq 0_{\alpha\beta},\] (62c) \[\mathcal{E}^{(1)}=\mathcal{E}^{(2)}=\overline{\mathcal{E}}^{(1)}=\overline{\mathcal{E}}^{(2)} \doteq 0,\] (62d) \[q_{\alpha}^{(2)}=\overline{q}_{\alpha}^{(2)} \doteq 0_{\alpha},\] (62e) \[R_{\alpha\beta}^{(1)}=R_{\alpha\beta}^{(2)}=\overline{R}_{\alpha\beta}^{(1)}=\overline{R}_{\alpha\beta}^{(2)} \doteq 0_{\alpha\beta}, \tag{62f}\] in addition to the non-zero counterpart, \[P_{\alpha\beta}^{(1)}\neq 0. \tag{63a}\] This ultimately allows us to take the zeroth-through-second moments of (54b, 59), which usually are necessary to recover the NS equations. Moreover, for the FP contributions, we are primarily interested in the non-equilibrium effects to the momentum and energy equations, and as such we omit the density fluctuations in the zeroth-moment continuity equation. As for the Boltzmann collision operator, we rely on the usual methodology of omitting non-equilibrium contributions from the zeroth- and first-moment equations of the perturbation equations [51]. From the moments of the first-order perturbation expansion we get the (first-order) continuity, Euler, and a higher-order equation, \[\partial_{t}^{(1)}\rho^{(\mathrm{eq})}+\partial_{\delta}^{(1)}j_{\delta}^{(\mathrm{eq})} =0, \tag{64a}\] \[\partial_{t}^{(1)}j_{\delta}^{(\mathrm{eq})}+\partial_{\varepsilon}^{(1)}P_{\delta\varepsilon}^{(\mathrm{eq})} =0,\] (64b) \[\partial_{t}^{(1)}P_{\delta\varepsilon}^{(\mathrm{eq})}+\partial_{\zeta}^{(1)}Q_{\delta\varepsilon\zeta}^{(\mathrm{eq})} =-\frac{\omega^{(\mathrm{B})}}{\delta t}P_{\delta\varepsilon}^{(1)}+\frac{\omega^{(\mathrm{FP})}}{\delta t}\widehat{\Omega}^{(\mathrm{FP})}P_{\delta\varepsilon}^{(1)}, \tag{64c}\] in which we exploited the vanishing non-equilibrium moments of \(f_{i}^{(1)}\), as well as factorized out \(\sum_{i=0}^{Q-1}f_{i}\) from \(\Omega^{(\mathrm{FP})}\), leaving the moment of the operational expression itself \(\widehat{\Omega}^{(\mathrm{FP})}\). We will show later that the last moment equation, as in the LBM, recovers the viscous-stress tensor, albeit with additional stresses manifested by \(\widehat{\Omega}^{(\mathrm{FP})}P_{\delta\varepsilon}^{(1)}\).
To that end, the explicit formalism of \(P_{\delta\varepsilon}^{(1)}\) is not recovered directly from (64c) but rather appears in the first moment of the second-order perturbation expansion (60). Lastly, the equilibrium moments are defined from the equilibrium populations (11) as, \[\rho^{\rm(eq)} =\sum_{i=0}^{Q-1}f_{i}^{\rm(eq)}=\rho, \tag{64d}\] \[j_{\delta}^{\rm(eq)} =\sum_{i=0}^{Q-1}f_{i}^{\rm(eq)}v_{i,\delta}=\rho u_{\delta},\] (64e) \[P_{\delta\varepsilon}^{\rm(eq)} =\sum_{i=0}^{Q-1}f_{i}^{\rm(eq)}v_{i,\delta}v_{i,\varepsilon}=\rho u_{\delta}u_{\varepsilon}+p\delta_{\delta\varepsilon},\] (64f) \[Q_{\delta\varepsilon\zeta}^{\rm(eq)} =\sum_{i=0}^{Q-1}f_{i}^{\rm(eq)}v_{i,\delta}v_{i,\varepsilon}v_{i,\zeta}=\rho u_{\delta}u_{\varepsilon}u_{\zeta}+p\big{[}u\delta\big{]}_{\delta\varepsilon\zeta}, \tag{64g}\] where \(\big{[}u\delta\big{]}_{\delta\varepsilon\zeta}=u_{\delta}\delta_{\varepsilon\zeta}+u_{\varepsilon}\delta_{\delta\zeta}+u_{\zeta}\delta_{\delta\varepsilon}\). Applying the product rule to \(\partial_{\delta}^{(1)}(\rho u_{\delta})\) in the continuity equation (64a) enables rewriting the material derivative as, \[D_{t}^{(1)}\rho=-\rho\partial_{\delta}^{(1)}u_{\delta}. \tag{65}\] This material derivative appears in the product-rule expanded momentum equation considering the equilibrium moments \(j_{\delta}^{\rm(eq)}\) and \(P_{\delta\varepsilon}^{\rm(eq)}\), which after some algebra allows us to rewrite the momentum equation as, \[D_{t}^{(1)}u_{\delta}=-\frac{1}{\rho}\partial_{\varepsilon}^{(1)}p\delta_{\delta\varepsilon}. \tag{66}\] Noting that the same perturbation series apply to \(g_{i}\), its corresponding moment equation hierarchy is, \[\partial_{t}^{(1)}\mathcal{E}^{\rm(eq)}+\partial_{\delta}^{(1)}q_{\delta}^{\rm(eq)} =0, \tag{67a}\] \[\partial_{t}^{(1)}q_{\delta}^{\rm(eq)}+\partial_{\varepsilon}^{(1)}R_{\delta\varepsilon}^{\rm(eq)} =-\frac{\omega^{\rm(B)}}{\delta t}q_{\delta}^{(1)}+\frac{\omega^{\rm(FP)}}{\delta t}\widehat{\Omega}^{\rm(FP)}q_{\delta}^{(1)}, \tag{67b}\] where the moments of the equilibrium populations (12) read, \[\mathcal{E}^{\rm(eq)} =\sum_{i=0}^{Q-1}g_{i}^{\rm(eq)}=2\rho E, \tag{67c}\] \[q_{\delta}^{\rm(eq)} =\sum_{i=0}^{Q-1}g_{i}^{\rm(eq)}v_{i,\delta}=2\rho u_{\delta}H,\] (67d) \[R_{\delta\varepsilon}^{\rm(eq)} =\sum_{i=0}^{Q-1}g_{i}^{\rm(eq)}v_{i,\delta}v_{i,\varepsilon}=2\rho u_{\delta}u_{\varepsilon}\big{(}H+p/\rho\big{)}+2pH\delta_{\delta\varepsilon}, \tag{67e}\] and \(H=E+p/\rho=e+u^{2}/2+p/\rho\) is the total enthalpy. All of the equilibrium moments except \(Q_{\delta\varepsilon\zeta}^{\rm(eq)}\) are symmetric tensors. To arrive at the NSF equations, the second \(g_{i}\)-moment equation is not required, and thus we have excluded it. After some expansion with the product rule, and substituting the van der Waals internal energy (40), the derivative relation \(T\big{(}\partial p/\partial T\big{)}_{v}=a\rho^{2}+p\) on a vdW-EOS basis, and the rewritten continuity equation (65), it can be shown that the first-order contribution to the temperature equation is, \[D_{t}^{(1)}T=-\frac{T}{\rho c_{v}}\bigg{(}\frac{\partial p}{\partial T}\bigg{)}_{v}\partial_{\delta}^{(1)}u_{\delta}. \tag{68}\]
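The vdW derivative relation \(T\big{(}\partial p/\partial T\big{)}_{v}=a\rho^{2}+p\) invoked above follows directly from the pressure (41) and can be checked symbolically, as in the sketch below.

```python
# Symbolic sanity check (sketch) of the vdW derivative relation
# T*(dp/dT)_v = a*rho^2 + p, using the pressure from eq. (41).
import sympy as sp

rho, T, a, b, R = sp.symbols('rho T a b R', positive=True)
p = rho*R*T/(1 - b*rho) - a*rho**2       # eq. (41)
lhs = T * sp.diff(p, T)                  # T*(dp/dT) at constant volume
rhs = a*rho**2 + p
print(sp.simplify(lhs - rhs))            # prints 0
```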
Furthermore, an analogous pressure equation can be established by considering that \(p=p(\rho,T)\), for which the chain rule, \[D_{t}^{(1)}p=\bigg{(}\frac{\partial p}{\partial\rho}\bigg{)}_{T}D_{t}^{(1)}\rho+\bigg{(}\frac{\partial p}{\partial T}\bigg{)}_{\rho}D_{t}^{(1)}T, \tag{69}\] can be rewritten with the first-order continuity and energy equation contributions (65, 68), as well as the speed of sound (45), yielding, \[D_{t}^{(1)}p=-\rho\varsigma^{2}\partial_{\delta}^{(1)}u_{\delta}. \tag{70}\] This relation will be exploited later to derive the viscous stresses in the moment equations. So far we have obtained the necessary zeroth- and first-moment equations of \(f_{i},g_{i}\), and are now left with deriving an operational expression for the moment \(P_{\delta\varepsilon}^{(1)}\) terms in the second-moment equation (64c). This last part of the analysis revolves around the second-order perturbations (59), from which the zeroth moment becomes, \[\partial_{t}^{(2)}\rho=0, \tag{71}\] and consequently we can conclude that the density predominantly evolves on the \(\epsilon\) scales via (65). Formulating the first-moment equation of the \(\epsilon^{2}\) scales necessitates more algebraic work. First we note that in the expanded first-moment equation stemming from (60) successive instances of the momentum equation (66) appear and significantly simplify the result to, \[\partial_{t}^{(2)}\rho u_{\delta}=\partial_{\varepsilon}^{(1)}\bigg{(}\partial_{t}^{(1)}P_{\delta\varepsilon}^{\rm(eq)}+\partial_{\zeta}^{(1)}Q_{\delta\varepsilon\zeta}^{\rm(eq)}\bigg{)}\,\delta t\bigg{[}\bigg{(}\frac{1}{\omega^{\rm(B)}}-1\bigg{)}+\bigg{(}\frac{\omega^{\rm(FP)}}{2\omega^{\rm(B)}}\widehat{\Omega}^{\rm(FP)}-\frac{1-\omega^{\rm(B)}/2}{\omega^{\rm(FP)}}\frac{1}{\widehat{\Omega}^{\rm(FP)}}\bigg{)}\bigg{]}, \tag{72}\] where the moment \(\widehat{\Omega}^{\rm(FP)}\) of the operator is \(\sum_{i=0}^{Q-1}\widehat{\Omega}_{i}^{\rm(FP)}\). Nevertheless, given that \(\widehat{\Omega}^{\rm(FP)}=\widehat{\Omega}^{\rm(FP)}(\mathbf{x},\mathbf{u},t)\), we cannot immediately exclude the relaxation-frequency-dependent terms, except for \(\big{(}1/\omega^{\rm(B)}-1\big{)}\), in the derivative terms, and thus need to dissect the equation further.
After some algebra it follows that the second-order contributions to the momentum equation take the form, \[\partial_{t}^{(2)}u_{\delta}=\frac{\delta t}{\rho}\bigg{\{}\bigg{(} \frac{1}{\omega^{\rm(B)}}-1\bigg{)}\,\partial_{\varepsilon}^{(1)}\left(\partial_ {t}^{(1)}P_{\delta\varepsilon}^{\rm(eq)}+\partial_{\zeta}^{(1)}Q_{\delta \varepsilon\zeta}^{\rm(eq)}\right)\] \[\quad+\frac{\omega^{\rm(FP)}}{2\omega^{\rm(B)}}\partial_{ \varepsilon}^{(1)}\bigg{[}\widehat{\Omega}^{\rm(FP)}\left(\partial_{t}^{(1)}P _{\delta\varepsilon}^{\rm(eq)}+\partial_{\zeta}^{(1)}Q_{\delta\varepsilon \zeta}^{\rm(eq)}\right)+\left(P_{\delta\varepsilon}^{\rm(eq)}\partial_{t}^{( 1)}+Q_{\delta\varepsilon\zeta}^{\rm(eq)}\partial_{\zeta}^{(1)}\right) \widehat{\Omega}^{\rm(FP)}\bigg{]}\] \[\quad-\frac{1-\omega^{\rm(B)}/2}{\omega^{\rm(FP)}}\partial_{ \varepsilon}^{(1)}\bigg{[}\bigg{(}\widehat{\Omega}^{\rm(FP)}\bigg{)}^{-1} \left(\partial_{t}^{(1)}P_{\delta\varepsilon}^{\rm(eq)}+\partial_{\zeta}^{( 1)}Q_{\delta\varepsilon\zeta}^{\rm(eq)}\right)+\left(P_{\delta\varepsilon}^{ \rm(eq)}\partial_{t}^{(1)}+Q_{\delta\varepsilon\zeta}^{\rm(eq)}\partial_{ \zeta}^{(1)}\right)\left(\widehat{\Omega}^{\rm(FP)}\right)^{-1}\bigg{]}\bigg{\}}, \tag{73}\] in which the superposition of the three prefactors of \(\partial_{\varepsilon}^{(1)}(\,\cdots)\) recover the dynamic viscosity in the thermodynamic limit that together with \(\partial_{t}^{(1)}P_{\delta\varepsilon}^{\rm(eq)}+\partial_{\zeta}^{(1)}Q_{ \delta\varepsilon\zeta}^{\rm(eq)}\) constitute the viscous-stress tensor, as we will show in the following. In the result we can observe three major components to the stresses: the ordinary Boltzmann-equation contributions by \(\big{(}1/\omega^{\rm(B)}-1\big{)}\) as well as contributions from the FP operator with both the material derivative of itself (either in the continuous (6) or derived Hermite-space formalism) linearly dependent on the moments, as well as more FPE-analogous material derivatives of the moments operated on by the operator. In an analogous analysis of the perturbation expansion of \(g_{i}\), only the zero-moment is required for recovering the Fourier equation. Considering the perturbation expansion of not \(f_{i}\) but \(g_{i}\) in (60) the resulting moment equation simplifies to, \[\partial_{t}^{(2)}T=\frac{1}{2\rho C_{v}}\bigg{\{}\partial_{ \delta}^{(1)}\left(\partial_{t}^{(1)}q_{\delta}^{(\rm eq)}+\partial_{ \varepsilon}^{(1)}R_{\delta\varepsilon}^{\rm(eq)}\right)\delta t\bigg{[} \bigg{(}\frac{1}{\omega^{\rm(B)}}-1\bigg{)}\] \[\quad+\bigg{(}\frac{\omega^{\rm(FP)}}{2\omega^{\rm(B)}}\widehat {\Omega}^{\rm(FP)}-\frac{1-\omega^{\rm(B)}/2}{\omega^{\rm(FP)}}\frac{1}{ \widehat{\Omega}^{\rm(FP)}}\bigg{)}\bigg{]}-2\rho u_{\delta}\partial_{t}^{(2 )}u_{\delta}\bigg{\}}, \tag{74}\] which requires exploiting the thermodynamic pressure (41), the internal energy (40), the continuity equation (65), and the aforementioned pressure equation (70). 
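For orientation, every moment in this hierarchy, equilibrium or otherwise, is a plain weighted sum over the discrete velocity set, so the bookkeeping is compact in code. A minimal numpy sketch mirroring (64d)-(64g) and (67c)-(67e), with random placeholder populations standing in for a single lattice node:

```python
# Raw moments of discrete populations at one node; values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
Q, D = 9, 2                      # e.g. a D2Q9-type stencil (illustrative)
f = rng.random(Q)                # populations f_i
g = rng.random(Q)                # energy populations g_i
v = rng.standard_normal((Q, D))  # shifted discrete velocities v_i

rho = f.sum()                                  # zeroth moment, cf. (64d)
j = np.einsum("i,id->d", f, v)                 # momentum density, cf. (64e)
P = np.einsum("i,id,ie->de", f, v, v)          # momentum flux, cf. (64f)
Qm = np.einsum("i,id,ie,iz->dez", f, v, v, v)  # third-order moment, cf. (64g)
E2 = g.sum()                                   # 2*rho*E, cf. (67c)
q = np.einsum("i,id->d", g, v)                 # heat-flux-type moment, cf. (67d)
```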
Finally, \(\widehat{\Omega}^{\rm(FP)}=\widehat{\Omega}^{\rm(FP)}(\mathbf{x},\mathbf{u},t)\), necessitates further expansion yielding, \[\partial_{t}^{(2)}T=\frac{1}{2\rho C_{v}}\bigg{\{}\delta t\left( \frac{1}{\omega^{\rm(B)}}-1\right)\partial_{\delta}^{(1)}\left(\partial_{t}^ {(1)}q_{\delta}^{(\rm eq)}+\partial_{\varepsilon}^{(1)}R_{\delta\varepsilon} ^{\rm(eq)}\right)-2\rho u_{\delta}\partial_{t}^{(2)}u_{\delta}\] \[\quad+\delta t\left(\frac{\omega^{\rm(FP)}}{2\omega^{\rm(B)}} \right)\partial_{\delta}^{(1)}\left[\widehat{\Omega}^{\rm(FP)}\left(\partial_ {t}^{(1)}q_{\delta}^{(\rm eq)}+\partial_{\varepsilon}^{(1)}R_{\delta \varepsilon}^{\rm(eq)}\right)+\left(q_{\delta}^{(\rm eq)}\partial_{t}^{(1)}+ R_{\delta\varepsilon}^{\rm(eq)}\partial_{\varepsilon}^{(1)}\right) \widehat{\Omega}^{\rm(FP)}\right]\] \[\quad-\delta t\left(\frac{1-\omega^{\rm(B)}/2}{\omega^{\rm(FP)}} \right)\partial_{\delta}^{(1)}\left[\left(\widehat{\Omega}^{\rm(FP)}\right)^{-1 }\left(\partial_{t}^{(1)}q_{\delta}^{(\rm eq)}+\partial_{\varepsilon}^{(1)}R_ {\delta\varepsilon}^{\rm(eq)}\right)+\left(q_{\delta}^{(\rm eq)}\partial_{t}^ {(1)}+R_{\delta\varepsilon}^{\rm(eq)}\partial_{\varepsilon}^{(1)}\right) \left(\widehat{\Omega}^{\rm(FP)}\right)^{-1}\right]\bigg{\}}. \tag{75}\] This hierarchy of equations--specifically the continuity (65,71), the momentum (66,73), and the energy (68,75) equations--form the foundation for recovering the NSF equations by _recombination_ of the perturbation components into the native series formulation (46). #### iv.2.3 Thermo-hydrodynamic limit: compressible, fluctuating Navier-Stokes-Fourier equations At this point, we recover the fluctuating Navier-Stokes-Fourier equations from the hierarchy of perturbed moment equations by recombining each of the orders in \(\epsilon\). By merging (65,71), and recognizing that \(\epsilon\partial_{t}^{(1)}+\epsilon^{2}\partial_{t}^{(2)}=\partial_{t}\) and \(\epsilon\partial_{\delta}^{(1)}=\partial_{\delta}\) we get the continuity equation, \[\partial_{t}\rho+u_{\delta}\partial_{\delta}\rho=-\rho\partial_{ \delta}u_{\delta}. \tag{76}\] Similarly, the momentum equation becomes, \[\rho D_{t}u_{\delta}=-\partial_{\varepsilon}p\delta_{\delta\varepsilon}-\partial_ {\varepsilon}\big{(}\tau_{\delta\varepsilon}+\widetilde{\tau}_{\delta \varepsilon}\big{)},\] (77a) where the viscous-stress tensor stemming exclusively from Boltzmann-type collisions is, \[\tau_{\delta\varepsilon}=-\mu^{\rm(B)}\left(\partial_{\delta}u_{ \varepsilon}+\partial_{\varepsilon}u_{\delta}-\frac{2}{D}\big{(}\partial_{ \zeta}u_{\zeta}\big{)}\delta_{\delta\varepsilon}\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\zeta^{\rm(B)} \big{(}\partial_{\zeta}u_{\zeta}\big{)}\delta_{\delta\varepsilon},\] (77b) with the shear and volume viscosities respectively as, \[\mu^{\rm(B)}=\left(\frac{1}{\omega^{\rm(B)}}-1\right)p\delta t, \tag{77c}\] \[\zeta^{\rm(B)}=\left(\frac{1}{\omega^{\rm(B)}}-1\right)\left( \frac{D+2}{D}-\frac{\rho\varsigma^{2}}{p}\right)p\delta t. 
\tag{77d}\] In addition to this "traditional" stress tensor, we also get "noisy" stress tensor components perturbed directly by the FP operator and its inverse, separated by complementing _perturbative_ components (denoted by "\(+\)" and "\(-\)" subscripts) that concurrently amplify and attenuate stresses, \[\widetilde{\tau}_{\delta\varepsilon} =\overbrace{\widehat{\Omega}^{(\text{FP})}\left[-\mu_{+}^{( \text{FPB})}\left(\partial_{\delta}u_{\varepsilon}+\partial_{\varepsilon}u_{ \delta}-\overbrace{{2\over D}\big{(}\partial_{\zeta}u_{\zeta}\big{)}\delta_{ \delta\varepsilon}\big{)}-\zeta_{+}^{(\text{FPB})}\big{(}\partial_{\zeta}u_{ \zeta}\big{)}\delta_{\delta\varepsilon}\right]}^{(\text{I})}\] \[+\frac{1}{\widehat{\Omega}^{(\text{FP})}}\left[-\mu_{-}^{(\text{ FPB})}\left(\partial_{\delta}u_{\varepsilon}+\partial_{\varepsilon}u_{ \delta}-\frac{2}{D}\big{(}\partial_{\zeta}u_{\zeta}\big{)}\delta_{\delta \varepsilon}\right)-\zeta_{-}^{(\text{FPB})}\big{(}\partial_{\zeta}u_{\zeta} \big{)}\delta_{\delta\varepsilon}\right]\] \[+\mu_{+}^{(\text{FPB})}\frac{P_{\delta\varepsilon}^{(\text{eq})} }{p}\left\{\left[\left(\partial_{v_{\zeta}^{(\text{eq})}}\widehat{\Omega}^{( \text{FP})}\right)\frac{A_{\zeta}\rho\varsigma^{2}}{\gamma m}+\left(\partial_{ v_{\overline{\zeta}}^{2}}\widehat{\Omega}^{(\text{FP})}\right)\frac{k_{B}T}{m \rho c_{v}}\left(\frac{\partial p}{\partial T}\right)_{v}\right]\left(\partial _{\zeta}u_{\zeta}\right)+\left(\partial_{u_{\zeta}}\widehat{\Omega}^{( \text{FP})}\right)\frac{1}{\rho}\big{(}\partial_{\delta}p\big{)}\delta_{\delta \zeta}\right\} \tag{77e}\] in which the viscosities manifest due to both the Boltzmann and FP relaxation frequencies, \[\mu_{+}^{(\text{FPB})} =\left(\frac{\omega^{(\text{FP})}}{2\omega^{(\text{B})}}\right)p \delta t,\] \[\mu_{-}^{(\text{FPB})} =\left(-\frac{1-\omega^{(\text{B})}/2}{\omega^{(\text{FP})}} \right)p\delta t,\] \[\zeta_{+}^{(\text{FPB})} =\left(\frac{\omega^{(\text{FP})}}{2\omega^{(\text{B})}}\right) \left(\frac{D+2}{D}-\frac{\rho\varsigma^{2}}{p}\right)p\delta t,\] \[\zeta_{-}^{(\text{FPB})} =\left(-\frac{1-\omega^{(\text{B})}/2}{\omega^{(\text{FP})}} \right)\left(\frac{D+2}{D}-\frac{\rho\varsigma^{2}}{p}\right)p\delta t. \tag{77f}\] Thus, we get four pair-wise complementing "fluctuating" stress tensors in (77e): two traditional stress tensor components (I,II) pre-factored by FP-operator \(\widehat{\Omega}^{(\text{FP})}\) and its inverse counterpart \(1/\widehat{\Omega}^{(\text{FP})}\). The remaining two viscous-stress components (III,IV) are due to the spatial and temporal derivatives of \(\widehat{\Omega}^{(\text{FP})}\) in (77e), and account for the diffusion contributed from the FP operator itself as it varies due to external forces (via \(\mathbf{v}^{(\eta)}\) in the \(\partial_{v_{\zeta}^{(\eta)}}\) terms), thermal fluctuations (via \(v_{T}^{2}\) in the \(\partial_{v_{T}^{2}}\) terms), and continuum velocity (\(\mathbf{u}\) in the \(\partial_{u_{\zeta}}\) terms), respectively. We duly note that by _fluctuating_ we imply that the stochastic dynamics of the Langevin equation is captured by the dynamics in the velocity space of the FPE rather than computed directly as discrete, stochastic events sampled at random from a characteristic Gaussian white noise distribution, and that this noise permeates the stresses. In obtaining the velocity gradient and divergence terms in the stresses, i.e. 
\(\big{(}\partial_{\xi}u_{\varepsilon}+\partial_{\varepsilon}u_{\delta}-2/D \big{(}\partial_{\zeta}u_{\zeta}\big{)}\delta_{\delta\varepsilon}\big{)}\) and \(\big{(}\partial_{\zeta}u_{\zeta}\big{)}\delta_{\delta\varepsilon}\), we have exploited the expanded formulation, \[\partial_{t}^{(1)}P_{\delta\varepsilon}^{(\text{eq})} +\partial_{\zeta}^{(1)}Q_{\delta\varepsilon\zeta}^{(\text{eq})}=p \left(\partial_{\delta}^{(1)}u_{\varepsilon}+\partial_{\varepsilon}^{(1)}u_{ \delta}\right)\] \[+\left(p-\rho\varsigma^{2}\right)\partial_{\zeta}^{(1)}u_{\zeta} \delta_{\delta\varepsilon}, \tag{78}\] previously identified in (73). The noisy stresses arise analogously to the stresses by the Boltzmann operator. During the recombination we obtain the auxiliary components with the derivatives \(\left(P_{\delta\varepsilon}^{(\text{eq})}\partial_{t}^{(1)}+Q_{\delta \varepsilon\zeta}^{(\text{eq})}\partial_{\zeta}^{(1)}\right)\left\{\widehat{ \Omega}^{(\text{FP})};1/\widehat{\Omega}^{(\text{FP})}\right\}\) imposed on the FP operational expressions and its inverse counterpart, respectively. This form can be recast to a convenient material-derivative basis as \(Q_{\delta\varepsilon\zeta}^{(\text{eq})}=P_{\delta\varepsilon}^{(\text{eq})}u_ {\zeta}\) and thus, \[\left(P_{\delta\varepsilon}^{(\text{eq})}\partial_{t}^{(1)}+Q_{\delta \varepsilon\zeta}^{(\text{eq})}\partial_{\zeta}^{(1)}\right)=P_{\delta \varepsilon}^{(\text{eq})}D_{t}^{(1)}. \tag{79}\] This trick enables us to reuse the chain rule to effectively rewrite \(P_{\delta\varepsilon}^{(\text{eq})}D_{t}^{(1)}\Big{\{}\widehat{\Omega}^{( \text{FP})},1/\widehat{\Omega}^{(\text{FP})}\Big{\}}\) where we note that \(\widehat{\Omega}^{(\text{FP})}=\widehat{\Omega}^{(\text{FP})}\big{(}\mathbf{u}^{ \eta},\mathbf{v}^{(\eta)},v_{T}^{\eta}(T)\big{)}\) is a function of velocity polynomials of various even orders \(n\), \[D_{t}^{(1)}\widehat{\Omega}^{(\text{FP})} =\left(\frac{\partial\widehat{\Omega}^{(\text{FP})}}{\partial u_ {\zeta}}\right)_{v_{\zeta}^{(\eta)},v_{T}^{2}}D_{t}^{(1)}u_{\zeta}\] \[+\left(\frac{\partial\widehat{\Omega}^{(\text{FP})}}{\partial v_ {\zeta}^{(\eta)}}\right)_{v_{T}^{2},u_{\zeta}}D_{t}^{(1)}v_{\zeta}^{(\eta)}\] \[+\left(\frac{\partial\widehat{\Omega}^{(\text{FP})}}{\partial v_{T} ^{2}}\right)_{u_{\zeta},v_{\zeta}^{(\eta)}}D_{t}^{(1)}v_{T}^{2}. \tag{80}\] Therein, we in addition to (66) can identify the following material derivatives derived previously by reformu lating \(v_{T}^{2},v_{\zeta}^{(\eta)}\). Firstly, we can exploit the previously derived material derivative (68) that combined with the direction-independent, root-mean-square formulation of the average thermal molecular speed \(v_{T}^{2}=k_{B}T/m\) for molecules of mass \(m\) yields, \[D_{t}^{(1)}v_{T}^{2} =\frac{k_{B}}{m}D_{t}^{(1)}T\] \[=-\frac{k_{B}T}{m\rho c_{v}}\bigg{(}\frac{\partial p}{\partial T }\bigg{)}_{v}\partial_{\zeta}^{(1)}u_{\zeta}. \tag{81}\] As we have not specified a particular force model in this study we rely on Newton's second law of motion \(\eta_{\zeta}=F_{\zeta}/m\) and (70) to formulate the material derivative of \(v_{\zeta}^{(\eta)}\), \[D_{t}^{(1)}v_{\zeta}^{(\eta)}=-\frac{A_{\zeta}\rho\varsigma^{2}}{\gamma m} \partial_{\zeta}^{(1)}u_{\zeta}, \tag{82}\] where we have further introduced the unit area \(A_{\zeta}\) as a consequence of converting the force into a pressure \(p=F_{\zeta}/A_{\zeta}\), such that \(A_{\zeta}\) in a simulation scenario would be the grid-cell cross-sectional area. The non-ideal speed of sound \(\varsigma^{2}\) was previously found in (45). 
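In operational terms, (78) reduces the bracketed derivative combinations to velocity gradients, so the Boltzmann part of the stress can be assembled directly from \(\nabla\mathbf{u}\). A sketch follows; the function names are ours, and `cs2` stands for the non-ideal sound speed \(\varsigma^{2}\) of (45):

```python
import numpy as np

def boltzmann_viscosities(omega_B, p, dt, rho, cs2, D=3):
    """Shear and volume viscosities of eqs. (77c), (77d)."""
    mu_B = (1.0 / omega_B - 1.0) * p * dt
    zeta_B = (1.0 / omega_B - 1.0) * ((D + 2.0) / D - rho * cs2 / p) * p * dt
    return mu_B, zeta_B

def boltzmann_stress(grad_u, mu_B, zeta_B):
    """Viscous stress of eq. (77b); grad_u[d, e] = du_e/dx_d."""
    D = grad_u.shape[0]
    div_u = np.trace(grad_u)
    sym = grad_u + grad_u.T                  # du_d/dx_e + du_e/dx_d
    return -mu_B * (sym - (2.0 / D) * div_u * np.eye(D)) \
           - zeta_B * div_u * np.eye(D)
```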
Consequently, we obtain the explicit result for, \[P_{\delta\varepsilon}^{(\mathrm{eq})}D_{t}^{(1)}\widehat{ \Omega}^{(\mathrm{FP})}=-P_{\delta\varepsilon}^{(\mathrm{eq})}\Bigg{\{}\bigg{[} \Big{(}\partial_{v_{\zeta}^{(\eta)}}\widehat{\Omega}^{(\mathrm{FP})}\Big{)} \,\frac{A_{\zeta}\rho\varsigma^{2}}{\gamma m}\] \[\quad+\Big{(}\partial_{v_{\overline{v}}^{2}}\widehat{\Omega}^{ (\mathrm{FP})}\Big{)}\,\frac{k_{B}T}{m\rho c_{v}}\bigg{(}\frac{\partial p}{ \partial T}\bigg{)}_{v}\bigg{]}(\partial_{\zeta}u_{\zeta})\] \[\quad+\Big{(}\partial_{u_{\zeta}^{\ast}}\widehat{\Omega}^{( \mathrm{FP})}\Big{)}\,\frac{1}{\rho}\big{(}\partial_{\delta p}\big{)}\delta_{ \delta\zeta}\Bigg{\}}, \tag{83}\] and similarly for the \(1/\widehat{\Omega}^{(\mathrm{FP})}\) destructive components with \(\mu_{-}^{(\mathrm{FPB})}\), where we in both cases recombine \(\partial_{\zeta}^{(1)}\) according to our previously defined perturbation series. To obtain the Fourier equation we consider (75) wherein we substitute with (73). The shear velocities arise from the derivatives of the moments, \[\partial_{t}^{(1)}q_{\delta}^{(\mathrm{eq})}+\partial_{\varepsilon }^{(1)}R_{\delta\varepsilon}^{(\mathrm{eq})}=2\left(p-\rho\varsigma^{2}\right) \partial_{\zeta}^{(1)}u_{\zeta}u_{\delta}\] \[\qquad\qquad+2pu_{\varepsilon}\left(\partial_{\varepsilon}^{(1)}u _{\delta}+\partial_{\delta}^{(1)}u_{\varepsilon}\right)+2p\partial_{\delta}^{( 1)}h, \tag{84}\] where \(h=e+p/\rho\) is the enthalpy. As noted by Rebhanian _et al._[2, 50] the resulting heat flux would be \(\mathbf{q}=-\mu\nabla h\) and consequently the \(g_{i}\)-kinetic equation needs to be augmented with an energy correction term \(M_{0}=2\partial_{\alpha}\big{(}-\mu\partial_{\alpha}h+k\partial_{\alpha}T\big{)}\) to recover the correct Fourier law. For real gasses \(\partial_{\alpha}h=c_{p}\partial_{\alpha}T+v\big{(}1-\beta\big{)}\partial_{ \alpha}p\) with \(\beta=\rho\big{(}\partial v/\partial T\big{)}_{p}\) being the thermal-expansion coefficient. After applying the product rule to the shear-velocity terms, the substituted result can be recombined with the first-order perturbation contribution (68) confirming the conservation of the Fourier equation, \[\rho c_{v}D_{t}T=-\big{(}\tau_{\delta\varepsilon}+\widetilde{\tau}_{\delta \varepsilon}\big{)}\partial_{\delta}u_{\varepsilon}-T\bigg{(}\frac{\partial p }{\partial T}\bigg{)}_{v}\partial_{\delta}u_{\delta}-\partial_{\delta}q_{ \delta}, \tag{85}\] where the normal and fluctuating shear stresses organize as expected in the same formulation of the tensors (77b,77e). In summary, we recover the compressible, fluctuating Navier-Stokes-Fourier hierarchy of equations, \[D_{t}\rho =-\rho\nabla\cdot\mathbf{u}, \tag{86}\] \[\rho D_{t}\mathbf{u} =-\nabla p-\nabla\cdot(\mathbf{\tau}+\widetilde{\mathbf{\tau}}),\] (87) \[\rho c_{v}D_{t}T =-(\mathbf{\tau}+\widetilde{\mathbf{\tau}}):\nabla\mathbf{u}-T\bigg{(}\frac{ \partial p}{\partial T}\bigg{)}_{v}\nabla\cdot\mathbf{u}-\nabla\cdot\mathbf{q}, \tag{88}\] in which the Fourier heat flux is defined as \(\mathbf{q}=-k\nabla T\). 
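As a compact illustration of the recovered system, one right-hand-side evaluation of (86)-(88) can be written down for a one-dimensional flow of a three-dimensional fluid (so the longitudinal stress carries \(4\mu/3+\zeta\)), with periodic central differences. This is only a finite-difference sketch of the continuum limit, not the lattice scheme itself, and the argument names are ours:

```python
import numpy as np

def nsf_rhs_1d(x, rho, u, T, p, mu, zeta, k, dpdT_v, c_v):
    """One evaluation of the right-hand sides of eqs. (86)-(88) in 1D."""
    dx = x[1] - x[0]
    ddx = lambda a: (np.roll(a, -1) - np.roll(a, 1)) / (2 * dx)
    ux = ddx(u)
    tau = -(4.0 * mu / 3.0 + zeta) * ux        # longitudinal viscous stress
    q = -k * ddx(T)                            # Fourier heat flux
    drho = -(u * ddx(rho) + rho * ux)                      # continuity (86)
    du = -u * ux - (ddx(p) + ddx(tau)) / rho               # momentum (87)
    dT = -u * ddx(T) \
         + (-tau * ux - T * dpdT_v * ux - ddx(q)) / (rho * c_v)  # energy (88)
    return drho, du, dT
```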
To enable simulations we superimpose the presented viscosities into effective shear and volume viscosities, \[\mu=\mu^{(\mathrm{B})}+\mu_{+}^{(\mathrm{FPB})}+\mu_{-}^{(\mathrm{FPB})}=\bigg{[}\bigg{(}\frac{1}{\omega^{(\mathrm{B})}}-1\bigg{)}+\bigg{(}\frac{\omega^{(\mathrm{FP})}}{2\omega^{(\mathrm{B})}}\bigg{)}-\bigg{(}\frac{1-\omega^{(\mathrm{B})}/2}{\omega^{(\mathrm{FP})}}\bigg{)}\bigg{]}\,p\delta t, \tag{89}\] \[\zeta=\zeta^{(\mathrm{B})}+\zeta_{+}^{(\mathrm{FPB})}+\zeta_{-}^{(\mathrm{FPB})}=\bigg{[}\bigg{(}\frac{1}{\omega^{(\mathrm{B})}}-1\bigg{)}+\bigg{(}\frac{\omega^{(\mathrm{FP})}}{2\omega^{(\mathrm{B})}}\bigg{)}-\bigg{(}\frac{1-\omega^{(\mathrm{B})}/2}{\omega^{(\mathrm{FP})}}\bigg{)}\bigg{]}\bigg{(}\frac{D+2}{D}-\frac{\rho\varsigma^{2}}{p}\bigg{)}\,p\delta t, \tag{90}\] where known initial values of \(\mu,\zeta\) can be defined and subsequently used to tune the time-step size \(\delta t\) and the kinetic relaxation frequencies \(\omega^{(\mathrm{B})},\omega^{(\mathrm{FP})}(\gamma)\), which in turn set the thermal noise intensity governed by the Langevin equation. This result, along with the kinetic equations (13), constitutes the foundation of our numerical method. ## IV Discussion As the macroscopic observables \(\mathbf{\lambda}(\mathbf{x},t)=\{\rho,\mathbf{u},E\}\) are inherently driven by the Langevin forces imposed via our LFPBE, it is important to convince ourselves that the reciprocal behaviour between the perturbations instigated by these Langevin forces and the response in \(\mathbf{\lambda}\) evolves in a thermodynamically consistent manner, especially when the perturbations bring the thermodynamic state far away from equilibrium. A systematic approach for assessing this is provided by the celebrated Onsager relations [59], which are used to set the thermal noise strength while obeying the fluctuation-dissipation theorem (FDT) and ensuring that the simulated populations in the LFPBEs meet the detailed balance conditions [46]. Not only can they be applied to the Markovian Ornstein-Uhlenbeck process that constitutes the FPE, they can be used to observe the FDT applied to macroscopic observables [60]. This is particularly important in our derived framework, as we have heuristically added BGK relaxation into our kinetic equations (13), resulting in the viscous coefficients (89, 90), which do not yet provide a physics-informed procedure for setting the fluctuation intensity and dissipation independently of the BGK collisions. Setting the noise intensity should additionally be done in tandem with mixture rules for the BGK collisions, which account for hard-sphere collisions in a liquid/vapour mixture, relative to the FP interactions that are more important in liquids. From the Langevin equation, we stated that the FDT for the noise intensity of the Wiener process \(W(t)\) in the molecular velocities evolves only in time according to the ensemble average in (2). However, as the hydrodynamic dissipation attributed to the fluctuating stresses \(\widetilde{\mathbf{\tau}}\) (and possibly a corresponding heat flux \(\widetilde{\mathbf{q}}\) yet to be derived) is indirectly correlated with the fluctuating fields \(\widetilde{\mathbf{\lambda}}(\mathbf{x},t)=\big{\{}\widetilde{\rho},\widetilde{\mathbf{u}},\widetilde{E}\big{\}}\) corresponding to the Wiener process, the fluctuating variables would need to obey tailored fluctuation-dissipation balances.
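Returning for a moment to the practical side of the effective coefficients: (89) and (90) are directly evaluable, and since (89) is a quadratic in \(\omega^{(\mathrm{FP})}\) it can be inverted at fixed \(\omega^{(\mathrm{B})}\) to hit a target viscosity. A sketch (function names are ours):

```python
import numpy as np

def effective_viscosities(omega_B, omega_FP, p, dt, rho, cs2, D=3):
    """Superposed Boltzmann + Fokker-Planck viscosities, eqs. (89), (90)."""
    pref = (1 / omega_B - 1) + omega_FP / (2 * omega_B) \
           - (1 - omega_B / 2) / omega_FP
    mu = pref * p * dt
    zeta = pref * ((D + 2) / D - rho * cs2 / p) * p * dt
    return mu, zeta

def omega_FP_for_mu(mu, omega_B, p, dt):
    """Invert (89) for omega_FP at fixed omega_B (quadratic in omega_FP)."""
    m = mu / (p * dt) - (1 / omega_B - 1)
    return omega_B * (m + np.sqrt(m**2 + 2 * (1 - omega_B / 2) / omega_B))
```

For \(0<\omega^{(\mathrm{B})}<2\) the discriminant is positive and the returned root is the physical (positive) one; a round trip through `effective_viscosities` recovers the target \(\mu\).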
In addition, \(\mu,\zeta\) should similarly obey a fluctuation-dissipation balance as they are functions of \(\widetilde{p},\widetilde{\rho}\) and also have equilibrium values \(\big{\{}\mu^{\rm(eq)},\zeta^{\rm(eq)}\big{\}}(p^{\rm(eq)},\rho^{\rm(eq)})\). Examples of such balances could be inspired by the work of Gallo (1998). The assumed white noise in the Wiener process underlying our Langevin and FPE equations represents an initial prototype for a simulation framework of thermal noise. The frequency independence of the white noise spectrum is an idealization and may not apply accurately to all fluids, and especially not impure fluids. For example, previous works on Brownian motion in water showed evidence for yellow noise scaling with the square root of the frequency in its weak-noise spectrum in proximity to solid walls (Kurz and Grest, 2015), effectively rendering the thermal noise non-Markovian, i.e. the acceleration of particles is inherently dependent on their past motion, introducing a memory effect. Moreover, the colored-noise amplitude depended strongly on the distance to the wall. As such, extensions of our framework to impure, multi-component cavitating and boiling fluids may need to revisit the frequency spectrum of the thermodynamic noise. We furthermore append a commentary on the implications of thermal fluctuations for nucleation phenomena. Although thermal fluctuations are irrelevant in disordered, homogeneous fluid domains, thermal noise is very important in proximity to critical points, such as a spinodal, where fluctuations can perturb the fluid into a new stable thermodynamic state (Kurz and Grest, 2015). Prospectively, assessing the fluctuating behaviour around these critical points should be done rigorously on the basis of variables that exhibit sensitivity to the noise strength and correlate with the local metastable state. To that end, one can track order parameters \(\psi(\mathbf{x},t)\), such as that evolved by the Cahn-Hilliard-Cook (CHC) equation \(\partial_{t}\psi=\nabla\cdot\left[\mathscr{D}\nabla\big{(}\delta\mathscr{T}(\psi)/\delta\psi\big{)}+\tilde{\theta}(\mathbf{x},t)\right]\) (Cahn and Hilliard, 2015), which can give an indication of the mutual interaction and similarity between non-homogenizing regions of a phase-transitioning fluid. The CHC equation is informed by a diffusion coefficient \(\mathscr{D}\), the Helmholtz potential \(\mathscr{T}=E(\psi)-TS(\psi)\), and lastly the noise term \(\tilde{\theta}\) that obeys the FDT (Bartos and Hansen, 2015; Grest and Grest, 2015) responsible for the correct equilibration towards the Maxwell-Boltzmann distribution. Usually, this order parameter contributes an additional surface-tension force \(-\psi\nabla\mu_{\mathscr{D}}\) to the RHS of (87), where \(\mu_{\mathscr{D}}(\mathbf{x},t)=\delta\mathscr{T}(\psi)/\delta\psi\) is the chemical potential. Additionally, the order parameter contributes a noisy-momentum flux \(\nabla\cdot\widetilde{\mathbf{\tau}}^{(\psi)}\) that also obeys the FDT. Evolving an equation such as the CHC may provide critical insight into metastable effects interacting with local thermodynamics and surface-tension forces at interfaces. We have not considered surface-tension forces in our current single-component implementation of the FPBE, but they naturally become relevant in multi-component extensions where multiple species separate at anomalous rates. To that end, there already exists work on \(N\)-component formulations of the FPE (Gallo, 2015).
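To make the CHC dynamics concrete, here is a minimal one-dimensional sketch. The quartic double-well free energy and the noise amplitude are our assumptions for illustration only; a production scheme would calibrate the noise against the discrete fluctuation-dissipation balance discussed above:

```python
# 1D Cahn-Hilliard-Cook sketch: d_t psi = d_x [ Dc * d_x mu + noise ],
# with mu = psi^3 - psi - kappa * psi_xx (double-well assumption).
import numpy as np

rng = np.random.default_rng(1)
N, dx, dt = 256, 1.0, 0.01
Dc, kappa, amp = 1.0, 1.0, 0.05          # amp: illustrative noise strength

psi = 0.01 * rng.standard_normal(N)      # small fluctuation about psi = 0
lap = lambda a: (np.roll(a, -1) - 2 * a + np.roll(a, 1)) / dx**2
ddx = lambda a: (np.roll(a, -1) - np.roll(a, 1)) / (2 * dx)

for _ in range(5000):
    mu = psi**3 - psi - kappa * lap(psi)       # chemical potential
    noise = amp * rng.standard_normal(N) / np.sqrt(dt * dx)
    psi += dt * (Dc * lap(mu) + ddx(noise))    # conservative CHC update
# psi now shows coarsening domains perturbed by the noise
```

Because the noise enters through a divergence, the total \(\sum_i\psi_i\) is conserved to round-off, mirroring the conservative structure of the CHC equation.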
The entropy generation associated with irreversibilities from thermal fluctuations can be obtained exactly through further mathematical analysis of the FPE (Gallo, 2015). Ultimately, this contribution to the system entropy may be non-trivial in phase-transitioning fluids, in which perturbations from noise and metastability, and therefore the characteristic relaxation rates for the phase transitions, are fundamentally dictated by the entropy-extremum principle (Bartos and Hansen, 2015). This entropy generation in turn predicts the Helmholtz and Gibbs free energies \(G(S)\), such that there is a coupling between the noise, thermodynamics, and disorder in phase transitions. The free energy further enables predicting the degree of metastability under the saturation curve (Gallo, 2015). ## V Conclusion To enable the simulation of the diffusive effects from thermal noise of Langevin particles in thermal flows, we present a lattice-Fokker-Planck-Boltzmann model of the phase-space continuous Fokker-Planck-Boltzmann equation based on Hermite quadrature and the conventional _DmQn_ lattice models. As reported in the literature, the lattice-Fokker-Planck equation usually suffers from numerical instability at lower Mach numbers than the Bhatnagar-Gross-Krook operator. Thus, to mitigate sources of instability in the Hermite series in highly compressible and thermal flow scenarios, we have employed the kinetic theory with discrete velocities \(\mathbf{v}_{i}=\sqrt{\theta}\mathbf{c}_{i}+\mathbf{u}\) from Particles-on-Demand (Bartos and Hansen, 2015), which for the BGK operator has been reported to exhibit unconditional stability. This is facilitated by the adaptivity of these velocities to the local thermohydrodynamics captured in the reduced pressure \(\theta\) and the continuum velocity \(\mathbf{u}\). Moreover, this adaptivity dovetails nicely with the introduction of the thermal noise and the associated non-equilibrium thermodynamic contributions of the Fokker-Planck operator, as the fluctuations are directly incorporated into the discrete velocity space. By Chapman-Enskog multiscale analysis we show that the two resulting lattice kinetic equations (13) with lattice-FP operators (38a, 38b) by proxy solve the Navier-Stokes-Fourier equations, and crucially that the dissipative effects of thermal noise induce viscous stresses \(\widetilde{\mathbf{\tau}}\) whose shearing components are damped by the dynamic viscosity \(\mu\) (89) and whose divergence terms are damped by the volume viscosity \(\zeta\) (90). The viscosities are tuned by independently setting the relaxation frequencies \(\omega^{\rm(FP)}\) and \(\omega^{\rm(B)}\) of Fokker-Planck interactions and BGK collisions, respectively, where \(\omega^{\rm(FP)}\) directly and uniquely defines the thermal-noise spectrum via the friction factor \(\gamma\) in the Langevin equation (1). Consequently, the lattice equations in tandem with the relaxation frequencies enable numerical simulations of metastable fluids with phase transitions induced by thermal white noise. ###### Acknowledgements. We wish to thank C.S. Wang (University of British Columbia, Okanagan) for his thorough introduction to the lattice-Boltzmann method, and A. Garcia and M. Sharifi (University of British Columbia, Okanagan) for fruitful discussions. K.J.P. sincerely thanks T. Colonius (California Institute of Technology) for advisory support during his stay at the Institute that kick-started this research, and J.R. Chreim (California Institute of Technology) for numerous, inspiring discussions on the current research.
Moreover, this research would not have been possible without the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC)--under Development Grant No. 519885, the American-Scandinavian Foundation, Hedorf's Foundation, and Mitacs Globalink (IT27768). K.J.P. conceptualized this research, reviewed the literature, conducted the mathematical derivations, and wrote the manuscript draft. J.R.B. provided review and editing of the written material and advisory support for the project. ## Appendix A Eigenfunctions of the Fokker-Planck and acceleration operators In this section, we aim to derive the explicit eigenfunctions of the continuous FP and acceleration operators. A crucial step in discretizing the operators is to recast the entire continuous operators in convenient, functional forms that do not include any gradient terms. Reiterating, we seek two Hermite series that approximates each of the continuous operators, \[\Big{\{}\Omega_{i}^{(\eta)},\Omega_{i}^{(\text{FP})}\Big{\}}=w_{i}\sum_{l=0}^{K }\frac{1}{v_{T}^{2l}l!}\Big{\{}\Omega_{\underline{\alpha}}^{(\eta)},\Omega_{ \underline{\alpha}}^{(\text{FP})}\Big{\}}\mathscr{H}_{\underline{\alpha}}^{(l )}(\mathbf{v}_{i}),\] (10a) where the Hermite coefficients integrate the operators acting on the population \[f(\mathbf{v};\mathbf{x},t)\], in phase space, \[\Big{\{}\Omega_{\underline{\alpha}}^{(\eta)},\Omega_{\underline{ \alpha}}^{(\text{FP})}\Big{\}}(l)=\int d\mathbf{v}\ \Big{\{}\widehat{\Omega}^{(\eta)},\widehat{\Omega}^{(\text{FP})}\Big{\}}[f] \mathscr{H}_{\underline{\alpha}}^{(l)}(\mathbf{v}), \tag{10b}\] In the following derivations we treat the _hat_-operators as actions on \([f]\), but omitting the population itself. \(f\) in phase-space also needs to be discretized with its Hermite coefficients \(\mathscr{F}_{\underline{\alpha}}^{(l)}\), \[f(\mathbf{v};\mathbf{x},t)=w(\mathbf{v})\sum_{l=0}^{K}\frac{1}{v_{T}^{2l}l!}\mathscr{F}_{ \underline{\alpha}}^{(l)}(\mathbf{x},t)\mathscr{H}_{\underline{\alpha}}^{(l)}( \mathbf{v}). \tag{11}\] As it turns out, one can exploit the eigenvalue property of the operators on the products \(\widehat{\Omega}^{(\text{FP})}\big{[}w(\mathbf{v})\mathscr{H}_{\underline{\alpha }}^{(l)}(\mathbf{v})\big{]}\) identified in the Hermite series of both operators, to eliminate the derivative terms and arrive at an explicit formulation. This eigenvalue property is derived from the Gaussian, \[w(\mathbf{v})=\frac{1}{(2\pi v_{T}^{2})^{\rho_{2}}}\exp\left(-\frac{\big{(}\mathbf{v}- \mathbf{u}\big{)}^{2}}{2v_{T}^{2}}\right), \tag{12}\] that is the common, null-space of the acceleration, FP, and BGK operators [32]. The velocity-derivative of this results in the eigenvector \(-\sqrt{\theta}c_{\alpha}/v_{T}^{2}\) as, \[\partial_{v_{\alpha}}w(\mathbf{v})=-\frac{\sqrt{\theta}c_{\alpha}}{v_{T}^{2}}w( \mathbf{v}), \tag{13}\] where \(\theta=p/(\rho RT_{L})\) is the reduced pressure [2], \(c_{\alpha}\) the conventional peculiar velocity for any \(DmQn\) lattice model, and \(v_{T}^{2}=\sqrt{RT}^{2}\) the squared mean thermal speed. 
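The eigenvalue property just stated is a one-line computation for the Gaussian. A 1D sympy sketch, using \(\sqrt{\theta}c=v-u\) and, as quoted earlier in the main text, \(v_{T}^{2}=k_{B}T/m\):

```python
import sympy as sp

v, u = sp.symbols("v u", real=True)
vT = sp.Symbol("v_T", positive=True)

w = sp.exp(-(v - u)**2 / (2 * vT**2)) / sp.sqrt(2 * sp.pi * vT**2)
# d w / d v = -((v - u) / v_T^2) w, i.e. -sqrt(theta) c / v_T^2 times w
assert sp.simplify(sp.diff(w, v) + (v - u) / vT**2 * w) == 0
```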
Considering \(\Omega^{(\text{FP})}\) excluding \(f\), and omitting the implied notation \(w(\mathbf{v})\) we can expand the product, \[\widehat{\Omega}^{(\text{FP})}\Big{[}w(\mathbf{v})\mathscr{H}_{ \underline{\alpha}}^{(l)}(\mathbf{v})\Big{]}=\gamma\partial_{v_{\beta}}\big{(}v_{ \beta}+v_{T}^{2}\partial_{v_{\beta}}\big{)}\Big{[}w\mathscr{H}_{\underline{ \alpha}}^{(l)}\Big{]}\] \[=\gamma\Big{[}\big{(}w\mathscr{H}_{\underline{\alpha}}^{(l)}+v_{ \beta}\mathscr{H}_{\underline{\alpha}}^{(l)}\partial_{v_{\beta}}w+v_{\beta}w \partial_{v_{\beta}}\mathscr{H}_{\underline{\alpha}}^{(l)}\big{)}\] \[+\big{(}v_{T}^{2}\partial_{v_{\beta}}\partial_{v_{\beta}}w \mathscr{H}_{\underline{\alpha}}^{(l)}\big{)}\Big{]}. \tag{14}\] Using the eigenproperty (13) in the first parenthesis and rewriting the second, we obtain, \[\widehat{\Omega}^{(\text{FP})}\Big{[}w(\mathbf{v})\mathscr{H}_{ \underline{\alpha}}^{(l)}(\mathbf{v})\Big{]}=\gamma\Bigg{[}\Bigg{(}w\mathscr{H}_{ \underline{\alpha}}^{(l)}-v_{\beta}\mathscr{H}_{\underline{\alpha}}^{(l)} \frac{\sqrt{\theta}c_{\beta}}{v_{T}^{2}}w\] \[+v_{\beta}w\partial_{v_{\beta}}\mathscr{H}_{\underline{\alpha}}^{( l)}\Bigg{)}+v_{T}^{2}\partial_{v_{\beta}}\left(w\partial_{v_{\beta}}\mathscr{H}_{ \underline{\alpha}}^{(l)}+\mathscr{H}_{\underline{\alpha}}^{(l)}\partial_{v_{ \beta}}w\right)\Biggr{]}. \tag{15}\] For now we cannot manipulate the first parenthesis' terms further and we turn our attention to the second where we can use the eigenproperty as well as the chain-rule, \[\eqref{eq: where we further exploited that \(2v_{\beta}-\sqrt{\theta}c_{\beta}=v_{\beta}+u_{\beta}\). Thus, we find the explicit relation that can be used in the \((l\leq K)\)-Hermite series expansion for \(\Omega_{i}^{(\text{FP})}\) in (11a): \[\widehat{\Omega}^{(\text{FP})}\Big{[}w(\mathbf{v})\mathscr{H}_{ \underline{\alpha}}^{(l)}(\mathbf{v})\Big{]}=\\ \gamma w\left[\Big{(}1+u_{\beta}\frac{v_{\beta}+u_{\beta}}{v_{T} ^{2}}-l\Big{)}\,\mathscr{H}_{\underline{\alpha}}^{(l)}-\frac{2u_{\beta}}{v_{T }^{2}}\mathscr{H}_{\underline{\alpha}}^{(l+1)}\right]. \tag{147}\] In order to compute the integral, we can exploit the Hermite orthonormality relation [1; 54] which reads, \[\int d\mathbf{v}\ w(\mathbf{v})\frac{1}{v_{T}^{l+m}}\mathscr{H}_{\underline{\alpha}}^{(l) }(\mathbf{v})\mathscr{H}_{\underline{\beta}}^{(m)}(\mathbf{v})=\delta_{lm}\delta_{ \underline{\alpha}\underline{\beta}}^{(l)}, \tag{101}\] where \(\delta_{\underline{\alpha}\underline{\beta}}^{(l)}\) is the sum of products of \(l\) Kronecker \(\mathbf{\delta}\)'s where each of them associates with one index in \(\underline{\alpha}\) and one in \(\underline{\beta}\) such that there are \(l!\) terms in \(\delta_{\underline{\alpha}\underline{\beta}}^{(l)}\)[54]. \(\delta_{lm}\) simply denotes that only tensors of identical rank \(l=m\) contribute to the sum, and all other combinations \(l\neq m\) evaluates to zero for the integral. As we will have to apply (101) twice in (101), once one each of the \(\mathscr{H}_{\underline{\alpha}}^{(l)},\mathscr{H}_{\underline{\beta}\underline {\alpha}}^{(l+1)}\) terms, we treat them separately as follows, and combine their results after. 
The \(l\)-rank term becomes, \[\sum_{m=l=0}^{K}\frac{\mathscr{T}_{\underline{\alpha}}^{(l)}}{l!} \times\gamma\left[1+u_{\beta}\frac{v_{\beta}+u_{\beta}}{v_{T}^{2}}-l\right] \int d\mathbf{v}\ w\frac{1}{v_{T}^{2l}}\mathscr{H}_{\underline{\gamma}}^{(m)} \mathscr{H}_{\underline{\alpha}}^{(l)}\] \[=\frac{\mathscr{T}_{\underline{\alpha}}^{(l)}\gamma}{l!}\left[1+ u_{\beta}\frac{v_{\beta}+u_{\beta}}{v_{T}^{2}}-l\right]\delta_{\underline{ \gamma}\underline{\alpha}}^{(l)}, \tag{102}\] where we can observe that \(1/v_{T}^{l+m}=1/v_{T}^{2l}\), as \(m=l\), and that only \(\mathscr{T}_{\underline{\alpha}}^{(l)}\) associates with the indices in \(\delta_{\underline{\gamma}\underline{\alpha}}^{(l)}\). Thus, we get the discrete sum, \[\eqref{eq:H2}=\gamma\left[1+u_{\beta}\frac{v_{\beta}+u_{\beta}}{v_{T}^{2}}-l \right]\mathscr{T}_{\gamma_{1},\ldots,\gamma_{l}}^{(l)}, \tag{103}\] where the \(l!\) sums of \(\mathscr{T}_{\underline{\alpha}}^{(l)}\) cancelled out with the \(1/l!\) prefix in (102). Now we repeat the analysis for the second part of (101) with \(-2u_{\beta}\mathscr{H}_{\underline{\alpha}}^{(l+1)}/v_{T}^{2}\). In this case, we note that only the \(l=m-1\) ranks of \(\mathscr{H}_{\beta\underline{\alpha}}^{(l+1)}\) yield a non-zero contribution to the integral. Consequently, we seek to compute the integral, \[\sum_{m=0}^{K}\frac{\mathscr{T}_{\underline{\alpha}}^{(l)}}{v_{T }^{2l}!l!}\int d\mathbf{v}\ \mathscr{H}_{\underline{\gamma}}^{(m)}\times\gamma w\left[-\frac{2u_{\beta}}{v_ {T}^{2}}\mathscr{H}_{\underline{\beta}\underline{\alpha}}^{(l+1)}\right]\] \[=-\sum_{m=0}^{K}\frac{2\gamma u_{\beta}\mathscr{T}_{\underline{ \alpha}}^{(l)}}{v_{T}^{2l+2}!}\int d\mathbf{v}\ w\mathscr{H}_{\underline{\gamma}}^ {(m)}\mathscr{H}_{\underline{\beta}\underline{\alpha}}^{(l+1)} \tag{104}\] \[=-\sum_{l=m-1}^{K}\frac{2\gamma u_{\beta}\mathscr{T}_{\underline{ \alpha}}^{(m-1)}}{(m-1)!}\int d\mathbf{v}\ w\frac{1}{v_{T}^{2(m-1)+2}}\mathscr{H}_ {\underline{\gamma}}^{(m)}\mathscr{H}_{\underline{\beta}\underline{\alpha}}^{ (m)}. \tag{105}\] Therein, we note that \(\mathscr{H}_{\underline{\gamma}}^{(m)},\mathscr{H}_{\beta\underline{\alpha}}^ {(m)}\) are of the same ranks, and that indeed \(v_{T}^{-[2(m-1)+2]}=1/v_{T}^{l+m}\), such that we again can apply the orthonormality relation (101). Upon substitution we get the result, \[\eqref{eq:H2}=-\frac{2\gamma}{(m-1)!}u_{\beta}\mathscr{T}_{\underline{ \alpha}}^{(m-1)}\delta_{\underline{\gamma},\beta\underline{\alpha}}^{(m-1)}, \tag{106}\] where both \(u_{\beta},\mathscr{T}_{\underline{\alpha}}^{(m-1)}\) associate with the indices in \(\delta_{\underline{\gamma},\beta\underline{\alpha}}^{(m-1)}\). Ultimately, we compute the discrete sum as, \[\eqref{eq:H2}=-2\gamma\Big{[}u_{\gamma_{1}}\mathscr{T}_{\gamma_{2},\ldots, \gamma_{m}}^{(m-1)}+\cdots+u_{\gamma_{m}}\mathscr{T}_{\gamma_{1},\ldots,\gamma_ {m-1}}^{(m-1)}\Big{]}. \tag{107}\] Combining the two results we get the coefficients, \[\Omega_{\underline{\gamma}}^{(\text{FP})}(l)=\gamma\left[1+u_{ \beta}\frac{v_{\beta}+u_{\beta}}{v_{T}^{2}}-l\right]\mathscr{T}_{\underline{ \gamma}}^{(l)}\] \[\frac{-2\gamma\Big{[}u_{\gamma_{1}}\mathscr{T}_{\gamma_{2},\ldots, \gamma_{m}}^{(m-1)}+\cdots+u_{\gamma_{m}}\mathscr{T}_{\gamma_{1},\ldots,\gamma_ {m-1}}^{(m-1)}\Big{]}}{}. 
\tag{108}\] We repeat the procedure for the acceleration operator now, to which end we seek to compute the coefficients, \[\Omega_{\underline{\gamma}}^{(\eta)} =\int d\mathbf{v}\ \mathscr{H}_{\underline{\gamma}}^{(m)}\Big{[}\widehat{ \Omega}^{(\eta)}\circ f\Big{]}\] \[=\sum_{l=0}^{K}\frac{\mathscr{T}_{\underline{\alpha}}^{(l)}}{v_{T }^{2l}!l!}\int d\mathbf{v}\ \mathscr{H}_{\underline{\gamma}}^{(m)}\times\frac{\eta_{\beta}w}{v_{T}^{2}} \Big{[}\mathscr{H}_{\beta\underline{\alpha}}^{(l+1)}-u_{\beta}\mathscr{H}_{ \underline{\alpha}}^{(l)}\Big{]} \tag{109}\] \[=\sum_{l=0}^{K}\frac{\mathscr{T}_{\underline{\alpha}}^{(l)}}{v_{T} ^{2l}!l!}\int d\mathbf{v}\ \mathscr{H}_{\underline{\gamma}}^{(m)}\times\frac{\eta_{\beta}w}{v_{T}^{2}} \Big{[}\mathscr{H}_{\beta\underline{\alpha}}^{(l+1)}-u_{\beta}\mathscr{H}_{ \underline{\alpha}}^{(l)}\Big{]}\] \[=\sum_{l=0}^{K}\frac{\mathscr{T}_{\underline{\alpha}}^{(l)}\eta_{ \beta}}{l!}\int d\mathbf{v}\ w\frac{1}{v_{T}^{2l+2}}\mathscr{H}_{\underline{ \gamma}}^{(m)}\Big{[}\mathscr{H}_{\beta\underline{\alpha}}^{(l+1)}-u_{\beta} \mathscr{H}_{\underline{\alpha}}^{(l)}\Big{]}\] \[=\sum_{l=0}^{K}\frac{\mathscr{T}_{\underline{\alpha}}^{(l)}\eta_{ \beta}}{l!}\Bigg{[}\int d\mathbf{v}\ w\frac{1}{v_{T}^{2l+2}}\mathscr{H}_{ \underline{\gamma}}^{(m)}\mathscr{H}_{\underline{\beta}\underline{\alpha}}^{(l +1)}\] \[\quad-\frac{u_{\beta}}{v_{T}^{2}}\int d\mathbf{v}\ w\frac{1}{v_{T}^{2l}} \mathscr{H}_{\underline{\gamma}}^{(m)}\mathscr{H}_{\underline{\beta}}^{(l)} \Bigg{]}, \tag{110}\] where we in the last line group the terms in a convenient for aligned with the orthonormality relations. Again, letting \(l\gets m-1\) and then applying (101), the first integral becomes: \[\sum_{l=0}^{K}\frac{\mathscr{T}_{\underline{\alpha}}^{(l)}\eta_{\beta}}{l!} \left[\int d\mathbf{v}\ w\frac{1}{v_{T}^{2l+2}}\mathscr{H}_{\underline{\gamma}}^{(m)} \mathscr{H}_{\beta\underline{\alpha}}^{(l+1)}\right]\] \[=\sum_{l=m-1}^{K}\frac{\mathscr{T}_{\underline{\alpha}}^{(m-1)} \eta_{\beta}}{(m-1)!}\left[\int d\mathbf{v}\ w\frac{1}{v_{T}^{2(m-1)+2}}\mathscr{H}_ {\underline{\gamma}}^{(m)}\mathscr{H}_{\beta\underline{\alpha}}^{(m)}\right]\] \[=\frac{1}{(m-1)!}\mathscr{T}_{\underline{\alpha}}^{(m-1)}\eta_{\beta} \delta_{\underline{\gamma},\beta\underline{\alpha}}^{(m-1)}\] \[=\Big{[}\eta_{\gamma_{1}}\mathscr{T}_{\gamma_{2},\ldots,\gamma_ The second integral in (107) can be directly manipulated with (106) such that we obtain the following: \[\sum_{l=0}^{K}\frac{\mathcal{G}_{\underline{\alpha}}^{(l)}\eta_{ \beta}}{l!}\left[-\frac{u_{\beta}}{v_{T}^{2}}\int d\mathbf{v}\ w\frac{1}{v_{T}^{2l}} \mathcal{H}_{\underline{\gamma}}^{(m)}\mathcal{H}_{\underline{\alpha}}^{(l)}\right]\] \[\quad=-\sum_{m=l=0}^{K}\frac{\eta_{\beta}u_{\beta}\mathcal{G}_{ \underline{\alpha}}^{(l)}}{v_{T}^{2}l!}\left[\int d\mathbf{v}\ w\frac{1}{v_{T}^{2l }}\mathcal{H}_{\underline{\gamma}}^{(m)}\mathcal{H}_{\underline{\alpha}}^{(l) }\right]\] \[\quad=-\frac{\eta_{\beta}u_{\beta}}{v_{T}^{2}l!}\mathcal{G}_{ \underline{\alpha}}^{(l)}\delta_{\underline{\gamma}\underline{\alpha}}^{(l)}\] \[\quad=-\frac{\eta_{\beta}u_{\beta}}{v_{T}^{2}}\Big{[}\mathcal{G} _{\gamma_{1},\ldots,\gamma_{l}}^{(l)}\Big{]}. 
\tag{108}\] Combining the two integrals yields the sought-after coefficients, \[\Omega_{\underline{\gamma}}^{(\eta)}(l)=\Big{[}\eta_{\gamma_{1}}\mathscr{T}_{\gamma_{2},\ldots,\gamma_{m}}^{(m-1)}+\cdots+\eta_{\gamma_{m}}\mathscr{T}_{\gamma_{1},\ldots,\gamma_{m-1}}^{(m-1)}\Big{]}-\frac{\eta_{\beta}u_{\beta}}{v_{T}^{2}}\Big{[}\mathscr{T}_{\gamma_{1},\ldots,\gamma_{l}}^{(l)}\Big{]}, \tag{109}\] that we together with (108) can use to compute the full Hermite series. Furthermore, instead of using the \(f\) distribution we can also use the \(g\) distribution, provided its Hermite coefficients \(\mathscr{G}_{\underline{\alpha}}^{(l)}\) are used in place of \(\mathscr{T}_{\underline{\alpha}}^{(l)}\) in the coefficients.
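The orthonormality relation invoked repeatedly above can also be probed numerically. A one-dimensional, unit-variance specialization using probabilists' Hermite polynomials, in which the multi-index factor reduces to \(l!\,\delta_{lm}\):

```python
# Check: integral of w(x) He_l(x) He_m(x) dx = l! * delta_{lm} for N(0,1).
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

x, wq = hermegauss(40)            # Gauss quadrature for weight exp(-x^2/2)

def He(n, t):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(t, c)         # probabilists' Hermite polynomial He_n

for l in range(5):
    for m in range(5):
        val = np.sum(wq * He(l, x) * He(m, x)) / sqrt(2 * pi)
        assert abs(val - (factorial(l) if l == m else 0.0)) < 1e-8
```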
2307.04636
On the importance of illustration for mathematical research
Mathematical understanding is built in many ways. Among these, illustration has been a companion and tool for research for as long as research has taken place. We use the term illustration to encompass any way one might bring a mathematical idea into physical form or experience, including hand-made diagrams or models, computer visualization, 3D printing, and virtual reality, among many others. The very process of illustration itself challenges our mathematical understanding and forces us to answer questions we may not have posed otherwise. It can even make mathematics an experimental science, in which immersive exploration of data and representations drive the cycle of problem, conjecture, and proof. Today, modern technology for the first time places the production of highly complicated models within the reach of many individual mathematicians. Here, we sketch the rich history of illustration, highlight important recent examples of its contribution to research, and examine how it can be viewed as a discipline in its own right.
Rémi Coulon, Gabriel Dorfsman-Hopkins, Edmund Harriss, Martin Skrodzki, Katherine E. Stange, Glen Whitney
2023-07-10T15:25:29Z
http://arxiv.org/abs/2307.04636v2
# On the importance of illustration for mathematical research ###### Abstract In the last decade, it has become increasingly possible for researchers to augment their experience of abstract mathematics with immersive sensory experience: 3D-printed or CNC-milled models, the ability to walk through impossible physical spaces with virtual reality, and the potential to explore high-dimensional mathematical spaces through computer visualisation, to name a few. Now much more than simply an aid to understanding, these tools have reached a level of sophistication that makes them indispensable to many frontiers of mathematical research. To preview one particular case recounted below, the tantalizing structure visible in Figure 1 (and many others like it) led to conjectures and proofs that would likely otherwise have been inaccessible. The list of examples of research driven by illustration is rapidly expanding in recent years. We use the term _illustration_ to encompass any way one might bring a mathematical idea into physical form or experience, including hand-made diagrams or models, computer visualization, 3D printing, or virtual reality, among many others. We will discuss instances of this interplay in fields ranging from representation theory to geometry and many others. Many readers will also be aware of the recent and celebrated solution of the einstein problem with the hat monotile and its chiral version, the spectre [14]. Illustration is beginning to find a home at programs like the special semester in _Illustrating Mathematics_ in Fall 2019 at the Institute for Computational and Experimental Research in Mathematics (ICERM) and the Institute for Advanced Study (IAS)/Park City Math Institute (PCMI) virtual program in Summer 2021,1 and a community is forming around many modern tools. Footnote 1: See [http://illustratingmath.org/](http://illustratingmath.org/) for links to these two programs, along with many other resources. Of course, the importance of illustration to research is not new: abstract diagrams appear in the work of the ancients, including Euclid's _Elements_ or the Chinese treatise _The Nine Chapters on the Mathematical Art_. Figure 1: Detail of a \(\mathbb{Z}^{2}\)-lattice sandpile stabilized from a starting configuration with 500 million grains at the origin. Whenever any position has at least 4 grains, it distributes one to each neighbour. Nodes in the stable configuration have 0, 1, 2 or 3 grains (indicated by colour). Image by Stange. Precise three-dimensional models were produced by skilled artisans in the 19th century, notable examples of which remain in the collections at the Institut Henri Poincaré2 or Göttingen University3, among many other institutions. When computer visualization first became widely available in the 1980's, the Geometry Center was founded at the University of Minnesota, with a mission to exploit these new tools on behalf of mathematics. But we are now at another cusp: modern technological tools have suddenly made 3D models and virtual reality widely available, and computation and computer visualization are more accessible and more powerful than ever. We can now collect huge mathematical datasets and examples, and it has become urgent to develop ways to interact immersively with this data.
Footnote 2: [https://patrimoine.ibp.fr/](https://patrimoine.ibp.fr/) Footnote 3: [https://sammlungen.uni-goettingen.de/sammlung/slg_1017/](https://sammlungen.uni-goettingen.de/sammlung/slg_1017/) Making full use of modern tools is not without its challenges: beyond the obvious technical challenges and software learning curves, there are important questions about how an illustration, much like a statistic or an experiment, can subtly mislead the researcher, or miss the essential mathematical pattern sought. Researchers often individually reinvent the necessary skill sets as they seek to advance their own projects, and these projects are pushing the boundaries of the possible. But by building a discipline around this enterprise, we can develop its full potential to advance mathematical research. ## Some highlights from the history of mathematical illustration Illustration of mathematics goes back as far as mathematical ideas themselves. In fact, some of the earliest evidence we have for abstract thinking comes from human-made designs, for example the cross-hatched carvings in Blombos Cave in South Africa, potentially from 73,000 years ago [11]. A little more recently, the Middle Eastern tradition of geometry presented in Euclid's _Elements_ provides a structural link between statements deduced from axioms and figures made with straightedge and compass. These two tools provide a physical realization of the two key objects (straight lines and circles) described by the axioms. Euclid's diagrams give a map to help follow (or discover!) the chain of deduction in a proof. Conversely, the proof validates the image (which could otherwise mislead by error or the selection of a non-generic example). This approach leads at the conclusion of Book 1 to a proof of the Pythagorean theorem; see Figure 3. In Chinese mathematics, this theorem is the 勾股 (Gougu) theorem. In the classic _Nine Chapters on the Mathematical Art_, it plays a key role in applying the arithmetical mathematics of the text to geometric problems, for example in measuring altitude. The Chinese tradition also gives an elegant visual proof of the result by rearranging triangles, as in Figure 4. Although the Chinese proof is not considered rigorous by modern standards, Euclid was also criticized by Bertrand Russell when he wrote "A valid proof retains its demonstrative force when no figure is drawn, but very many of Euclid's earlier proofs fail before this test." [12]. This criticism reveals one of the challenges of mathematical illustration. A powerful example comes from a well-known "proof" that all triangles are equilateral, wherein a slightly misleading diagram (shown in Figure 5) can be used together with an otherwise correct proof. Disallowing these particular subtle errors requires axioms capturing the meaning of "between," which took considerable work by David Hilbert to formulate [14]. Figure 2: An abstract ochre drawing on silica from the 73,000 year old layers of the Blombos Cave, studied in [11]. A related pitfall - when a good illustration, overused, can become a pair of blinders - is illustrated by the following example. In the _Elements_, the concept of number is based on the concept of length. So the squares in the Pythagorean theorem are actual squares (the areas of which are equal), not squared numbers.
In the 11th century algebra treatise of Omar Khayyam, although he gives solutions to equations with higher powers than three, he also states: "Square-square, which, to the algebraists, is the product of the square by itself, has no meaning in continuous objects. This is because how can one multiply a square, which is a surface, by itself? Since the square is a two-dimensional object... and two-dimensional by two-dimensional is a four dimensional object. But solids cannot have more than three dimensions." [13]. The relation between number and length was also an important factor in the European reluctance to consider negative numbers. A line, after all, cannot have negative length. In contrast, negative quantities are used freely in the _Nine Chapters_, where arithmetic is the foundational idea, with geometry built from it. In Europe the development of the number line, starting with John Wallis, Figure 4: Two pages from the _Arithmetical Classic on the Gnomon and the Circular Paths of Heaven_ (), a Chinese work on astronomy and mathematics showing a proof of the right triangle () (Gougu) theorem. Image credit: [https://www.maa.org/press/periodicals/convergence/mathematical-treasures-zhoubi-suanjing](https://www.maa.org/press/periodicals/convergence/mathematical-treasures-zhoubi-suanjing). Figure 5: This diagram from Walter W. Rouse Ball’s 1882 _Mathematical Recreations and Essays_ subtly misleads the reader. In reality, either \(E\) lies between \(A\) and \(C\) or \(F\) lies between \(A\) and \(B\). Image credit: Wellesley College Library Figure 3: The proof of Book 1 Proposition 47, half of Pythagoras’ theorem, in a Oliver Byrne’s edition from 1847. The figure shown here had appeared in printed copies of the _Elements_ since the fifteenth century, but Byrne’s rendition links it tightly and visually to the proof alongside. Image Credit: Harriss and University of Arkansas Library. gave an alternative illustration of number (see Figure 6) with the capacity to include negative quantities as numbers in their own right. Powers and negative numbers are but two examples of a productive pattern of mathematics developing from the tension between illustration and symbolic idea. The study of complex numbers advanced significantly with the concept of the complex plane, and then allowed a new algebraic approach to the geometry of the plane. Both quaternions and matrices were developed to try to extend that understanding to higher dimensions. In the case of real numbers, although the symbolic ideas would refine the illustrations needed, it was not until the late nineteenth century when fully symbolic definitions were developed, such as Dedekind cuts and Cauchy sequences. At that point the need for illustrations as foundational objects was removed, although the potential for developing intuition and challenging what might be done with the concepts remained. Projective geometry, first developed (as perspective) by artists as a tool to create realistic images, provided one such challenge. These ideas were explored mathematically by Johannes Kepler, Gerard Desargues and Blaise Pascal. In the early nineteenth century, perspective was developed by Gaspard Monge into "descriptive geometry" for the training of engineers in constructing forts and later developed and axiomatized in the foundational work by Jean-Victor Poncelet [1]. In turn this work would be key in establishing models for non-euclidean geometry, explored axiomatically by Nikolai I. Lobachevsky and Janos Bolyai [12]. 
In this case it was such models, themselves illustrations, that convinced mathematicians of the existence and interest of the non-euclidean geometries. Projective geometry also spurred the study of algebraic geometry. In the late nineteenth century an industry emerged to reveal the surfaces constructed in this field and their properties, such as cone singularities and embedded straight lines. One pioneer was Alexander Brill, a student of Alfred Clebsch with a degree in architecture. Following the work of Peter Henrici (another student of Clebsch), Brill made sliceform paper models of surfaces. He later worked with Felix Klein in Munich to set up a laboratory for the design and production of mathematical objects. This lab grew into a company that, when it was taken over by Martin Schilling in 1911, had a catalogue of over 400 models. His work combined deep mathematical understanding with a knowledge of printing and construction from his family business [14]. The need to combine mathematical knowledge with fabrication techniques is also highlighted by a story of missed opportunity: how to make physical patches of hyperbolic planes. In addition to his disk model (often called the Poincare disk model), Eugenio Beltrami also attached together strips of paper to approximate the surface. Other examples used paper polygons connected to make a sort of hyperbolic "soccer ball." These paper models are often fragile, and the rigidity of the paper means that it cannot change its local geometry; thus such models are crude approximations. Roughly a century later, Daina Taimina realised that crocheting could produce far more resilient surfaces, with local stretching that meant the negative curvature was more smoothly distributed [10]. An example of this medium of representation is shown in Figure 7. In fact, similar techniques had been used to create ruffles in scarves and skirts for decades. If the methods of fiber arts had earlier been considered seriously and not dismissed Figure 6: An early depiction of the now-familiar number line, from Wallis’ 1685 _A Treatise of Algebra_; image credit: Max Planck Institute for the History of Science, Library as "work for women," researchers could have had the opportunity to handle robust hyperbolic planes far sooner. ## The incredible potential for mathematical illustration Turning to recent developments, the work of Lionel Levine, Wesley Pegden, and Charles K. Smart provides an excellent example of the value of illustration as a research tool. Their _Annals of Mathematics_ paper _The Apollonian structure of integer superharmonic matrices_[17] was motivated by the study of Abelian sandpiles on \(\mathbb{Z}^{2}\): place a large number \(N\) of sand grains at the origin, and allow any position with at least four grains to distribute those grains equally to its four neighbours. The stable configuration that results from this simple system displays impressive large-scale structure that can be discovered through computer visualization (see Figure 1). Especially striking is the vivid visual impression that the structure continues to refine at larger \(N\) toward a continuum scaling limit, which was proven earlier by Pegden and Smart. To describe the PDE governing this process, the individual periodic tilings in the regions of the limit must be understood. They are each governed by an _integer superharmonic matrix_. 
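The toppling rule itself takes only a few lines of code, which is part of what makes the experiment so inviting. A small-scale numpy sketch (the abelian property of the sandpile guarantees that toppling all unstable sites simultaneously reaches the same stable configuration as any other order):

```python
import numpy as np

def stabilize(n_grains, radius):
    """Stabilize n_grains dropped at the origin of a (2r+1)^2 patch of Z^2."""
    s = np.zeros((2 * radius + 1, 2 * radius + 1), dtype=np.int64)
    s[radius, radius] = n_grains
    while (s >= 4).any():
        t = s // 4                       # simultaneous topplings per site
        s -= 4 * t
        for axis in (0, 1):              # each toppling sends one grain
            for shift in (1, -1):        # to each of the four neighbours
                s += np.roll(t, shift, axis)
    return s

# radius must be generous enough that grains never reach the wrap-around
# boundary of np.roll; 80 is ample for 2**14 grains.
final = stabilize(2**14, 80)             # entries lie in {0, 1, 2, 3}
```

Colouring `final` by its values 0 through 3 reproduces, at small scale, the patterns of Figure 1.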
Levine, Pegden, and Smart generated a picture of the set of integer superharmonic matrices, and were astonished to see the familiar fractal structure of an Apollonian circle packing (Figure 8). Each circle of the packing was associated to a periodic pattern appearing in the scaling limit. Through extensive computer investigation, the authors were able to determine the intricate recursive relationships between the patterns for circles generated from one another ('ancestors' overlap and merge to form 'descendent' patterns according to complicated rules). These recursions led to a difficult inductive proof that the set did indeed have the Apollonian structure evident in experiments. The development of these results provide a perfect example of the role illustration can play in the cycle of conjecture, theorem, and proof. Without the data available through large-scale computer experimentation and the ability to explore it visually, the question of the scaling limit may not have been raised at all, and the recursive proof of their main result would likely not have been discovered. Another area where research is intertwined with Figure 8: Set of integer superharmonic matrices in the space of all symmetric real matrices. Image by Stange. Figure 7: There can be two parallels through a point not on a line, in the (crocheted) hyperbolic plane. Photo credit: Taimina illustration is in the study of William Thurston's geometrization conjecture, proved by Grigori Perelman. This key tool in our understanding of 3-manifolds implies, for instance, the famous Poincare conjecture. Geometrization states that any compact _topological_ 3-manifold can be cut into finitely many pieces, each of which carries a _geometric_ structure. There are eight possible such structures, known as _Thurston geometries_. Some of them are rather familiar to mathematicians, such as the 3-dimensional euclidean and hyperbolic spaces or the 3-sphere. Despite the fact that Thurston's geometries have been intensively studied, the more exotic geometries such as _Nil_ and _Sol_ still defy our "Euclidean-grown" spatial intuition. Keeping in mind the well-established power of our physical and visual intuition to aid geometrical research, Remi Coulon, Elisabetta Matsumoto, Henry Segerman, and Steve Trettel developed virtual reality software to immerse the user in any of the eight Thurston geometries [12] (see Figure 9). Besides building the much-needed intuition for these spaces, the development of the software itself raised mathematical questions. The meshes used in most animations must be replaced with raymarching techniques, which require computation of distances between objects. But, for example, there is no closed formula for the distance in _Nil_ or _Sol_! Thus, the development of the algorithms Figure 9: In-space view of a finite-volume hyperbolic 3-manifold lit by a single white light. Image by Coulon, Matsumoto, Segerman and Trettel [12] themselves becomes a mathematical result in its own right. Work on Thurston's geometries has very often been closely tied with illustration. For example, the study of _Spheres in Sol_ by Matei P. Coiculescu and Rich Schwartz in _Geometry and Topology_ (positively) answers an old open question, whether metric spheres in _Sol_ are homeomorphic to \(S^{2}\)[13]. Each step of the proof was found after numerous graphical experiments, and 3D printing brings yet another perspective (see Figure 10). 
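The basic raymarching loop is simple in Euclidean space, which makes plain where the difficulty lies: each step advances the ray by the scene's distance function, and in Nil or Sol that distance has no closed form. For contrast, a Euclidean sphere-tracing sketch (names and parameters are ours, for illustration only):

```python
import numpy as np

def raymarch(origin, direction, sdf, max_steps=128, eps=1e-4, far=50.0):
    """Sphere tracing: advance by the distance to the scene given by sdf."""
    t = 0.0
    d = direction / np.linalg.norm(direction)
    for _ in range(max_steps):
        dist = sdf(origin + t * d)
        if dist < eps:
            return t            # hit: distance travelled along the ray
        t += dist               # safe step: no object is closer than dist
        if t > far:
            break
    return None                 # miss

sphere = lambda p: np.linalg.norm(p - np.array([0.0, 0.0, 5.0])) - 1.0
hit = raymarch(np.zeros(3), np.array([0.0, 0.0, 1.0]), sphere)  # ~4.0
```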
For an example at the intersection of algebraic geometry and number theory, a few key illustrations have helped drive developments in the field of \(p\)-adic analytic geometry. At the same time, illustrating the \(p\)-adic analogs of complex analytic manifolds presents unique challenges, not the least of which is the fact that the \(p\)-adic numbers themselves are topologically a Cantor set. Nevertheless, clever and meaningful illustrations of \(p\)-adic analogs to the complex upper half-plane and complex unit disk have proved incredibly fruitful. An illustration of Vladimir Drinfeld's \(p\)_-adic upper half plane_ as tubular neighborhoods of Bruhat-Tits trees (Figure 11) clarified the behavior of the action of \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\) by Mobius transformations. Understanding this action was instrumental in the construction of \(p\)-adic analytic uniformization of elliptic curves (reflecting the famous complex analytic uniformization of elliptic curves as quotients of the complex upper half plane). Similarly, Peter Scholze's illustrations of the _adic unit ball_ (Figure 12) provide access to the foundational geometric construction in his theory of perfectoid spaces [14]. The act of illustrating the central geometric objects of \(p\)-adic analysis has proven both beneficial and uniquely challenging, demanding a systematic and critical approach.

An example arising somewhat further afield of geometry is the work of Allen Knutson, Terence Tao, and Christopher Woodward in representation theory [15]. Knutson and Tao introduced the notion of _honeycombs_ (subsets of the plane as in Figure 13) to solve a longstanding open problem: Alfred Horn's conjectured shape of the polyhedral cone (sometimes called the Littlewood-Richardson cone) of triples of eigenvalue spectra \((\lambda,\mu,\nu)\) for Hermitian matrices \(A,B,C\) which satisfy \(A+B+C=0\). This _sum-of-Hermitian-matrices_ problem has applications to perturbation theory, quantum measurement theory, and the spectral theory of self-adjoint operators. Knutson and Tao were able to show that there exist such Hermitian matrices with the specified spectra if and only if there exist honeycombs with a specified boundary. They used this correspondence to prove Horn's conjecture. The honeycomb formalism also led naturally to a polynomial time algorithm to decide whether a triple of spectra can be realized by Hermitian matrices. In a follow-up, Knutson, Tao, and Woodward extended the study of honeycombs to define _puzzles_ (Figure 14), which they described as replacing the Schubert calculus in past approaches to the Hermitian matrices problem, and used geometric arguments to give a complete characterization of the facets of the cone [15]. Puzzles and honeycombs provide an example of the power of rephrasing an algebraic problem as one about visual objects, where we can draw on other types of intuition. In what circumstances can we expect these sorts of insightful geometric versions to exist for algebraic problems? When a geometric analog exists, it naturally exhibits additional features - can we then find new corresponding objects in the original problem? For example, what do the vertices of a honeycomb actually represent?

There are, of course, many more examples. Among these, the most famous may be the computer exploration of the Mandelbrot set and fractal geometry in the 1980's (Figure 15).
In the 1990's, Jeffrey Weeks created SnapPea (which now exists as SnapPy under the guidance of Marc Culler and Nathan Dunfield4) as part of his doctoral thesis [20], to explore the cusp structures of hyperbolic 3-manifolds. Its use inspired David Gabai, Robert Meyerhoff, and Peter Milley to invent _mom structures_ to answer questions of the volumes of hyperbolic 3-manifolds [12]. In the same decade, the _Geometry Center_ founded by Al Marden was focused on the use of computer visualization in mathematics.5 It hosted mathematicians such as Eugenio Calabi, John Horton Conway, Donald E. Knuth, Mumford, and Thurston, among others, and produced the GeomView software used to create some famous early computer visualizations, including the sphere eversion6 and illustrations for knot theory.7 Illustration has shown its importance in virtually all areas of mathematics, from random tilings in combinatorics, to diagrammatic approaches to algebra, to Apollonian circle packings and Schmidt arrangements in number theory, and their higher dimensional analogs, to mention just a few.

Footnote 6: _Outside In_, (1994), [http://www.geom.uiuc.edu/docs/outreach/oi/](http://www.geom.uiuc.edu/docs/outreach/oi/)

Footnote 7: _Not Knot_, (1991), [http://www.geom.uiuc.edu/video/Notknot/](http://www.geom.uiuc.edu/video/Notknot/)

Figure 10: 3D printed models of the spheres in Sol produced during the ICERM program "Illustrating Mathematics" in Fall 2019. Models by Coulon, Image Credit: Harriss

Figure 11: The Bruhat-Tits tree of \(\mathbb{Q}_{2}\), with geodesics [1, Figure 1].

Figure 13: A honeycomb [12, Figure 1].

Figure 14: Knutson manufactured puzzles to study the sum-of-Hermitian-matrices problem. Image credit: Knutson

Figure 15: The first graph (in ASCII) of the Mandelbrot set by Brooks and Matelski in 1980.

The examples above focus on pure mathematics, which is poised to join a great many other areas of scientific endeavour embracing illustration. In applied mathematics, illustration has already made great strides. Consider, for instance, the process by which Alan H. Schoen described the _gyroid_ decades before it was mathematically proven to be a minimal surface. He worked with both a sculpture of the surface and various models in _Computer-Aided Design / Modelling_ (CAD/CAM), which ultimately led to the structure being found in various lipid and liquid crystalline systems [15]. Other fields, like _mathematical geometry processing_, rely equally on quantitative measures and qualitative visualizations for judging the quality of their results [14]. Still, a back-and-forth between the development of mathematical procedures and their application to real-world data yields results that are well-grounded in mathematical quality guarantees, yet efficient and relevant for their applications. In the field of _exploratory data analysis_, visualizations even form the main tool for finding research results. Here, large, possibly high-dimensional, datasets are investigated for patterns by embedding them, e.g., as 2D scatter plots that can then be inspected by domain experts. With this technique, in 2020, a novel type of anti-tumor cell was discovered [dVvUI\({}^{+}\)20]. None of these research results would have been possible without the utilization of illustrations. Furthermore, this last example utilized non-linear dimensionality reduction techniques for the visualization of high-dimensional data. These techniques were themselves the result of research driven by the desire for better illustrations.
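As a hedged illustration of that workflow (not the pipeline used in the cited study), the following Python sketch embeds a synthetic high-dimensional dataset into the plane with t-SNE, a common non-linear dimensionality reduction technique; it assumes scikit-learn is available, and the data are made up for the example.

```python
import numpy as np
from sklearn.manifold import TSNE  # assumes scikit-learn is installed

rng = np.random.default_rng(0)
# synthetic stand-in for real measurements: three clusters in R^50
centers = rng.normal(size=(3, 50))
X = np.vstack([c + 0.3 * rng.normal(size=(200, 50)) for c in centers])

# non-linear embedding into 2D, suitable for a scatter plot
embedding = TSNE(n_components=2, perplexity=30.0,
                 random_state=0).fit_transform(X)
# plotting embedding[:, 0] against embedding[:, 1] shows three separated
# groups, the kind of pattern a domain expert would inspect for
# unexpected sub-populations
```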
The very closely allied field of _computation_ in mathematics is a little ahead of illustration in its maturity as a tool for mathematical research. To give just one significant example in number theory, much recent activity has centered around the multi-million-dollar _Simons Collaboration on Arithmetic Geometry, Number Theory, and Computation_,8 whose mission states: "Our common perspective is that advances in computational techniques accelerate research in arithmetic geometry and number theory, both as a source of data and examples, and as an impetus for effective results. The dynamic interplay between experiment, theory, and computation has historically played a pivotal role in the development of number theory." The work supported by the collaboration is rapidly expanding the _L-Functions and Modular Forms Database_,9 an online database of mathematical objects (including visualizations) that is at the center of much modern progress in number theory.10 The discipline of mathematical computation is supported by a number of journals11 and has engendered areas of research in their own right, such as _computational geometry_. Illustration appears to be following a similar trajectory. As it becomes more accessible and pervasive, it demands rigorous and careful study, leading to the development of mathematical illustration as a discipline in its own right.

Footnote 8: [https://simonscollab.icerm.brown.edu/](https://simonscollab.icerm.brown.edu/)

Footnote 9: [http://www.lmfdb.org](http://www.lmfdb.org)

Footnote 10: See the extensive list of publications arising from the collaboration: [https://simonscollab.icerm.brown.edu/publications/](https://simonscollab.icerm.brown.edu/publications/).

Footnote 11: Consider for instance “Advances in Computational Mathematics”, [https://www.springer.com/journal/10444](https://www.springer.com/journal/10444), the “Journal for Computational and Applied Mathematics”, [https://www.sciencedirect.com/journal/journal-of-computational-and-applied-mathematics](https://www.sciencedirect.com/journal/journal-of-computational-and-applied-mathematics), or the “Journal of Computational Mathematics”, [https://www.jstor.org/journal/jcompmath](https://www.jstor.org/journal/jcompmath).

## Illustration as a discipline

Thurston once said, "mathematicians usually have fewer and poorer figures in their papers and books than in their heads" [21]. Although the power of good illustrations to advance mathematical knowledge is clear, they are not simple to produce. The challenges to creating powerful and trustworthy illustrations arise on many levels. On the one hand, some challenges are technical and concern rather practical questions regarding the production of mathematical illustrations. Especially with newer technologies like virtual reality or 3D modeling, the learning curves are steep and while there are general tutorials available, just a handful target issues specific to the illustration of mathematics.12 Consider for instance [17] for a nice discussion of some of the challenges of 3D printing for mathematical illustration.

Footnote 12: A noteworthy example of introductory material, aimed at illustration of mathematics, is the _Processing_ tutorial of Roger Antonsen, to be found online: [https://rant.codes/pcmi/](https://rant.codes/pcmi/).

On the other hand, there are challenges within the mathematics itself. The objects to be illustrated do not necessarily come with a description that lends itself to a suitable illustration.
Thus, a necessary initial step is the translation of the underlying mathematical object into a form that allows illustration in the first place. However, this transformation is usually not enough by itself. Subsequent steps aim at making the illustration effective, which can entail bridging the gap between the theoretical and the computational, crafting a responsive and immersive experience, or ensuring the illustration actually imparts the desired aspects of the mathematical object. In particular the last part implies important theoretical considerations: What exactly do we want to illustrate? And how do we do so faithfully, i.e., without creating wrong impressions of the mathematical object illustrated?

Mathematics is not the first field of research to tackle these difficulties. There are parallels to be found in the development of the scientific method and statistical methods for the natural sciences: Which experimental designs and statistics can be relied upon for developing conjectures and conclusions? Cornerstones of the scientific method were laid down, such as the important notion of falsifiability of a scientific theory. Similarly, statistical methods amplified their usefulness and trustworthiness when expanded from pure descriptive statistics to inferential statistics and statistical tests to assess the validity of results. So in fact, all scientific fields have progressed by examining head-on some of the questions raised by their methodologies.

The question of _illustrating well_ has been asked in statistics and data visualization, as explored in Darrell Huff's best-selling book _How to Lie with Statistics_, which became a standard college text. The pioneering and richly illustrated books of Edward Tufte and Tamara Munzner on data visualization established that field in its own right. Every year, new research in data visualization is discussed at various venues, such as the Institute of Electrical and Electronics Engineers (IEEE) VIS meeting or the EuroVis conference, and published in outlets like the IEEE Transactions on Visualization and Computer Graphics. As it matures, the data visualization community addresses meta-questions on its research, such as where "the value of visualization" lies [25] or "Are we making progress in visualization research?" [12]. Thus, the example of data visualization provides a pattern of development that the field of mathematical illustration might follow. However, in comparison, mathematical illustration is just taking the first steps on its journey towards being a research field. It is still facing basic challenges with regard to creating and evaluating the illustrations it produces.

As an example of these challenges, consider the images in Figure 16 showing polynomial roots near \(i\) in the complex plane. The leftmost is an image of all roots of polynomials of degree 3 with integer coefficients between \(N\) and \(-N\), where here \(N=10\) [13]. The rightmost is an image of all roots of polynomials with coefficients from \(\{-1,1\}\) and degree no more than \(D\), where in this case \(D=13\). In both, in the region around \(i\), there appears to be a hole shaped like two ellipses overlapping at right angles. How to interpret this shape? It turns out that at left it is very much an artifact of the algorithm for creating these images. If you consider the picture as an approximation of all cubic roots (by allowing \(N\) to tend to infinity), there are infinitely many such polynomials.
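To make the construction concrete, here is a minimal Python sketch (our illustration, not the code behind Figure 16) that generates the root set for the leftmost image by enumerating all cubics with bounded integer coefficients; the window bounds match the caption of Figure 16.

```python
import itertools
import numpy as np

def cubic_roots(N):
    """Roots of all cubics a3*x^3 + a2*x^2 + a1*x + a0 with integer
    coefficients between -N and N and nonzero leading coefficient."""
    roots = []
    for a3, a2, a1, a0 in itertools.product(range(-N, N + 1), repeat=4):
        if a3 != 0:
            roots.extend(np.roots([a3, a2, a1, a0]))
    return np.asarray(roots)

r = cubic_roots(10)  # roughly 1.9e5 polynomials; takes a few minutes
# restrict to the window around i shown in Figure 16
near_i = r[(np.abs(r.real) < 0.3) & (np.abs(r.imag - 1.0) < 0.3)]
# a scatter plot of near_i reveals the double-ellipse hole around i
```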
By limiting \(N\), we are looping through them in a growing hypercube in the coefficient space. The corners of this cube are the corners jutting in toward \(i\), and as the cube expands in the coefficient space, this hole will get filled in. If instead of looping through coefficient space in a growing cube, we choose a different ordering, the limiting shape changes. This is shown in the center image of Figure 16. On the right, however, we can think of approximating the set of all roots of polynomials with coefficients \(\pm 1\) by allowing \(D\) to tend to infinity. In this case, the size and shape of the void remain essentially fixed, no matter how large \(D\) is taken. So this hole 'really exists' in the picture! The shapes one sees at the boundaries of the limiting set of roots are explained in terms of fractal geometry and certain symmetries of this set.13

Footnote 13: These features are beautifully described by John Baez on his personal website: [https://math.ucr.edu/home/baez/roots/](https://math.ucr.edu/home/baez/roots/).

Figure 16: Roots of polynomials around \(i\). Each dot is sized by \(\frac{1}{\sqrt[d]{\Delta}}\), where \(\Delta\) is the discriminant of the polynomial and \(d\) is the degree. All images show the region of the complex plane around \(i\), with real values between \(-.3\) and \(.3\) and imaginary values between \(.7\) and \(1.3\). (a) shows all cubic roots with coefficients between \(-10\) and \(10\). (b) shows all cubic roots where the sum of the absolute value of the coefficients is less than or equal to \(26\). (c) shows all roots of polynomials up to degree \(13\), with coefficients \(\pm 1\) [1].

As another example of the challenges discussed above, the virtual reality versions of Thurston's geometries of [10] are a profound way to experience these spaces, but can feel overwhelming and nearly psychedelic, as our brains struggle to make sense of what we are seeing. As an alternative, for several of the geometries, it is possible to place the geodesics of the geometry into familiar euclidean space as curves (see Figure 17). The interplay between these two methods of illustration can be much more enlightening than either one alone. The mathematical arguments that are developed to explain how one view can predict the other can end up as the basis of a mathematical proof. Conjectures and mathematical arguments about the space can quickly be evaluated by predicting their effect on these illustrations.

Figure 17: On the left: extrinsic view of some geodesics in Nil identified as a set with \(\mathbb{R}^{3}\). On the right: in-space view of a sphere in Nil seen from a long distance, where light is assumed to travel along geodesics. Images by Coulon, Matsumoto, Segerman and Trettel.

## Looking forward

Illustrations have been used both historically and in recent state-of-the-art research projects to expand the boundaries of knowledge in pure mathematics. Other fields of research, such as statistics and microbiology, have systematized visualization, and studied it in its own right. However, as our gallery of examples shows, the quality of illustrations in pure mathematics varies, and there is no common framework to create, discuss, or evaluate them. To further the possibilities that illustrations provide, there needs to be a dedicated community to tackle the next important problems. These include, among others:

1. How to identify illustrations that have rich potential to provide insight?
2. How to identify (and mitigate) the ways that illustrations can mislead and distract?
3. How to measure the fidelity of an illustration; are perceived patterns a result of its construction or the underlying mathematics?
4. How can we harness the processing power and pattern-recognition capabilities of the human visual system?
5. How can we empower a next generation of mathematical illustrators to create and leverage sophisticated illustrations?
6. And how do we increase professional recognition of the illustration of mathematics?

Exploring these questions will lay the foundation of a discipline built around the illustration of mathematics, providing powerful tools for the advancement of mathematical research.
2307.03159
On the Linear Stability of the Lamb-Chaplygin Dipole
The Lamb-Chaplygin dipole (Lamb 1895, 1906; Chaplygin 1903) is one of the few closed-form relative equilibrium solutions of the 2D Euler equation characterized by a continuous vorticity distribution. We consider the problem of its linear stability with respect to 2D circulation-preserving perturbations. It is demonstrated that this flow is linearly unstable, although the nature of this instability is subtle and cannot be fully understood without accounting for infinite-dimensional aspects of the problem. To elucidate this, we first derive a convenient form of the linearized Euler equation defined within the vortex core which accounts for the potential flow outside the core while making it possible to track deformations of the vortical region. The linear stability of the flow is then determined by the spectrum of the corresponding operator. Asymptotic analysis of the associated eigenvalue problem shows the existence of approximate eigenfunctions in the form of short-wavelength oscillations localized near the boundary of the vortex and these findings are confirmed by the numerical solution of the eigenvalue problem. However, the time-integration of the 2D Euler system reveals the existence of only one linearly unstable eigenmode and since the corresponding eigenvalue is embedded in the essential spectrum of the operator, this unstable eigenmode is also shown to be a distribution characterized by short-wavelength oscillations rather than a smooth function. These findings are consistent with the general results known about the stability of equilibria in 2D Euler flows and have been verified by performing computations with different numerical resolutions and arithmetic precisions.
Bartosz Protas
2023-07-06T17:38:59Z
http://arxiv.org/abs/2307.03159v2
# On the Linear Stability of the Lamb-Chaplygin Dipole

###### Abstract

The Lamb-Chaplygin dipole (Lamb, 1895, 1906; Chaplygin, 1903) is one of the few closed-form relative equilibrium solutions of the 2D Euler equation characterized by a continuous vorticity distribution. We consider the problem of its linear stability with respect to 2D circulation-preserving perturbations. It is demonstrated that this flow is linearly unstable, although the nature of this instability is subtle and cannot be fully understood without accounting for infinite-dimensional aspects of the problem. To elucidate this, we first derive a convenient form of the linearized Euler equation defined within the vortex core which accounts for the potential flow outside the core while making it possible to track deformations of the vortical region. The linear stability of the flow is then determined by the spectrum of the corresponding operator. Asymptotic analysis of the associated eigenvalue problem shows the existence of approximate eigenfunctions in the form of short-wavelength oscillations localized near the boundary of the vortex and these findings are confirmed by the numerical solution of the eigenvalue problem. However, the time-integration of the 2D Euler system reveals the existence of only one linearly unstable eigenmode and since the corresponding eigenvalue is embedded in the essential spectrum of the operator, this unstable eigenmode is also shown to be a distribution characterized by short-wavelength oscillations rather than a smooth function. These findings are consistent with the general results known about the stability of equilibria in 2D Euler flows and have been verified by performing computations with different numerical resolutions and arithmetic precisions.

Keywords: Vortex instability, Computational methods

## 1 Introduction

The Lamb-Chaplygin dipole is a relative equilibrium solution of the two-dimensional (2D) Euler equations in an unbounded domain \(\mathbb{R}^{2}\) that was independently obtained by Lamb (1895, 1906) and Chaplygin (1903); the history of this problem was surveyed by Meleshko & van Heijst (1994). The importance of the Lamb-Chaplygin dipole stems from the fact that this is a simple exact solution with a continuous vorticity distribution which represents a steadily translating vortex pair (Leweke _et al._, 2016). Such objects are commonly used as models in geophysical fluid dynamics where they are referred to as "modons" (Flierl, 1987). Interestingly, despite the popularity of this model, the stability properties of the Lamb-Chaplygin dipole are still not well understood and the goal of the present investigation is to shed some new light on this question.

We consider an unbounded flow domain \(\Omega:=\mathbb{R}^{2}\) ("\(:=\)" means "equal to by definition"). Flows of incompressible inviscid fluids are described by the 2D Euler equation which can be written in the vorticity form as

\[\frac{\partial\omega}{\partial t}+\left(\mathbf{u}\cdot\boldsymbol{\nabla}\right)\omega=0\qquad\text{in }\Omega, \tag{1}\]

where \(t\in(0,T]\) is the time with \(T>0\) denoting the length of the interval considered, \(\omega~{}:~{}(0,T]\times\Omega\to\mathbb{R}\) is the vorticity component perpendicular to the plane of motion and \(\mathbf{u}=[u_{1},u_{2}]^{T}~{}:~{}(0,T]\times\Omega\to\mathbb{R}^{2}\) is a divergence-free velocity field (i.e., \(\boldsymbol{\nabla}\cdot\mathbf{u}=0\)). The space coordinate will be denoted \(\mathbf{x}=[x_{1},x_{2}]^{T}\).
Introducing the streamfunction \(\psi~{}:~{}(0,T]\times\Omega\to\mathbb{R}\), the relation between the velocity and vorticity can be expressed as

\[\mathbf{u}=\boldsymbol{\nabla}^{\perp}\psi,\qquad\text{where}\quad\boldsymbol{\nabla}^{\perp}:=\left[\frac{\partial}{\partial x_{2}},-\frac{\partial}{\partial x_{1}}\right]^{T}\quad\text{and}\quad\Delta\psi=-\omega. \tag{2}\]

System (1)-(2) needs to be complemented with suitable initial and boundary conditions, which will be specified below. In the frame of reference translating with the velocity \(-U\mathbf{e}_{1}\), where \(U>0\) and \(\mathbf{e}_{i}\), \(i=1,2\), is the unit vector associated with the \(i\)th axis of the Cartesian coordinate system, equilibrium solutions of system (1)-(2) satisfy the boundary-value problem (Wu _et al._, 2006)

\[\Delta\psi =F(\psi), \text{in }\Omega, \tag{3a}\] \[\psi \to\psi_{\infty}:=Ux_{2}, \text{for }|\mathbf{x}|\to\infty, \tag{3b}\]

where the "vorticity function" \(F~{}:~{}\mathbb{R}\to\mathbb{R}\) need not be continuous. Clearly, the form of the equilibrium solution is determined by the properties of the function \(F(\psi)\). Assuming without loss of generality that it has unit radius (\(a=1\)), the Lamb-Chaplygin dipole is obtained by taking

\[F(\psi)=\begin{cases}-b^{2}(\psi-\eta),&\psi>\eta\\ 0,&\text{otherwise}\end{cases}, \tag{4}\]

where \(b\approx 3.8317059702075123156\) is the first root of the Bessel function of the first kind of order one, \(J_{1}(b)=0\), and \(\eta\in(-\infty,\infty)\) is a parameter characterizing the asymmetry of the dipole (in the symmetric case \(\eta=0\)). The solution of (3)-(4) then has the form of a circular vortex core of unit radius embedded in a potential flow. The vorticity and streamfunction are given by the following expressions stated in the cylindrical coordinate system \((r,\theta)\) (hereafter we will adopt the convention that the subscript "0" refers to an equilibrium solution)

* inside the vortex core (\(0<r\leq 1,\ \ 0<\theta\leq 2\pi\)): \[\omega_{0}(r,\theta) =\frac{2Ub}{J_{0}(b)}\left[J_{1}(br)\sin\theta-\frac{\eta b}{2U}J_{0}(br)\right],\] (5a) \[\psi_{0}(r,\theta) =\frac{2U}{bJ_{0}(b)}J_{1}(br)\sin\theta+\eta\left[1-\frac{J_{0}(br)}{J_{0}(b)}\right],\] (5b)
* outside the vortex core (\(r>1,\ \ 0<\theta\leq 2\pi\)): \[\omega_{0}(r,\theta) =0,\] (6a) \[\psi_{0}(r,\theta) =U\left(1-\frac{1}{r}\right)\sin\theta.\] (6b)

The vortical core region will be denoted \(A_{0}:=\{\mathbf{x}\in\mathbb{R}^{2}\ :\ \|\mathbf{x}\|\leq 1\}\) and \(\partial A_{0}\) will denote its boundary. The streamline pattern inside \(A_{0}\) in the symmetric (\(\eta=0\)) and asymmetric (\(\eta>0\)) case is shown in figures 1a and 1b, respectively. Various properties of the Lamb-Chaplygin dipole are discussed by Meleshko & van Heijst (1994). In particular, it is shown that regardless of the value of \(\eta\) the total circulation of the dipole vanishes, i.e., \(\Gamma_{0}:=\int_{A_{0}}\omega_{0}\,dA=0\). We note that in the limit \(\eta\to\pm\infty\) the dipole approaches a state consisting of a monopolar vortex with a vortex sheet of opposite sign coinciding with the part of the boundary \(\partial A_{0}\) above or below the flow centerline, respectively, for positive and negative \(\eta\). Generalizations of the Lamb-Chaplygin dipole corresponding to differentiable vorticity functions \(F(\psi)\) were obtained numerically by Albrecht _et al._ (2011), whereas multipolar generalizations were considered by Viúdez (2019\(b\),_a_).
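Expressions (5)-(6) are straightforward to evaluate numerically. The following Python sketch (ours, for illustration; the values of \(U\) and \(\eta\) are arbitrary choices) uses the Bessel functions from scipy.special and checks that the streamfunction vanishes on the vortex boundary, where the inner and outer representations match.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

b = jn_zeros(1, 1)[0]     # first root of J1; b = 3.8317059702...
U, eta = 1.0, 0.0         # illustrative values; eta = 0 is the symmetric case

def vorticity(r, theta):
    """Vorticity of the Lamb-Chaplygin dipole, cf. (5a) inside the core
    and (6a) outside."""
    w = 2 * U * b / j0(b) * (j1(b * r) * np.sin(theta)
                             - eta * b / (2 * U) * j0(b * r))
    return np.where(r <= 1.0, w, 0.0)

def streamfunction(r, theta):
    """Streamfunction of the dipole, cf. (5b) and (6b)."""
    inner = (2 * U / (b * j0(b)) * j1(b * r) * np.sin(theta)
             + eta * (1.0 - j0(b * r) / j0(b)))
    outer = U * (1.0 - 1.0 / np.maximum(r, 1e-12)) * np.sin(theta)
    return np.where(r <= 1.0, inner, outer)

# sanity check: psi_0 vanishes on the vortex boundary r = 1 since J1(b) = 0,
# so the two representations agree there
theta = np.linspace(0.0, 2.0 * np.pi, 9)
assert np.allclose(streamfunction(np.ones_like(theta), theta), 0.0)
```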
Most investigations of the stability of the Lamb-Chaplygin dipole were carried out in the context of viscous flows governed by the Navier-Stokes system. While relations (5)-(6) do not represent an exact steady-state solution of the Navier-Stokes system, this approximate approach was justified by the assumption that viscous effects occur on time scales much longer than the time scales characterizing the growth of perturbations. A first study of this type was conducted by Billant _et al._ (1999) who considered perturbations with dependence on the axial wavenumber and found several unstable eigenmodes together with their growth rates by directly integrating the three-dimensional (3D) linearized Navier-Stokes equations in time. Additional unstable eigenmodes were found in the 2D limit corresponding to small axial wavenumbers by Brion _et al._ (2014). The transient growth due to the non-normality of the linearized Navier-Stokes operator was investigated in the related case of a vortex pair consisting of two Lamb-Oseen vortices by Donnadieu _et al._ (2009) and Jugier _et al._ (2020), whereas Sipp & Jacquin (2003) studied Widnall-type instabilities of such vortex pairs. The effect of stratification on the evolution of a perturbed Lamb-Chaplygin dipole in 3D was considered by Waite & Smolarkiewicz (2008); Bovard & Waite (2016). The history of the studies concerning the stability of vortices in ideal fluids was recently surveyed by Gallay (2019). The only stability analysis of the Lamb-Chaplygin dipole in the inviscid setting we are aware of is due to Luzzatto-Fegiz & Williamson (2012); Luzzatto-Fegiz (2014) who employed methods based on imperfect velocity-impulse diagrams applied to an approximation of the dipole in terms of a piecewise-constant vorticity distribution and concluded that this configuration is stable. Finally, there is a recent mathematically rigorous result by Abe & Choi (2022) who established orbital stability of the Lamb-Chaplygin dipole (orbital stability implies that flows corresponding to "small" perturbations of the dipole remain "close" in a certain norm to the translating dipole; hence, this is a rather weak notion of stability).

Figure 1: Streamline pattern inside the vortex core \(A_{0}\) of (a) a symmetric (\(\eta=0\)) and (b) asymmetric (\(\eta=1/4\)) Lamb-Chaplygin dipole. Outside the vortex core the flow is potential. The thick blue line represents the vortex boundary \(\partial A_{0}\) whereas the red symbols mark the hyperbolic stagnation points \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\).

As noted by several authors (Meleshko & van Heijst, 1994; Waite & Smolarkiewicz, 2008; Luzzatto-Fegiz & Williamson, 2012; Abe & Choi, 2022), the stability properties of the Lamb-Chaplygin dipole are still to be fully understood despite the fact that it was introduced more than a century ago. The purpose of this work is to shed some new light on this question. We demonstrate that the Lamb-Chaplygin dipole is in fact linearly unstable, but the nature of this instability is quite subtle and cannot be understood without referring to the infinite-dimensional nature of the linearized governing equations. More specifically, both the asymptotic and numerical solution of an eigenvalue problem for the 2D linearized Euler operator suitably localized to the vortex core \(A_{0}\) confirm the existence of an essential spectrum with the corresponding approximate eigenfunctions in the form of short-wavelength oscillations localized near the vortex boundary \(\partial A_{0}\).
However, the time-integration of the 2D Euler system reveals the presence of a single exponentially growing eigenmode and since the corresponding eigenvalue is embedded in the essential spectrum of the operator, this unstable eigenmode is also found not to be a smooth function and exhibits short-wavelength oscillations. These findings are consistent with the general mathematical results known about the stability of equilibria in 2D Euler flows (Shvydkoy & Latushkin, 2003; Shvydkoy & Friedlander, 2005) and have been verified by performing computations with different numerical resolutions and, in the case of the eigenvalue problem, with different arithmetic precisions.

The structure of the paper is as follows: in the next section we review some basic facts about the spectra of the 2D linearized Euler equation and transform this system to a form in which its spectrum can be conveniently studied with asymptotic methods and numerically; a number of interesting properties of the resulting eigenvalue problem are also discussed; an approximate asymptotic solution of this eigenvalue problem is presented in § 3; the numerical approaches used to solve the eigenvalue problem and the initial-value problem (1)-(2) are introduced in § 4, whereas the obtained computational results are presented in § 5 and § 6, respectively; discussion and final conclusions are deferred to § 7; some more technical material is collected in three appendices.

## 2 2D Linearized Euler Equations

The Euler system (1)-(2) formulated in the moving frame of reference and linearized around an equilibrium solution \(\{\psi_{0},\omega_{0}\}\) has the following form, where \(\psi^{\prime},\omega^{\prime}\ :\ (0,T]\times\Omega\to\mathbb{R}\) are the perturbation variables (also defined in the moving frame of reference)

\[\frac{\partial\omega^{\prime}}{\partial t} =-\left(\boldsymbol{\nabla}^{\perp}\psi_{0}-U\mathbf{e}_{1}\right)\cdot\boldsymbol{\nabla}\omega^{\prime}-\boldsymbol{\nabla}^{\perp}\psi^{\prime}\cdot\boldsymbol{\nabla}\omega_{0}\] \[=-\left(\boldsymbol{\nabla}^{\perp}\psi_{0}-U\mathbf{e}_{1}\right)\cdot\boldsymbol{\nabla}\omega^{\prime}+\boldsymbol{\nabla}\omega_{0}\cdot\left(\boldsymbol{\nabla}^{\perp}\Delta^{-1}\right)\omega^{\prime}\] \[=:\mathcal{L}\omega^{\prime}, \text{in }\Omega, \tag{7a}\] \[\Delta\psi^{\prime} =-\omega^{\prime}, \text{in }\Omega,\] (7b) \[\psi^{\prime} \to 0, \text{for }|\mathbf{x}|\to\infty,\] (7c) \[\omega^{\prime}(0) =w^{\prime}, \text{in }\Omega, \tag{7d}\]

in which \(\Delta^{-1}\) is the inverse Laplacian corresponding to the far-field boundary condition (7c) and \(w^{\prime}\) is an appropriate initial condition assumed to have zero circulation, i.e., \(\int_{\Omega}w^{\prime}\,dA=0\). Unlike for problems in finite dimensions where, by virtue of the Hartman-Grobman theorem, instability of the linearized system implies the instability of the original nonlinear system, for infinite-dimensional problems this need not, in general, be the case. However, for 2D Euler flows it was proved by Vishik & Friedlander (2003); Lin (2004) that the presence of an unstable eigenvalue in the spectrum of the linearized operator does indeed imply the instability of the original nonlinear problem. Arnold's theory (Wu _et al._, 2006) predicts that equilibria satisfying system (3) are nonlinearly stable if \(F^{\prime}(\psi)\geq 0\), which however is not the case for the Lamb-Chaplygin dipole, since using (4) we have \(F^{\prime}(\psi_{0})=-b^{2}<0\) for \(\psi_{0}\geq\eta\).
Thus, Arnold's criterion is inapplicable in this case (Meleshko & van Heijst (1994) refer to this condition, but there seems to be some confusion regarding signs in their analysis, leading the authors to an incorrect conclusion about the stability of the dipole).

### Spectra of Linear Operators

When studying spectra of linear operators, there is a fundamental difference between the finite- and infinite-dimensional cases. To elucidate this difference and its consequences, we briefly consider an abstract evolution problem \(du/dt={\cal A}u\) on a Banach space \({\cal X}\) (in general, infinite-dimensional) with the state \(u(t)\in{\cal X}\) and a linear operator \({\cal A}\ :\ {\cal X}\to{\cal X}\). The solution of this problem can be formally written as \(u(t)=e^{{\cal A}t}\,u_{0}\), where \(u_{0}\in{\cal X}\) is the initial condition and \(e^{{\cal A}t}\) the semigroup generated by \({\cal A}\) (Curtain & Zwart, 2013). While in finite dimensions linear operators can be represented as matrices which can only have point spectrum \(\Pi_{0}({\cal A})\), in infinite dimensions the situation is more nuanced since the spectrum \(\Lambda({\cal A})\) of the linear operator \({\cal A}\) may in general consist of two parts, namely, the _approximate point spectrum_ \(\Pi({\cal A})\) (which is a set of numbers \(\lambda\in{\mathbb{C}}\) such that \(({\cal A}-\lambda)\) is not bounded from below) and the _compression spectrum_ \(\Xi({\cal A})\) (which is a set of numbers \(\lambda\in{\mathbb{C}}\) such that the closure of the range of \(({\cal A}-\lambda)\) does not coincide with \({\cal X}\)). We thus have \(\Lambda({\cal A})=\Pi({\cal A})\cup\Xi({\cal A})\) and the two types of spectra may overlap, i.e., \(\Pi({\cal A})\cap\Xi({\cal A})\neq\emptyset\) (Halmos, 1982). A number \(\lambda\in{\mathbb{C}}\) belongs to the approximate point spectrum \(\Pi({\cal A})\) if and only if there exists a sequence of unit vectors \(\{f_{n}\}\), referred to as approximate eigenvectors, such that \(\|({\cal A}-\lambda)f_{n}\|_{\cal X}\to 0\) as \(n\to\infty\). If for some \(\lambda\in\Pi({\cal A})\) there exists a unit element \(f\) such that \({\cal A}f=\lambda f\), then \(\lambda\) and \(f\) are an eigenvalue and an eigenvector of \({\cal A}\). The set of all eigenvalues \(\lambda\) forms the point spectrum \(\Pi_{0}({\cal A})\) which is contained in the approximate point spectrum, \(\Pi_{0}({\cal A})\subset\Pi({\cal A})\). If \(\lambda\in\Pi({\cal A})\) does not belong to the point spectrum, then the sequence \(\{f_{n}\}\) is weakly null convergent and consists of functions characterized by increasingly rapid oscillations as \(n\) becomes large. The set of such numbers \(\lambda\in{\mathbb{C}}\) is referred to as the _essential_ spectrum \(\Pi_{\rm ess}({\cal A}):=\Pi({\cal A})\backslash\Pi_{0}({\cal A})\), a term reflecting the fact that this part of the spectrum is normally independent of boundary conditions in eigenvalue problems involving differential equations. It is, however, possible for "true" eigenvalues to be embedded in the essential spectrum. When studying the semigroup \(e^{{\cal A}t}\) one is usually interested in understanding the relation between its growth abscissa \(\gamma({\cal A}):=\lim_{t\to\infty}t^{-1}\ln\|e^{{\cal A}t}\|_{\cal X}\) and the spectrum \(\Lambda({\cal A})\) of \({\cal A}\).
While in finite dimensions \(\gamma({\cal A})\) is determined by the eigenvalues of \({\cal A}\) with the largest real part, in infinite dimensions the situation is more nuanced since there are examples in which \(\sup_{z\in\Lambda({\cal A})}\Re(z)<\gamma({\cal A})\), e.g., Zabczyk's problem (Zabczyk, 1975) also discussed by Trefethen (1997); some problems in hydrodynamic stability where such behavior was identified are analyzed by Renardy (1994). In regard to the 2D linearized Euler operator \(\mathcal{L}\), cf. (7a), it was shown by Shvydkoy & Latushkin (2003) that its essential spectrum is a vertical band in the complex plane symmetric with respect to the imaginary axis. Its width is proportional to the largest Lyapunov exponent \(\lambda_{\max}\) in the flow field and to the index \(m\in\mathbb{Z}\) of the Sobolev space \(H^{m}(\Omega)\) in which the evolution problem is formulated (i.e., \(\mathcal{X}=H^{m}(\Omega)\) above). The norm in the Sobolev space \(H^{m}(\Omega)\) is defined as \(\|u\|_{H^{m}}:=\left[\int_{\Omega}\sum_{|\alpha|\leq m}\left(\frac{\partial^{|\alpha|}u}{\partial^{\alpha_{1}}x_{1}\,\partial^{\alpha_{2}}x_{2}}\right)^{2}\,dA\right]^{1/2}\), where \(\alpha_{1},\alpha_{2}\in\mathbb{Z}\) with \(|\alpha|:=\alpha_{1}+\alpha_{2}\) (Adams & Fournier, 2005). More specifically, we have (Shvydkoy & Friedlander, 2005)

\[\Pi_{\mathrm{ess}}(\mathcal{L})=\left\{z\in\mathbb{C},\ -|m|\lambda_{\max}\leq\Re(z)\leq|m|\lambda_{\max}\right\}. \tag{8}\]

In 2D flows Lyapunov exponents are determined by the properties of the velocity gradient \(\boldsymbol{\nabla}\mathbf{u}(\mathbf{x})\) at hyperbolic stagnation points \(\mathbf{x}_{0}\). More precisely, \(\lambda_{\max}\) is given by the largest eigenvalue of \(\boldsymbol{\nabla}\mathbf{u}(\mathbf{x})\) computed over all stagnation points. As regards the Lamb-Chaplygin dipole, it is evident from figures 1a and 1b that in both the symmetric and asymmetric case it has two stagnation points \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\) located at the fore and aft extremities of the vortex core. Inspection of the velocity field \(\boldsymbol{\nabla}^{\perp}\psi_{0}\) defined in (5b) shows that the largest eigenvalues of \(\boldsymbol{\nabla}\mathbf{u}(\mathbf{x})\) evaluated at these stagnation points, and hence the Lyapunov exponents, are \(\lambda_{\max}=2\) regardless of the value of \(\eta\). While characterization of the essential spectrum of the 2D linearized Euler operator \(\mathcal{L}\) is rather complete, the existence of a point spectrum remains in general an open problem. Results concerning the point spectrum are available in a few cases only, usually for shear flows where the problem can be reduced to one dimension (Drazin & Reid, 1981; Chandrasekhar, 1961) or the cellular cat's eyes flows (Friedlander _et al._, 2000). In these examples unstable eigenvalues are _outside_ the essential spectrum (if one exists). On the other hand, it was shown by Lin (2004) that when an unstable eigenvalue is embedded in the essential spectrum, then the corresponding eigenfunctions need not be smooth. One of the goals of the present study is to consider this issue for the Lamb-Chaplygin dipole.

### Linearization Around the Lamb-Chaplygin Dipole

The linear system (7) is defined on the entire plane \(\mathbb{R}^{2}\), however, in the Lamb-Chaplygin dipole the vorticity \(\omega_{0}\) is supported within the vortex core \(A_{0}\) only, cf. (6a).
This will allow us to simplify system (7) so that it will involve relations defined only within \(A_{0}\), which will facilitate both the asymptotic analysis and numerical solution of the corresponding eigenvalue problem, cf. § 3 and § 5. If the initial data \(w^{\prime}\) in (7d) is also supported in \(A_{0}\), then the initial-value problem (7) can be regarded as a free-boundary problem describing the evolution of the boundary \(\partial A(t)\) of the vortex core (we have \(A(0)=A_{0}\) and \(\partial A(0)=\partial A_{0}\)). However, as explained below, the evolution of this boundary can be deduced from the evolution of the perturbation streamfunction \(\psi^{\prime}(t,{\bf x})\), hence need not be tracked independently. Thus, the present problem is different from, e.g., the vortex-patch problem where the vorticity distribution is fixed (piecewise constant in space) and in the stability analysis the boundary is explicitly perturbed (Elcrat & Protas, 2013). Denoting \(\psi^{\prime}_{1}\;:\;(0,T]\times A_{0}\to\mathbb{R}\) and \(\psi^{\prime}_{2}\;:\;(0,T]\times\mathbb{R}^{2}\backslash\overline{A}_{0}\to\mathbb{R}\) the perturbation streamfunction in the vortex core and in its complement, system (7) can be recast as

\[\frac{\partial\omega^{\prime}}{\partial t} =-\left(\mathbf{\nabla}^{\perp}\psi_{0}-U{\bf e}_{1}\right)\cdot\mathbf{\nabla}\omega^{\prime}-\mathbf{\nabla}^{\perp}\psi^{\prime}_{1}\cdot\mathbf{\nabla}\omega_{0}, \text{in }A_{0}, \tag{9a}\] \[\Delta\psi^{\prime}_{1} =-\omega^{\prime}, \text{in }A_{0},\] (9b) \[\Delta\psi^{\prime}_{2} =0, \text{in }\mathbb{R}^{2}\backslash\overline{A}_{0},\] (9c) \[\psi^{\prime}_{1} =\psi^{\prime}_{2}=f^{\prime}, \text{on }\partial A_{0},\] (9d) \[\frac{\partial\psi^{\prime}_{1}}{\partial n} =\frac{\partial\psi^{\prime}_{2}}{\partial n}, \text{on }\partial A_{0},\] (9e) \[\psi^{\prime}_{2} \to 0, \text{for }|{\bf x}|\to\infty,\] (9f) \[\omega^{\prime}(0) =w^{\prime}, \text{in }A_{0}, \tag{9g}\]

where \({\bf n}\) is the unit vector normal to the boundary \(\partial A_{0}\) pointing outside and conditions (9d)-(9e) represent the continuity of the normal and tangential perturbation velocity components across the boundary \(\partial A_{0}\) with \(f^{\prime}\;:\;\partial A_{0}\to\mathbb{R}\) denoting the unknown value of the perturbation streamfunction at that boundary. The velocity normal to the vortex boundary \(\partial A(t)\) is \(u_{n}:={\bf u}\cdot{\bf n}=\partial\psi_{1}/\partial s=\partial\psi_{2}/\partial s\), where \(s\) is the arc-length coordinate along \(\partial A(t)\), cf. (9d). While this quantity identically vanishes in the equilibrium state (5)-(6), cf. (17), in general it will be nonzero resulting in a deformation of the boundary \(\partial A(t)\). This deformation can be deduced from the solution of system (9) as follows. Given a point \({\bf z}\in\partial A(t)\), the deformation of the boundary is described by \(d{\bf z}/dt={\bf n}\,u_{n}|_{\partial A(t)}\). Integrating this expression with respect to time yields

\[{\bf z}(\tau)={\bf z}(0)+\int_{0}^{\tau}{\bf n}\,u_{n}\big{|}_{\partial A_{\tau}}\,d\tau^{\prime}={\bf z}(0)+\tau{\bf n}\,u_{n}|_{\partial A_{0}}+\mathcal{O}(\tau^{2}), \tag{10}\]

where \({\bf z}(0)\in\partial A_{0}\) and \(0<\tau\ll 1\) is the time over which the deformation is considered. Thus, the normal deformation of the boundary can be defined as \(\rho(\tau):={\bf n}\cdot[{\bf z}(\tau)-{\bf z}(0)]\approx u_{n}|_{\partial A_{0}}\tau\).
We also note that at the leading order the area of the vortex core \(A(t)\) is preserved by the considered perturbations

\[\oint_{\partial A_{0}}\rho(\tau)\,ds=\tau\oint_{\partial A_{0}}\frac{\partial\psi}{\partial s}\,ds=\tau\oint_{\partial A_{0}}\,d\psi=0\quad\Longrightarrow\quad|A(t)|\approx|A_{0}|. \tag{11}\]

We notice that in the exterior domain \(\mathbb{R}^{2}\backslash\overline{A}_{0}\) the problem is governed by Laplace's equation (9c) subject to boundary conditions (9d)-(9f). Therefore, this subproblem can be eliminated by introducing the corresponding Dirichlet-to-Neumann (D2N) map \(M\ :\ \psi_{2}^{\prime}\big{|}_{\partial A_{0}}\to\left.\frac{\partial\psi_{2}^{\prime}}{\partial n}\right|_{\partial A_{0}}\) which is constructed in an explicit form in Appendix A. Thus, equation (9c) with boundary conditions (9d)-(9f) can be replaced with a single relation \(\frac{\partial\psi_{1}^{\prime}}{\partial n}=M\psi_{1}^{\prime}\) holding on \(\partial A_{0}\) such that the resulting system is defined in the vortex core \(A_{0}\) and on its boundary only. We therefore conclude that while the vortex boundary \(\partial A(t)\) may deform in the course of the linear evolution, this deformation can be described based solely on quantities defined within \(A_{0}\) and on \(\partial A_{0}\) using relation (10). In particular, the transport of vorticity out of the vortex core \(A_{0}\) into the potential flow is described by the last term on the right-hand side (RHS) in (9a) evaluated on the boundary \(\partial A_{0}\). Noting that the base state satisfies the equation \(\Delta\psi_{0}=-b^{2}(\psi_{0}-\eta)\) in \(A_{0}\), cf. (3)-(4), and using the identity \((\mathbf{\nabla}^{\perp}\psi_{1}^{\prime})\cdot\mathbf{\nabla}\psi_{0}=-(\mathbf{\nabla}\psi_{1}^{\prime})\cdot\mathbf{\nabla}^{\perp}\psi_{0}\), the vorticity equation (9a) can be transformed to the following simpler form

\[\frac{\partial\Delta\psi_{1}^{\prime}}{\partial t}=-\left(\mathbf{\nabla}^{\perp}\psi_{0}\right)\cdot\mathbf{\nabla}\left(\Delta\psi_{1}^{\prime}+b^{2}\psi_{1}^{\prime}\right)\qquad\text{in }A_{0}, \tag{12}\]

where we also used (9b) to eliminate \(\omega^{\prime}\) in favor of \(\psi_{1}^{\prime}\). Supposing the existence of an eigenvalue \(\lambda\in\mathbb{C}\) and an eigenfunction \(\widetilde{\psi}\ :\ A_{0}\to\mathbb{C}\), we make the following ansatz for the perturbation streamfunction \(\psi_{1}^{\prime}(t,\mathbf{x})=\widetilde{\psi}(\mathbf{x})\,e^{\lambda t}\) which leads to the eigenvalue problem

\[\lambda\widetilde{\psi} =\Delta_{M}^{-1}\left[\left(\mathbf{\nabla}^{\perp}\psi_{0}\right)\cdot\mathbf{\nabla}\left(\Delta\widetilde{\psi}+b^{2}\widetilde{\psi}\right)\right] \text{in }A_{0}, \tag{13a}\] \[\frac{\partial\Delta\widetilde{\psi}}{\partial r} =0, \text{at }r=0, \tag{13b}\]

where \(\Delta_{M}^{-1}\) is the inverse Laplacian subject to the boundary condition \(\partial\widetilde{\psi}/\partial n-M\widetilde{\psi}=0\) imposed on \(\partial A_{0}\) and the additional boundary condition (13b) ensures the perturbation vorticity is differentiable at the origin (such a condition is necessary since the differential operator on the RHS in (13a) is of order three). Depending on whether or not the different differential operators appearing in it are inverted, eigenvalue problem (13) can be rewritten in a number of different, yet mathematically equivalent, forms.
However, all these alternative formulations have the form of generalized eigenvalue problems and are therefore more difficult to handle in numerical computations. Thus, formulation (13) is preferred and we will focus on it hereafter. We note that the proposed formulation ensures that the eigenfunctions \(\widetilde{\psi}\) have zero circulation, as required

\[\Gamma^{\prime}:=\int_{A_{0}}\omega^{\prime}\,dA=-\int_{A_{0}}\Delta\psi_{1}^{\prime}\,dA=-\oint_{\partial A_{0}}\frac{\partial\psi_{1}^{\prime}}{\partial n}\,ds=-\oint_{\partial A_{0}}\frac{\partial\psi_{2}^{\prime}}{\partial n}\,ds=-\int_{\mathbb{R}^{2}\setminus\overline{A}_{0}}\Delta\psi_{2}^{\prime}\,dA=0, \tag{14}\]

where we used the divergence theorem, equations (9b)-(9c) and the boundary conditions (9e)-(9f). Since it will be needed for the numerical discretization described in § 5, we now rewrite the eigenvalue problem (13) explicitly in the polar coordinate system

\[\lambda\widetilde{\psi} =\Delta_{M}^{-1}\left[\left(u_{0}^{r}\frac{\partial}{\partial r}+\frac{u_{0}^{\theta}}{r}\frac{\partial}{\partial\theta}\right)\left(\Delta+b^{2}\right)\widetilde{\psi}\right]=:\mathcal{H}\widetilde{\psi} \quad\text{for }0<r\leq 1,\ 0\leq\theta\leq 2\pi, \tag{15a}\] \[\frac{\partial\Delta\widetilde{\psi}}{\partial r} =0, \text{at }r=0, \tag{15b}\]

where \(\Delta=\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}}{\partial\theta^{2}}\) and the velocity components obtained as \(\left[u_{0}^{r},u_{0}^{\theta}\right]:=\boldsymbol{\nabla}^{\perp}\psi_{0}=\left[\frac{1}{r}\frac{\partial}{\partial\theta},-\frac{\partial}{\partial r}\right]\psi_{0}\) are

\[u_{0}^{r} =\frac{2UJ_{1}(br)\cos\theta}{bJ_{0}(b)r}, \tag{16a}\] \[u_{0}^{\theta} =-\frac{2U\left[J_{0}(br)-\frac{J_{1}(br)}{br}\right]\sin\theta+\eta bJ_{1}(br)}{J_{0}(b)}. \tag{16b}\]

They have the following behavior on the boundary \(\partial A_{0}\)

\[u_{0}^{r}(1,\theta)=0,\qquad u_{0}^{\theta}(1,\theta)=2U\sin\theta. \tag{17}\]

Since \(\|\psi\|_{L^{2}}\sim\|\Delta\psi\|_{H^{-2}}=\|\omega\|_{H^{-2}}\), where "\(\sim\)" means the norms on the left and on the right are equivalent (in the precise sense of norm equivalence), the essential spectrum (8) of the operator \(\mathcal{H}\) will have \(m=-2\), so that \(\Pi_{\text{ess}}(\mathcal{H})\) is a vertical band in the complex plane with \(|\Re(z)|\leq 4\), \(z\in\mathbb{C}\) (since \(\lambda_{\text{max}}=2\)). Operator \(\mathcal{H}\), cf. (15a), has a non-trivial null space \(\text{Ker}(\mathcal{H})\). To see this, we consider the "outer" subproblem

\[\mathcal{K}\phi :=\left(u_{0}^{r}\frac{\partial}{\partial r}+\frac{u_{0}^{\theta}}{r}\frac{\partial}{\partial\theta}\right)\phi=0 \quad\text{for}\quad 0<r\leq 1,\quad 0\leq\theta\leq 2\pi, \tag{18a}\] \[\frac{\partial\phi}{\partial r} =0, \text{at }r=0, \tag{18b}\]

whose solutions are \(\phi(r,\theta)=\phi_{C}(r,\theta):=B\left[J_{1}(br)\sin\theta\right]^{C}\), \(B\in\mathbb{R}\), \(C=2,3,\dots\) (see Appendix B for derivation details).
Then, the eigenfunctions \(\widetilde{\psi}_{C}\) spanning the null space of operator \(\mathcal{H}\) are obtained as solutions of the family of "inner" subproblems

\[\left(\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}}{\partial\theta^{2}}+b^{2}\right)\widetilde{\psi}_{C} =\phi_{C}\quad\text{for}\quad 0<r\leq 1,\ 0\leq\theta\leq 2\pi, \tag{19a}\] \[\frac{\partial\widetilde{\psi}_{C}}{\partial r}+M\widetilde{\psi}_{C} =0, \text{at }r=1, \tag{19b}\]

where \(C=2,3,\dots\). Some of these eigenfunctions are shown in figures 2a-d, where distinct patterns are evident for even and odd values of \(C\).

Figure 2: Eigenfunctions \(\widetilde{\psi}_{C}\), \(C=2,3,4,5\), corresponding to the zero eigenvalue of problem (15).

## 3 Asymptotic Solution of Eigenvalue Problem (15)

A number of interesting insights about certain properties of solutions of eigenvalue problem (15) can be deduced by performing a simple asymptotic analysis corresponding to the short-wavelength limit. We focus here on the case of the symmetric dipole (\(\eta=0\)) and begin by introducing the ansatz

\[\widetilde{\psi}(r,\theta)=\sum_{m=0}^{\infty}f_{m}(r)\cos(m\theta)+g_{m}(r)\sin(m\theta), \tag{20}\]

where \(f_{m},g_{m}\ :\ [0,1]\to\mathbb{C}\), \(m=1,2,\dots\), are functions to be determined. Substituting this ansatz in (15a) with the Laplacian moved back to the left-hand side (LHS) and applying well-known trigonometric identities leads after some algebra to the following system of coupled third-order ordinary differential equations (ODEs) for the functions \(f_{m}(r)\), \(m=1,2,\dots\),

\[\lambda\mathcal{B}_{m}f_{m}= \frac{1}{2}P(r)\frac{d}{dr}\left(\mathcal{B}_{m-1}f_{m-1}+b^{2}f_{m-1}+\mathcal{B}_{m+1}f_{m+1}+b^{2}f_{m+1}\right)\] \[+\frac{1}{2}Q(r)\frac{m}{r}\left(\mathcal{B}_{m-1}f_{m-1}+b^{2}f_{m-1}-\mathcal{B}_{m+1}f_{m+1}-b^{2}f_{m+1}\right), r\in(0,1), \tag{21a}\] \[f_{m}\quad\text{bounded} \text{at }r=0,\] (21b) \[\frac{d}{dr}f_{m}= -mf_{m} \text{at }r=1,\] (21c) \[\frac{d}{dr}\mathcal{B}_{m}f_{m}= 0 \text{at }r=0, \tag{21d}\]

where the Bessel operator \(\mathcal{B}_{m}\) is defined via \(\mathcal{B}_{m}f:=\frac{d^{2}}{dr^{2}}f+\frac{1}{r}\frac{d}{dr}f-\frac{m^{2}}{r^{2}}f\), whereas the coefficient functions have the form, cf. (16),

\[P(r) :=\frac{2UJ_{1}(br)}{bJ_{0}(b)r}, \tag{22a}\] \[Q(r) :=-\frac{2U\left[J_{0}(br)-\frac{J_{1}(br)}{br}\right]}{J_{0}(b)}. \tag{22b}\]

The functions \(g_{m}(r)\), \(m=1,2,\dots\), satisfy a system identical to (21) which shows that the eigenfunctions \(\widetilde{\psi}(r,\theta)\) are either even or odd functions of \(\theta\) (i.e., they are either symmetric or antisymmetric with respect to the flow centerline). Moreover, the coupled form of system (21) implies that the eigenvectors \(\widetilde{\psi}(r,\theta)\) are not separable as functions of \(r\) and \(\theta\). Motivated by our discussion in § 2.1 about the properties of approximate eigenfunctions of the 2D linearized Euler operator, we will construct approximate solutions of system (21) in the short-wavelength limit \(m\to\infty\). We thus consider the asymptotic expansions

\[\lambda=\lambda^{0}+\frac{1}{m}\lambda^{1}+\mathcal{O}\left(\frac{1}{m^{2}}\right),\qquad f_{m}(r)=f_{m}^{0}(r)+\frac{1}{m}f_{m}^{1}(r)+\mathcal{O}\left(\frac{1}{m^{2}}\right), \tag{23}\]

where \(\lambda^{0},\lambda^{1}\in\mathbb{C}\) and \(f_{m}^{0},f_{m}^{1}\ :\ [0,1]\to\mathbb{C}\).
Plugging these expansions into system (21) and collecting terms proportional to the highest powers of \(m\) we obtain

\[\mathcal{O}(m^{3}): f_{m-1}^{0}-f_{m+1}^{0} =0, \tag{24a}\] \[\mathcal{O}(m^{2}): \frac{1}{2}\frac{Q(r)}{r^{3}}\left(f_{m-1}^{1}-f_{m+1}^{1}\right) =\frac{1}{2}P(r)\frac{d}{dr}\left[\frac{1}{r^{2}}\left(f_{m-1}^{0}+f_{m+1}^{0}\right)\right]\] \[\quad+\frac{Q(r)}{r^{3}}\left(f_{m-1}^{0}+f_{m+1}^{0}\right)+\frac{\lambda^{0}}{r^{2}}f_{m}^{0}. \tag{24b}\]

It follows immediately from (24a) that \(f_{m-1}^{0}=f_{m+1}^{0}\). Since this analysis does not distinguish between even and odd values of \(m\), we will assume that \(f_{m}^{0}=f_{m-1}^{0}=f_{m+1}^{0}\). Furthermore, we will also assume that \(f_{m-1}^{1}=f_{m+1}^{1}\) (as will be evident below, these assumptions do lead to consistent asymptotic solutions of system (21); however, it is possible that they do not exhaust all possibilities and that solutions with other properties can also be obtained if these assumptions are not made). With these assumptions, the LHS in (24b) vanishes and the RHS takes the form

\[P(r)\frac{d}{dr}\left(\frac{1}{r^{2}}f_{m}^{0}\right)-2\frac{Q(r)}{r^{3}}f_{m}^{0}-\frac{\lambda^{0}}{r^{2}}f_{m}^{0}=0,\qquad r\in(0,1), \tag{25}\]

which is a first-order differential equation defining the leading-order term \(f_{m}^{0}(r)\) in (23). Without loss of generality the boundary condition (21b) can be replaced with \(f(0)=1\). In this analysis the eigenvalue \(\lambda^{0}\) can be treated as a free parameter such that equation (25), which is separable, can be solved via quadrature to give

\[f_{m}^{0}(r)=\exp\left[i\int_{0}^{r}I_{i}(r^{\prime})\,dr^{\prime}\right]\exp\left[\int_{0}^{r}I_{r}(r^{\prime})\,dr^{\prime}\right],\qquad r\in[0,1], \tag{26}\]

where

\[I_{r}(r) :=\frac{\Re(\lambda^{0})bJ_{0}(b)r^{2}-4UbJ_{0}(br)r+8UJ_{1}(br)}{2UJ_{1}(br)r}, \tag{27a}\] \[I_{i}(r) :=\frac{\Im(\lambda^{0})bJ_{0}(b)r}{2UJ_{1}(br)}. \tag{27b}\]

The limiting (as \(r\to 1\)) behavior of functions (27a)-(27b) exhibits an interesting dependence on \(\lambda^{0}\), namely,

\[\lim_{r\to 1}I_{r}(r) =\begin{cases}+\infty,&\quad\Re(\lambda^{0})<4\\ \quad 0,&\quad\Re(\lambda^{0})=4\\ -\infty,&\quad\Re(\lambda^{0})>4\end{cases}, \tag{28a}\] \[\lim_{r\to 1}I_{i}(r) =-\operatorname{sign}\left[\Im(\lambda^{0})\right]\infty. \tag{28b}\]

In particular, the limiting value of \(I_{r}(r)\) as \(r\to 1\) changes when \(\Re(\lambda^{0})=4\), which defines the right boundary of the essential spectrum in the present problem, cf. (8). Both \(I_{r}(r)\) and \(I_{i}(r)\) diverge as \(\mathcal{O}(1/(1-r))\) when \(r\to 1\) which means that the integrals under the exponentials in (26), and hence the entire formula, are not defined at \(r=1\). While the factor involving \(I_{i}(r)\) is responsible for the oscillation of the function \(f_{m}^{0}(r)\), the factor depending on \(I_{r}(r)\) determines its growth as \(r\to 1\): we see that \(|f_{m}^{0}(r)|\) becomes unbounded in this limit when \(\Re(\lambda^{0})<4\) and approaches zero otherwise. The real and imaginary parts of \(f_{m}^{0}(r)\) obtained for different eigenvalues \(\lambda^{0}\) are shown in figures 3(a),(b), where it is evident that both the unbounded growth and the oscillations of \(f_{m}^{0}(r)\) are localized in the neighbourhood of the endpoint \(r=1\). Given the singular nature of the solutions obtained at the leading order, the correction term \(f_{m}^{1}(r)\) is rather difficult to compute and we do not attempt this here.
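Formula (26) is easy to evaluate numerically up to the singular endpoint. The short Python sketch below (our illustration; the eigenvalue \(\lambda^{0}\) and the choice \(U=1\) are arbitrary) integrates (27a)-(27b) on a grid stopping just short of \(r=1\) and reproduces the boundary-localized growth and oscillation described above.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import cumulative_trapezoid

b = jn_zeros(1, 1)[0]     # first root of J1
U = 1.0
lam0 = 3.0 + 10.0j        # illustrative eigenvalue with Re(lam0) < 4

# stop short of r = 1, where the integrands (27a)-(27b) are singular
r = np.linspace(1e-6, 0.999, 20001)
I_r = (lam0.real * b * j0(b) * r**2 - 4 * U * b * j0(b * r) * r
       + 8 * U * j1(b * r)) / (2 * U * j1(b * r) * r)
I_i = lam0.imag * b * j0(b) * r / (2 * U * j1(b * r))

growth = cumulative_trapezoid(I_r, r, initial=0.0)
phase = cumulative_trapezoid(I_i, r, initial=0.0)
f0 = np.exp(growth) * np.exp(1j * phase)   # leading-order solution (26)
# |f0| grows without bound and its phase oscillates ever faster as r -> 1,
# matching the behavior shown in Figure 3(b)
```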
We thus conclude that when \(\Re(\lambda^{0})<4\), the solutions of eigenvalue problem (15) constructed in the form (20) are dominated by short-wavelength oscillations whose asymptotic (as \(m\to\infty\)) structure involves oscillations in both the radial and azimuthal directions, localized near the boundary \(\partial A_{0}\). We remark that the asymptotic solutions constructed above do not satisfy the boundary conditions (21c)-(21d), which is consistent with the fact that they represent approximate eigenfunctions associated with the essential spectrum \(\Pi_{\text{ess}}(\mathcal{H})\) of the 2D linearized Euler operator. In order to find solutions of eigenvalue problem (15) which do satisfy all the boundary conditions, we have to solve this problem numerically, which is done next.

Figure 3: Radial dependence (a) of the eigenvectors \(f_{m}^{0}(r)\) associated with real eigenvalues \(\lambda^{0}=2\) (red solid line) and \(\lambda^{0}=6\) (blue dashed line), and (b) of the real part (red solid line) and the imaginary part (blue dashed line) of the eigenvector \(f_{m}^{0}(r)\) associated with complex eigenvalue \(\lambda^{0}=3+10i\). Panel (b) shows the neighbourhood of the endpoint \(r=1\).

## 4 Numerical Approaches

In this section we first describe the numerical approximation of eigenvalue problem (15)-(16) and then the time integration of the 2D Euler system (1)-(2) with the initial condition in the form of the Lamb-Chaplygin dipole perturbed with some approximate eigenfunctions obtained by solving eigenvalue problem (15)-(16). These computations will offer insights about the instability of the dipole complementary to the results of the asymptotic analysis presented in SS 3.

### 4.1 Discretization of Eigenvalue Problem (15)-(16)

Eigenvalue problem (15)-(16) is solved using the spectral collocation method proposed by Fornberg (1996), see also the discussion in Trefethen (2000), which is based on a tensor grid in \((r,\theta)\). The discretization in \(\theta\) involves trigonometric (Fourier) interpolation, whereas that in \(r\) is based on Chebyshev interpolation, where we take \(r\in[-1,1]\), which allows us to avoid collocating (15a) at the origin when the number of grid points is even. Since the mapping between \((r,\theta)\) and \((x_{1},x_{2})\) is then 2-to-1, the solution must be constrained to satisfy the condition \[\widetilde{\psi}(r,\theta)=\widetilde{\psi}(-r,(\theta+\pi)(\mathrm{mod}\ 2\pi)),\qquad r\in[-1,1],\quad\theta\in[0,2\pi], \tag{29}\] which is fairly straightforward to implement (Trefethen, 2000). In contrast to (15a), the boundary condition (15b) does need to be evaluated at the origin, which necessitates modification of the differentiation matrix (since our Chebyshev grid does not include a grid point at the origin). The numbers of grid points discretizing the coordinates \(r\in[-1,1]\) and \(\theta\in[0,2\pi]\) are linked and both given by \(N\), which is an even integer. The resulting algebraic eigenvalue problem then has the form \[\lambda\,\boldsymbol{\psi}=\mathbf{H}\,\boldsymbol{\psi}, \tag{30}\] where \(\boldsymbol{\psi}\in\mathbb{C}^{N^{2}}\) is the vector of approximate nodal values of the eigenfunction and \(\mathbf{H}\in\mathbb{R}^{N^{2}\times N^{2}}\) is the matrix discretizing the operator \(\mathcal{H}\), cf. (15a), obtained as described above.
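To make the assembly of \(\mathbf{H}\) more concrete, the following Python sketch constructs the standard Chebyshev and Fourier differentiation matrices (Trefethen, 2000) and combines them on the tensor grid via Kronecker products into a Laplacian-like operator. This is only a schematic fragment of the full discretization: the symmetry constraint (29), the rows implementing the boundary condition (15b) and the advection terms of (15a) are omitted, and the resolutions are illustrative.

```python
# A schematic fragment of the collocation discretization described above:
# standard Chebyshev and Fourier differentiation matrices (Trefethen, 2000)
# combined via Kronecker products into a Laplacian-like operator on the
# (r, theta) tensor grid. The symmetry constraint (29), the boundary rows
# implementing (15b) and the remaining terms of (15a) are omitted; the
# resolutions are illustrative.
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and grid with N+1 points on [-1, 1]."""
    x = np.cos(np.pi*np.arange(N + 1)/N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0])*(-1.0)**np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0/c)/(dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def fourier_diff(M):
    """Spectral differentiation matrix on M equispaced points in [0, 2*pi)."""
    h = 2*np.pi/M
    col = np.zeros(M)
    j = np.arange(1, M)
    col[1:] = 0.5*(-1.0)**j/np.tan(j*h/2)
    return col[(np.arange(M)[:, None] - np.arange(M)[None, :]) % M]

Nr, Nt = 16, 16                  # even numbers of points; no grid point at r=0
Dr, r = cheb(Nr - 1)
Dt = fourier_diff(Nt)
Rinv = np.diag(1.0/r)            # safe: the grid avoids the origin
lap = (np.kron(Dr @ Dr + Rinv @ Dr, np.eye(Nt))
       + np.kron(Rinv @ Rinv, Dt @ Dt))
evals = np.linalg.eigvals(lap)   # the analogue of applying eig to H in (30)
```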
Since the operator \(\mathcal{H}\) and hence also the matrix \(\mathbf{H}\) are singular, conditioning of problem (30) is improved by eliminating a part of its null space by performing projections on a certain number \(N_{C}\) of eigenfunctions associated with the eigenvalue \(\lambda=0\). They are obtained by solving problem (19) with different source terms \(\phi_{C}\), \(C=2,3,\ldots,N_{C}+1\), cf. (45). Problem (30) is implemented in MATLAB and solved using the function eig. In addition to examining convergence of the results with respect to grid refinement (by increasing the resolution \(N\) as discussed in SS 5), we have also checked the effect of arithmetic precision using the toolbox Advanpix (2017). However, increasing the arithmetic precision up to \(\mathcal{O}(10^{2})\) significant digits was not found to have a significant effect on the results obtained with small and medium resolutions \(N\leq 100\) (at higher resolutions the cost of such computations becomes prohibitive). In the light of the discussion in SS 2.1-SS 2.2, we know that the spectrum of the operator \(\mathcal{H}\) includes essential spectrum in the form of a vertical band in the complex plane \(|\Re(z)|\leq 4\), \(z\in\mathbb{C}\). Available literature on the topic of numerical approximation of infinite-dimensional non-self-adjoint eigenvalue problems, especially ones featuring essential spectrum, is very scarce. However, since the discretized problem (30) is finite-dimensional and therefore can only have point spectrum, it is expected that at least some of the eigenvalues of the discrete problem will be approximations of the approximate eigenvalues in the essential spectrum \(\Pi_{\text{ess}}(\mathcal{H})\), whereas the corresponding eigenvectors will approximate the approximate eigenfunctions (we note that the term "approximate" is used here with two distinct meanings: its first appearance refers to the _numerical_ approximation and the second to the fact that these functions are defined as only "close" to being true eigenfunctions, cf. SS 2.1). As suggested by the asymptotic analysis presented in SS 3, these approximate eigenfunctions are expected to be dominated by short-wavelength oscillations which cannot be properly resolved using any finite resolution \(N\). Thus, since these eigenfunctions are not smooth, we do not expect our numerical approach to yield an exponential convergence of the approximation error. To better understand the properties of these eigenfunctions, we also solve a regularized version of problem (15) in which \(\widetilde{\psi}\) is replaced with \(\widetilde{\psi}_{\delta}:=\mathcal{R}_{\delta}^{-1}\widetilde{\psi}\), where \(\mathcal{R}_{\delta}:=(\operatorname{Id}-\delta^{2}\Delta)\), \(\delta>0\) is a regularization parameter and the inverse of \(\mathcal{R}_{\delta}\) is defined with the homogeneous Neumann boundary conditions. The regularized version of the discrete problem (30) then takes the form \[\lambda_{\delta}\,\boldsymbol{\psi}=\mathbf{R}_{\delta}\,\mathbf{H}\,\mathbf{R}_{\delta}^{-1}\boldsymbol{\psi}=:\mathbf{H}_{\delta}\,\boldsymbol{\psi}, \tag{31}\] where the subscript \(\delta\) denotes regularized quantities and \(\mathbf{R}_{\delta}\) is the discretization of the regularizing operator \(\mathcal{R}_{\delta}\).
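The algebra of the transformation (31) can be illustrated schematically in one dimension; in the Python sketch below, \(\mathbf{H}\) is a random stand-in rather than the actual discretization of (15a). For a finite matrix the transform is a similarity, so in exact arithmetic the spectrum is unchanged; the small eigenvalue shifts mentioned in SS 5 reflect round-off amplified by the strong non-normality of \(\mathbf{H}\), together with the fact that the regularized problem resolves a different subset of approximate eigenvalues.

```python
# Schematic one-dimensional illustration of the regularization (31): the
# similarity transform H_delta = R_delta H R_delta^{-1} with
# R_delta = I - delta^2 * Lap (homogeneous Neumann Laplacian). Here H is a
# random stand-in for the actual matrix in (30), not its discretization.
import numpy as np

n, delta = 200, 0.05
h = 1.0/(n - 1)
lap = (np.diag(np.ones(n - 1), -1) - 2.0*np.eye(n)
       + np.diag(np.ones(n - 1), 1))/h**2
lap[0, 1] = lap[-1, -2] = 2.0/h**2        # zero-flux (Neumann) end conditions
R = np.eye(n) - delta**2*lap              # discretization of R_delta

H = np.random.default_rng(0).standard_normal((n, n))
H_delta = R @ H @ np.linalg.inv(R)        # cf. (31)

# In exact arithmetic the spectra coincide; the eigenvectors of H_delta are
# low-pass filtered (components with wavelengths below delta are suppressed).
print(np.sort(np.linalg.eigvals(H).real)[:3])
print(np.sort(np.linalg.eigvals(H_delta).real)[:3])
```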
Since the operator \(\mathcal{R}_{\delta}^{-1}\) can be interpreted as a low-pass filter with the cut-off length given by \(\delta\), the effect of this regularization is to smoothen the eigenvectors by filtering out components with wavelengths less than \(\delta\). Clearly, in the limit when \(\delta\to 0\) the original problem (30) is recovered. An analogous strategy was successfully employed by Protas & Elcrat (2016) in their study of the stability of Hill's vortex where the eigenfunctions also turned out to be singular distributions.

### 4.2 Solution of the Time-Dependent Problem (1)-(2)

The 2D Euler system (1)-(2) is solved in the frame of reference moving with the velocity \(-U\mathbf{e}_{1}\) and with the vorticity equation (1) rewritten in terms of the difference with respect to the equilibrium solution, i.e., for \(\omega_{1}(t,\mathbf{x}):=\omega(t,\mathbf{x})-\omega_{0}(\mathbf{x})\). Since the resulting system is solved using a standard Fourier pseudospectral method (Canuto _et al._, 1988), we assume that the flow domain is a 2D periodic box \(\Omega=\mathbb{T}^{2}\) instead of the 2D plane \(\mathbb{R}^{2}\). We note that a similar approximation was also used in earlier studies by Billant _et al._ (1999); Donnadieu _et al._ (2009); Brion _et al._ (2014); Jugier _et al._ (2020). Since the instability has the form of localized short-wavelength oscillations, interaction of the perturbed dipole with its periodic images does not have a significant effect. The exponential filter proposed by Hou & Li (2007) is used in lieu of dealiasing and the discretized problem is integrated in time using the RK4 method. We use a massively-parallel implementation based on MPI with Fourier transforms computed using the FFTW library (Frigo & Johnson, 2003). Convergence of the results with refinement of the resolution \(M\), representing the number of grid points in each direction, and of the time step \(\Delta t\) was carefully checked.

## 5 Solution of the Eigenvalue Problem

In this section we describe solutions of the discrete eigenvalue problem (30) and its regularized version (31). We mainly focus on the symmetric dipole with \(\eta=0\), cf. figure 1a. In order to study the dependence of the solutions on the numerical resolution, problems (30)-(31) were solved with \(N\) ranging from 20 to 260, where the largest resolution was limited by the amount of RAM available on a single node of the computer cluster we had access to. The discrete spectra of problem (30) obtained with \(N=40,80,160,260\) are shown in figures 4a-d. We see that for all resolutions \(N\) the spectrum consists of purely imaginary eigenvalues densely packed on the vertical axis and a "cloud" of complex eigenvalues clustered around the origin (for each \(N\) there is also a pair of purely real spurious eigenvalues increasing as \(|\lambda|={\cal O}(N)\) when the resolution is refined; they are not shown in figures 4a-d). We see that as \(N\) increases the cloud formed by the complex eigenvalues remains restricted to the band \(-2\lessapprox\Re(\lambda)\lessapprox 2\), but expands in the vertical (imaginary) direction. The spectrum is symmetric with respect to the imaginary axis as is expected for a Hamiltonian system.
The eigenvalues fill the inner part of the band ever more densely as \(N\) increases and, in order to quantify this effect, in figures 5a-d we show the eigenvalue density defined as the number of eigenvalues in a small rectangular region of the complex plane, i.e., \[\mu(z):=\frac{\mbox{number of eigenvalues }\lambda\in\{\zeta\in\mathbb{C}\,:\,|\Re(\zeta-z)|\leq\Delta\lambda_{r},\,\,|\Im(\zeta-z)|\leq\Delta\lambda_{i}\}}{4\Delta\lambda_{r}\Delta\lambda_{i}}, \tag{32}\] where \(\Delta\lambda_{r},\Delta\lambda_{i}\in\mathbb{R}\) are half-sizes of a cell used to count the eigenvalues, with \(\Delta\lambda_{i}\approx 500\Delta\lambda_{r}\) reflecting the fact that the plots are stretched in the vertical direction. We see that as the resolution \(N\) is refined the eigenvalue density \(\mu(z)\) increases near the origin. However, with the exception of the eigenvalue \(\lambda_{0}\), we did not find evidence for individual eigenvalues to converge to well-defined limits as the resolution \(N\) increases. As discussed in SS 2.1, a key question concerning the linear stability of 2D Euler flows is the existence of point spectrum \(\Pi_{0}({\cal L})\) of the linear operator \({\cal L}\), cf. (7). Anticipating the discussion in SS 6, for each resolution \(N\) we have identified a conjugate pair of eigenvalues \(\lambda_{0}\) associated with the linearly unstable mode discussed in that section. These eigenvalues are given in Table 1 and are marked (together with their counterparts with negative real parts) in figures 4a-d. As we see from Table 1, the differences between the real parts of the eigenvalue \(\lambda_{0}\) computed with different resolutions \(N\) are small, just over \(1\%\), although the variation of the imaginary part is larger.

\begin{table} \begin{tabular}{r|c} \(N\) & \(\lambda_{0}\) \\ \hline \(40\) & \(0.1272\pm i31.5543\) \\ \(80\) & \(0.1263\pm i25.2577\) \\ \(160\) & \(0.1256\pm i32.7466\) \\ \(260\) & \(0.1260\pm i42.2629\) \\ \hline \end{tabular} \end{table} Table 1: Eigenvalue \(\lambda_{0}\) associated with the linearly growing mode, cf. § 6, obtained by solving the discrete eigenvalue problem (30) with different resolutions \(N\).

Figure 4: Eigenvalues obtained by solving the discrete eigenvalue problem (30) with different indicated resolutions \(N\). The eigenvalues associated with the unstable mode discussed in § 6 and their stable counterparts are marked in red.

Figure 5: Eigenvalue densities (32) corresponding to the spectra shown in figures 4a–d.

We now take a closer look at the purely imaginary eigenvalues which are plotted for different resolutions \(N\) in figure 6. It is known that these approximate eigenvalues are related to the periods of Lagrangian orbits associated with closed streamlines in the base flow (Cox, 2014). In particular, if the maximum period is bounded, \(\tau_{\max}<\infty\), this implies the presence of horizontal gaps in the essential spectrum. However, as shown in Appendix C, the Lamb-Chaplygin dipole does involve Lagrangian orbits with arbitrarily long periods, such that the essential spectrum \(\Pi_{\mathrm{ess}}(\mathcal{L})\) includes the entire imaginary axis \(i\mathbb{R}\). The results shown in figure 6 are consistent with this property since the gap evident in the spectra shrinks, albeit very slowly, as the numerical resolution \(N\) is refined.
The reason why these gaps are present is that the orbits sampled with the discretization described in SS 4.1 have only _finite_ maximum periods, which, however, become longer as the discretization is refined. Finally, we analyze eigenvectors of problem (30) and choose to present them in terms of vorticity, i.e., we show \(\widetilde{\omega}_{i}=-\Delta\widetilde{\psi}_{i}\), \(i=0,1,2\). To fix attention, in figures 7a,c,e we show the real parts of these eigenvectors associated with the following eigenvalues: the complex eigenvalue \(\lambda_{0}\) corresponding to the exponentially growing mode, cf. Table 1, a purely real eigenvalue \(\lambda_{1}\) and a purely imaginary eigenvalue \(\lambda_{2}\). It is clear that all these eigenvectors are dominated by short-wavelength oscillations mostly localized near the boundary \(\partial A_{0}\) of the vortex core, a feature that was predicted by the asymptotic solution constructed in SS 3, cf. figures 3a,b. However, in the eigenvector \(\widetilde{\omega}_{0}\) associated with the eigenvalue \(\lambda_{0}\) these oscillations are mostly concentrated near the azimuthal angles \(\theta=\pm\pi/4,\pm 3\pi/4\), cf. figure 7a. In the other eigenvectors \(\widetilde{\omega}_{1}\) and \(\widetilde{\omega}_{2}\) the oscillations are mostly concentrated near the stagnation points \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\). In addition, while the eigenvector \(\widetilde{\omega}_{0}\) is symmetric with respect to the flow centerline, the eigenvectors \(\widetilde{\omega}_{1}\) and \(\widetilde{\omega}_{2}\) are antisymmetric. The eigenvectors associated with all other eigenvalues are also dominated by short-wavelength oscillations localized near different parts of the boundary \(\partial A_{0}\). Since due to their highly oscillatory nature the eigenvectors shown in figures 7a,c,e are not fully resolved, in figures 7b,d,f we show the corresponding eigenvectors of the regularized eigenvalue problem (31), where we set \(\delta=0.05\). The eigenvalues \(\lambda_{\delta,i}\), \(i=0,1,2\), to which these regularized eigenvectors correspond are slightly shifted with respect to the original eigenvalues \(\lambda_{i}\), \(i=0,1,2\), since regularization affects some fine details of the spectrum, although its key properties remain unchanged.

Figure 6: Purely imaginary eigenvalues obtained by solving the discrete eigenvalue problem (30) with different indicated resolutions \(N\).

We see that in the regularized eigenvectors oscillations are shifted to the interior of the domain \(A_{0}\) and their typical wavelengths are much larger. Solution of the discrete eigenvalue problem (30) for asymmetric dipoles with \(\eta>0\) leads to eigenvalue spectra and eigenvectors qualitatively very similar to those shown in figures 4a-d and 7a,c,e, hence for brevity they are not shown here. The only noticeable difference is that the eigenvectors are no longer symmetric or antisymmetric with respect to the flow centerline.

## 6 Solution of the Evolution Problem

As in SS 5, we focus on the symmetric case with \(\eta=0\).
The 2D Euler system (1)-(2) is solved numerically as described in SS 4.2 with the initial condition for the perturbation vorticity \(\omega_{1}(t,{\bf x})\) given in terms of the eigenvectors shown in figures 7a-f, i.e., \[\omega_{1}(0,{\bf x})=\varepsilon\frac{\|\omega_{0}\|_{L^{2}(\Omega)}}{\|\widetilde{\omega}_{i}\|_{L^{2}(\Omega)}}\widetilde{\omega}_{i}({\bf x})\quad\mbox{or}\quad\omega_{1}(0,{\bf x})=\varepsilon\frac{\|\omega_{0}\|_{L^{2}(\Omega)}}{\|\widetilde{\omega}_{\delta,i}\|_{L^{2}(\Omega)}}\widetilde{\omega}_{\delta,i}({\bf x}),\quad i=0,1,2. \tag{33}\] Unless indicated otherwise, the numerical resolution is \(M=512\) grid points in each direction. By taking \(\varepsilon=10^{-4}\) and \(T=40\) we ensure that the evolution of the perturbation vorticity is effectively linear and to characterize its growth we define the perturbation enstrophy as \[{\cal E}(t):=\int_{\Omega}\omega_{1}(t,{\bf x})^{2}\,d{\bf x}. \tag{34}\] The time evolution of this quantity is shown in figure 8a for the six considered initial conditions. In all cases we see that after a transient period the perturbation enstrophy starts to grow exponentially as \(\exp(2\widetilde{\lambda}t)\), where the growth rate \(\widetilde{\lambda}\approx 0.127\) is very close to the real part of the eigenvalue \(\lambda_{0}\), cf. Table 1. The duration of the transient, which involves an initial decrease of the perturbation enstrophy, is different in different cases and is shortest when the eigenfunctions \(\widetilde{\omega}_{0}\) and \(\widetilde{\omega}_{\delta,0}\) are used as the initial conditions in (33) (in fact, in the latter case the transient is barely present). Hereafter we will focus on the flow obtained with the initial condition (33) given in terms of the eigenfunction \(\widetilde{\omega}_{0}\), cf. figure 7a. The effect of the numerical resolution \(N\) used in the discrete eigenvalue problem (30) is analyzed in figure 8b, where we show the perturbation enstrophy (34) in the flows with the eigenvector \(\widetilde{\omega}_{0}\) used in the initial conditions (33) computed with different \(N\). We see that refined resolution leads to a longer transient period while the rate of the exponential growth \(\widetilde{\lambda}\) is unchanged. The enstrophy spectrum of the initial condition (33) and of the perturbation vorticity \(\omega_{1}(t,{\bf x})\) at different times \(t\in(0,60]\) is shown in figure 9 as a function of the wavenumber \(k:=|{\bf k}|\). It is defined as \[e(t,k):=\int_{S_{k}}\big{|}\left[\widehat{\omega}_{1}(t)\right]_{{\bf k}}\big{|}^{2}\,d\sigma, \tag{35}\]

Figure 7: Real parts of the eigenvectors corresponding to the indicated eigenvalues obtained by solving (a,c,e) eigenvalue problem (30) and (b,d,f) the regularized problem (31) using the resolution \(N=80\).

Figure 8: Time evolution of the normalized perturbation enstrophy \(\mathcal{E}(t)/\mathcal{E}(0)\) in the flow with the initial condition (33) given in terms of (a) the different eigenvectors shown in figures 7a–f and obtained with a fixed resolution \(N=80\), and (b) the eigenvector \(\widetilde{\omega}_{0}\) computed with different resolutions \(N\).
In panel (a) the red solid lines correspond to the eigenvectors \(\widetilde{\omega}_{0}\) and \(\widetilde{\omega}_{0,\delta}\), black dotted lines correspond to \(\widetilde{\omega}_{1}\) and \(\widetilde{\omega}_{1,\delta}\), and blue dashed lines to \(\widetilde{\omega}_{2}\) and \(\widetilde{\omega}_{2,\delta}\); thin and thick lines represent flows with initial conditions involving eigenvectors obtained as solutions of the discrete eigenvalue problem (30) and its regularized version (31), respectively. In panel (b) the blue dashed and red solid lines correspond to initial conditions involving eigenvectors \(\widetilde{\omega}_{0}\) obtained with the resolutions \(N=40\) and \(N=80\), respectively.

In (35), \(\left[\widehat{\omega}_{1}(t)\right]_{\mathbf{k}}\), \(\mathbf{k}\in\mathbb{Z}^{2}\), are the Fourier coefficients of the perturbation vorticity \(\omega_{1}(t,\mathbf{x})\), \(\sigma\) is the angle in the wavenumber space and \(S_{k}\) denotes the circle of radius \(k\) in this space (with some abuse of notation justified by simplicity, here we have treated the wavevector \(\mathbf{k}\) as a continuous rather than discrete variable). Since its spectrum is essentially independent of the wavenumber \(k\), the eigenvector \(\widetilde{\omega}_{0}\) in the initial condition (33) turns out to be a distribution rather than a smooth function. The enstrophy spectra of the perturbation vorticity \(\omega_{1}(t,\mathbf{x})\) during the flow evolution show a rapid decay at high wavenumbers, which is the effect of the applied filter, cf. SS 4.2. However, after the transient, i.e., for \(20\lessapprox t\leq 60\), the enstrophy spectra have very similar forms, except for a vertical shift which increases with time \(t\). This confirms that the time evolution is dominated by linear effects as there is little energy transfer to higher (unresolved) modes. This is also attested to by the fact that for all the cases considered in figure 8a the relative change of the _total_ enstrophy \(\int_{\Omega}\omega(t,\mathbf{x})^{2}\,d\mathbf{x}\), which is a conserved quantity, does not exceed \(0.1\%\). We now go on to discuss the time evolution of the perturbation vorticity in the physical space and in figures 10a and 10b we show \(\omega_{1}(t,\mathbf{x})\) at the times \(t=4\) and \(t=21\), respectively, which correspond to the transient regime and to the subsequent period of an exponential growth. During that period, i.e., for \(20\lessapprox t\leq 60\), the structure of the perturbation vorticity field does not change much. We see that as the perturbation evolves a number of thin vorticity filaments is ejected from the vortex core \(A_{0}\) into the potential flow, with the principal ones emerging at the azimuthal angles \(\theta\approx\pm\pi/4,\pm 3\pi/4\), i.e., in the regions of the vortex boundary where most of the short-wavelength oscillations evident in the eigenvector \(\widetilde{\omega}_{0}\) are localized, cf. figure 7a. The perturbation remains symmetric with respect to the flow centerline for all times and since the vorticity \(\omega_{0}\) of the base flow is antisymmetric, the resulting total flow \(\omega(t,\mathbf{x})\) does not possess any symmetries.
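For completeness, the following Python sketch shows how the shell-summed spectrum (35) can be evaluated, up to normalization, from a vorticity field sampled on the \(M\times M\) periodic grid; the field used below is a random placeholder for the computed perturbation vorticity.

```python
# Evaluation, up to normalization, of the shell-summed enstrophy spectrum (35)
# from a vorticity field sampled on the M x M periodic grid; the field below is
# a random placeholder for the computed perturbation vorticity omega_1.
import numpy as np

M = 512
omega1 = np.random.default_rng(1).standard_normal((M, M))

omega1_hat = np.fft.fft2(omega1)/M**2          # Fourier coefficients
kx = np.fft.fftfreq(M, d=1.0/M)                # integer wavenumbers
K = np.hypot(*np.meshgrid(kx, kx, indexing="ij"))
shells = np.rint(K).astype(int)                # assign each mode to shell |k|

e_k = np.bincount(shells.ravel(), weights=np.abs(omega1_hat.ravel())**2)
k = np.arange(e_k.size)                        # e_k[k] approximates e(t, k)
```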
The perturbation vorticity \(\omega_{1}(t,\mathbf{x})\) realizing the exponential growth in the flows corresponding to the initial condition involving the eigenvectors \(\widetilde{\omega}_{1}\) and \(\widetilde{\omega}_{2}\) (and their regularized versions \(\widetilde{\omega}_{1,\delta}\) and \(\widetilde{\omega}_{2,\delta}\)) is essentially identical to the perturbation vorticity shown in figure 10b, although its form during the transient regime can be quite different. In particular, the perturbation eventually becomes symmetric with respect to the flow centerline even if the initial condition (33) is antisymmetric. The same is true for flows obtained with initial conditions corresponding to all approximate eigenvalues other than \(\lambda_{0}\), \(\lambda_{1}\) and \(\lambda_{2}\). We did not attempt to study the time evolution of asymmetric dipoles with \(\eta>0\) in (5a), since their vorticity distributions are discontinuous, making computation of such flows using the pseudospectral method described in SS 4.2 problematic.

Figure 9: Enstrophy spectra (35) of (blue squares) the initial condition (33) involving the eigenvector \(\widetilde{\omega}_{0}\) and (red circles) the corresponding perturbation vorticity \(\omega_{1}(t,\mathbf{x})\) at times \(t=10,20,\ldots,60\). The arrow indicates the trend with the increase of time \(t\).

## 7 Discussion and Final Conclusions

In this study we have considered an open problem concerning the linear stability of the Lamb-Chaplygin dipole which is a classical equilibrium solution of the 2D Euler equation in an unbounded domain. We have considered its stability with respect to 2D circulation-preserving perturbations and, while our main focus was on the symmetric configuration with \(\eta=0\), cf. figure 1a, we also investigated some aspects of asymmetric configurations with \(\eta>0\). Since the stability problem posed on an unbounded domain is difficult to study both with asymptotic methods and numerically, we have introduced an equivalent formulation with all relations defined entirely within the compact vortex core \(A_{0}\), which was accomplished with the help of a suitable D2N map accounting for the potential flow outside the core, cf. Appendix A.

Figure 10: Perturbation vorticity \(\omega_{1}(t,\mathbf{x})\) in the flow corresponding to the initial condition (33) involving the eigenvector \(\widetilde{\omega}_{0}\) during (a) the transient regime and (b) the period of exponential growth.

The initial-value problem for the 2D Euler equation with a compactly supported initial condition is of a free-boundary type since the time evolution of the vortex boundary \(\partial A(t)\) is a priori unknown and must be determined as a part of the solution of the problem. This important aspect is accounted for in our formulation of the linearized problem, cf. relation (10). The operator representing the 2D Euler equation linearized around the Lamb-Chaplygin dipole has been shown to have an infinite-dimensional null space \(\mbox{Ker}({\cal L})\) and the eigenfunctions \(\widetilde{\psi}_{C}\), \(C=2,3,\dots\), spanning this null space, cf. figures 2a-d, can potentially be used to search for nearby equilibrium solutions. An approximate solution of eigenvalue problem (15) obtained in SS 3 using an asymptotic technique reveals the existence of approximate eigenfunctions in the form of short-wavelength oscillations localized near the vortex boundary \(\partial A_{0}\).
Remarkably, eigenfunctions with such properties exist when \(\Re(\lambda^{0})<4\), i.e., when \(\lambda^{0}\) is in the essential spectrum \(\Pi_{\rm ess}({\cal H})\) of the 2D linearized Euler operator and it is interesting that the asymptotic solution has been able to capture this value exactly. We remark that with exponential terms involving divergent expressions as arguments, cf. (26), this approach has the flavour of the WKB analysis. We note however that \(\lambda^{0}\) serves as a parameter, rather than an eigenvalue, in this approach. Moreover, since the obtained approximate solution represents only the asymptotic (in the short-wavelength limit \(m\to\infty\)) structure of the eigenfunctions, it does not satisfy the boundary conditions (21c)-(21d). To account for these limitations, complementary insights have been obtained by solving eigenvalue problem (15) numerically as described in SS 4.1. Our numerical solution of eigenvalue problem (15) obtained in SS 5 using different resolutions \(N\) yields results consistent with the general mathematical facts known about the spectra of the 2D linearized Euler operator, cf. SS 2.1. In particular, these results feature eigenvalues of the discrete problem (30) filling ever more densely a region around the origin which is bounded in the horizontal (real) direction and expands in the vertical (imaginary) direction as the resolution \(N\) is increased, which is consistent with the existence of an essential spectrum \(\Pi_{\rm ess}({\cal H})\) in the form of a vertical band with the width determined by the largest Lyapunov exponent of the flow, cf. (8). The corresponding eigenvectors are dominated by short-wavelength oscillations localized near the vortex boundary \(\partial A_{0}\), a feature that was predicted by the asymptotic solution constructed in SS 3. However, solutions of the evolution problem for the perturbation vorticity with the initial condition (33) corresponding to different eigenvectors obtained from the discrete problems (30)-(31) reveal that \(\lambda_{0}\) (and its complex conjugate \(\lambda_{0}^{*}\)) are the only eigenvalues associated with an exponentially growing mode with a growth rate effectively equal to the real part of the eigenvalue, i.e., for which \(\widetilde{\lambda}\approx\Re(\lambda_{0})\). When eigenvectors associated with eigenvalues other than \(\lambda_{0}\) or \(\lambda_{0}^{*}\) are used in the initial condition (33), the perturbation enstrophy (34) reveals transients of various duration followed by exponential growth with the growth rate again given by \(\Re(\lambda_{0})\). This demonstrates that \(\pm\lambda_{0}\) and \(\pm\lambda_{0}^{*}\) are the only "true" eigenvalues and form the point spectrum \(\Pi_{0}({\cal H})\) of the operator associated with the 2D Euler equation linearized around the Lamb-Chaplygin dipole. On the other hand, all other eigenvalues of the discrete problems (30)-(31) can be interpreted as numerical approximations to _approximate_ eigenvalues belonging to the essential spectrum \(\Pi_{\rm ess}(\mathcal{H})\). More precisely, for each resolution \(N\) the eigenvalues of the discrete problems other than \(\pm\lambda_{0}\) and \(\pm\lambda_{0}^{*}\) approximate a different subset of approximate eigenvalues in the essential spectrum \(\Pi_{\rm ess}(\mathcal{H})\) and the corresponding eigenvectors are approximations to the associated _approximate_ eigenvectors. 
This interpretation is confirmed by the eigenvalue density plots shown in figures 5a-d and is consistent with what is known in general about the spectra of the 2D linearized Euler operator, cf. SS 2.1. In figure 8a we noted that when the initial condition (33) is given in terms of the eigenvector \(\widetilde{\omega}_{0}\), the perturbation enstrophy \(\mathcal{E}(t)\) also exhibits a short transient before attaining exponential growth with the rate \(\widetilde{\lambda}\approx\Re(\lambda_{0})\). The reason for this transient is that, being non-smooth, the eigenvector \(\widetilde{\omega}_{0}\) is not fully resolved, which is borne out in figure 9 (in fact, due to the distributional nature of this and other eigenvectors, they cannot be accurately resolved with any _finite_ resolution). Thus, this transient period is needed for some underresolved features of the perturbation vorticity to emerge, cf. figure 10a vs. figure 10b. However, we note that in the flow evolution originating from the eigenvector \(\widetilde{\omega}_{0}\) the transient is actually much shorter than when other eigenvectors are used as the initial condition (33), and is nearly absent in the case of the regularized eigenvector \(\widetilde{\omega}_{0,\delta}\). We emphasize that non-smoothness of eigenvectors associated with eigenvalues embedded in the essential spectrum is consistent with the known mathematical results (Lin, 2004). Interestingly, the eigenfunctions \(\widetilde{\psi}_{C}\), \(C=2,3,\dots\), associated with the zero eigenvalue \(\lambda=0\) are smooth, cf. figures 2a-d. We also add that there are analogies between our findings and the results of the linear stability analysis of Hill's vortex with respect to axisymmetric perturbations where the presence of both the continuous and point spectrum was revealed, the latter also associated with non-smooth eigenvectors (Protas & Elcrat, 2016). There is a potentially intriguing connection with the so-called "tygers" which are short-wavelength oscillations arising when a truncated inviscid system begins to thermalize. They have been observed in 1D inviscid Burgers and 3D Euler flows (Ray _et al._, 2011). In the course of the linear evolution of the instability the vortex region \(A(t)\) changes shape as a result of the ejection of thin vorticity filaments from the vortex core \(A_{0}\), cf. figures 10a,b. However, both the area \(|A(t)|\) of the vortex and its total circulation \(\Gamma\) are conserved at the leading order, cf. (11) and (14). We reiterate that the perturbation vorticity fields shown in figures 10a,b were obtained with underresolved computations and increasing the resolution \(M\) would result in the appearance of even finer filaments such that in the continuous limit (\(M\to\infty\)) some of the filaments would be infinitely thin. In this study we have considered the linear stability of the Lamb-Chaplygin dipole with respect to 2D perturbations. It is an interesting open question how the picture presented here would be affected by inclusion of 3D effects. We are also exploring related questions in the context of the stability of other equilibria in 2D Euler flows, including various cellular flows.

## Acknowledgments

The author wishes to thank Roman Shvydkoy for bringing the mathematical results concerning the stability of equilibria in 2D Euler flows to his attention.
The author is also thankful to Xinyu Zhao for her help with the solution of the time-dependent problem and to Matthew Colbrook for discussions about numerical solution of eigenvalue problems for non-self-adjoint infinite-dimensional operators. Miguel Bustamante is acknowledged for pointing out the potential connection with tygers. Partial support for this research was provided by the Natural Sciences and Engineering Research Council of Canada (NSERC) through a Discovery Grant. The author would also like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme "Mathematical aspects of turbulence: where do we stand?" where a part of this study was conducted. This work was supported by EPSRC grant number EP/R014604/1. Computational resources were provided by the Digital Research Alliance of Canada (DRAC) under its Resource Allocation Competition.

## Appendix A Construction of the Dirichlet-to-Neumann Map

We consider the Laplace subproblem consisting of (9c)-(9d) and (9f) whose solution has the general form \[\psi_{2}^{\prime}(r,\theta)=\sum_{k=1}^{\infty}\frac{\alpha_{k}\cos(k\theta)+\beta_{k}\sin(k\theta)}{r^{k}},\qquad r\geq 1,\quad 0\leq\theta\leq 2\pi, \tag{36}\] where \(\alpha_{k},\beta_{k}\in\mathbb{R}\), \(k=1,2,\dots\), are expansion coefficients to be determined and the constant term is omitted since we adopt the normalization \(\oint_{\partial A_{0}}f^{\prime}(s)\,ds=0\). The boundary value \(f^{\prime}\) of the perturbation streamfunction on \(\partial A_{0}\) serves as the argument of the D2N operator, cf. (9c). Expanding it in a Fourier series gives \[f^{\prime}(\theta)=\sum_{k=1}^{\infty}\widehat{f}_{k}^{c}\cos(k\theta)+\widehat{f}_{k}^{s}\sin(k\theta), \tag{37}\] where \(\widehat{f}_{k}^{c},\widehat{f}_{k}^{s}\in\mathbb{R}\), \(k=1,2,\dots\), are known coefficients. Then, using the boundary condition \(\psi_{2}^{\prime}(1,\theta)=f^{\prime}(\theta)\), \(\theta\in[0,2\pi]\), cf. (9c), the corresponding Neumann data can be computed as \[[Mf^{\prime}](\theta):=\frac{\partial\psi_{2}^{\prime}}{\partial n}\bigg{|}_{\partial A_{0}}=\frac{\partial\psi_{2}^{\prime}}{\partial r}\bigg{|}_{r=1}=-\sum_{k=1}^{\infty}k\left[\widehat{f}_{k}^{c}\cos(k\theta)+\widehat{f}_{k}^{s}\sin(k\theta)\right], \tag{38}\] which expresses the action of the D2N operator \(M\) on \(f^{\prime}\). In order to make this expression explicitly dependent on \(f^{\prime}\), rather than on its Fourier coefficients as in (38), we use the formulas for these coefficients together with their approximations based on the trapezoidal quadrature (which are spectrally accurate when applied to smooth periodic functions (Trefethen, 2000)) \[\widehat{f}_{k}^{c} =\frac{1}{\pi}\int_{0}^{2\pi}f^{\prime}(\theta^{\prime})\cos(k\theta^{\prime})\,d\theta^{\prime}\!\approx\!\frac{2}{N}\sum_{l=1}^{N}f^{\prime}(\theta_{l})\cos(k\theta_{l}), \tag{39a}\] \[\widehat{f}_{k}^{s} =\frac{1}{\pi}\int_{0}^{2\pi}f^{\prime}(\theta^{\prime})\sin(k\theta^{\prime})\,d\theta^{\prime}\!\approx\!\frac{2}{N}\sum_{l=1}^{N}f^{\prime}(\theta_{l})\sin(k\theta_{l}), \tag{39b}\] where \(\{\theta_{l}\}_{l=1}^{N}\) are grid points uniformly discretizing the interval \([0,2\pi]\).
Using these relations, the D2N map (38) truncated at \(N/2\) Fourier modes and evaluated at the grid point \(\theta_{j}\) can be written as \[[Mf^{\prime}](\theta_{j})\approx\sum_{l=1}^{N}M_{jl}f^{\prime}(\theta_{l}),\qquad j=1,\ldots,N, \tag{40}\] where \[M_{jl}:=-\frac{2}{N}\sum_{k=1}^{N/2}k\left[\cos(k\theta_{j})\cos(k\theta_{l})+\sin(k\theta_{j})\sin(k\theta_{l})\right] \tag{41}\] are entries of a symmetric matrix \(\mathbf{M}\in\mathbb{R}^{N\times N}\) approximating the D2N operator.

## Appendix B Solution of Outer Problem (18)

Assuming separability, we use the ansatz \(\phi(r,\theta)=R(r)T(\theta)\), where \(R\;:\;[0,1]\rightarrow\mathbb{R}\) and \(T\;:\;[0,2\pi]\rightarrow\mathbb{R}\). Plugging this ansatz into (18a), we obtain the relation \(u_{0}^{r}\,T(\theta)\,(dR/dr)=-(u_{0}^{\theta}/r)\,R(r)\,(dT/d\theta)\), which using expressions (16a)-(16b) for the velocity components can be rewritten as \[\frac{rJ_{1}(br)}{brJ_{0}(br)-J_{1}(br)}\frac{1}{R(r)}\frac{dR}{dr}=\frac{\tan(\theta)}{T(\theta)}\frac{dT}{d\theta}=C \tag{42}\] with some real constant \(C\neq 0\). The azimuthal part \(dT/d\theta-C\cot(\theta)\,T(\theta)=0\) can be integrated using the periodic boundary conditions to give \[T(\theta)=A\sin^{C}(\theta),\qquad A\in\mathbb{R}. \tag{43}\] The radial part of (42) is \(dR/dr-C\left[brJ_{0}(br)-J_{1}(br)\right]/\left[rJ_{1}(br)\right]\,R(r)=0\), which upon integration gives \[R(r)=B\left[J_{1}(br)\right]^{C},\qquad B\in\mathbb{R}. \tag{44}\] Imposing the boundary condition (18b) and requiring the solution to be real-valued while noting that \(J_{1}(0)=0\) and \((d/dr)J_{1}(br)|_{r=0}\neq 0\) restricts the values of \(C\) to integers larger than \(1\). Thus, combining (43) and (44) finally gives \[\phi(r,\theta)=\phi_{C}(r,\theta):=B\left[J_{1}(br)\sin\theta\right]^{C},\qquad C=2,3,\ldots. \tag{45}\]

## Appendix C Maximum Periods of Lagrangian Orbits

In this appendix we estimate the maximum period \(\tau_{\max}\) of Lagrangian orbits in the flow field of the Lamb-Chaplygin dipole where we focus on the symmetric case with \(\eta=0\) in (5). We consider the heteroclinic trajectory connecting the two hyperbolic stagnation points \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\), cf. figure 1a, which coincides with a part of the boundary \(\partial A_{0}\). Let \(s=s(t)\) denote the arc-length coordinate of a material point on this orbit. Then, assuming the dipole has unit radius \(a=1\), we have, cf. (17), \[\frac{ds}{dt}=\frac{d\theta}{dt}=u_{0}^{\theta}(1,\theta)=2U\sin\theta,\qquad\theta\in[0,\pi]. \tag{46}\] Separating variables and integrating, we obtain \[\int_{0}^{\pi}\frac{d\theta}{\sin\theta}=2U\int_{0}^{\tau_{\max}}dt=2U\tau_{\max}, \tag{47}\] where the integral on the left-hand side is \(\int_{0}^{\pi}\frac{d\theta}{\sin\theta}=\ln\frac{1-\cos\theta}{\sin\theta}\big{|}_{0}^{\pi}=\infty\) and hence \(\tau_{\max}=\infty\). Since there are closed orbits in the interior of the dipole lying arbitrarily close to this heteroclinic trajectory, their periods are not bounded and can be arbitrarily long.

**Declaration of Interests.** The author reports no conflict of interest.
2310.11192
Electroweak Nuclear Properties from Single Molecular Ions in a Penning Trap
We present a novel technique to probe electroweak nuclear properties by measuring parity violation (PV) in single molecular ions in a Penning trap. The trap's strong magnetic field Zeeman shifts opposite-parity rotational and hyperfine molecular states into near degeneracy. The weak interaction-induced mixing between these degenerate states can be larger than in atoms by more than twelve orders of magnitude, thereby vastly amplifying PV effects. The single molecule sensitivity would be suitable for applications to nuclei across the nuclear chart, including rare and unstable nuclei.
Jonas Karthein, Silviu-Marian Udrescu, Scott B. Moroch, Ivana Belosevic, Klaus Blaum, Anastasia Borschevsky, Yuly Chamorro, David DeMille, Jens Dilling, Ronald F. Garcia Ruiz, Nick R. Hutzler, Lukáš F. Pašteka, Ryan Ringle
2023-10-17T12:14:46Z
http://arxiv.org/abs/2310.11192v1
# Electroweak Nuclear Properties from Single Molecular Ions in a Penning Trap

###### Abstract

**We present a novel technique to probe electroweak nuclear properties by measuring parity violation (PV) in single molecular ions in a Penning trap. The trap's strong magnetic field Zeeman shifts opposite-parity rotational and hyperfine molecular states into near degeneracy. The weak interaction-induced mixing between these degenerate states can be larger than in atoms by more than twelve orders of magnitude, thereby vastly amplifying PV effects. The single molecule sensitivity would be suitable for applications to nuclei across the nuclear chart, including rare and unstable nuclei.**

_Introduction -_ Of Nature's four known fundamental forces, the weak force is the only one known to violate parity (P) and charge-parity (CP) symmetry. In this context, precision studies of the weak interaction provide powerful tests of the Standard Model (SM) [1], violations of the fundamental symmetries, and the existence of new physics [1; 2; 3]. Accelerator-based experiments and atomic parity violation studies have provided key insights into the weak interaction between the electrons and nucleons, mediated by \(Z^{0}\)-boson exchange [4; 5; 6]. However, the electroweak interactions between nucleons are only poorly understood [7; 8; 9; 10; 11; 12], and a clear disagreement exists between measurements [1; 13; 14]. Recent progress in precision control and interrogation of molecules has demonstrated powerful routes for precision studies of symmetry-violating properties [1; 15; 16; 17; 18]. Parity violation (PV) can produce unique signatures in the molecular energy levels, enabling the isolation of weak force effects from the overwhelmingly dominant strong and electromagnetic forces [19; 20; 21]. The proximity of opposite-parity molecular levels provides high sensitivity to symmetry-violating properties, which can be several orders of magnitude larger than in atomic systems. Moreover, external magnetic fields can drive these opposite-parity states into near degeneracy, enhancing their sensitivity to PV properties [22]. The possibility of about eleven orders of magnitude of enhancement of PV-induced state mixing was recently demonstrated with a neutral beam of \({}^{138}\)BaF [23]. In this work, we propose and analyze a new method for measuring PV nuclear properties using single molecular ions and a Penning trap, which allows for long coherence times (\(\gg 1\,\mathrm{ms}\)) [20]. Combined with its well-controlled electric and magnetic fields, an enhancement in excess of twelve orders of magnitude in PV-induced state mixing relative to atoms can be achieved, thereby vastly increasing sensitivity to electroweak nuclear properties. The precision and versatility of our technique will enable measurements of many isotopes across the nuclear chart. These include species that may be difficult to manipulate and measure in neutral forms, such as short-lived nuclei [24; 18; 25]. In a diatomic molecule, PV properties are dominated by the nuclear-spin-dependent interactions (NSD-PV): (i) Electrons penetrating the nucleus can interact at short range via \(Z^{0}\)-boson exchange through electron-vector and nucleon axial-vector currents [1]; (ii) Parity-violating weak interactions between nucleons lead to a nuclear-internal current that causes a P-odd magnetic moment, known as the nuclear anapole moment [1; 26; 27].
So far, only one non-zero measurement of the nuclear anapole moment has been performed, in \({}^{133}\)Cs [4]; (iii) A third contribution, typically suppressed compared to the effects above [28; 29], is induced by a combination of the hyperfine interaction and \(Z^{0}\)-boson exchange through electron-axial-vector and nucleon-vector \(A_{e}V_{N}\) currents [1]; (iv) A fourth contribution could come from new interactions beyond the SM between electrons and nucleons, mediated by yet-to-be-discovered gauge bosons [30; 31; 32]. Our proposed method should be highly general for various molecular ions. However, we will focus on \({}^{29}\)SiO\({}^{+}\) due to practical and theoretical advantages for the initial demonstration: Its rotational and electronic structure is known [33], the ground electronic state is \({}^{2}\Sigma^{+}\), and it was demonstrated to be suitable for laser cooling [34; 35]. _Effective Hamiltonian and Electroweak Properties -_ Our scheme builds on the concepts introduced in [20; 36]. The effective Hamiltonian describing the lowest rotational and hyperfine energy levels of \({}^{29}\)SiO\({}^{+}\), in the absence of PV effects, can be expressed as: \[H_{0}=B_{0}\mathbf{N}^{2}+D_{0}\mathbf{N}^{4}+\gamma\mathbf{N}\cdot\mathbf{S}+b\mathbf{I}\cdot\mathbf{S}+c(\mathbf{I}\cdot\mathbf{n})(\mathbf{S}\cdot\mathbf{n}),\] with \(\mathbf{N}=\mathbf{R}+\mathbf{L}\), where \(\mathbf{R}\) is the mechanical rotation of the molecular framework, \(\mathbf{L}\) is the orbital angular momentum of the electron, \(\mathbf{S}\) and \(\mathbf{I}\) are the molecular frame electron and nuclear spin operators, respectively, and \(\mathbf{n}\) is the unit vector along the internuclear axis. The rotational, centrifugal distortion, and spin-rotational constants are \(B_{0}\), \(D_{0}\), and \(\gamma\), while \(b\) and \(c\) are hyperfine structure constants associated with the \({}^{29}\)Si nucleus. The rotational constant of \({}^{29}\)SiO\({}^{+}\) is far larger than all the other molecular parameters in \(H_{0}\) [37]. Thus, \(N\) is a good quantum number for levels of energy \(E_{N}\approx B_{0}N(N+1)\) and parity \(P_{N}=(-1)^{N}\). When a magnetic field of a particular magnitude \(B\) is applied (see Fig. 1), sub-levels of the \(N^{P}=0^{+}\) and \(1^{-}\) states can be Zeeman-shifted close to degeneracy. For \({}^{29}\)SiO\({}^{+}\), this magnetic field strength is \(B\approx\frac{E_{1}-E_{0}}{2\mu_{B}}\approx 1.5\) T, since the coupling to the electron spin \(S\) dominates the Zeeman shift via the Hamiltonian \(H_{Z}=-g\mu_{B}\mathbf{S}\cdot\mathbf{B}\) with \(g\)-factor \(g\approx-2\), the Bohr magneton \(\mu_{B}\), and the magnetic field aligned with the \(\mathbf{z}\)-axis, \(\mathbf{B}=B\mathbf{z}\) [23]. This field is strong enough to decouple \(\mathbf{S}\) from \(\mathbf{I}\) and \(\mathbf{N}\). Hence, the rotational and hyperfine levels are better described in the decoupled basis used for the rest of the paper: \(|N,m_{N}\rangle|S,m_{S}\rangle|I,m_{I}\rangle\). The NSD-PV interactions can mix opposite-parity levels. The Hamiltonian \(H_{\text{PV}}=\kappa^{\prime}\frac{G_{F}}{\sqrt{2}}\frac{\boldsymbol{\alpha}\cdot\mathbf{I}}{I}\rho(\mathbf{r})\) [26] describes such PV interactions, where \(\kappa^{\prime}\) includes all the NSD-PV contributions. We denote the Fermi constant \(G_{F}\), the vector of Dirac matrices \(\boldsymbol{\alpha}\), the nuclear spin \(\mathbf{I}\), and the nuclear density with respect to the nuclear center \(\rho(\mathbf{r})\).
An effective Hamiltonian acting only within the subspace of rotational and hyperfine levels can be obtained by averaging the previous Hamiltonian over the electronic wave function, given by \(H_{\text{eff}}=\kappa^{\prime}W_{\text{A}}C\), where \(W_{\text{A}}\) is a matrix element that includes the expectation value of \(H_{\text{PV}}\) over the electronic wave function in the \({}^{2}\Sigma\)-state in the rotating frame of the molecule, which can be computed numerically using state-of-the-art quantum chemistry methods with uncertainties as low as a few percent [38]. \(C=\frac{(\mathbf{n}\times\mathbf{S})\cdot\mathbf{I}}{I}\) contains the angular momentum dependence of \(H_{\text{eff}}\), and its matrix elements can be calculated analytically using angular momentum algebra [21]. _Measurement Strategy -_ Our proposed experiment will be performed in a Penning ion trap. This device is widely used in precision atomic and nuclear physics, providing the highest mass accuracy [41] and longest trapping times of stable [42], radioactive [43], and antimatter particles [44]. The trap consists of a strong magnetic and a weak electrostatic field, allowing three-dimensional trapping of ions (see [45] for a review on Penning traps). We take advantage of the trapping magnetic field to Zeeman-shift two opposite parity states into near degeneracy (see arrow in Fig. 1). Moreover, the intrinsic trap design allows for magnetic field strengths up to 12 T [46], thus providing maximal flexibility in the choice of ion species and rotational-hyperfine states.

Figure 1: Calculated energies of opposite parity rotational and hyperfine states in \({}^{29}\)SiO\({}^{+}\) for different magnetic field strengths, based on the Hamiltonian \(H_{0}\) and parameters given in [39; 40]. Near degeneracy can be achieved at \(B\approx 1.5\) T, and one possible crossing useful for detecting PV is indicated by an arrow. The positive parity states \(|\Psi_{\uparrow}^{+}\rangle\) are rising, while the negative ones \(|\Psi_{\downarrow}^{-}\rangle\) are descending.

Our experimental principle is identical to the one for neutral molecules in Refs. [20; 23]. In the presence of axial (i.e., aligned with the magnetic field) and radial electric fields, \(E_{z}\) and \(E_{r}\), the effective Hamiltonian of this two-level system is: \[H_{\pm}=\begin{pmatrix}\alpha_{r}E_{r}^{2}+\alpha_{z}E_{z}^{2}&iW+d\cdot E_{z}\\ -iW+d\cdot E_{z}&\Delta\end{pmatrix},\] with the weak interaction matrix element \[iW(m_{N}^{\prime},m_{I}^{\prime},m_{N},m_{I})=\kappa^{\prime}W_{\text{A}}\langle\Psi_{\downarrow}^{-}(m_{N}^{\prime},m_{I}^{\prime})|C|\Psi_{\uparrow}^{+}(m_{N},m_{I})\rangle,\] the expectation value \(d\) of the dipole moment operator, \(\mathbf{D}\), between the two levels and the general wave function, \(|\Psi(t)\rangle=c_{+}(t)|\Psi_{\uparrow}^{+}\rangle+e^{-i\Delta t}c_{-}(t)|\Psi_{\downarrow}^{-}\rangle\), of the two-level system with its eigenstates \(|\Psi_{m_{S}}^{P}\rangle\) of parity \(P\) and spin projection \(m_{S}\), and its time-dependent amplitudes \(c_{P}(t)\) (see Refs. [20; 23] and the Supplemental Material (SM)-B for details). \(\Delta\) is a small detuning of the two levels from perfect degeneracy and depends on the applied magnetic field strength \(B\); \(\alpha_{r}\) and \(\alpha_{z}\) represent the radial and axial contributions to the differential polarizability of the two levels [47], while \(E_{r}\) and \(E_{z}\) are any external radial and axial \(E\)-fields. In the ideal case of a single ion resting in the center of our trap in a stable magnetic field \(B\), prepared at \(t_{0}=0\) in the \(|\Psi_{\uparrow}^{+}\rangle\) state with zero external electric fields, we measure \(W\) using the Stark-interference procedure described in Ref. [20]. Thereby, we "kick" the ion to a well-defined amplitude in the harmonic trapping potential, leading to an electric field \(E_{z}(t)=E_{\text{ext}}\cdot\sin(\omega_{\text{ext}}t)\) experienced in the ion's rest frame. We repeat this measurement for \(N_{0}\) ions to determine the population transfer probability from the initial to the other parity state, \(|\Psi_{\downarrow}^{-}\rangle\), by measuring the average
We repeat this measurement for several \(N_{0}\) ions to determine the population transfer probability from the initial to the other parity state, \(|\Psi_{\uparrow}^{-}\rangle\), by measuring the average Figure 1: Calculated energies of opposite parity rotational and hyperfine states in \({}^{29}\)SiO\({}^{+}\) for different magnetic field strengths, based on the Hamiltonian \(H_{0}\) and parameters given in [39; 40]. Near degeneracy can be achieved at \(B\approx 1.5\) T, and one possible crossing useful for detecting PV is indicated by an arrow. The positive parity states \(|\Psi_{\uparrow}^{+}\rangle\) are rising, while the negative ones \(|\Psi_{\downarrow}^{-}\rangle\) are descending. signal \(S=N_{0}|c_{-}(t)|^{2}\) (see SM-B for details). The existence of parity violation leads to a non-zero asymmetry, defined as \(A_{\rm PV}=\frac{S(+E_{\rm ext})-S(-E_{\rm ext})}{S(+E_{\rm ext})-S(-E_{\rm ext })}\)[23], where \(S(+E_{\rm ext})\) and \(S(-E_{\rm ext})\) refer to the signal obtained for measurements with the initial "kick" applied in positive (\(+\)) or negative (\(-\)) axial direction. For \({}^{29}\)SiO\({}^{+}\), the population transfer and, hence, the asymmetry can be estimated using first-order perturbation theory (see SM-B for details). For interrogation times \(t_{\rm x}\approx\frac{2\pi N}{\omega_{\rm ext}}\approx\frac{\pi}{\Delta}\) at integer \(N\), the PV asymmetry becomes [20]: \[A_{\rm PV}=\frac{\frac{2W}{\Delta}\cdot\frac{\Omega_{\rm R}}{\omega_{\rm ext}} }{\left(\frac{W}{\Delta}\right)^{2}+\left(\frac{\Omega_{\rm R}}{\omega_{\rm ext }}\right)^{2}}, \tag{1}\] with \(\Omega_{\rm R}=dE_{\rm ext}\). Ultimately, \(W\) is determined via the population transfer probability for different values of \(\Delta\), i.e., magnetic field strengths \(B\) we can easily scan in our setup. Its statistical uncertainty is \[\delta W=\frac{\Delta}{4\sqrt{2N_{0}}\sin\left(\frac{\Delta t_{\rm x}}{2} \right)}\frac{\sqrt{\eta^{2}+1}}{\eta} \tag{2}\] using \(\eta\equiv\left(\frac{\Omega_{R}}{\omega_{\rm ext}}\right)/\left(\frac{W}{ \Delta}\right)\) for the number of molecules \(N_{0}\). To reduce \(\delta W\), we want to minimize \(\Delta\). Since we are technically limited in arbitrarily reducing \(\Delta\) (as discussed in the following section), we set the interrogation time to \(t_{\rm x}=\frac{\pi}{\Delta}\) once \(\Delta\) is minimized. Thus, the precise control of the interrogation time \(t_{\rm x}\) in our trap for a minimal uncertainty on \(\delta W\) and precise variation of \(t_{\rm x}\) to check for systematic effects, are clear advantages we can leverage over experiments performed on molecular beams. From our measurement of \(W\) and the calculated \(W_{\rm A}\) and \(C\), we can extract \(\kappa^{\prime}\approx\kappa^{\prime}_{2}+\kappa^{\prime}_{\rm a}\), encoding the physics of the weak interaction that leads to NSD-PV: \(\kappa^{\prime}_{2}\), arising from the \(V_{e}A_{N}\) term in the electron-nucleon-\(Z^{0}\)-boson exchange, and the electron electromagnetic interaction with the anapole moment, \(\kappa^{\prime}_{\rm a}\). Applying our technique to a wide range of isotopic chains, including radioactive ones [18, 24, 25], could possibly allow for a separation of \(\kappa^{\prime}_{2}\) and \(\kappa^{\prime}_{\rm a}\) based on the dependence of \(\kappa^{\prime}_{\rm a}\) on the nuclear mass \(A\) and spin \(I\)[20, 27]. 
_Experimental Details_ - Trapped ions in a Penning trap move along three superimposed eigenmotions: two radial ones perpendicular to the magnetic field and one axial along the magnetic field. The eigenmotions' frequency, phase, and amplitude can be controlled and coupled through radio-frequency excitations on the ion trap's electrodes [45]. The eigenmotions can be further cooled by coupling the axial motion to a resonance circuit at \(1\,\)K. The radial eigenmotions can be cooled to the same temperature by side-band coupling to the axial eigenmotion [48]. Once the ion is located in the trap center in equilibrium with the \(1\)-K-environment, it is decoupled from the resonance circuitry using a cryogenic switch. It remains in a nominally zero \(E_{\rm ext}\)-field, allowing for the above assumptions on the Hamiltonian due to low reheating rates of \(\sim 65\,\)mK/s [49]. An additional, significant advantage of our proposed method is that the magnetic field strength \(B\) experienced by the molecular ion with charge-to-mass ratio \(q/m\) can be precisely determined through a cyclotron frequency \(\nu_{c}=\frac{Bq}{2\pi m}\) determination via the Fourier-transform ion-cyclotron-resonance (FT-ICR) method [50] to the \(10^{-11}\) level of precision [51] or better using a cryogenic resonance circuit of high quality (\(Q>5000\)). In our proposed setup, neutral \({}^{29}\)SiO molecules are produced by laser-ablating a silicon rod in the supersonic expansion of a mixture of oxygen and argon gas [52]. The molecules are photo-ionized using resonant laser light [53] and bent towards the Penning trap. The ions are produced in the ground electronic and vibrational states and populate only low rotational levels [54]. The measurement scheme shown in Fig. 2 works as follows: (i) The molecular ions are trapped in the Penning trap, and a single molecule is selected using the evaporative cooling technique [55]. Once the ion is located at the trap center in equilibrium with the \(1\)-K-environment (assumed as the kinetic temperature of the ions moving forward) and decoupled from the resonant circuit, it is optically pumped into its rotational ground state (94(3)% fidelity was demonstrated in Ref. [34] for \({}^{28}\)SiO\({}^{+}\)). This level is further split into four hyperfine substates. Given the large splitting between these substates (\(>100\,\)MHz), they can be addressed individually after the rotational cooling using lasers or microwaves to transfer the population to the state of interest, \(|\Psi^{+}_{\uparrow}\rangle\) (Fig. 1, solid black line), with \(>90\%\) fidelity. (ii) To ensure the molecule is not in the negative parity state \(|\Psi^{-}_{\downarrow}\rangle\) (Fig. 1, colored lines) even after the state transfer, the molecule in \(|\Psi^{-}_{\downarrow}\rangle\) is state-selectively dissociated via excitation to a higher-lying auto-dissociating state [34]. The time scale for this process is \(\sim\!10\,\)ns, i.e., short compared to all inverse frequencies in this measurement; thus, it corresponds to an instantaneous (but conditional) quantum projection onto unaffected states. (iii) This step constitutes the starting point of the measurement. It will be executed after step (i) and in parallel to step (ii) since \(|\Psi^{+}_{\uparrow}\rangle\) would start to evolve in time even without an external electric field.
In the ion's rest frame, the ion experiences a sinusoidal electric field \(E_{z}(t)=E_{\rm ext}\cdot\sin(\omega_{\rm ext}t)\) with \(E_{\rm ext}\approx 6\) V/cm and \(\Omega_{\rm R}/2\pi\approx 3\,\)kHz. This is achieved by exciting the ion to an axial amplitude of \(\sim 0.3\,\)mm in the harmonic trapping potential with a \(\sim 20\,\)V single-cycle, resonant sinusoidal-wave "kick" to the trap's end caps, as routinely achieved in practice [56]. Population transfer from the initial positive to the negative parity state occurs due to the PV matrix element and the interaction with this sinusoidal electric field. The minimum useful working value of the splitting is limited by the uncertainty associated with \(\Delta\). The main contribution to this effect is expected to come from the AC Stark shift of the energy levels of interest due to the transverse and axial components of the electric field, with the effects proportional to \(\alpha_{r}E_{r}^{2}\) and \(\alpha_{z}E_{z}^{2}\), respectively. The uncertainty associated with this shift arising from the thermal distribution of ion positions and velocities is expected to be \(\delta\Delta/2\pi\approx 30\) Hz (see SM-A for details of the calculations). To clearly tell apart the two opposite parity levels of interest, we assume a value of \(\Delta/2\pi\approx 100\) Hz, and therefore \(t_{\rm x}=\pi/\Delta=5\) ms to minimize \(\delta W\). (iv) The final state detection is performed by molecular dissociation of the negative parity state \(|\Psi_{\downarrow}^{-}\rangle\), using the same auto-dissociating state as in step (ii), as soon as the oscillating field in step (iii) is switched "off" by reversing the sinusoidal "kick". Since the dissociation process is parity-state selective, we can perform a "double-dip" mass measurement [56] in search of \({}^{29}\)SiO\({}^{+}\), \({}^{29}\)Si\({}^{+}\), or \({}^{16}\)O\({}^{+}\) as a measurement of the final parity state. If a dissociation has occurred, we remove the \({}^{29}\)Si\({}^{+}\) or \({}^{16}\)O\({}^{+}\) ion from the trap and load a new \({}^{29}\)SiO\({}^{+}\) ion. If no dissociation has occurred, the measurement is restarted at step (i). Figure 3 shows the simulated PV asymmetry, \(A_{\rm PV}\), of Eq. (1) as a function of \(\Delta\) for a range of possible \(W\) values. For \({}^{29}\)SiO\({}^{+}\), we assume \(\Omega_{\rm R}/2\pi=3\) kHz, \(\omega_{\rm ext}/2\pi=350\) kHz, and scan \(\Delta/2\pi\) ranging from \(-150\) Hz to \(150\) Hz in steps of \(50\) Hz. Measuring different values of \(\Delta\) was shown to be effective in avoiding various systematic uncertainties [23, 57]. Measuring also at other relevant level crossings will allow diagnosing systematics. Heavier molecules with larger weak matrix elements comparable to \(\Delta\) (\(W\gtrsim 100\) Hz), such as the potentially laser-coolable TlF\({}^{+}\) [58] (see Tab. 1), do not require additional external Stark mixing for amplifying the sought signal. As suggested in Ref. [36], the level crossing shown in Fig. 1 then turns into a pseudo crossing, which can be measured directly. This approach requires an advanced level of systematic control, planned to be investigated in the future. _Uncertainty Estimates_ - Here, we estimate the primary sources and magnitude of uncertainty for \({}^{29}\)SiO\({}^{+}\) with \(\Delta/2\pi=100\) Hz and \(W/2\pi=0.4\) Hz.
Figure 2: Schematic layout and measurement principle with a laser port for the ionization, cooling, and dissociation lasers. Our measurement procedure, (i)-(iv), is described in the text.
These values lead to a maximum transfer probability of \(\sim 0.06\%\) of the positive parity state's population after \(t_{\rm x}=5\) ms for \({}^{29}\)SiO\({}^{+}\), corresponding to an asymmetry of \(\sim 0.75\) (red dots in Fig. 3). (i) _Initial Axial Amplitude_ - Besides the already mentioned induced AC Stark shift of the energy levels of interest, leading to \(\delta\Delta/2\pi\approx 30\) Hz, i.e., \(\delta W/W\approx 30\%\) from a single observed state transfer event, a second major source of uncertainty is expected to derive from the thermal noise in the initial axial amplitude of the cooled ions. Once cooled and resting in the center of the Penning trap, the ions' energy is Boltzmann distributed, with an average initial axial amplitude of \(z_{0}=\sqrt{\frac{2k_{\rm B}Td_{\rm char}^{2}}{q_{\rm ion}U_{0}C_{2}}}\), where \(k_{\rm B}\) is the Boltzmann constant, \(q_{\rm ion}\) is the ion's charge (here the elementary charge \(e\)), and we assume \(T=1\) K. Based on our trap design [56], optimized for \(E\)-field homogeneity of the electric quadrupole potential \(\phi(z,\rho)=\frac{U_{0}C_{2}}{2d_{\rm char}^{2}}\left(z^{2}-\rho^{2}/2\right)\), we further assumed for the characteristic trap length \(d_{\rm char}=\sqrt{0.5(z_{\rm trap}^{2}+r_{\rm trap}^{2}/2)}=3\) mm (with the central ring electrode's length \(z_{\rm trap}\) and radius \(r_{\rm trap}\)), the trap potential \(U_{0}=-85\) V, and the dimensionless quadrupole constant \(C_{2}=-0.6\). The initial axial amplitude is then \(z_{0}\approx 10\) \(\mu\)m, which would result in an average thermal noise of \(\delta E_{\rm th}\approx 0.2\) V/cm, corresponding to \(\delta W/W\approx 3\%\) for \({}^{29}\)SiO\({}^{+}\). Both of these effects are statistical, i.e., they can be reduced by increasing the number of measurements. (ii) _Magnetic Field_ - Short-term magnetic field instabilities (for the measurement time of up to many milliseconds) are expected to be \(\delta B/B\lesssim 10^{-10}\) [59, 60]. Observed temporal changes in the magnetic field, tracked in a neighboring trap center, will be used for live adjustment of slow magnetic field drifts on top of the typical temperature and pressure stabilization of the magnet [56]. With this method we anticipate \(\delta B/B\approx 10^{-10}\) for the duration of the data taking [61]. Furthermore, deviations from spatial uniformity due to higher-order field effects not accounted for by shimming coils are expected to be \(\delta B/B<10^{-10}\) for the small probed volume of \(\ll 0.1\) mm\({}^{3}\) [56]. All of these effects can be quantified based on precise measurements of \(\nu_{\rm c}\) for well-known species. These effects lead to a total systematic uncertainty from the magnetic field of \(\delta B/B\approx 10^{-10}\), or \(\delta\Delta/2\pi\approx 4\) Hz, i.e., \(\delta W/W\approx 4\%\) for \({}^{29}\)SiO\({}^{+}\). This uncertainty can be reduced by at least one order of magnitude by improving the stability and uniformity of the magnetic field. (iii) _Electric Field_ - A relative electric field uncertainty of \(\delta E/E\ll 1\%\), which can be routinely achieved in practice [56], would have a negligible effect on \(\delta W\). We thus anticipate a total systematic uncertainty of \(\delta W/W<5\%\) for \({}^{29}\)SiO\({}^{+}\). To achieve a 10% statistical uncertainty on the proposed measurement, we need on the order of \(10^{5}\) trapped molecular ions.
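A short numeric cross-check of the thermal-amplitude estimate above and of the resulting measurement time (the 5 s cycle is an assumption consistent with the few-second cycle quoted next):

```python
import numpy as np

kB, e = 1.380649e-23, 1.602176634e-19  # J/K, C
T, d_char = 1.0, 3e-3                  # K, m
U0, C2 = -85.0, -0.6                   # V, dimensionless
z0 = np.sqrt(2 * kB * T * d_char**2 / (e * U0 * C2))
print(f"z0 ~ {z0*1e6:.1f} um")  # ~6 um, i.e., the ~10 um order quoted above

N0, cycle = 1e5, 5.0                   # ions; assumed seconds per cycle
print(f"total time ~ {N0 * cycle / 86400:.0f} days")  # ~ one week
```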
Given a measurement cycle of a few seconds (dominated by mass selection, cooling, and state preparation), a 10% relative uncertainty measurement would thus be feasible in about one week of measurement time for \({}^{29}\)SiO\({}^{+}\). _Calculated Sensitivity Factors_ - We calculated the molecular matrix element of the anapole moment \(W_{\rm A}\) for the \({}^{2}\Sigma_{1/2}\) ground states of BF\({}^{+}\), \({}^{29}\)SiO\({}^{+}\), and TlF\({}^{+}\) at the 4-component relativistic Fock-space coupled-cluster (FSCC) level of theory using the finite field approach. This formalism includes \(H_{\rm PV}\) as a perturbation to the Dirac-Coulomb Hamiltonian. The \(W_{\rm A}\) factor is obtained as the first derivative of the total energy with respect to this perturbation [38]. We used the dyall.cv4z basis sets [62; 63] and correlated 13 (all), 21 (all), and 51 electrons for BF\({}^{+}\), \({}^{29}\)SiO\({}^{+}\), and TlF\({}^{+}\), respectively. A Gaussian charge distribution represented the nucleus. All the calculations were performed using an adapted version of the Dirac program package [64; 65]. Furthermore, we calculated \(W_{\rm A}\) for Ac-, Th-, and Lr-containing molecular ions. Here, we used the 4-component relativistic Dirac-Hartree-Fock (DHF) level of theory. In this case, \(W_{\rm A}\) was extracted from the off-diagonal matrix elements of the operator \(\alpha\rho(r)\) acting on the degenerate \(\Omega=\pm 1/2\) states in the molecular spinor basis. We employed the dyall.cv4z basis sets for all the elements [62; 63; 66; 67]. The molecular geometries were optimized at the exact 2-component [68; 69] coupled-cluster level of theory, including single and double excitations, in the parallel implementation of the Dirac program package [70]. The cut-off was set to \(-20\) to 30 a.u. We used the dyall.v3z basis sets [63; 66; 67] for all the systems, except for \({}^{29}\)SiO\({}^{+}\) (experimental bond length [71]) and BF\({}^{+}\)/TlF\({}^{+}\) (s-aug-dyall.v4z basis sets [62; 63]). All results are presented in Table 1. Besides \({}^{29}\)SiO\({}^{+}\) [33; 34; 35], spectroscopic information is, to the best of our knowledge, not available in the literature for the presented molecular ions. Hence, prior studies of each molecular ion are necessary to find the needed rotational/hyperfine parameters and laser-cooling transitions. _Outlook -_ We proposed a new technique that can provide a highly sensitive route to investigate yet-to-be-explored nuclear parity-violating properties using single molecular ions. These measurements will enable stringent tests of the weak interaction in stable and short-lived isotopes across the nuclear chart. This technique could be directly applied to light isotopes, for which PV nuclear properties can already be calculated on the lattice [72; 73] and with ab initio methods [74]. For diatomic molecules containing elements as light as the deuteron, the magnetic fields for ground-state level-crossings exceed the capabilities of the latest magnet technology; however, this challenge could be overcome by using ground-rotational states in polyatomic molecules [20; 75]. Furthermore, applying advanced cooling techniques already demonstrated in Penning traps would enable reducing the trapped molecule's kinetic energy even further, to \(\sim 10-100\) mK [76; 77; 78] or even \(\sim 1\) mK [79; 80], resulting in a reduction of the uncertainty on \(W\) by one to two orders of magnitude. _Acknowledgments -_ This work was supported by the U.S.
Department of Energy (DOE), Office of Science (OS), and Office of Nuclear Physics under Award numbers DE-SC0021176 and DE-SC0021179. This research is partly based on work supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the OS Director of the U.S. DOE under Contract DE-AC02-06CH11357. We thank the Center for Information Technology of the University of Groningen for its support and access to the Peregrine high-performance computing cluster. The INCITE program awarded computer time. This research also used resources from the Oak Ridge Leadership Computing Facility, a DOE-OS User Facility supported under Contract DE-AC05-00OR22725. We also acknowledge the support from High Sector Fock space coupled cluster method: benchmark accuracy across the periodic table (with project number VI.Vidi.192.088 of the research program Vidi, financed by the Dutch Research Council) and the 2020 Incite Award: "PRECISE: Predictive Electronic Structure Modeling of Heavy Elements." JK acknowledges the support of a Feodor Lynen Fellowship of the Alexander-von-Humboldt Foundation. SBM acknowledges the support of a National Science Foundation Graduate Research Fellowship (NSF Grant #2141064) and a Fannie and John Hertz Graduate Fellowship.
2308.08689
Are we visible to advanced alien civilizations?
We considered the question of how our artificial constructions are visible to advanced extraterrestrial civilizations. Taking the universality of the laws of physics, we found that the maximum distance where the detection is possible is of the order of $3000$ ly and under certain conditions Type-II advanced alien societies might be able to resolve this problem.
Z. N. Osmanov
2023-08-16T22:24:19Z
http://arxiv.org/abs/2308.08689v1
# Are we visible to advanced alien civilizations? ###### Abstract We considered the question of how our artificial constructions are visible to advanced extraterrestrial civilizations. Taking the universality of the laws of physics, we found that the maximum distance where the detection is possible is of the order of 3000 ly and that under certain conditions Type-II advanced alien societies might be able to resolve this problem. keywords: SETI -- Technosignatures -- Astrobiology + Footnote †: journal: Elsevier ## 1 Introduction In the context of the search for extraterrestrial (ET) intelligence (SETI), a closely related question can be formulated: If they exist, are we visible to them? Answering this question inevitably implies consideration of the technological level to which an alien civilization belongs. In this context, it is worth noting that Kardashev (1964) introduced a technological classification of advanced alien societies. According to his approach, there might be three major technological civilizations: Type-I is an alien society that consumes the total energy incident on a planet from its host star; Type-II is an extraterrestrial civilization utilizing the total energy of the star; and Type-III is an advanced alien society that consumes the total galactic energy. Many aspects of the mentioned societies have been considered in a series of papers (Hsiao et al., 2022; Zuckerman, 2022; Osmanov, 2021, 2016, 2018; Dyson, 1960), but in this paper we consider only galactic civilizations, i.e., Type-I and Type-II societies, and study how visible we are to them. In particular, the question is: can the artifacts of our technological society be visible and potentially detectable by the telescopes of ETs? Since the question is whether our society can be identified as a civilization, the major focus should be on the search for large ships, buildings, space satellites, etc. Such artifacts might easily be identified as artificial constructions. For this purpose, it is natural to focus on the visible light reflected from the corresponding objects. The best way to identify an observed object as an artificial one is to spatially resolve it. Therefore, optical telescopes will be used. But, on the other hand, the angular resolution depends on the diameter of a telescope, \(\theta\simeq 1.22\times\lambda/D\) (Carroll & Ostlie, 2010), which for very small angles requires extremely large telescopes (here, \(\lambda\) is the wavelength). Instead of using large telescopes of astronomical sizes (although such a possibility cannot be excluded from consideration), one can apply long baseline optical interferometry (Monnier, 2003) by using at least two telescopes separated by a huge distance (the baseline, \(B\)), so that the resolving angle might be very small, \(\theta\simeq\lambda/B\). The processing of data might be significantly simplified by the possibility of recording the optical signal, which is currently impossible because no device can record a signal with a time-scale of the order of \(10^{-15}-10^{-14}\) sec, a typical scale of the corresponding period. In (Dvali & Osmanov, 2023) we have shown that Type-I, II, and III advanced alien societies might use quantum computers based on artificial black holes, which are able to record the mentioned signals, since the minimum time-scales might be much less than \(10^{-15}-10^{-14}\) sec. In this paper, we analyze how visible we are to advanced ETs, depending on their technological level. The article is organized as follows: In Sec.
2, we consider the basic ideas and obtain the results, and in Sec. 3, we summarize them. ## 2 Discussion and results In order to understand the capabilities of advanced civilizations, it is better to first estimate the timescale for reaching the corresponding technological levels. As we have already mentioned, the Type-I alien society is the one that utilizes the total power incident from the solar-type host star on an Earth-type planet, \(P_{{}_{I}}\simeq\frac{L_{\odot}}{4}\times\left(\frac{R_{{}_{E}}}{R_{AU}}\right)^{2}\simeq 1.7\times 10^{24}\) erg s\({}^{-1}\), where \(L_{\odot}\simeq 3.8\times 10^{33}\) erg s\({}^{-1}\) denotes the Solar luminosity, \(R_{AU}\simeq 1.5\times 10^{13}\) cm is the distance from the Sun to the Earth (one astronomical unit - AU) and \(R_{E}\simeq 6.4\times 10^{8}\) cm is the Earth's radius. Unlike the mentioned power, our current power consumption, \(1.5\times 10^{20}\) erg s\({}^{-1}\), is smaller by several orders of magnitude. By following Dyson (1960) and assuming that a 1% annual growth rate of industry is maintained, one can straightforwardly show that our civilization, which is of type 0.75, will reach Type-I in approximately \(t_{I}=1000\) yrs. For Type-II the corresponding time-scale equals \(t_{II}=3000\) yrs (see also Dyson (1960)). Therefore, it is natural to assume that advanced alien societies might be able to launch telescopes on circular trajectories separated by a distance of the order of \(\sim 10\) AU.1 Footnote 1: Even our civilization was able to launch Voyager-I and -II, which have already moved away from the Sun to distances of 160 AU and 130 AU, respectively: [https://voyager.jpl.nasa.gov/](https://voyager.jpl.nasa.gov/). Then, by considering an object of a length-scale, \(l\), which should be spatially resolved at the maximum distance \(r_{max}\) from it, one can obtain an expression for \(r_{max}\) by combining two expressions for the angular resolution, \(\theta\simeq\lambda/B\) and \(\theta\simeq l/r_{max}\): \[r_{max}\simeq B\times\frac{l}{\lambda}\simeq 3000\times\frac{B}{10\;AU}\times\frac{l}{10\;m}\times\frac{550\;nm}{\lambda}\;ly. \tag{1}\] Here we have taken into account that the wavelength of visible light lies in the interval \((400-700)\) nm, and we used its average value, 550 nm. The estimate shows that ETs will be able to identify artificial constructions from distances of \(\sim 3\) kly. For example, the pyramids of Egypt might have been detected. On the other hand, artificial satellites can be visible, but if the observing ETs are more than 60 ly away, detection will be impossible because the optical signal from the first artificial satellites has traveled only \(\sim 60\) ly. To be sure that advanced societies can detect our constructions, one should study the flux sensitivity for the observed objects.
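A quick numeric check of Eq. (1) in cgs units (variable names are ours):

```python
AU, ly = 1.496e13, 9.461e17       # cm
B, l, lam = 10 * AU, 1e3, 550e-7  # baseline, a 10 m object, 550 nm (all in cm)
print(f"r_max ~ {B * l / lam / ly:.0f} ly")  # ~2900 ly, i.e., ~3 kly
```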
For this purpose, it is necessary to estimate the spectral luminosity of solar-type stars in the visible range (Carroll & Ostlie, 2010): \[L_{vis}\simeq 4\pi R_{\odot}^{2}\times\frac{2\left(kT\right)^{4}}{h^{3}c^{2}}\times\int_{x_{min}}^{x_{max}}\frac{x^{3}}{e^{x}-1}dx\simeq 4.5\times 10^{32}\;erg/s, \tag{2}\] where \(T\simeq 5777\) K is the temperature of the star's surface, \(R_{\odot}\simeq 7\times 10^{10}\) cm is its radius, \(k\) is the Boltzmann constant, \(c\) is the speed of light, \(h\) denotes the Planck constant, \(x_{min}=\epsilon_{min}/(kT)\), \(x_{max}=\epsilon_{max}/(kT)\), and \(\epsilon_{min}=hc/\lambda_{max}\) and \(\epsilon_{max}=hc/\lambda_{min}\) are respectively the minimum and maximum photon energies corresponding to the visible spectrum (\(\lambda_{min}=400\) nm, \(\lambda_{max}=700\) nm). Then, if one intends to observe an object of length-scale \(l\), reflecting a fraction \(\xi\) of the incident light of a star at a distance \(R\), visible with the angle \(\theta_{0}\), for the power incident on the telescope one obtains \[P\simeq\xi\times l^{2}\times\frac{L_{vis}}{4\pi R^{2}}\times\frac{\theta_{0}^{2}}{2\pi}. \tag{3}\] For the observation time-scale, \(\tau\), the total energy \(P\times\tau\) should be guaranteed by the incident \(N\) photons with the average energy \(\epsilon=hc/\lambda\). Therefore, the energy balance condition \(P\times\tau\simeq\epsilon N\), combined with Eq. (1) and with the natural condition \(\theta_{0}\simeq D/r_{max}\), leads to an estimate of the telescope diameter \[D\simeq\frac{2RB}{\lambda}\times\left(\frac{\pi hcN}{\xi\lambda\tau L_{vis}}\right)^{1/2}\simeq 3\times 10^{6}\times\frac{R}{1\ AU}\times\frac{B}{10\ AU}\times\left(\frac{550\;nm}{\lambda}\right)^{3/2}\times\left(\frac{N}{10^{6}}\times\frac{0.5}{\xi}\times\frac{1\ hour}{\tau}\right)^{1/2}\ km, \tag{4}\] where we have assumed that the reflection coefficient equals 0.5, the typical distance between a star and a planet and the baseline are of the order of 1 AU and 10 AU, respectively, the time-scale of observation is of the order of 1 hour, and the number of photons required to resolve a building should be at least \(10^{6}\). From Eq. (4) it is clear that the diameter of the telescope should be of the order of several million kilometers. Such huge megastructures might be built only by Type-II civilizations, but not by Type-I alien societies. In particular, in (Osmanov, 2021) it has been shown that during a typical time-scale, \(\tau_{I}=1000\) yrs, one might be able to construct only Earth-sized megastructures. Therefore, henceforth, the focus will be on Type-II advanced technologies. Analyzing the maximum distances, it is important to estimate the possible distribution of advanced ETs in the Milky Way galaxy.
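Before turning to the distribution of civilizations, Eqs. (2) and (4) can be checked numerically. The sketch below follows Eq. (2) exactly as printed (note that the standard blackbody surface flux would carry an additional factor of \(\pi\)); constants are cgs:

```python
import numpy as np
from scipy.integrate import quad

h, c, k = 6.626e-27, 2.998e10, 1.381e-16  # Planck const, speed of light, Boltzmann const
R_sun, T, AU = 7e10, 5777.0, 1.496e13

# Eq. (2): visible-band luminosity of a solar-type star
x_min = h * c / (700e-7 * k * T)
x_max = h * c / (400e-7 * k * T)
integral, _ = quad(lambda x: x**3 / np.expm1(x), x_min, x_max)
L_vis = 4 * np.pi * R_sun**2 * 2 * (k * T)**4 / (h**3 * c**2) * integral
print(f"L_vis ~ {L_vis:.1e} erg/s")  # ~4.5e32

# Eq. (4): required telescope diameter
R, B, lam = 1 * AU, 10 * AU, 550e-7  # cm
N, xi, tau = 1e6, 0.5, 3600.0        # photons, reflectivity, observation time in s
D = 2 * R * B / lam * np.sqrt(np.pi * h * c * N / (xi * lam * tau * L_vis))
print(f"D ~ {D / 1e5:.1e} km")       # ~3e6 km
```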
In his seminal work, Drake (1961) derived an expression to estimate the number of communicating civilizations \[N=R_{\star}\times f_{p}\times n_{e}\times f_{l}\times f_{i}\times f_{t}\times\mathcal{L}, \tag{5}\] where \(R_{\star}\) is the rate of star formation in the galaxy, \(f_{p}\) is the fraction of stars with planetary systems, \(n_{e}\) denotes the average number of habitable planets per star, \(f_{l}\) is the fraction of planets potentially supporting life which actually developed it, \(f_{i}\) is the fraction of planets with intelligent life, \(f_{t}\) denotes the fraction of technological civilizations that can communicate, and \(\mathcal{L}\) denotes the average length of the time-scale for which ET civilizations release signals into space. From modern observations, the value of the star formation rate is relatively well defined: \(R_{\star}=(1.5-3)\) stars per year (Kennicutt & Evans, 2012). The Kepler mission has enabled us to estimate that the average number of planets in the habitable zones is of the order of 40 billion; according to this study, \(f_{p}\times n_{e}\simeq 0.4\) (Petigura et al., 2013). Since life emerged on Earth soon after the conditions became favorable for life, it is widely accepted that \(f_{l}\sim 1\). According to the statistical approach developed by Maccone (2012), its value is close to \(1/3\) and \(f_{i}\times f_{t}\) is of the order of 0.01. In general, it is believed that \(f_{l}\times f_{i}\times f_{t}\) should vary in the interval \((10^{-3}-1)\). Following the discussion presented in (Dvali & Osmanov, 2023) and assuming \(\mathcal{L}_{II}\gtrsim t_{II}\), one can estimate an interval for the number of Type-II civilizations: \(N_{II}=(6-3.6\times 10^{3})\). It is worth noting that these values are derived for \(\mathcal{L}_{II}\simeq t_{II}\); consequently, it is clear that if the civilizations exist for longer time-scales, the values of \(N_{II}\) might be (significantly) increased. As an order-of-magnitude estimate, we assume that the civilizations are uniformly distributed over the galactic plane. Then, considering the upper limit of \(N_{II}\) and taking the average diameter of the MW, \(D_{MW}\simeq 26.8\) kpc (Carroll & Ostlie, 2010), into account, for the average distance among civilizations one obtains \[R_{II}\simeq\left(\frac{\pi D_{MW}^{2}}{4N_{II}}\right)^{1/2}\simeq 1300\;ly. \tag{6}\] As one can see from the obtained value, \(r_{max}>R_{II}\), indicating that technologically advanced ETs might detect our large constructions dating from ancient up to medieval times. One can straightforwardly check that if the number of civilizations is not less than 650, our civilization will be visible to them. They can detect our modern constructions only if their total number in the MW is of the order of \(10^{6}\), which was hypothesized by Sagan (1963). This is possible if the civilization time-scale is of the order of \(10^{6}\) yrs. In this context it is worth noting that star ages in the MW might differ from the Solar age by hundreds of millions up to several billions of years (Carroll & Ostlie, 2010); therefore, if evolution started in an older planetary system, the time-scale of the technological civilization might be quite large.
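A short numeric sketch of the estimates based on Eqs. (5) and (6) (the lower bound combines Maccone's \(f_{l}\simeq 1/3\) with \(f_{i}\times f_{t}\simeq 0.01\); 1 kpc \(\approx 3261.6\) ly):

```python
import numpy as np

R_star = (1.5, 3.0)                  # star formation rate, stars/yr
fp_ne = 0.4                          # f_p * n_e (Petigura et al. 2013)
f_low, f_high = (1 / 3) * 0.01, 1.0  # bounds on f_l * f_i * f_t
L_II = 3000.0                        # yr, assuming L_II ~ t_II

N_low = R_star[0] * fp_ne * f_low * L_II    # ~6
N_high = R_star[1] * fp_ne * f_high * L_II  # ~3600
print(N_low, N_high)

# Eq. (6): average distance for the upper limit of N_II
D_MW = 26.8 * 3261.6  # galactic diameter in ly
print(f"R_II ~ {np.sqrt(np.pi * D_MW**2 / (4 * N_high)):.0f} ly")  # ~1290 ly
# Minimum N for the nearest civilization to lie within r_max ~ 3000 ly
print(f"N_min ~ {np.pi * D_MW**2 / (4 * 3000**2):.0f}")  # ~650-670
```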
## 3 Summary In the paper we have considered the possibility of detection of our technological artifacts by advanced civilizations using optical interferometry and mega-telescopes. We have found that the maximum distance where a 10-meter length-scale construction might be spatially resolved is of the order of 3000 ly. By analysing the spectral sensitivity, it has been shown that the required telescope diameters would be too large for Type-I civilizations to construct, and only Type-II advanced societies might be able to spatially resolve the artificial constructions. By analysing the Drake equation, it has been found that if the number of civilizations is of the order of 650, they will be able to detect our artificial constructions. ## 4 Acknowledgments The research was partially supported by the EU fellowships for Georgian researchers, 2023 (57655523). Z.O. also would like to thank Torino Astrophysical Observatory and Universita degli Studi di Torino for hospitality while working on this project.
2307.10643
Study of (3He, t) charge exchange reactions to isobaric analog states in inverse kinematics
The transition between isobaric analog states (IAS) in the (3He, t) charge exchange reaction presents a unique opportunity to access the isospin structure of the nuclei. In this study not only the Fermi transition but also the Gamow-Teller (G-T) transition of the IAS reaction were investigated for the 13,14C(3He, t) and 17,18,19,20O(3He, t) reactions, in order to explore the neutron number dependence of the IAS reaction for the light neutron-rich nuclei. It was found that the G-T type IAS reaction also exhibited a significant dependence of the transition strength on the neutron number and the angular momentum configuration of the nuclei. Additionally, the inverse kinematics was also discussed for extracting the yields of the interested reaction channels in the proposed experiments on radioactive beams. The calculated triton yields demonstrated the capability of the proposed experiments to obtain meaningful results.
Zhixuan He, Wenjuan Bu, Chaoyuan Xiao, Meng Li, Herun Yang, Bitao Hu, Yi Zhang
2023-07-20T07:13:41Z
http://arxiv.org/abs/2307.10643v1
# Study of (\({}^{3}\)He, _t_) charge exchange reactions to isobaric analog states in inverse kinematics ###### Abstract The transition between isobaric analog states (IAS) in the (\({}^{3}\)He, _t_) charge exchange reaction presents a unique opportunity to access the isospin structure of the nuclei. In this study not only the Fermi transition but also the Gamow-Teller (G-T) transition of the IAS reaction were investigated for the \({}^{13,14}\)C(\({}^{3}\)He, _t_) and \({}^{17,18,19,20}\)O(\({}^{3}\)He, _t_) reactions, in order to explore the neutron number dependence of the IAS reaction for the light neutron-rich nuclei. It was found that the G-T type IAS reaction also exhibited a significant dependence of the transition strength on the neutron number and the angular momentum configuration of the nuclei. Additionally, the inverse kinematics was also discussed for extracting the yields of the reaction channels of interest in the proposed experiments on radioactive beams. The calculated triton yields demonstrated the capability of the proposed experiments to obtain meaningful results. \({}^{1}\) School of Nuclear Science and Technology, Lanzhou University, 22 South Tanshui Road, Lanzhou, 73000, Gansu Province, China \({}^{2}\) Institute of Modern Physics, Chinese Academy of Sciences, 509 Nanchang Road, Lanzhou 73000, Gansu Province, China \({}^{3}\) China Nuclear Power Technology Research Institute Co., Ltd., 1001 Shangbu Middle Road, Shenzhen 518000, Guangdong Province, China \({}^{4}\) Department of Oncology, The Second Xiangya Hospital, Central South University, No.139 Renmin Road Central, Changsha 410011, Hunan Province, China Charge exchange reaction, Isospin excitation, Isospin symmetry, Isobaric analog state, Double Folding Model ## 1 Introduction Direct nuclear reactions at intermediate energies offer a clean and convenient way to observe dynamical properties in nuclei. Through the charge exchange (CE) reaction in this energy region, we can access not only the isospin structure of the nucleus [1-3], but also the isovector interaction between the projectile and target [4]. Within the distorted wave Born approximation (DWBA), the 'double-folding' analysis has shown that the isoscalar and isovector terms of the Lane-form potential represent the rescaled isoscalar and isovector densities of the target nuclei, respectively [5]. Furthermore, not only the interaction strength but also the shape of the isovector potential plays a key role in the CE reaction [6]. Extensive work has been done to reliably understand the Lane-form optical potential within a microscopic framework [7-9]. As the isovector term of the Lane-form potential shows how differently the protons and neutrons behave during a reaction, and the isovector density shows how differently the protons and neutrons are distributed inside a nucleus, it turns out that both originate from the same mechanism, termed charge symmetry [10-11]. Due to the neutron-proton asymmetry of the target nucleus, (_N-Z_)/_A_, the isovector potential is normally small compared with the isoscalar counterpart. Even though extensive data on CE reactions have been accumulated, they are still not enough for an optical potential to give a satisfactory description of the experiments [12]. Rare-isotope beam nuclear reactions in inverse kinematics would have significant advantages here, especially in the light-nucleus region.
Taking the spin dimension into account, CE reactions can be grouped into two types: the spin-flip (Gamow-Teller, or G-T) transition and the non-spin-flip (Fermi, or F) transition. In the G-T transition, the spin of the target nuclei changes by 1 (\(\Delta S=1\)), and the cross section contains the off-diagonal elements of the transition matrix. Therefore, the G-T transition has been studied extensively in experiments to access the spin-dipole (SD) nuclear matrix element, which is useful to access the isovector-spin resonance of the nuclei [13-16] and is of special interest in the search for the neutrinoless double \(\beta\) decay [17-18]. On the other hand, the Fermi transition, where \(\Delta S=0\), is a pure isospin excitation. In addition, there is a special excitation ideal for accessing the isovector structure, which is the excitation between the isobaric analog states (IAS). The nuclei in the initial and final states of the IAS reaction have a similar structure, where only one nucleon is replaced. Since the total angular momentum and the parity of the initial and final states are the same in the IAS reaction, the transition can be treated exactly as an elastic scattering except for the isospin orientation of the flipped nuclei. Thus, it could be employed as a primary tool to study the isospin symmetry of the nuclei in the initial and final states, such as the nuclear symmetry energy and the neutron skin [19]. In both theory and experiment, there have been several recent works analyzing the isovector densities and diffuseness of the target nucleus based on the isovector term of the optical potential and the double-folding formalism [20-22]. The experimental data of the (\({}^{3}\)He, _t_) scattering to the IAS of \({}^{90}\)Zr and \({}^{208}\)Pb at \(E_{\rm lab}\) = 420 MeV have been studied to deduce neutron skin values for these two nuclei [20]. A neutron skin \(\Delta R_{np}\approx 0.16\pm 0.04\) fm was obtained for \({}^{208}\)Pb and \(\Delta R_{np}\approx 0.09\pm 0.03\) fm for \({}^{90}\)Zr. In addition, SD excitations of \({}^{90}\)Zr have been studied by the \({}^{90}\)Zr(_p_, _n_) and \({}^{90}\)Zr(_n_, _p_) CE reactions at 300 MeV, and the neutron skin thickness of \({}^{90}\)Zr was determined to be \(\Delta R_{np}\approx 0.07\pm 0.04\) fm [23]. These successful attempts set the stage for the extraction of neutron radii and neutron skin thicknesses using CE reactions. Compared with the basic (_p_, _n_) process, the (\({}^{3}\)He, _t_) process is more sensitive to the outer, rather than the inner, structure of the target nuclei [24-25] and can be measured with a significantly higher resolution [26-27]. A series of theoretical and experimental studies on (\({}^{3}\)He, _t_) reactions has been carried out for several nuclei, relying on the combination of a \({}^{3}\)He beam and stable or long-lived isotopes as the targets [28-30]. As a complement, with a radioactive beam, the CE reaction in inverse kinematics would offer a great opportunity to study the isospin structure of the nucleus far from the \(\beta\)-stable line [31-32]. It would be especially interesting to observe the IAS reactions of the neutron-rich light nuclei in the (\({}^{3}\)He, _t_) process, as the process would favor the isovector structure of the nucleus surface and the isovector potential would be stronger due to the factor of (_N-Z_)/_A_. Therefore, a (\({}^{3}\)He, _t_) experiment plan was proposed on the radioactive beams of the Heavy Ion Research Facility in Lanzhou (HIRFL) [33-34].
This would be an excellent opportunity to investigate the structure of unstable neutron-rich nuclei, especially neutron drip line nuclei, where the proton-neutron asymmetry is significant. Besides, for light mirror nuclei such as \({}^{7}\)Li-\({}^{7}\)Be, \({}^{15}\)N-\({}^{15}\)O, and \({}^{17}\)O-\({}^{17}\)F, the possible cluster effect is another charming topic based on a similar experimental configuration [35-38]. In observations of IAS reactions, it is necessary to carefully choose the kinematics of the reaction to precisely identify the final state of the outgoing nuclei. As the IAS reaction mainly distributes in the small-angle region in the center-of-mass frame, observing it in inverse kinematics can take advantage of the fact that small-angle scattering events in the center-of-mass frame, which correspond to large-angle scattering events in the laboratory frame, are far from the beam and clean to observe. Furthermore, in inverse kinematics, the energy of the recoiled nuclei varies rapidly with the scattering angle. This strong dependence can be employed as a powerful selection rule and calibration tool to make precise measurements. Even so, we note that in previous works the observation was limited to even-even nuclei [20-22,39-40]. In these cases, only the strength of the Fermi transition in the IAS reaction was proved to have an unambiguous correlation with the isovector structure. Meanwhile, in the odd-\(A\) case the IAS reaction might be contributed by both the Fermi and G-T transitions, which is more complex. It is necessary to investigate both transitions in the odd-\(A\) case, so as to separate their contributions in the same reaction. Moreover, for the radioactive beam experiment, there are several issues that need to be investigated in detail to demonstrate the feasibility of the experiment. In this work, the cross sections and their angular distributions for several CE reactions were calculated theoretically. The kinematic variables, including the kinetic energy and angle, have also been optimized for the target and beam conditions in inverse kinematics [33-34,41]. The target absorption of the recoiled triton can be effectively overcome by choosing the beam energy appropriately. The Fermi transition and G-T transition were extracted from other contributions in the IAS reaction by the demonstrated multipole decomposition analysis (MDA) [42-43], according to their respective cross sections and angular distributions. In Section 2, the theoretical framework is described. In Section 3, the cross sections of the CE reactions are calculated, the correlation between the cross sections and the nuclear radii is investigated, and the experimental plan is discussed. In Section 4, the calculation is summarized and the conclusion is presented. ## 2 Theoretical framework In this work the Double Folding Model (DFM) in the DWBA [44-46] was employed to analyze the CE reaction. In this framework, the incoming and outgoing particles are regarded as the plane wave and the spherical wave distorted by the mean field of the target nucleus, respectively. The transition matrix can be expressed as: \[T_{fi}=\left\langle\chi_{f}|F(s)|\chi_{i}\right\rangle, \tag{1}\] where \(\chi_{i(f)}\) is the distorted wave of the initial (final) state and \(F(s)\) is the form factor.
Then the differential reaction cross section can be written as [47]: \[\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}=\left(\frac{\mu}{2\pi\hbar^{2}}\right)^{2}\frac{k_{f}}{k_{i}}\left|T_{fi}\right|^{2}, \tag{2}\] where \(\mu\) is the reduced mass, while \(k_{i(f)}\) is the incoming (outgoing) wave number. The form factor \(F(s)\) describes the interaction between the projectile and target comprehensively. According to the DFM, the transition densities of the projectile-ejectile and target-residue systems are 'folded' (integrated) with the effective nucleon-nucleon (_NN_) interaction to produce the form factor. The 'folding' process is expressed as an integral [48]: \[F(s)=\int\rho_{ab}\big{(}r_{p}\big{)}V_{eff}\big{(}s,r_{p},r_{t}\big{)}\rho_{AB}(r_{t})\;\mathrm{d}r_{p}\mathrm{d}r_{t}, \tag{3}\] where \(\rho_{ab}\) is the transition density for the projectile-ejectile system, \(\rho_{AB}\) is the transition density for the target-residue system, and \(V_{\mathit{eff}}\) is the effective nucleon-nucleon interaction between the projectile and target. The transition density is defined as: \[\rho_{LSJ}=\sum_{np}\langle f\|a^{\dagger}a\|i\rangle\,\phi^{*}\phi, \tag{4}\] where \(i\) (\(f\)) is the initial (final) state, \(a^{\dagger}\) and \(a\) are the creation and annihilation operators, respectively, and \(\phi\) is the single-particle radial wave function. \(\langle f\|a^{\dagger}a\|i\rangle\) is defined as the one-body transition density (OBTD). It is worth noting that the form factor depends on the type of transition. Different transitions are identified by the angular momentum coupling of the projectile and target. With the total angular momentum of the projectile (target) being defined as \(J_{p(t)}\), the total angular momentum transfer in the relative coordinate can be expressed as \(J_{r}=J_{p}+J_{t}\). In the case where the spin-orbit term is zero and only the central and tensor forces are considered, \(J_{r}\) is equal to the orbital angular momentum transfer \(L\) [29, 49]. The \(J_{r}J_{p}J_{t}\) coupling can be written in the \(LSJ\) transfer language, where \(J_{r}=L\), \(J_{p}=S\), and \(J_{t}=J\) [49]. The transition with \(L=0\) and \(S=1\) is the G-T transition (in \(LSJ\) form, 011), while the transition with \(L=0\) and \(S=0\) is the Fermi transition (in \(LSJ\) form, 000). By selecting and adjusting the \(LSJ\) combinations, form factors for different reaction channels and transitions can be calculated. In this work, the cross sections of \((^{3}\mathrm{He},t)\) IAS reactions for C and O isotopes are calculated with the FOLD package [50]. The FOLD package includes three parts: WSAW, FOLD, and DWHI. In the FOLD code, the form factor is calculated by employing Eq. (3) and Eq. (4). The OBTDs, single-particle radial wave functions, and effective interaction should be entered into it. In the FOLD code the OBTD is included in the '\(Z\)-coefficient' convention [51]: \[Z_{\Delta J,\Delta T}=a_{\Delta J,\Delta T}\big{\langle}T_{i}T_{zi}\Delta T\Delta T_{z}\big{|}T_{f}T_{zf}\big{\rangle}\sqrt{\frac{(2\Delta T+1)}{(2J_{t}+1)(2T_{f}+1)}}, \tag{5}\] where \(Z_{\Delta J,\Delta T}\) is the \(Z\)-coefficient, \(a_{\Delta J,\Delta T}\) is the OBTD calculated by the shell-model code NuShellX [52], and \(\big{\langle}T_{i}T_{zi}\Delta T\Delta T_{z}\big{|}T_{f}T_{zf}\big{\rangle}\) is the Clebsch-Gordan (C-G) coefficient [53].
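As an illustration of the isospin C-G coefficient entering Eq. (5), a minimal sketch using sympy (the convention \(T_{z}=(N-Z)/2\), under which the (\({}^{3}\)He, \(t\)) step lowers \(T_{z}\) by 1, is our assumption here):

```python
from sympy import S
from sympy.physics.quantum.cg import CG

# <T_i T_zi; dT dT_z | T_f T_zf> for (3He, t) IAS steps (dT = 1, dT_z = -1)
# 13C (T = 1/2, T_z = +1/2) -> 13N IAS (T = 1/2, T_z = -1/2):
print(CG(S(1)/2, S(1)/2, 1, -1, S(1)/2, -S(1)/2).doit())  # sqrt(6)/3 ~ 0.816
# 18O (T = 1, T_z = +1) -> 18F IAS (T = 1, T_z = 0):
print(CG(1, 1, 1, -1, 1, 0).doit())                        # sqrt(2)/2 ~ 0.707
```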
In the NuShellX calculation, the CKPOT [54] and USDA [55] effective interactions are employed for the \(p\)-shell space cases and the \(sd\)-shell space cases, respectively. It is easy to calculate \(Z_{\Delta J,\Delta T}\) from \(a_{\Delta J,\Delta T}\), with the total angular momentum (\(J\)) and isospin (\(T\) and \(T_{z}\)) defined. The single-particle radial wave functions \(\phi\) in Eq. (4) are conveniently obtained by the WSAW code in Woods-Saxon form. In particular, for \({}^{3}\mathrm{He}\) and \(t\), the single-particle wave functions are taken from quantum Monte Carlo simulations [29, 56] rather than from WSAW. On the other hand, the Love-Franey \(NN\) effective interaction [57-58] is adopted for \(V_{eff}\), and the distorted waves and differential cross sections are obtained by employing the DWHI code. The form factors in Eq. (3) are calculated with the FOLD code, while the single-particle wave functions are generated with the WSAW code. The optical potential parameters can be determined from a similar collision system (see Table 1). For C isotopes, the potential parameters are obtained from the fit of measured differential cross sections of \({}^{3}\)He and \({}^{12}\)C elastic scattering data at 443 MeV [59]. For O isotopes, the parameters are from the fit of measured differential cross sections of \({}^{3}\)He and \({}^{16}\)O elastic scattering data at 420 MeV [26].
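To illustrate how the Table 1 parameters enter, a minimal sketch of the Woods-Saxon form, assuming the common radius convention \(R_{x}=r_{x}A^{1/3}\) with \(A\) the mass number of the heavy partner (the convention actually used in the codes may differ):

```python
import numpy as np

def woods_saxon(r, V, r0, a, A):
    """Volume Woods-Saxon term: -V / (1 + exp((r - r0 * A^(1/3)) / a))."""
    R = r0 * A ** (1.0 / 3.0)
    return -V / (1.0 + np.exp((r - R) / a))

r = np.linspace(0.0, 10.0, 201)  # fm
# Incoming channel, 16O + 3He row of Table 1 (real and imaginary depths)
U_real_in = woods_saxon(r, 22.08, 1.540, 0.740, A=16)
U_imag_in = woods_saxon(r, 42.66, 0.890, 0.960, A=16)
# Outgoing channel: same geometry, depths scaled by 0.85
# (which reproduces the 16F + t row of Table 1 to rounding)
U_real_out, U_imag_out = 0.85 * U_real_in, 0.85 * U_imag_in
```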
The parameters of the outgoing channel are set equal to those of the incoming channel, except for the real and imaginary depths. The potential depths are scaled by a common factor of 0.85, i.e., the potential depth of the outgoing channel is assumed to be 85% of that of the incoming channel [46]. Combining the potential parameters, form factors, and spin and isospin information, the differential cross sections at a given scattering angle and kinetic energy are calculated by summing the contributions of the various transitions of interest in CE reactions. ## 3 Results and discussion ### Calculation of IAS reaction cross section In this work, several (\({}^{3}\)He, \(t\)) reactions involving C and O isotopes are analyzed, including \({}^{13}\)C(\({}^{3}\)He, \(t\))\({}^{13}\)N, \({}^{14}\)C(\({}^{3}\)He, \(t\))\({}^{14}\)N, \({}^{17}\)O(\({}^{3}\)He, \(t\))\({}^{17}\)F, \({}^{18}\)O(\({}^{3}\)He, \(t\))\({}^{18}\)F, \({}^{19}\)O(\({}^{3}\)He, \(t\))\({}^{19}\)F, and \({}^{20}\)O(\({}^{3}\)He, \(t\))\({}^{20}\)F. In order to verify the accuracy of our calculation, the differential cross sections of \({}^{13}\)C(\({}^{3}\)He, \(t\))\({}^{13}\)N and \({}^{18}\)O(\({}^{3}\)He, \(t\))\({}^{18}\)F were compared with experimental results. For the \({}^{13}\)C(\({}^{3}\)He, \(t\))\({}^{13}\)N reaction, both the ground state and an excited state (3.578 MeV) of the produced \({}^{13}\)N nuclei are taken into consideration. The angular distributions of the differential cross sections of the \({}^{13}\)C(\(1/2^{-}\), g.s.) \(\rightarrow\)\({}^{13}\)N(\(1/2^{-}\), g.s.) reaction channel **(a)** and the \({}^{13}\)C(\(1/2^{-}\), g.s.) \(\rightarrow\)\({}^{13}\)N(\(3/2^{-}\), 3.578 MeV) reaction channel **(b)** are shown in Fig. 1. For each channel, the contributions from different angular momentum combinations are shown as the dashed lines, while the combined differential cross sections are shown as the solid lines. The triangular data points are experimental measurements, in which a \({}^{3}\)He beam with an incident energy of 150 MeV/nucleon was employed [2]. For comparability, the kinematic conditions are the same in the center-of-mass frame in the calculation. Within the range of scattering angles below 30.0\({}^{\circ}\), the FOLD calculation and the experimental values are almost identical in shape, while the calculated values are slightly larger than the experimental values. It is similar to the \({}^{18}\)O(\({}^{3}\)He, \(t\))\({}^{18}\)F(\(0^{+}\), 1.592 MeV) case shown in Fig. 2, where the beam energy is 140 MeV/nucleon. It is clear that the shape of the experimental angular distribution [60] is also in agreement with the calculated values in the range of 0\({}^{\circ}\)\(\sim\) 7.5\({}^{\circ}\). The consistency of the differential cross sections demonstrates the reliability of the calculations. It is worth noting that in the \({}^{18}\)O(\({}^{3}\)He, \(t\))\({}^{18}\)F case the cross section is contributed only by the IAS reaction mediated by the Fermi excitation, whose strength has a definite correlation with the neutron radius of the target nucleus, as well established in the literature [20-22]. On the contrary, in the \({}^{13}\)C(\({}^{3}\)He, \(t\))\({}^{13}\)N case both the Fermi type (the 000 term) and G-T type (the 011 term) IAS reactions contribute to the cross section.
These two types of excitation processes can be extracted separately according to their different azimuthal distributions, as shown in Fig. 1(a).
\begin{table} \begin{tabular}{c c c c c c c} \hline & \(V_{R}\) / MeV & \(r_{R}\) / fm & \(a_{R}\) / fm & \(W_{I}\) / MeV & \(r_{I}\) / fm & \(a_{I}\) / fm \\ \hline \({}^{12}\)C + \({}^{3}\)He & 19.73 & 1.592 & 0.705 & 37.76 & 0.989 & 0.868 \\ \({}^{12}\)N + \(t\) & 16.80 & 1.592 & 0.705 & 32.10 & 0.989 & 0.868 \\ \({}^{16}\)O + \({}^{3}\)He & 22.08 & 1.540 & 0.740 & 42.66 & 0.890 & 0.960 \\ \({}^{16}\)F + \(t\) & 18.81 & 1.540 & 0.740 & 36.16 & 0.890 & 0.960 \\ \hline \end{tabular} \end{table} Table 1: Woods-Saxon optical potential parameters provided by the elastic scattering experiments.
It will be interesting to verify whether the IAS reaction mediated by the Fermi transition in odd-\(A\) isotopes also has such correlations, as in the even-even case, and whether the IAS reaction mediated by the G-T transition has similar correlations with the isovector structure of the odd-\(A\) nucleus. In this work, a model-based investigation should shed light on these topics. Besides the two reactions mentioned above, in this work several other CE IAS reactions are studied, including \({}^{14}\)C(\({}^{3}\)He, \(t\))\({}^{14}\)N, \({}^{17}\)O(\({}^{3}\)He, \(t\))\({}^{17}\)F, \({}^{19}\)O(\({}^{3}\)He, \(t\))\({}^{19}\)F, and \({}^{20}\)O(\({}^{3}\)He, \(t\))\({}^{20}\)F. All reactions are divided into two groups according to the angular momentum of the initial and final states. The first group includes reactions with a 0 \(\rightarrow\) 0 transition, such as \({}^{14}\)C \(\rightarrow\)\({}^{14}\)N, \({}^{18}\)O \(\rightarrow\)\({}^{18}\)F, and \({}^{20}\)O \(\rightarrow\)\({}^{20}\)F, which are mediated only by the Fermi transition, shown in Fig. 3**(b)**, **(d)** and **(f)**. The second group includes reactions with \(J_{i}\) = \(J_{f}\) \(\neq\) 0, such as \({}^{13}\)C(\(1/2^{-}\)) \(\rightarrow\)\({}^{13}\)N(\(1/2^{-}\)), \({}^{17}\)O(\(5/2^{+}\)) \(\rightarrow\)\({}^{17}\)F(\(5/2^{+}\)), and \({}^{19}\)O(\(5/2^{+}\)) \(\rightarrow\)\({}^{19}\)F(\(5/2^{+}\)), containing Fermi, G-T and other transitions, shown in Fig. 3**(a)**, **(c)** and **(e)**.
Figure 1: Angular distributions of the CE reaction cross sections with the 150 MeV/nucleon \({}^{3}\)He beam bombarding the \({}^{13}\)C target, including the \({}^{13}\)C(\(1/2^{-}\), g.s.) \(\rightarrow\)\({}^{13}\)N(\(1/2^{-}\), g.s.) reaction channel **(a)** and the \({}^{13}\)C(\(1/2^{-}\), g.s.) \(\rightarrow\)\({}^{13}\)N(\(3/2^{-}\), 3.578 MeV) reaction channel **(b)**. The calculated curves are multiplied by a scale factor (2.5) to fit the experimental data.
Figure 2: Angular distribution of the differential cross section of the \({}^{18}\)O(\(0^{+}\), g.s.) \(\rightarrow\)\({}^{18}\)F(\(0^{+}\), 1.592 MeV) reaction channel with the 140 MeV/nucleon \({}^{3}\)He beam bombarding the \({}^{18}\)O target. The same scale factor (2.5) is multiplied for the calculated curve to fit the experimental data.
The initial state is assumed to be the ground state. The final state may be the ground or an excited state. As shown in Fig. 3**(a)**, \({}^{13}\)C and \({}^{13}\)N are mirror nuclei with \(|T_{zi}|=|T_{zf}|\). The ground states of the two nuclei are two members of the isobaric multiplet, as are those of \({}^{17}\)O and \({}^{17}\)F. On the other hand, for reactions with \({}^{14}\)C and \({}^{18,19,20}\)O isotopes, the isospin of the ground state of the ejectile nuclei is \(T_{0}=T_{i}-1\).
So only the excited state of the ejectile nuclei with the proper isospin can be identified as the IAS, as shown in Fig. 3**(b)**, **(d)**, **(e)** and **(f)**. The lowest excitation energy of the IAS reactions for \({}^{14}\)N is 2.69 MeV, and those for \({}^{18,19,20}\)F are 1.592 MeV, 7.904 MeV, and 6.979 MeV, respectively, according to a NuShellX calculation. Cross sections of IAS reactions with higher excitation energy levels, such as 8.782 MeV for \({}^{13}\)N, 16.364 MeV for \({}^{14}\)N, and 5.865 MeV for \({}^{18}\)F, are orders of magnitude smaller and are neglected for brevity. Differential cross sections for the \({}^{13,14}\)C(\({}^{3}\)He, \(t\)) and \({}^{17,18,19,20}\)O(\({}^{3}\)He, \(t\)) IAS reactions at 515 MeV/nucleon, including the various transition types and their sum, are shown in Figs. 4 and 5. Due to the mixing of the G-T transition into the cross section in the odd-\(A\) case, there is no apparent correlation between the mass number \(A\) of the target nuclei and the total cross section of the IAS reaction for a particular isotope chain. However, in the case where only the Fermi transition is counted in the cross section, the differential cross sections show a similar azimuthal dependence and an apparent positive correlation with the mass number \(A\), both in the even-\(A\) and odd-\(A\) isotopes of a specific isotope chain, as shown in Fig. 6. In addition, the IAS cross section counting only the G-T transition shows the same correlation, as shown in Fig. 7. This can be attributed to the fact that both excitation types are correlated with the nuclear structure by the same mechanism but with different angular momentum couplings. Thus, it is predictable that a precise and unified description of the IAS reactions for both even-\(A\) and odd-\(A\) nuclei will be useful for extracting the isovector structure of exotic nuclei from CE reactions. This is even more interesting when considering the relative variation between the Fermi and G-T transitions in different odd-\(A\) isotopes of an element. As shown in Fig. 5, in the \({}^{17}\)O case (a), the two types of transitions give roughly the same contributions to the differential cross section, while in the \({}^{19}\)O case (c) the Fermi transition apparently contributes more than the G-T transition. In other words, in the (\({}^{3}\)He, \(t\)) process, the G-T excitation is more sensitive to the angular momentum configuration of the involved target nuclei (\(p_{1/2}\) for \({}^{15}\)O versus \(d_{5/2}\) for \({}^{17}\)O). This feature would be valuable when probing the possible clustering effect in the IAS reactions, since the clustering effect can be intuitively treated as a 'normal' energy state superimposed by a clustering state. Another notable advantage of employing the G-T transition in the (\({}^{3}\)He, \(t\)) process rather than in the (\(p\), \(n\)) process is that the (\({}^{3}\)He, \(t\)) process is more sensitive to the surface structure of the target nuclei, where the nuclear clustering takes place. In this work the differential cross sections for the \({}^{7}\)Li(\({}^{3}\)He, \(t\))\({}^{7}\)Be, \({}^{15}\)N(\({}^{3}\)He, \(t\))\({}^{15}\)O and \({}^{17}\)O(\({}^{3}\)He, \(t\))\({}^{17}\)F IAS reactions at 515 MeV/nucleon are calculated under the theoretical framework mentioned above, as shown in Fig. 8. As the framework does not account for the clustering effect of the target nucleus, the results in Fig. 8 merely demonstrate a possible way to extract the strength of the G-T transition from the IAS cross section by partial wave analysis.
The potential clustering effect can be observed by comparing the relative variation of the G-T type and Fermi type transitions within each isotope chain for \({}^{7}\)Li, \({}^{15}\)N, and \({}^{17}\)O, respectively.
Figure 4: Differential cross sections for the \({}^{13}\)C(\({}^{3}\)He, \(t\))\({}^{13}\)N **(a)** and \({}^{14}\)C(\({}^{3}\)He, \(t\))\({}^{14}\)N **(b)** IAS reactions at 515 MeV/nucleon versus scattering angles in the center-of-mass frame. The blue dashed line represents the Fermi transition, the green dashed line represents the G-T transition, and the black solid line represents the sum in **(a)**. The black solid line represents the pure Fermi transition in **(b)**.
Figure 5: Differential cross sections for the \({}^{17}\)O(\({}^{3}\)He, _t_)\({}^{17}\)F **(a)**, \({}^{18}\)O(\({}^{3}\)He, _t_)\({}^{18}\)F **(b)**, \({}^{19}\)O(\({}^{3}\)He, _t_)\({}^{19}\)F **(c)**, and \({}^{20}\)O(\({}^{3}\)He, _t_)\({}^{20}\)F **(d)** IAS reactions at 515 MeV/nucleon versus scattering angles in the center-of-mass frame. The blue dashed line represents the Fermi transition, the green dashed line represents the G-T transition, and the black solid line represents the sum in **(a)** and **(c)**. The black solid line represents the pure Fermi transition in **(b)** and **(d)**.
Figure 6: Differential cross sections for the CE (\({}^{3}\)He, _t_) IAS reactions mediated by the Fermi transition of C and O isotopes at 515 MeV/nucleon versus scattering angles in the center-of-mass frame. **(a)** consists of the \({}^{13,14}\)C(\({}^{3}\)He, _t_) IAS reactions and **(b)** consists of the \({}^{17,18,19,20}\)O(\({}^{3}\)He, _t_) IAS reactions. In the purple box are two oscillating peaks.
### Identification of the IAS reaction In (\({}^{3}\)He, _t_) reaction experiments, the identification of IAS reactions primarily relies on the kinetic energy of the outgoing triton, which serves as a direct reflection of the energy level of the recoiled target nuclei. Besides, selecting a specific range of the scattering angle where the IAS reaction dominates is also valuable. In Fig. 9 and Fig. 10, the differential cross sections of the major channels close to the IAS reaction for \({}^{13}\)C(\({}^{3}\)He, _t_)\({}^{13}\)N and \({}^{19}\)O(\({}^{3}\)He, _t_)\({}^{19}\)F are calculated as illustrations. In case the excitation energy of the IAS reaction is significantly different from those of the other reaction channels, such as the three different channels of \({}^{13}\)C(\({}^{3}\)He, _t_)\({}^{13}\)N shown in Fig. 9, it is convenient to distinguish the IAS reaction (0.00 MeV channel) by the kinetic energy of the triton. On the other side, the IAS reaction in \({}^{19}\)O(\({}^{3}\)He, _t_)\({}^{19}\)F (shown in Fig. 10) relates to an excited state at 7.904 MeV, which is difficult to distinguish from the neighboring states (7.922 MeV and 7.446 MeV). However, different channels may correspond to different azimuthal distributions. By selecting an appropriate range of scattering angles, the contamination of the IAS reaction channel by other channels can be effectively eliminated. For the \({}^{19}\)O(\({}^{3}\)He, _t_)\({}^{19}\)F case, at the 0\({}^{\circ}\) scattering angle both the 7.922 MeV and 7.904 MeV channels are very strong and cannot be distinguished from each other.
But as the scattering angle increases, the intensities of the other channels rapidly decrease, especially that of the 7.922 MeV channel, and the IAS reaction gradually becomes the dominant component in the range between 5.0\({}^{\circ}\) and 10.0\({}^{\circ}\), corresponding to the second oscillation peak in Fig. 6**(b)**. In this region, although other channels, such as the 8.544 MeV channel shown in Fig. 10, also have significant strength, the IAS reaction can be measured with precision by considering the energy difference of the outgoing triton. As the scattering angle increases further (at or above 10.0\({}^{\circ}\)), the IAS cross section is too small to be measured with satisfactory statistical uncertainty. In summary, to identify the IAS reaction over other channels, the kinematic region of the measurement should be carefully determined with the guidance of the theoretical calculations mentioned above.

Figure 7: Differential cross sections for the CE (\({}^{3}\)He, \(t\)) IAS reactions mediated by the G-T transition of \({}^{17}\)O and \({}^{19}\)O at 515 MeV/nucleon versus scattering angles in the center-of-mass frame.

Figure 8: Differential cross sections for \({}^{7}\)Li(\({}^{3}\)He, \(t\))\({}^{7}\)Be **(a)**, \({}^{15}\)N(\({}^{3}\)He, \(t\))\({}^{15}\)O **(b)** and \({}^{17}\)O(\({}^{3}\)He, \(t\))\({}^{17}\)F **(c)** IAS reactions at 515 MeV/nucleon versus scattering angles in the center-of-mass frame. The golden dashed line represents the Fermi transition, the green dashed line represents the G-T transition, and the solid blue line represents the sum.

### Neutron radius

The IAS reaction in the CE reaction is closely related to the neutron-proton asymmetry of the nucleus, which can be represented by the difference between the neutron radius (\(R_{n}\)) and the proton radius (\(R_{p}\)), \(R_{n}-R_{p}\). The proton radius \(R_{p}\) has already been extensively measured by scattering experiments and atomic spectroscopy, and several empirical formulas precisely describe the charge distributions of nuclei [61-66]. Meanwhile, the neutron distribution of the nucleus is extracted primarily by different theoretical analyses [67] and various experimental programmes such as hadronic scattering [68], pion photoproduction [69], and parity-violating electron scattering [70].

Figure 9: Differential cross sections versus scattering angles at different excitation energies of the \({}^{13}\)C(\({}^{3}\)He, \(t\))\({}^{13}\)N reaction, including several major reaction channels. For \({}^{13}\)N, there are \(J_{f}^{\pi}\) = 1/2\({}^{+}\), \(T_{f}\) = 1/2 (0.000 MeV, 8.782 MeV) and \(J_{f}^{\pi}\) = 3/2\({}^{+}\), \(T_{f}\) = 1/2 (3.587 MeV).

Figure 10: Differential cross sections versus scattering angles at different excitation energies of the \({}^{19}\)O(\({}^{3}\)He, \(t\))\({}^{19}\)F reaction, including several major reaction channels. For \({}^{19}\)F, there are \(J_{f}^{\pi}\) = 3/2\({}^{+}\), \(T_{f}\) = 1/2 (7.446 MeV), \(J_{f}^{\pi}\) = 3/2\({}^{+}\), \(T_{f}\) = 3/2 (8.182 MeV), \(J_{f}^{\pi}\) = 5/2\({}^{+}\), \(T_{f}\) = 1/2 (6.228 MeV, 7.922 MeV, 8.544 MeV, 8.999 MeV) and \(J_{f}^{\pi}\) = 5/2\({}^{+}\), \(T_{f}\) = 3/2 (7.904 MeV).

It will be a stringent global test of nuclear theory whether the difference between the proton and neutron radii extracted by different theoretical models is consistent with observations of the IAS reaction in the CE reaction, especially for nuclei far from the \(\beta\)-stability line.
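Such radii are reduced from density distributions as root-mean-square values. A generic sketch of that reduction, assuming a tabulated spherically symmetric density (here a placeholder two-parameter Fermi profile standing in for an exported NuShellX distribution):

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule for tabulated samples y(x)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def rms_radius(r, rho):
    """Root-mean-square radius of a spherically symmetric density rho(r):
    <r^2> = int rho r^4 dr / int rho r^2 dr."""
    return np.sqrt(trapezoid(rho * r**4, r) / trapezoid(rho * r**2, r))

# Placeholder two-parameter Fermi profile (NOT a NuShellX distribution):
r = np.linspace(0.0, 12.0, 2401)        # radial grid (fm)
c, a = 2.6, 0.45                        # half-density radius, diffuseness (fm)
rho = 1.0 / (1.0 + np.exp((r - c) / a))

print(f"rms radius = {rms_radius(r, rho):.4f} fm")
```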
To demonstrate the connection between the difference of the proton and neutron radii of a nucleus and the cross section of the IAS reaction, we employ a calculation with the NuShellX code. The DENS branch of NuShellX can export neutron and proton density distributions. NuShellX results are employed for both the neutron and proton radii to maintain a uniform standard. The values of \(R_{p}\), \(R_{n}\) and \(R_{n}-R_{p}\) are listed in Table 2. The differential IAS reaction cross sections (via the Fermi transition) versus \(R_{n}-R_{p}\) at the 0\({}^{\circ}\) and 4.0\({}^{\circ}\) scattering angles in the center-of-mass frame are shown in Fig. 11. With increasing neutron number along an isotope chain, there should be a distinct increase in the neutron radius, while only minor changes are expected in the proton radius. As shown in Fig. 6 and Fig. 11, with increasing neutron radius, the IAS cross section mediated by the Fermi transition increases as well. This correlation can be clearly observed near the oscillation peaks (marked by the dashed boxes at approximately 0\({}^{\circ}\) and 5.0\({}^{\circ}\) in Fig. 6**(b)**). Therefore, the measurement of IAS reactions near the oscillation peaks is expected to shed light on the relationship between the IAS reaction and the nuclear size.

\begin{table} \begin{tabular}{c c c c} \hline & \(R_{p}\) / fm & \(R_{n}\) / fm & \(R_{n}\) - \(R_{p}\) / fm \\ \hline \({}^{13}\)C & 2.3138 & 2.3912 & 0.0774 \\ \({}^{14}\)C & 2.3990 & 2.5293 & 0.1302 \\ \({}^{17}\)O & 2.6253 & 2.7039 & 0.0786 \\ \({}^{18}\)O & 2.6331 & 2.8119 & 0.1788 \\ \({}^{19}\)O & 2.6415 & 2.9289 & 0.2874 \\ \({}^{20}\)O & 2.6505 & 3.0339 & 0.3834 \\ \hline \end{tabular} \end{table} Table 2: Proton radii \(R_{p}\), neutron radii \(R_{n}\) and \(R_{n}-R_{p}\) calculated from the neutron and proton density distributions. The density distributions are exported by NuShellX.

Figure 11: Differential IAS reaction cross sections (mediated by the Fermi transition) versus \(R_{n}-R_{p}\). The blue lines represent the C isotopes and the red lines represent the O isotopes. The differential cross sections at the 0\({}^{\circ}\) scattering angle in the center-of-mass frame are shown in **(a)**. The differential cross sections at the 4.0\({}^{\circ}\) scattering angle in the center-of-mass frame are shown in **(b)**.

### Measurement in the inverse kinematics

In the experiment, inverse kinematics is employed to observe the IAS reactions of unstable isotopes using radioactive beams. In this work, the kinematic conditions of the experiment have been optimized to accommodate the limited beam luminosity and energy, as well as the detector efficiency, for each targeted isotope. To choose a proper kinematic condition for the measurement, the relationship between the kinetic energy of the recoiled triton and the scattering angle (in the center-of-mass frame) is first calculated (shown in Fig. 12). The reaction can be approximated as two-body elastic scattering between a heavy ion and \({}^{3}\)He. The relativistic kinematics is calculated for beam energies in the range of 400 \(\sim\) 600 MeV/nucleon. Compared with the CE experiments in the normal kinematic region [1-3,26-32], the beam energy must be higher so that the residual nucleus (the recoiled triton) can be observed (see Fig. 13). The triton needs sufficient kinetic energy to overcome the target's self-absorption.
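A sketch of that kinematic calculation, treating the reaction as relativistic two-body elastic scattering of the beam ion off a \({}^{3}\)He nucleus at rest; the Q-value and the triton/\({}^{3}\)He mass difference are neglected, and the beam species below is an assumed example:

```python
import numpy as np

# Two-body relativistic kinematics of the recoiled triton in inverse
# kinematics, approximating the CE reaction as elastic scattering of the
# beam ion off 3He at rest (Q-value and the t/3He mass difference are
# neglected; beam mass number A = 17 is an assumed example).
amu = 931.494                                # MeV
A, T_per_u = 17, 515.0                       # beam mass number, MeV/nucleon
m_b, m_t = A * amu, 3 * amu                  # beam and target masses

E_b = m_b + A * T_per_u                      # total lab energy of the beam
p_b = np.sqrt(E_b**2 - m_b**2)               # lab momentum of the beam
s = m_b**2 + m_t**2 + 2.0 * m_t * E_b        # Mandelstam s

beta = p_b / (E_b + m_t)                     # CM velocity in the lab
gamma = 1.0 / np.sqrt(1.0 - beta**2)
E_cm = (s + m_t**2 - m_b**2) / (2.0 * np.sqrt(s))   # light ejectile, CM
p_cm = np.sqrt(E_cm**2 - m_t**2)

for th_deg in (5.0, 10.0, 15.0):             # c.m. scattering angle
    th = np.radians(th_deg)
    T_lab = gamma * (E_cm - beta * p_cm * np.cos(th)) - m_t
    th_lab = np.degrees(np.arctan2(
        p_cm * np.sin(th), gamma * (beta * E_cm - p_cm * np.cos(th))))
    print(f"theta_cm = {th_deg:5.1f} deg: T_t = {T_lab:6.1f} MeV, "
          f"theta_lab = {th_lab:5.1f} deg")
```

Under these assumptions the triton kinetic energies come out at the tens-of-MeV scale with laboratory angles near 80\({}^{\circ}\)-86\({}^{\circ}\), consistent with the kinematic region discussed below.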
As shown in Fig. 12, within the scattering angle range of 5.0\({}^{\circ}\) to 10.0\({}^{\circ}\), where the IAS reaction dominates, the kinetic energies of the triton are essentially the same for C and O beams with different energies. In the large-angle region (above 10.0\({}^{\circ}\)), the triton energies start varying with the ion type and beam energy. Therefore, the ranges of triton energies and scattering angles required for the experiment are essentially the same for different types of light nuclei and beam energies, and one detector design might fulfill the experimental requirements for the IAS reactions of light nuclei with incident energies between 400 and 600 MeV/nucleon.

Figure 12: Scattering kinetic energy versus the scattering angle of the triton in the center-of-mass frame. The C and O beams with energies of 400 MeV/nucleon **(a)**, 515 MeV/nucleon **(b)**, and 600 MeV/nucleon **(c)** bombard the \({}^{3}\)He target.

A conceptual design of the proposed measurement is illustrated in Fig. 13. It is a \(\Delta E\)-\(E\) telescope [71-72] composed of a Time Projection Chamber (TPC) and a scintillator array. The TPC, serving as the \(\Delta E\) detector, also provides precise tracking. Since the CE reaction is in the quasi-elastic region, there is a strong correlation between the scattering angle and the kinetic energy of the outgoing triton. This correlation can be employed as a very handy selection rule on the kinematics. On the other hand, the scintillator detector, which is composed of CsI(Tl) crystals and serves as the \(E\) detector, also serves as an efficient trigger, which is necessary for the TPC. In this design, the range of scattering angles in the laboratory frame is from 76.0\({}^{\circ}\) to 86.0\({}^{\circ}\), corresponding to about 5.0\({}^{\circ}\) \(\sim\) 15.0\({}^{\circ}\) in the center-of-mass frame. The corresponding kinetic energy range is from 10 MeV to 140 MeV, which is limited by the thickness of the detector window and the scintillator size. The \(\Delta E\)-\(E\) measurement offers clean particle identification of all the outgoing particles at the same scattering angle. To demonstrate the triton identification, a preliminary simulation based on the Geant4 toolkit [73, 74, 75] was performed. In the simulation, the \({}^{3}\)He target was bombarded with a 515 MeV/nucleon \({}^{17}\)C beam. The \(\Delta E\)-\(E\) distribution is shown in Fig. 14. The collision process is based on the FTFP_BERT_ATL physics list [76], which only gives a statistical description of the scattering process. Taking the detector acceptance into consideration, the macroscopic cross section \(\sigma_{\rm{IAS}}\) of the CE reaction can be expressed as: \[\sigma_{\rm{IAS}}=\int_{\varphi_{1}}^{\varphi_{2}}{\rm d}\varphi\int_{\theta_{1}}^{\theta_{2}}{\rm sin}\,\theta\frac{{\rm d}\sigma}{{\rm d}\Omega}(\theta)\Big|_{\rm{IAS}}{\rm d}\theta, \tag{7}\] where \(\varphi\) is the azimuthal angle, \(\theta\) is the scattering angle, and \({\rm d}\sigma/{\rm d}\Omega\) is the differential cross section. The rate of yield \(N_{\rm{IAS}}\) can be estimated as: \[N_{\rm{IAS}}=\sigma_{\rm{IAS}}IN_{s}, \tag{8}\] where \(I\) is the beam intensity and \(N_{s}\) is the areal number density of the \({}^{3}\)He atoms in the target.
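A sketch of how Eqs. (7)-(8) translate a differential cross section into a counting rate over the detector acceptance; the flat angular distribution and the effective target length here are assumptions for illustration only:

```python
import numpy as np

# Yield-rate estimate following Eqs. (7)-(8). The flat differential cross
# section and the effective target length are assumed placeholders; for a
# real estimate, insert the calculated DWBA angular distribution.
def dsigma_domega(theta):
    return np.full_like(theta, 50.0)        # ub/sr, placeholder

theta = np.radians(np.linspace(5.0, 10.0, 501))   # accepted polar range
dphi = np.pi                                      # phi from 0 to 180 deg
f = np.sin(theta) * dsigma_domega(theta)
sigma_ias = dphi * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta)))  # ub

I_beam = 1.0e6                  # beam intensity (ions / s)
n_he3 = 2.0 * 2.687e19          # 2 amagat of 3He (atoms / cm^3)
length = 30.0                   # assumed effective target length (cm)
N_s = n_he3 * length            # areal number density (atoms / cm^2)

rate = sigma_ias * 1.0e-30 * I_beam * N_s         # 1 ub = 1e-30 cm^2
print(f"sigma_IAS = {sigma_ias:.3f} ub -> {rate * 3600.0:.1f} counts / h")
```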
Assuming that the \(\varphi\) angle ranges from 0\({}^{\circ}\) to 180.0\({}^{\circ}\), the \(\theta\) angle ranges from 76.0\({}^{\circ}\) to 86.0\({}^{\circ}\), the beam intensity is 10\({}^{6}\) particles per second, and the target density is 2 amagats, the yield rates of the triton in IAS reactions are calculated by Eq. (8) and listed in Table 3. According to the estimated rates, it is possible to obtain sufficient statistics within a limited beam time.

Figure 13: A conceptual design of the measurement for the CE reaction.

Figure 14: Simulated \(\Delta E\)-\(E\) distribution of products from the bombardment of the \({}^{3}\)He target by the 500 MeV/nucleon \({}^{17}\)C beam.

## 4 Conclusion

In this work the differential cross sections for charge exchange reactions are calculated in the framework of the double-folding potential and the distorted-wave Born approximation. In this theoretical framework the \({}^{13,14}\)C(\({}^{3}\)He, \(t\)) and \({}^{17,18,19,20}\)O(\({}^{3}\)He, \(t\)) reactions are investigated, specifically focusing on the relationship between the outer shell structure and the IAS reaction channels. The IAS reactions for different nuclei may consist of various components. The \({}^{13}\)C(\({}^{3}\)He, \(t\)) and \({}^{17,19}\)O(\({}^{3}\)He, \(t\)) IAS reactions involve both the Fermi and the G-T transitions, while the \({}^{14}\)C(\({}^{3}\)He, \(t\)) and \({}^{18,20}\)O(\({}^{3}\)He, \(t\)) IAS reactions are mediated by the Fermi transition only. In addition to the established dependence of the Fermi transition in the IAS reaction on the neutron radius of the even-even isotopes, a similar correlation between the G-T transition and the neutron radius was also observed for the odd-\(A\) isotopes such as \({}^{13}\)C and \({}^{17,19}\)O. More interestingly, in the odd-\(A\) cases, it was observed that the G-T transition depends not only on the excess neutrons but also on the angular momentum configuration of the outer neutrons. This feature might be utilized to explore the nuclear clustering phenomenon of light neutron-rich isotopes, such as \({}^{7}\)Li, \({}^{15}\)N, and \({}^{17}\)O. Therefore, conducting accurate (\({}^{3}\)He, \(t\)) experiments to extract the IAS reactions of unstable nuclei will provide new insights into their isospin structure. For such a measurement, a range of scattering angles from 5.0\({}^{\circ}\) to 10.0\({}^{\circ}\) in the center-of-mass frame would be ideal, where the IAS reactions dominate and can be conveniently distinguished from other reaction channels. For nuclei such as \({}^{13}\)C and \({}^{17,19}\)O, the extraction of both the Fermi and G-T transitions from the total IAS reaction cross section by the MDA was demonstrated, based on the calculated reaction cross sections. In this work, the inverse kinematics of the CE reaction was also discussed, with the beam conditions of the HIRFL-CSR. For incident light nuclei at intermediate energies, the operating range of the detector was determined to be within 76.0\({}^{\circ}\) \(\sim\) 86.0\({}^{\circ}\) in the laboratory frame, corresponding to a range from 5.0\({}^{\circ}\) to 15.0\({}^{\circ}\) in the center-of-mass frame. The yields of the triton and the byproducts (protons and deuterons) were estimated, and it was demonstrated that sufficient statistics can be achieved within a limited beam time and luminosity. In the future, the experiment will be carried out.
More nuclei and isotopic chains will be calculated to investigate the nuclear structure involved in the G-T transition more carefully.

\begin{table} \begin{tabular}{l c c} \hline & Cross section / \(\upmu\)b & Counting rate / h\({}^{-1}\) \\ \hline \({}^{13}\)C(\({}^{3}\)He, \(t\))\({}^{13}\)N (g.s.) & 10.79 & 55.70 \\ \hline \({}^{14}\)C(\({}^{3}\)He, \(t\))\({}^{14}\)N (2.690 MeV) & 11.04 & 57.02 \\ \hline \({}^{17}\)O(\({}^{3}\)He, \(t\))\({}^{17}\)F (g.s.) & 4.54 & 23.44 \\ \hline \({}^{18}\)O(\({}^{3}\)He, \(t\))\({}^{18}\)F (1.592 MeV) & 1.76 & 9.10 \\ \hline \({}^{19}\)O(\({}^{3}\)He, \(t\))\({}^{19}\)F (7.904 MeV) & 3.88 & 20.08 \\ \hline \({}^{20}\)O(\({}^{3}\)He, \(t\))\({}^{20}\)F (6.979 MeV) & 3.68 & 19.02 \\ \hline \end{tabular} \end{table} Table 3: Predicted cross sections and counting rates of the CE IAS reactions.

**Acknowledgements**

This work is financially supported by the National Key R&D Program of China (Grant No. 2022YFE0103900) and the National Natural Science Foundation of China (Grant Nos. U2032166, 11875301, and U1832167).
2305.06658
Linear System Analysis and Optimal Control of Natural Gas Dynamics in Pipeline Networks
We examine nonlinear and adaptive linear control systems that model compressor-actuated dynamics of natural gas flow in pipeline networks. A model-predictive controller (MPC) is developed for feedback control of compressor actions in which the internal optimization over the local time horizon is constrained by the dynamics of either the nonlinear system or the adaptive linear system. Stability of the local linear system is established and a rigorous bound on the error between the solutions of the nonlinear and linear systems is derived and used to devise situations when the linear MPC may be used instead of the nonlinear MPC without a significant difference between their respective predictions. We use several test networks to compare the performances of various controllers that involve nonlinear and adaptive linear models as well as moving-horizon and single-interval optimization. Our results demonstrate that the proposed moving-horizon MPC is well-equipped to adapt in local time to changes in system parameters and has the ability to reduce total computational costs by orders of magnitude relative to conventional transient optimization methods.
Luke S. Baker, Sachin Shivakumar, Dieter Armbruster, Rodrigo B. Platte, Anatoly Zlotnik
2023-05-11T08:53:05Z
http://arxiv.org/abs/2305.06658v3
# Linear System Analysis and Optimal Control of Natural Gas Dynamics in Pipeline Networks ###### Abstract We derive a linear system of ordinary differential equations (ODEs) to approximate the dynamics of natural gas in pipeline networks. Although a closed-form expression of the eigenvalues of the state matrix does not generally exist, the poles of an irrational transfer function corresponding to the linearized partial differential equations are used to approximate the eigenvalues of the ODE system. Our analysis qualitatively demonstrates that the eigenvalues of the state matrix of the entire network system are "pipeline separable" in the sense that the eigenvalues are dominated by the individual pipeline parameters and not the incidence connectivity of the network graph. The linear system is used as the dynamic constraints of a linear optimal control problem (OCP) to design the control actions of compressor units to minimize the energy that they expend. The motivation of this work is to reduce the computational complexity of optimizing gas dynamics in large networks to meet the unpredictable and highly variable demand from electric generators. The linear and corresponding nonlinear OCPs are discretized in time to obtain linear and nonlinear optimization problems, which are demonstrated on a test network to illustrate the validity of linear programming. Moreover, an analytical bound on the error between the solutions of the linear and nonlinear flow dynamics is presented using Lyapunov functions and verified computationally by plotting the error against the size of the flow variation around the steady-state solution. ## I Introduction Natural gas is the primary source of energy used to generate electricity in the United States, but substantial investments are being made to transition the U.S. economy from fossil fuels such as natural gas and coal to cleaner and more sustainable resources. The U.S. Energy Information Administration projects a rapid increase in the installation of renewable energy resources over the next 30 years as coal transitions into retirement [1]. However, the reliability of renewable energy sources, such as wind and sun, is a persistent issue because the energy output from these sources is generally unpredictable and highly variable throughout the course of a day. During hours of peak electricity demand, electric generators balance temporal shortages of renewable energy by using natural gas-fired power plants. Thus, there is a temporal variability in gas withdrawal from associated pipelines that creates highly transient flows throughout the natural gas infrastructure. Pipeline system operators must consistently moderate pressure, mass flows, and compressor activity as part of intra-day planning and operations. The transmission of natural gas through networks of pipelines has been studied in steady-state [2, 3, 4, 5, 6] and transient operations [7, 8, 9]. A major goal is to design compressor activity to minimize the expenditure of compressor energy [10] or maximize the economic value generated by the pipeline system [11], while satisfying the physics of gas flow throughout the network in such a way that engineering limits on pressure and compression are satisfied. In steady-state operation, the flow of gas in the network is balanced, so that the totality of inflows from processing plants and supply stations equates to the outflows from withdrawal stations. 
Steady-state pipeline flows are described using simple time-invariant algebraic equations that relate pressure drop in the direction of flow to mass flow along each pipeline. In transient operation, the computational complexity increases significantly because the flow in each pipeline is governed by a system of nonlinear partial differential equations (PDEs) [12, 13]. Optimization of gas network dynamics is usually implemented in a digital computer by using discretization methods in space and time to approximate the continuous PDEs with algebraic equations. Since these optimization problems are large, nonlinear, and nonconvex with nonlinear algebraic constraints, model reduction methods have been proposed to reduce the complexity in both steady-state and transient operations [14, 15, 16]. Green's functions are used in [17] to form a linear optimization program, which is applied on a test network to demonstrate a reduction of optimization time by two orders of magnitude in comparison to the nonlinear optimization program. A mixed-integer linear programming approach based on piecewise linearization of the nonlinear terms was applied in [18] for the optimal control of transient flows. A similar linearization approach for the coupled gas network and electric power grid is presented in [19] for steady-state gas flows. These linearization methods usually require additional discretization points to accurately interpolate the nonlinear terms with piecewise linear segments, which could significantly increase the size of the network. The key contribution of our study is the development of a linear state equation and linear program (LP) that approximate the nonlinear state equation and nonlinear program (NLP) within the same state space, i.e., without the addition of interpolating piecewise segments. The linear system is derived in a way that can be utilized for the design of feedback controllers and the investigation of asymptotic stability and transient responses. We begin by defining the PDE-constrained optimal control problem (OCP) and discretize it in space using a finite volume method to form a nonlinear control system of ordinary differential equations (ODEs) that is subsequently written in matrix-vector form using the edge incidence matrix of the network [20]. This discretization approach was used to formulate NLPs in previous studies [9, 21], with applications in gas reservoir storage [22] and coordinated scheduling of natural gas and electric power infrastructures [23]. We derive the linear system and LP from the nonlinear system and NLP using linearization techniques around a steady-state solution. The rest of this paper is organized as follows. In Section II, the system of nonlinear PDEs that dictate the flow of gas in the network is presented and used to formulate the PDE-constrained OCP. In Section III, the PDE system is discretized in space to form a nonlinear ODE state equation and ODE-constrained OCP. The linear state equation and linear ODE-constrained OCP are derived in Section IV. The eigenvalues of the linear state matrix are compared to the poles of an irrational transfer function in Section V to gain insight into the transient behavior of the network system. The nonlinear and linear programs are implemented in Section VI using Euler's approximation for the time derivatives that arise in the state equations. Section VII demonstrates the performance of the two programs on a test network that was used in a previous study [24].
The error between the solutions of the linear and nonlinear systems is analyzed computationally and analytically in Section VIII. Concluding remarks are made in Section IX and a proof of the error bound is presented in the Appendix. ## II Network Flow Control Formulation A gas pipeline network is modeled as a connected and directed graph \((\mathcal{E},\mathcal{V})\) consisting of edges \(\mathcal{E}=\{1,\ldots,E\}\) and nodes \(\mathcal{V}=\{1,2,\ldots,V\}\), where \(E\) and \(V\) denote the numbers of edges and nodes of the graph. Edges represent pipelines and nodes represent junctions or stations where gas can be injected into or withdrawn from the network. It is assumed that the edges and nodes are ordered within their sets according to their integer labels. The symbol \(k\) is reserved for indexing edges in \(\mathcal{E}\) and the symbols \(i\) and \(j\) are reserved for indexing nodes in \(\mathcal{V}\). Supply nodes \(\mathcal{V}_{s}\subset\mathcal{V}\) and withdrawal nodes \(\mathcal{V}_{w}\subset\mathcal{V}\) are assumed to be disjoint sets that partition \(\mathcal{V}\), i.e., \(\mathcal{V}_{s}\cup\mathcal{V}_{w}=\mathcal{V}\) and \(\mathcal{V}_{s}\cap\mathcal{V}_{w}=\emptyset\). It is assumed that supply nodes are ordered in \(\mathcal{V}\) before withdrawal nodes so that \(i<j\) for all \(i\in\mathcal{V}_{s}\) and \(j\in\mathcal{V}_{w}\). The graph is directed by assigning a positive flow direction along each edge. Mass flux and velocity have positive values along an edge if gas physically flows along this edge in the prescribed positive direction of the graph. We assume that gas flows in the positive orientation of the graph. The notation \(k:i\mapsto j\) means that edge \(k\in\mathcal{E}\) is directed from node \(i\in\mathcal{V}\) to node \(j\in\mathcal{V}\). For each node \(j\in\mathcal{V}\), we define (potentially empty) incoming and outgoing sets of pipelines by \({}_{\mapsto}j=\{k\in\mathcal{E}|k:i\mapsto j\}\) and \(j_{\mapsto}=\{k\in\mathcal{E}|k:j\mapsto i\}\), respectively. For each pipe \(k\in\mathcal{E}\), the flow variables are density \(\rho_{k}(t,x)\) and mass flux \(\varphi_{k}(t,x)\) for \(t\in[0,T]\) and \(x\in[0,\ell_{k}]\), where \(T\) denotes the time horizon and \(\ell_{k}\) denotes the length of the pipe. Assuming that the pipe is horizontal, the flow is isothermal, and the transients do not excite shock waves, then the flow through edge \(k\in\mathcal{E}\) may be governed by the semilinear hyperbolic PDE system [8, 25] \[\partial_{t}\rho_{k}+\partial_{x}\varphi_{k} = 0, \tag{1}\] \[\partial_{t}\varphi_{k}+\sigma^{2}\partial_{x}\rho_{k} = -\frac{\lambda_{k}}{2D_{k}}\frac{\varphi_{k}|\varphi_{k}|}{\rho_{k}}, \tag{2}\] where \(D_{k}\) and \(\lambda_{k}\) are the diameter and the friction factor of the pipe, respectively, and \(\sigma\) is the speed of sound through natural gas. The compressibility factor of the gas is assumed to be constant so that the equation of state takes the ideal form \(p_{k}=\sigma^{2}\rho_{k}\), where \(p_{k}\) is the pressure. These assumptions are often adopted for analyzing flows in natural gas systems [26, 27]. Forces of friction between the interior wall of a pipe and gas flowing through the pipe cause pressure to decrease in the direction of flow. Compressor stations are strategically installed throughout the network to increase the pressure of gas at these locations to within limits required for transportation.
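A minimal simulation sketch of (1)-(2) for a single horizontal pipe, with assumed parameter values and boundary data, using a diffusive Lax-Friedrichs update chosen purely for robustness (it is not the staggered-grid method of [24]):

```python
import numpy as np

# Minimal method-of-lines sketch of the isothermal flow equations (1)-(2)
# on a single horizontal pipe, using a Lax-Friedrichs update in space-time.
# Parameter values and boundary data are assumed for illustration.
sigma, lam, D, ell = 377.0, 0.01, 0.75, 50.0e3     # m/s, -, m, m
nx = 400
x = np.linspace(0.0, ell, nx)
dx = x[1] - x[0]
dt = 0.8 * dx / sigma                              # CFL-limited time step

rho = np.full(nx, 45.0)                            # density (kg/m^3)
phi = np.full(nx, 100.0)                           # mass flux (kg/m^2/s)
rho_in, phi_out = 45.0, 100.0                      # boundary conditions

for _ in range(5000):
    src = -lam / (2.0 * D) * phi * np.abs(phi) / rho   # friction source term
    rho_n, phi_n = rho.copy(), phi.copy()
    rho_n[1:-1] = 0.5 * (rho[2:] + rho[:-2]) - dt / (2*dx) * (phi[2:] - phi[:-2])
    phi_n[1:-1] = (0.5 * (phi[2:] + phi[:-2])
                   - dt / (2*dx) * sigma**2 * (rho[2:] - rho[:-2])
                   + dt * src[1:-1])
    rho_n[0], phi_n[-1] = rho_in, phi_out          # inlet density, outlet flux
    phi_n[0], rho_n[-1] = phi_n[1], rho_n[-2]      # simple extrapolation
    rho, phi = rho_n, phi_n

print("pressure drop (MPa):", sigma**2 * (rho[0] - rho[-1]) / 1.0e6)
```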
We define \(\mathcal{C}\subset\mathcal{E}\) to be the set of edges \(k\in\mathcal{E}\) that are adjacent to a compressor station and we simply speak of the compressor station \(k\in\mathcal{C}\). We assume that each compressor station \(k\in\mathcal{C}\) is located at the entrance of its adjacent edge \(k\in\mathcal{E}\) with respect to the positive flow direction. The action of the compressor station \(k\in\mathcal{C}\) is modeled with the control input \(\mu_{k}(t)\). In particular, the pressure of gas leaving the compressor unit \((k:i\mapsto j)\in\mathcal{C}\) and entering the pipeline \((k:i\mapsto j)\in\mathcal{E}\) is \(\mu_{k}(t)\) times larger than the pressure of gas leaving node \(i\in\mathcal{V}\) and entering compressor \(k\in\mathcal{C}\). To simplify notation, we define \(\mu_{k}=1\) for all \(k\in\mathcal{E}\setminus\mathcal{C}\). The density of gas injected into the network at the supply node \(i\in\mathcal{V}_{s}\) is specified by the boundary condition profile \(\mathbf{s}_{i}(t)\) (kg/m\({}^{3}\)). The amount of gas withdrawn from the network at each withdrawal node \(j\in\mathcal{V}_{w}\) is specified by the mass outflow profile \(\mathbf{w}_{j}(t)\) (kg/s). For each \(j\in\mathcal{V}_{w}\), define the nodal density variable \(\mathbf{\rho}_{j}(t)\). All of the nodal quantities in this work are identified with bold symbols.

Fig. 1: Configuration of a pipeline segment \(k:i\mapsto j\) with \(i\in\mathcal{V}_{s}\) and \(j\in\mathcal{V}_{w}\) with state and control variables indicated.

Inlet and outlet edge variables are defined by attaching superscripts "0" and "\(\ell\)", respectively, to the associated edge variables. For example, \(\varphi_{k}^{0}(t)=\varphi_{k}(t,0)\) and \(\varphi_{k}^{\ell}(t)=\varphi_{k}(t,\ell_{k})\). Define \(\chi_{k}=\pi D_{k}^{2}/4\) to be the cross-sectional area of the pipeline \(k\in\mathcal{E}\). The boundary conditions of the network dynamics are given by \[\rho_{k}(t,0) = \mu_{k}(t)\mathbf{s}_{i}(t),\quad\rho_{k}(t,\ell_{k})=\mathbf{\rho}_{j}(t), \tag{3}\] \[\rho_{k}(t,0) = \mu_{k}(t)\mathbf{\rho}_{i}(t),\quad\rho_{k}(t,\ell_{k})=\mathbf{\rho}_{j}(t), \tag{4}\] \[\mathbf{w}_{j}(t) = \sum_{k\in{}_{\mapsto}j}\chi_{k}\varphi_{k}^{\ell}(t)-\sum_{k\in j_{\mapsto}}\chi_{k}\varphi_{k}^{0}(t), \tag{5}\] where (3) is defined for \(k:i\mapsto j\) with \(i\in\mathcal{V}_{s}\), (4) is defined for \(k:i\mapsto j\) with \(i\in\mathcal{V}_{w}\), and (5) is defined for \(j\in\mathcal{V}_{w}\). The conditions in (3)-(4) represent the effects of compression and the conditions in (5) represent the conservation of mass flow through withdrawal nodes. The initial condition of density and mass flux in the network is assumed to be a steady-state solution given by [6] \[\rho_{k}(0,x)=\overline{\rho}_{k}(x),\qquad\varphi_{k}(0,x)=\overline{\varphi}_{k}, \tag{6}\] where \(\overline{\varphi}_{k}\) is constant for each \(k\in\mathcal{E}\). We provide details on the initial condition for the discretized system in the following section. We assume standard conditions for well-posedness [28], and specifically that the boundary conditions are smooth, slowly-varying, bounded in their respective domains, and compatible with the initial condition to ensure the existence of a smooth, slowly-varying, bounded solution. The flow of natural gas in the network is defined by (1)-(6). Gas network operators require pressure and compression to be within engineering limits to ensure the safety of transportation and the quality of gas delivered to customers.
These limitations are modeled for all \(k\in\mathcal{E}\) with inequality constraints of the form \[p_{k}^{\min}\leq\sigma^{2}\rho_{k}\leq p_{k}^{\max},\quad 1\leq\mu_{k}\leq 2, \tag{7}\] where \(p_{k}^{\min}\) and \(p_{k}^{\max}\) are specified bounds on pressure. The compressor control variables are designed to minimize the energy that they expend. The accumulated energy is given by [2, 9] \[J=\sum_{k\in\mathcal{E}}\int_{0}^{T}c_{k}\varphi_{k}^{0}(t)\left((\mu_{k}(t))^{(\nu-1)/\nu}-1\right)dt, \tag{8}\] where \(c_{k}\) is related to the efficiency of the compressor \(\mu_{k}\) and \(\nu\) is the isentropic exponent of natural gas [29]. Therefore, the control design is defined by the PDE-constrained OCP \[\begin{array}{ll}\text{min}&J\triangleq\text{compressor energy in (8)},\\ \text{s.t.}&\text{dynamic constraints: (1)-(2)},\\ &\text{boundary conditions: (3)-(5)},\\ &\text{initial condition: (6)},\\ &\text{inequality constraints: (7)}.\end{array} \tag{9}\] The decision variables are densities, mass fluxes, and compressor ratios throughout the network. Other OCPs of interest, such as carbon dioxide reduction [30, 31], may be defined similarly by adjusting the objective function in (8). ## III Network Flow Control Discretization The initial-boundary value system of PDEs from the previous section will be discretized in space to obtain an initial-value system of ODEs using a popular finite volume method for natural gas networks [20]. Discretization will be formalized by refining the graph of the gas network. A graph refinement \((\hat{\mathcal{E}},\hat{\mathcal{V}})\) of the graph \((\mathcal{E},\mathcal{V})\) is made by adding auxiliary withdrawal nodes to \(\mathcal{V}\) that subdivide the edges of \(\mathcal{E}\) so that \(\ell_{k}\leq\ell\) for all \(k\in\hat{\mathcal{E}}\), where \(\ell\leq 10\) (km) is sufficiently small [15]. The refined graph inherits the prescribed direction of the parent graph. For sufficiently fine network refinement, the relative difference of the flow variables between adjacent nodes is small in magnitude by continuity of the flow variables (assuming well-posedness). We assume that the graph has been sufficiently refined and that the hats may be omitted moving forward. The system of ODEs is obtained by integrating the dynamic equations in (1)-(2) along the length of each refined pipeline segment so that \[\int_{0}^{\ell}\partial_{t}\rho+\partial_{x}\varphi\,dx = 0,\] \[\int_{0}^{\ell}\partial_{t}\varphi+\sigma^{2}\partial_{x}\rho\,dx = -\frac{\lambda}{2D}\int_{0}^{\ell}\frac{\varphi|\varphi|}{\rho}\,dx,\] where edge subscripts have been removed for readability. The above integrals of space derivatives are evaluated using the fundamental theorem of calculus. The remaining integrals are evaluated by approximating pipeline density with outlet density and pipeline flux with inlet flux. These approximations are independent of \(x\) and may be factored out of the integrals. The above equations become \[\ell\dot{\rho}^{\ell} = \varphi^{0}-\varphi^{\ell}, \tag{10}\] \[\ell\dot{\varphi}^{0}+\sigma^{2}\left(\rho^{\ell}-\rho^{0}\right) = -\frac{\lambda\ell}{2D}\frac{\varphi^{0}\left|\varphi^{0}\right|}{\rho^{\ell}}, \tag{11}\] where a dot above a variable represents the time-derivative of the variable. We now write the discretized system in matrix form.
Define the \(E\times E\) diagonal matrices \(L\), \(K\), and \(X\) with diagonal entries \(L_{kk}=\ell_{k}\), \(K_{kk}=\lambda_{k}/(2D_{k})\), and \(X_{kk}=\chi_{k}\). Define the time-varying (transposed) incidence matrix \(\Xi\) of size \(E\times V\) componentwise by \[\Xi_{ki}=\begin{cases}-\mu_{k}(t),&\text{edge }k\in i_{\mapsto}\text{ leaves node }i,\\ 1,&\text{edge }k\in{}_{\mapsto}i\text{ enters node }i,\\ 0,&\text{else.}\end{cases} \tag{12}\] Define the \(E\times r\) submatrix \(N\) of \(\Xi\) by the removal of columns \(i\in\mathcal{V}_{w}\) and the \(E\times(V-r)\) submatrix \(M\) of \(\Xi\) by the removal of columns \(i\in\mathcal{V}_{s}\), where \(r\) denotes the number of supply nodes. Define the signed matrix \(Q=\text{sign}(M)\) (which is well-defined by the inequalities in (7)). Define the positive and negative parts of \(Q\) by \(Q_{\ell}\) and \(Q_{0}\), respectively, so that \(Q=Q_{\ell}+Q_{0}\) and \(|Q|=Q_{\ell}-Q_{0}\), where \(|A|\) denotes the componentwise absolute value of a matrix \(A\). Define inlet and outlet edge mass flux vectors by \(\varphi^{0}=(\varphi^{0}_{1},\ldots,\varphi^{0}_{E})^{T}\) and \(\varphi^{\ell}=(\varphi^{\ell}_{1},\ldots,\varphi^{\ell}_{E})^{T}\). Define the vector of densities at supply nodes \(\mathbf{s}=(\mathbf{s}_{1},\ldots,\mathbf{s}_{r})^{T}\), the vector of mass withdrawals \(\mathbf{w}=(\mathbf{w}_{r+1},\ldots,\mathbf{w}_{V})^{T}\), and the vector of densities at withdrawal nodes \(\mathbf{\rho}=(\mathbf{\rho}_{r+1},\ldots,\mathbf{\rho}_{V})^{T}\), where the subscripts of the entries are indexed according to the node labels in \(\mathcal{V}\). Applying the above matrix definitions, the discretized equations in (10)-(11) together with the boundary conditions in (3)-(5) become \[LQ_{\ell}\dot{\mathbf{\rho}} = \varphi^{0}-\varphi^{\ell}, \tag{13}\] \[L\dot{\varphi}^{0}+\sigma^{2}\left(M\mathbf{\rho}+N\mathbf{s}\right) = -LK\frac{\varphi^{0}\odot|\varphi^{0}|}{Q_{\ell}\mathbf{\rho}}, \tag{14}\] \[\mathbf{w} = Q_{\ell}^{T}X\varphi^{\ell}+Q_{0}^{T}X\varphi^{0}, \tag{15}\] where \(\odot\) is the Hadamard (componentwise) product and the ratio of vectors on the right side of (14) is componentwise as well. Multiplying both sides of (13) on the left by \(Q_{\ell}^{T}X\) and using (15), we may combine (13) and (15) to form the equation \(Q_{\ell}^{T}XLQ_{\ell}\dot{\mathbf{\rho}}=Q^{T}X\varphi^{0}-\mathbf{w}\), where we have used \(Q=Q_{0}+Q_{\ell}\). Therefore, outlet flux is a dependent variable and we define the state vector of inlet flux by \(\varphi=\varphi^{0}\). Moreover, we define \(I=Q_{\ell}\) for simplicity. Note that, for a connected tree network with one supply node, the nodes and edges of the network may be ordered so that \(I\) is the identity matrix. The above equations become \[\dot{\mathbf{\rho}} = (I^{T}XLI)^{-1}\left(Q^{T}X\varphi-\mathbf{w}\right), \tag{16}\] \[\dot{\varphi} = -\sigma^{2}L^{-1}\left(M\mathbf{\rho}+N\mathbf{s}\right)-K\frac{\varphi\odot|\varphi|}{I\mathbf{\rho}}. \tag{17}\] The matrices \(L\) and \(I^{T}XLI\) are invertible because \(L\) and \(X\) are invertible and \(I\) has full rank. In fact, each row \(k\) of \(I\) contains exactly one nonzero component with \(I_{kj}=1\) if and only if \(k\in{}_{\mapsto}j\). Using the additional fact that \(X\) and \(L\) are diagonal, it can be shown that the mass matrix \(I^{T}XLI\) is diagonal with positive diagonal components given by \((I^{T}XLI)_{jj}=\sum_{k\in{}_{\mapsto}j}\chi_{k}\ell_{k}\) for \(j\in\mathcal{V}_{w}\). Therefore, the matrices \(L\) and \(I^{T}XLI\) may be easily inverted to obtain the traditional control system presented above.
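A sketch of (16)-(17) assembled for a toy two-edge cascade with one supply node and no compressor action (\(\mu_{k}=1\), so \(M\) and \(N\) are constant); all numerical values are assumed:

```python
import numpy as np

# Nonlinear state equations (16)-(17) for a two-edge cascade
# 1 --e1--> 2 --e2--> 3 with supply at node 1 and no compressors.
sigma = 377.0
ell = np.array([10e3, 10e3])            # refined edge lengths (m)
Dm = np.array([0.75, 0.75])             # diameters (m)
lam = np.array([0.01, 0.01])
L = np.diag(ell)
K = np.diag(lam / (2 * Dm))
X = np.diag(np.pi * Dm**2 / 4)

M = np.array([[ 1.0, 0.0],
              [-1.0, 1.0]])             # withdrawal-node columns of Xi
N = np.array([[-1.0], [0.0]])           # supply-node column of Xi
Q = np.sign(M)
I = np.clip(Q, 0.0, None)               # positive part Q_ell (here identity)
mass = I.T @ X @ L @ I                  # diagonal mass matrix I^T X L I

def rhs(t, state, s, w):
    """Right-hand side of (16)-(17); state = (rho at nodes 2,3; phi on e1,e2)."""
    rho, phi = state[:2], state[2:]
    drho = np.linalg.solve(mass, Q.T @ X @ phi - w)
    dphi = (-sigma**2 * np.linalg.solve(L, M @ rho + N @ s)
            - K @ (phi * np.abs(phi) / (I @ rho)))
    return np.concatenate([drho, dphi])

s = np.array([60.0])                    # supply density (kg/m^3)
w = np.array([0.0, 40.0])               # withdrawals (kg/s) at nodes 2, 3
state0 = np.array([58.0, 56.0, 100.0, 100.0])
print(rhs(0.0, state0, s, w))
```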
The state variables are densities \(\mathbf{\rho}\) at withdrawal nodes and fluxes \(\varphi\) at the inlets of the edges. The compressor actuators are contained in the matrices \(M\) and \(N\). The other matrices are known and constant. The steady-state initial condition in (6), sampled at the refined nodes of the network, is the solution of the time-invariant flow equations \[Q^{T}X\overline{\varphi}=\overline{\mathbf{w}},\qquad\sigma^{2}\left(\overline{M}\overline{\mathbf{\rho}}+\overline{N}\overline{\mathbf{s}}\right)=-LK\frac{\overline{\varphi}\odot|\overline{\varphi}|}{I\overline{\mathbf{\rho}}}, \tag{18}\] where \(\overline{\mathbf{s}}=\mathbf{s}(0)\) and \(\overline{\mathbf{w}}=\mathbf{w}(0)\). Overlines attached to state variables, actuation matrices, and parameters are used throughout to denote a time-invariant steady-state solution. We assume that \(\overline{M}\), \(\overline{N}\), \(\overline{\varphi}\), and \(\overline{\mathbf{\rho}}\) are optimally determined to minimize compressor energy as in (8) while satisfying flow requirements in (18) and inequality constraints as in (20). The initial condition of the system in (16)-(17) is defined to be \[\mathbf{\rho}(0)=\overline{\mathbf{\rho}},\qquad\varphi(0)=\overline{\varphi}. \tag{19}\] The discretized system in (16)-(17) is numerically consistent in the sense that its dynamics approach the continuous dynamics in (1)-(5) as the maximum length of the edges of the refined network approaches zero. Pressure and compression inequality constraints in (7) reduce to \[\mathbf{p}_{j}^{\min}\leq\sigma^{2}\mathbf{\rho}_{j}\leq\mathbf{p}_{j}^{\max},\quad 1\leq\mu_{k}\leq 2, \tag{20}\] where \(\mathbf{p}_{j}^{\min}\) and \(\mathbf{p}_{j}^{\max}\) are specified for each node \(j\in\mathcal{V}_{w}\). The ODE-constrained OCP is formulated as \[\begin{array}{ll}\text{min}&J\triangleq\text{compressor energy in (8)},\\ \text{s.t.}&\text{dynamic constraints: (16)-(17)},\\ &\text{initial condition: (19)},\\ &\text{inequality constraints: (20)}.\end{array} \tag{21}\]
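Before turning to the linearization, a sketch of how the steady-state initialization (18)-(19) can be computed for the toy cascade above, reusing the arrays from the previous sketch (all values assumed): the mass balance determines \(\overline{\varphi}\), and the nodal densities \(\overline{\mathbf{\rho}}\) solve a nonlinear root-finding problem.

```python
import numpy as np
from scipy.optimize import fsolve

# Steady-state initialization (18) for the two-edge cascade of the previous
# sketch (mu = 1). Reuses sigma, L, K, X, M, N, Q, I, s, w defined there.
phi_bar = np.linalg.solve(Q.T @ X, w)          # Q^T X phi = w (square here)

def residual(rho_bar):
    # sigma^2 (M rho + N s) + L K phi|phi| / (I rho) = 0
    return (sigma**2 * (M @ rho_bar + N @ s)
            + L @ K @ (phi_bar * np.abs(phi_bar) / (I @ rho_bar)))

rho_bar = fsolve(residual, x0=np.array([55.0, 50.0]))
print("steady fluxes:        ", phi_bar)
print("steady nodal densities:", rho_bar)
```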
\tag{27}\] The inequality constraints in (20) are linear in the state and control variables. Translating these inequalities from the steady-state to the origin results in \[\boldsymbol{p}_{j}^{\min}\leq\sigma^{2}\left(\boldsymbol{\rho}_{j}+\overline{ \boldsymbol{\rho}}_{j}\right)\leq\boldsymbol{p}_{j}^{\max},\quad 1\leq\mu_{k}+ \overline{\mu}_{k}\leq 2. \tag{28}\] The linear ODE-constrained OCP is given by \[\begin{array}{ll}\text{min}&J_{\text{lin}}\triangleq\text{compressor energy in \eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:
## V Eigenvalue Analysis

In this section, the eigenvalues of the state matrix \(A\) are approximated by the poles of an irrational transfer function \(G(s)\) associated with the linearized PDE dynamics of a single pipeline. The poles of \(G(s)\) are distributed about a vertical asymptote in the complex plane whose real part, called the center of gravity, is \(c=\beta/2\), where \(\beta\) is the friction-dependent damping coefficient of the pipe. The settling time of a transient input depends largely on the real parts of one or more of the eigenvalues of \(A\). For our purpose, we approximate the settling time by \(t_{s}=1/\min_{m}|\text{Re}(\lambda_{m})|\). We investigate how the settling time changes as the parameters of the pipeline vary for two cases.

1. **Varying \(\ell\)**. First, assume that \(\ell<\pi\sigma/|\beta|\). In this case, all of the poles of \(G(s)\) lie on the asymptote \(c=\beta/2\) and the settling time is approximated with \(t_{s}=1/|c|\). Second, assume that \(\ell>\pi\sigma/|\beta|\). In this case, there are at least two purely real poles, the largest of which is given by \(\zeta_{1}^{+}=\beta/2+\sqrt{(\beta/2)^{2}-(\pi\sigma/(2\ell))^{2}}\). As \(\ell\) increases without bound, the number of purely real poles of \(G(s)\) increases without bound and \(\zeta_{1}^{+}\) increases toward the origin of the complex plane. This indicates that the settling time \(t_{s}=1/|\zeta_{1}^{+}|\) increases as \(\ell\) increases, and that the system will never settle everywhere in the theoretical case of an infinitely long pipeline.

2. **Varying \(\overline{\varphi}\)**. For a single pipeline, the values \(\overline{\varphi}_{k}\) for \(k\in\mathcal{E}\) in (30) are all the same value, which we denote by \(\overline{\varphi}\). The poles of \(G(s)\) are, therefore, given by \[\zeta_{m}^{\pm}=b|\overline{\varphi}|\pm\mathbf{j}\sqrt{\left(\frac{\pi\sigma}{2\ell}\right)^{2}(2m+1)^{2}-(b|\overline{\varphi}|)^{2}},\] where \(b=-\lambda/(2DE)\sum_{k=1}^{E}1/(I\overline{\boldsymbol{\rho}})_{k}\). As \(|\overline{\varphi}|\to 0\), it is clear that all of the poles, hence the center of gravity, approach the imaginary axis of the complex plane. Thus, as \(|\overline{\varphi}|\to 0\), the friction term in the state equation goes to zero and the variations are undamped waves, leading to an infinite settling time. As \(|\overline{\varphi}|\) theoretically increases without bound, the center of gravity \(c\) decreases without bound along the real axis of the complex plane. The decrease of the center of gravity as \(|\boldsymbol{w}|\), hence \(|\overline{\varphi}|\), increases is illustrated in Figure 2.
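The pole expression above can be evaluated directly; a sketch for a single pipe with assumed parameters and assumed steady flow, including the resulting settling-time estimate:

```python
import numpy as np

# Poles zeta_m = beta/2 +/- sqrt((beta/2)^2 - ((2m+1) pi sigma / (2 ell))^2)
# for a single pipe, and the settling-time estimate t_s = 1/min|Re zeta|.
# All parameter values below are assumed for illustration.
sigma, ell, lam, D = 377.0, 100.0e3, 0.01, 0.75
phi_bar, rho_bar = 100.0, 45.0               # assumed steady flux and density
beta = -lam * abs(phi_bar) / (D * rho_bar)   # damping coefficient beta

m = np.arange(0, 10)
disc = ((beta / 2) ** 2
        - ((2 * m + 1) * np.pi * sigma / (2 * ell)) ** 2).astype(complex)
poles = np.concatenate([beta / 2 + np.sqrt(disc), beta / 2 - np.sqrt(disc)])

t_s = 1.0 / np.min(np.abs(poles.real))
print("slowest decay rate (1/s):", np.min(np.abs(poles.real)))
print("approximate settling time (h):", t_s / 3600.0)
```

With these numbers, \(\ell>\pi\sigma/|\beta|\), so the discriminant is positive for \(m=0\) and the two purely real poles of the first case appear, the slower of which dominates the settling time.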
Let us return to the approximation of the eigenvalues. Recall that the center of gravity of the eigenvalues of \(A\) depends on \(\overline{\beta}_{kk}=-\lambda_{k}|\overline{\varphi}_{k}|/[D_{k}(I\overline{\boldsymbol{\rho}})_{k}]\). These values usually do not change significantly with the index \(k\) over a single refined pipe within a typical natural gas network. Moreover, the values \(\lambda_{k}/[D_{k}(I\overline{\boldsymbol{\rho}})_{k}]\) do not change significantly with \(k\) over the network. Therefore, we expect to see two or more imaginary asymptotes in the eigenvalues of \(A\) if the network contains two or more pipelines that deliver significantly different amounts of gas through them. We extend the eigenvalue approximation from a single pipe to a connected network of pipelines by approximating the eigenvalues for each pipeline in the network separately, using the average value of \(\overline{\beta}\) corresponding to each pipe, and then collecting each of these subsets of approximate eigenvalues to form the total set of approximate eigenvalues. We demonstrate this approach for two network structures. First, we consider the same pipeline in Figure 2 with supply at the inlet and withdrawal at the outlet, but with two additional withdrawal nodes located with equidistant spacing. The network graph of this pipeline is then a cascade connection of three pipes of equal length. The withdrawal node closest to the supply node withdraws 500 (kg/m\({}^{2}\)s), the subsequent node withdraws 200 (kg/m\({}^{2}\)s), and the outlet node withdraws 70 (kg/m\({}^{2}\)s). From (5), the initial steady-state mass flux of gas in the first part of the pipeline is 770, that in the second part is 270, and that in the last part is 70. Since these mass flux values are significantly different from one another, we expect to see three distinct imaginary asymptotes in the eigenvalues of \(A\). The eigenvalues of \(A\) and the combined poles of the three individual pipe segments are depicted on the left side of Figure 3. The right side of the figure depicts the eigenvalues and poles of the cyclic network in Figure 4 that we study later in Section VII. This cyclic network contains five pipelines. Three of the pipelines have nearly equal center-of-gravity values, and this is evident from the heavily-weighted imaginary asymptote that intersects the real axis of the complex plane.

Fig. 3: Eigenvalues (\(*\)) of \(A\) and poles (\(\circ\)) of \(G(s)\) for (left) a single pipeline with three withdrawal nodes and (right) the cyclic network from Section VII.

Fig. 2: Eigenvalues (\(*\)) of \(A\) and poles (\(\circ\)) of \(G(s)\) for a single pipeline with 30 refined edges. The pipeline parameters are \(\ell=100\) (km), \(D=0.75\) (m), \(\lambda=0.01\), and \(\sigma=377\) (m/s).
represent vector bounds that constrain the components of the state and input vectors, where the inequalities are applied componentwise. Nonlinear and linear programs are obtained by discretizing the time interval \([0,T]\) into \(N\) subintervals \((t_{n},t_{n+1})\), for \(n=0,\ldots,(N-1)\), where the sampling times are defined by \(t_{n}=nT/N\). The vector-valued functions \(x(t)\), \(u(t)\), and \(d(t)\) are sampled to form finite sequences of discrete vector-values \(x[n]=x(t_{n})\), \(u[n]=u(t_{n})\), and \(d[n]=d(t_{n})\), for \(n=0,\ldots,N\). The integral in the objective function is approximated using the left-endpoint integration method, resulting in \[\int_{0}^{T}\mathcal{F}(x(t),u(t))dt\approx\sum_{n=0}^{N-1}\frac{T}{N} \mathcal{F}(x[n],u[n]).\] The time derivative of \(x(t)\), for \(t\in(t_{n},t_{n+1})\), is approximated with Euler's method \[\dot{x}(t)\approx\frac{x[n+1]-x[n]}{T/N}.\] The nonlinear and linear optimization programs are then defined by \[\min_{x[n],u[n]} \sum_{n=0}^{N-1}\frac{T}{N}\mathcal{F}(x[n],u[n])\] (33) s.t. \[\frac{x[n+1]-x[n]}{T/N}=f(x[n],u[n],d[n]), \tag{34}\] \[x[0]=0,\] (35) \[\text{l.b.}\leq(x[n],u[n])\leq\text{u.b.}, \tag{36}\] where (34) is defined for \(n=0,\ldots,(N-1)\) and (36) is defined for \(n=1,\ldots,N\). These linear and nonlinear optimization problems are performed on a HP Spectre x360 4-core CPU with 16GB of unified memory, and are implemented in Matlab with the sequential quadratic program algorithm using the function fmincon. The gradient of the objective function is supplied for an improvement in the performance. Fig. 4: Top: Network configuration (not to scale). The triangles represent compressor stations. Edge lengths in kilometer units: \(\ell_{1}=20\), \(\ell_{2}=70\), \(\ell_{3}=10\), \(\ell_{4}=80\), \(\ell_{5}=60\). The pipelines have uniform diameter (0.9140 m) and friction factor (0.01), except for edge \(5\in\mathcal{E}\) that has diameter \(D_{5}=0.635\) (m) and friction factor \(\lambda_{5}=0.015\). The speed of sound is \(\sigma=377\) (m/s). Bottom: Mass outflow boundary condition profiles at the color-coordinated nodes. Fig. 5: Optimal solution in pressure, mass flow at the inlets of the edges, and compressor actions. Pressure and compression are color-coordinated with the nodes and compressor stations of the network in Figure 4. Solid lines represent the nonlinear solution and marker symbols represent the linearized solution. ## VII Network Illustration The solutions of the linear and nonlinear optimization problems are examined for the test network and boundary conditions depicted in Figure 4. This network was used in a previous study to validate a staggered grid discretization method for simulation [24]. The blue node is the only supply node. The pressure of gas entering this node is specified to be 5 (MPa) by writing \(\mathbf{s}_{\text{blue}}=5\times 10^{6}/\sigma^{2}\). The five edges of the network are discretized into 48 refined edges, so that \(\ell_{k}=5\) (km) for all \(k\in\hat{\mathcal{E}}\). The size of the state matrix \(A\) is \(95\times 95\) and the size of the input matrix \(B\) is \(95\times 3\). The time interval \([0,T]\) is discretized into 24 subintervals using 25 evenly-spaced time samples, as described in the previous section. This results in 2,450 optimization variables for both the nonlinear and linearized programs. 
The optimal solutions of the nonlinear and linearized programs are shown in Figure 5, where the solutions are translated back to non-variation variables by adding their associated steady-state components. Figure 5 shows that the linear program performs reasonably well at deciding the optimal solution for these boundary conditions. The percent relative error between the nonlinear and linearized solutions is \(1.54\%\) in pressure, \(0.57\%\) in mass flow, and \(2.3\%\) in compression, where the percent relative error of pressure between the nonlinear solution \(\mathbf{p}_{\text{non}}\) and the linearized solution \(\mathbf{p}_{\text{in}}\) is defined by \[\text{Error in }\mathbf{p}=\max_{j\in\mathcal{V}_{w},t\in[0,T]}\left(\frac{|(\mathbf{p} _{\text{non}})_{j}(t)-(\mathbf{p}_{\text{in}})_{j}(t)|}{\overline{\mathbf{p}}_{j}} \right)\times 100. \tag{37}\] Here, pressure is defined by \(\mathbf{p}=\sigma^{2}\mathbf{\rho}\) with appropriate symbols for steady-state, linear, and nonlinear solutions. Errors in mass flow and compression are defined similarly. ## VIII Error Analysis If \(\mathbf{w}\), \(\mathbf{s}\), and \(\mu\) are time-invariant, then the initial-value systems in (22)-(23) and (25) are undisturbed and unforced, in the sense that the solutions of these systems remain at the origin for all time. Assuming that the eigenvalues of \(A\) have negative real parts, the origin of the linear and nonlinear systems is an asymptotically and a locally asymptotically stable equilibrium point, respectively. In this section, we assume that \(\mathbf{s}=0\) and \(\mu=0\), which is common for gas pipelines in practice. **Proposition 2.** Suppose that the real parts of the eigenvalues of \(A\) are negative, that \(\mu=0\), \(\mathbf{s}=0\), and \(\overline{\varphi}_{k}>0\) for all \(k\in\mathcal{E}\). If the solution of (22)-(23) satisfies \(|\mathbf{\rho}_{j}|\leq\kappa\overline{\mathbf{\rho}}_{j}\) for all \(j\in\mathcal{V}_{w}\) and \(|\varphi_{k}|\leq\kappa\overline{\varphi}_{k}\) for all \(k\in\mathcal{E}\) with \(\kappa\in(0,\kappa_{\max})\) and \(\kappa_{\max}<1\) sufficiently small, then there exists positive constants \(a\), \(b\), and \(r\) such that \[\|\mathbf{e}\|\leq r\left(\frac{\kappa^{3}}{(1-\kappa)^{2}}a+\kappa^{2}b\right), \tag{38}\] where \(\mathbf{e}=[(\mathbf{\rho},\varphi)-(\mathbf{\rho}_{\text{in}},\varphi_{\text{in}})]\) and \(\|\mathbf{e}\|\) is the Euclidean norm of the vector \(\mathbf{e}\). Here \((\mathbf{\rho},\varphi)\) is the solution of (22)-(23) and \((\mathbf{\rho}_{\text{in}},\varphi_{\text{in}})\) is the solution of (25). **Proof.** See the Appendix. \(\square\) The bound in (38) provides a rate by which the norm of the error changes in terms of the size of the variation around the steady-state. If \(\kappa=0\), then \(\|\mathbf{e}\|=0\), as expected, since the solutions of the linear and nonlinear systems are undisturbed and remain in equilibrium at the origin. As \(\kappa\) increases from \(\kappa=0\) to \(\kappa=\kappa_{\max}\), the norm of the error increases. Intuitively, if the variation in density, \(\mathbf{\rho}_{j}\), approaches the negative of \(\overline{\mathbf{\rho}}_{j}\) for some \(j\in\mathcal{V}_{w}\) (i.e., \(\kappa\to 1\)), then the magnitude of the nonlinear term in (23) increases without bound. However, the solution of the linear system is bounded due to the global asymptotic stability of the equilibrium of the linear system. The bound of this extreme difference as \(\kappa\to 1\) is captured in the ratio on the right side of (38). 
## VIII Error Analysis

If \(\mathbf{w}\), \(\mathbf{s}\), and \(\mu\) are time-invariant, then the initial-value systems in (22)-(23) and (25) are undisturbed and unforced, in the sense that the solutions of these systems remain at the origin for all time. Assuming that the eigenvalues of \(A\) have negative real parts, the origin of the linear and nonlinear systems is an asymptotically and a locally asymptotically stable equilibrium point, respectively. In this section, we assume that \(\mathbf{s}=0\) and \(\mu=0\), which is common for gas pipelines in practice.

**Proposition 2.** Suppose that the real parts of the eigenvalues of \(A\) are negative, that \(\mu=0\), \(\mathbf{s}=0\), and \(\overline{\varphi}_{k}>0\) for all \(k\in\mathcal{E}\). If the solution of (22)-(23) satisfies \(|\mathbf{\rho}_{j}|\leq\kappa\overline{\mathbf{\rho}}_{j}\) for all \(j\in\mathcal{V}_{w}\) and \(|\varphi_{k}|\leq\kappa\overline{\varphi}_{k}\) for all \(k\in\mathcal{E}\) with \(\kappa\in(0,\kappa_{\max})\) and \(\kappa_{\max}<1\) sufficiently small, then there exist positive constants \(a\), \(b\), and \(r\) such that

\[\|\mathbf{e}\|\leq r\left(\frac{\kappa^{3}}{(1-\kappa)^{2}}a+\kappa^{2}b\right), \tag{38}\]

where \(\mathbf{e}=(\mathbf{\rho},\varphi)-(\mathbf{\rho}_{\text{lin}},\varphi_{\text{lin}})\) and \(\|\mathbf{e}\|\) is the Euclidean norm of the vector \(\mathbf{e}\). Here \((\mathbf{\rho},\varphi)\) is the solution of (22)-(23) and \((\mathbf{\rho}_{\text{lin}},\varphi_{\text{lin}})\) is the solution of (25).

**Proof.** See the Appendix. \(\square\)

The bound in (38) provides a rate by which the norm of the error changes in terms of the size of the variation around the steady-state. If \(\kappa=0\), then \(\|\mathbf{e}\|=0\), as expected, since the solutions of the linear and nonlinear systems are undisturbed and remain in equilibrium at the origin. As \(\kappa\) increases from \(\kappa=0\) to \(\kappa=\kappa_{\max}\), the norm of the error increases. Intuitively, if the variation in density, \(\mathbf{\rho}_{j}\), approaches the negative of \(\overline{\mathbf{\rho}}_{j}\) for some \(j\in\mathcal{V}_{w}\) (i.e., \(\kappa\to 1\)), then the magnitude of the nonlinear term in (23) increases without bound. However, the solution of the linear system is bounded due to the global asymptotic stability of the equilibrium of the linear system. The bound of this extreme difference as \(\kappa\to 1\) is captured in the ratio on the right side of (38).

Although the bound in (38) provides some intuition about how the error changes as a function of the size of the flow variation, it is derived using conservative Lyapunov bounds, and, therefore, its quantitative values may be impractical for real gas systems. To improve the error estimate, we analyze the error in each flow variable of the linear and nonlinear optimization problems as a function of the size of the withdrawal variation. In particular, a series of optimization routines is performed on the single pipeline that was introduced in Section V with pipeline parameters given in Figure 2. The goal is to design the compressors to minimize the expended energy for both the linear and nonlinear OCPs and then compare each of the resulting optimal flow variables for a series of different boundary conditions. The boundary conditions are a constant 5 (MPa) pressure at the inlet of the pipeline and varying mass outflow profiles at the outlet given by

\[\mathbf{w}_{\text{out}}(t)=200\left(1+\kappa\,\text{step}(t-T/4)-\kappa\,\text{step}(t-3T/4)\right), \tag{39}\]

where \(0\leq\kappa\leq 1\) and \(\text{step}(t)=0.5\tanh(7.2t)\) for \(t\in[0,T]\) with \(T=24\) (hr). The percent relative errors in pressure, mass flux, and compression between the solutions of the nonlinear and linearized optimization problems, as in (37), are depicted in Figure 6 as functions of \(\kappa\times 100\) (percent flow variation).

Fig. 6: Percent relative error (as in (37)) as a function of percent flow variation \((100\kappa)\) as defined in (39) for the single pipeline studied in Figure 2.
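A minimal sketch of the withdrawal profile (39) is given below, which may be useful for reproducing these boundary conditions; the time grid and the sampled \(\kappa\) values are illustrative:

```python
import numpy as np

T = 24.0  # horizon in hours

def step(t):
    # smoothed step used in Eq. (39)
    return 0.5 * np.tanh(7.2 * t)

def w_out(t, kappa):
    # mass outflow at the pipeline outlet, Eq. (39)
    return 200.0 * (1.0 + kappa * step(t - T / 4) - kappa * step(t - 3 * T / 4))

t = np.linspace(0.0, T, 25)       # 25 evenly-spaced samples, as above
for kappa in (0.25, 0.5, 1.0):    # 25%, 50%, and 100% flow variation
    print(f"kappa={kappa}:", np.round(w_out(t, kappa)[::6], 1))
```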
## IX Conclusion

We have developed a linear control system of ODEs for natural gas flows in pipeline networks. The eigenvalues of the linear state matrix were computed numerically for a single pipe and compared to a subset of poles of an irrational transfer matrix. The analysis was extended to networks of pipelines, from which several conjectures can be made. First, the eigenvalues of the state matrix are qualitatively similar to the collection of poles of the transfer functions for some natural gas networks with typical pipeline parameters. Second, the eigenvalues of the state matrix may depend more strongly on the parameter values of each individual pipeline of the network than on the connectivity structure of the network. Third, the number of imaginary asymptotes toward which the eigenvalues of the state matrix gravitate is at most the number of edges of the original network graph. These conjectures are based on the observations presented in Section V. The simple closed-form representation of the poles of the transfer function was used to qualitatively analyze the transient behavior of the natural gas pipeline system.

Control actions of compressor units were designed to minimize the energy that the compressors expend. Although this OCP has received significant attention in the literature, there has not been much work on comparing the linear and nonlinear OCPs. The error between the solutions of the LP and NLP was analyzed computationally as a function of the size of the variation in withdrawal flow. It was demonstrated that the solution of the LP is within 5% relative error of the solution of the NLP for sufficiently slow withdrawal transients that vary by less than 100% of the steady-state solution. This result generally depends on the network structure, but the methodology may be applied by control system operators using real data on sections of the network to determine flow variation conditions under which the LP is sufficiently accurate. We also derived an analytical bound on the error between the solutions of the linear and nonlinear flow equations using Lyapunov functions.

One may use the LP for other applications as well. For example, if the error is too large for certain flow variations near highly transient electric generators, the control engineer may design adaptive switching between the LP and NLP to reduce complexity and improve convergence when optimizing gas dynamics for large network systems [35]. The results presented in this work could open new capabilities to analyze very large pipeline network systems that transport natural gas, or more generally, infrastructure networks involving natural gas, water distribution, carbon capture and storage, hydrogen blending, or traffic flow.
2302.03787
Deep Neural Network Uncertainty Quantification for LArTPC Reconstruction
We evaluate uncertainty quantification (UQ) methods for deep learning applied to liquid argon time projection chamber (LArTPC) physics analysis tasks. As deep learning applications enter widespread usage among physics data analysis, neural networks with reliable estimates of prediction uncertainty and robust performance against overconfidence and out-of-distribution (OOD) samples are critical for their full deployment in analyzing experimental data. While numerous UQ methods have been tested on simple datasets, performance evaluations for more complex tasks and datasets are scarce. We assess the application of selected deep learning UQ methods on the task of particle classification using the PiLArNet [1] Monte Carlo 3D LArTPC point cloud dataset. We observe that UQ methods not only allow for better rejection of prediction mistakes and OOD detection, but also generally achieve higher overall accuracy across different task settings. We assess the precision of uncertainty quantification using different evaluation metrics, such as distributional separation of prediction entropy across correctly and incorrectly identified samples, receiver operating characteristic curves (ROCs), and expected calibration error from observed empirical accuracy. We conclude that ensembling methods can obtain well-calibrated classification probabilities and generally perform better than other existing methods in deep learning UQ literature.
Dae Heun Koh, Aashwin Mishra, Kazuhiro Terao
2023-02-07T22:56:09Z
http://arxiv.org/abs/2302.03787v4
# Deep Neural Network Uncertainty Quantification for LArTPC Reconstruction

###### Abstract

We evaluate uncertainty quantification (UQ) methods for deep learning applied to liquid argon time projection chamber (LArTPC) physics analysis tasks. As deep learning applications enter widespread usage among physics data analysis, neural networks with reliable estimates of prediction uncertainty and robust performance against overconfidence and out-of-distribution (OOD) samples are critical for their full deployment in analyzing experimental data. While numerous UQ methods have been tested on simple datasets, performance evaluations for more complex tasks and datasets are scarce. We assess the application of selected deep learning UQ methods on the task of particle classification using the PiLArNet [1] Monte Carlo 3D LArTPC point cloud dataset. We observe that UQ methods not only allow for better rejection of prediction mistakes and OOD detection, but also generally achieve higher overall accuracy across different task settings. We assess the precision of uncertainty quantification using different evaluation metrics, such as distributional separation of prediction entropy across correctly and incorrectly identified samples, receiver operating characteristic curves (ROCs), and expected calibration error from observed empirical accuracy. We conclude that ensembling methods can obtain well-calibrated classification probabilities and generally perform better than other existing methods in deep learning UQ literature.

+ Footnote †: Corresponding author.

## 1 Introduction

Deep learning has largely established itself as a dominant method for machine learning applications, in part due to its competence in a variety of well-known tasks such as image recognition, natural language processing, and automated control applications. As such, scientists in both artificial intelligence and the physical sciences have been investigating ways to realize deep learning's success in more complex domains of fundamental research. The trend of integrating deep learning into physics data reconstruction has been particularly notable in experimental particle physics, where the large volumes of data generated by particle detectors such as liquid argon time projection chambers (LArTPCs) and the Large Hadron Collider (LHC) naturally prepare fertile ground for deep learning models.

Using deep learning for fundamental research, however, presents complications that are often omitted in many common industrial use cases, where practitioners generally attend to achieving state-of-the-art results with respect to a family of conventional performance metrics. In particular, one of the most pressing issues with using deep neural networks for fundamental research is developing robust and consistent methods for quantifying the uncertainties of their predictions. Deep neural networks are unable to recognize out-of-distribution examples and habitually make incorrect predictions with high confidence for such cases [2; 3]. Uncertainty in predictions has had serious consequences while applying deep learning to high-regret and safety-critical applications such as automated driving [4; 5; 6], law enforcement [7], medical sciences [8], etc. Overconfidence for out-of-distribution examples also demonstrates the need for deep learning models to acknowledge whether a given prediction is to be trusted or not.
Undoubtedly, for deep neural nets to be integrated into the physics measurement process, such characteristics of deterministic neural networks must be addressed by an effective method for uncertainty quantification (UQ). As demand for UQ gradually escalated in domains such as autonomous driving and medicine, UQ methods diversified into a variety of different approaches under the name of Bayesian Deep Learning (BDL), but with scarce substantial application in the physical sciences. Moreover, most BDL methods have been benchmarked on simplified datasets (MNIST, CIFAR10), which are not representative of the complexity of the physics data reconstruction process. Modern accelerator neutrino experiments such as ICARUS and DUNE offer ideal grounds for testing the efficacy of BDL for UQ, due to the recent adoption and moderate success of deep learning based reconstruction techniques. The benefit derived from a detailed assessment of different UQ algorithms on a complex, multi-objective task such as LArTPC data reconstruction is two-fold: it allows practitioners in machine learning to evaluate BDL's applicability in a real-world setting, and it enables physicists to design neural networks that produce well justified uncertainty estimates for rejecting erroneous predictions and detecting out-of-distribution instances.

Practitioners of deep learning in LArTPC reconstruction agree on the need for calibrated uncertainty bounds on deep learning model predictions, along with OOD robustness. However, numerous different uncertainty quantification algorithms have been proposed for deep learning. These range from empirical approaches (such as bootstrapped ensembles) to Bayesian approaches (such as EDL and HMC) and hybrid approaches (such as MC Dropout). None of these have been tested for complex applications such as LArTPC reconstruction. In this investigation, we select the most promising uncertainty quantification approaches from each of these categories and evaluate them with respect to critical intermediate reconstruction tasks: particle classification and semantic segmentation. We first briefly summarize the different methodologies and discuss the apparent advantages and disadvantages of each of the proposed models in the following section. We describe in detail the Monte Carlo generated 3D LArTPC particle image dataset and state any assumptions or additional information that was used to train and evaluate each model. In Section 5, we present quantitative performance evaluations of different UQ models in three settings: single particle classification, multi-particle classification, and semantic segmentation, using a variety of quantitative metrics to measure UQ fidelity.

## 2 Methods of Uncertainty Quantification in Deep Learning

Among the numerous models and studies on uncertainty-quantifying neural networks [9; 10], we focus on methods designed for multi-class classification tasks that require minimal changes to popular neural network architectures. In this paper, we consider three classes of UQ methods: model ensembling [11], Monte Carlo Dropout (MCD) [12], and Evidential Deep Learning (EDL) [13; 14].

### Notation

Let \(X=\{x^{(1)},x^{(2)},...,x^{(N)}\}\) and \(Y=\{y^{(1)},y^{(2)},...,y^{(N)}\}\) be the data and labels in the training set, and let \(\tilde{X}=\{\tilde{x}^{(1)},\tilde{x}^{(2)},...,\tilde{x}^{(M)}\}\) and \(\tilde{Y}=\{\tilde{y}^{(1)},\tilde{y}^{(2)},...,\tilde{y}^{(M)}\}\) denote the test set.
A neural network \(f_{\theta}\), parametrized by weights \(\theta\), is trained on \(D_{train}=\{(x^{(1)},y^{(1)}),...,(x^{(N)},y^{(N)})\}\), with logits given by \(z^{*}=f_{\theta}(x^{*};X,Y)\) and predicted labels \(\hat{y}^{*}=\operatorname*{argmax}_{c}\,f_{\theta}(x^{*};X,Y)_{c}\), for some \(x^{*}\in X^{*}\subset\tilde{X}\).

### Ensembling Methods

Model ensembling in the context of deep learning refers to the method of training multiple instances of the same architecture with different random initialization seeds. In Naive Ensembling (NE), one trains each member of the ensemble on the same training dataset, resulting in \(N\) networks with identical architecture but different parameter values. Often, to achieve better generalization and stability, Bootstrapped Ensembling (BE), or bagging, is preferred over naive ensembling. This is done by training each ensemble member on a dataset reorganized by sampling \(N\) examples from the full training set with replacement. If the size of the resampled dataset is equal to that of the original training set, each ensemble member is expected to see approximately 63% of the original training set. For classification, it is standard to use the most common label among the ensemble members as the final prediction, while for regression one usually computes the empirical mean. When an ensemble consists of a collection of neural networks trained with respect to a _proper scoring rule_ [15], often coupled with an optional adversarial training routine, the ensemble is termed a _deep ensemble_ [11]. Ensemble methods are among the simplest UQ methods and require no changes to the underlying model architecture, although the high computational cost of training \(N\) architecturally identical models and performing \(N\) forward passes for one prediction often renders them inapplicable for some memory- or time-consuming tasks.

### Monte Carlo Dropout

Monte Carlo Dropout is a Bayesian technique introduced in [12], where one approximates the network's posterior distribution of class predictions by collecting samples obtained from multiple forward passes of dropout-regularized networks. _Dropout regularization_ [16] involves random omission of feature vector dimensions during training, which is equivalent to masking rows of weight matrices. The inclusion of dropout layers mitigates model overfitting and is empirically known to improve model accuracy [16]. A key observation of [12] is that under suitable assumptions on the Bayesian neural network prior and training procedure, sampling \(N\) predictions from the BNN's posterior is equivalent to performing \(N\) stochastic forward passes with dropout layers fully activated. This way, the full posterior distribution may be approximated by Monte Carlo integration of the posterior softmax probability vector \(p(\hat{y}^{*}\mid x^{*};X,Y)\):

\[p(\hat{y}^{*}\mid x^{*};X,Y)\approx\frac{1}{T}\sum_{t=1}^{T}\text{Softmax}(f_{\theta_{t}}(x^{*};X,Y)), \tag{1}\]

where \(T\) denotes the number of stochastic forward passes. As with ensembling methods, the final prediction of MCDropout for classification is given by the majority vote among all stochastic forward passes; for regression, we again compute the empirical mean. Given the apparent similarities, MCDropout networks may also be interpreted as a form of ensemble learning [16], where each stochastic forward pass corresponds to a different realization of a trained neural network.
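As a minimal illustration of Eq. (1), the following PyTorch sketch keeps dropout stochastic at inference time and averages the softmax outputs of \(T\) forward passes; the toy MLP and all sizes are illustrative stand-ins for the sparse CNNs used in this work:

```python
import torch
import torch.nn as nn

# Toy dropout-regularized classifier; the paper's models are sparse CNNs instead.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Dropout(p=0.5),
                      nn.Linear(128, 5))

def mc_dropout_predict(model, x, T=32):
    """Approximate Eq. (1): average softmax over T stochastic forward passes."""
    model.train()  # keeps dropout active; this toy model has no batch-norm layers,
                   # which train() would otherwise also switch to training behavior
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    return probs.mean(dim=0)                     # (batch, num_classes)

x = torch.randn(8, 64)
p = mc_dropout_predict(model, x)
entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=-1)   # predictive entropy
print(p.argmax(dim=-1), entropy)
```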
Implementing MCDropout requires one to modify the underlying neural network architecture to include dropout layers and to configure them to behave stochastically at test time. The location of dropout layers can critically affect prediction performance, and for convolutional neural networks the decision is often made via trial-and-error [17]. Also, for memory-intensive tasks such as semantic segmentation, sample collection by multiple forward passes can rapidly accumulate a high computational cost, similar to ensembling methods.

### Evidential Deep Learning

Evidential Deep Learning (EDL) [13, 14] refers to a class of deep neural networks that exploit conjugate prior relationships to model the posterior distribution analytically. For multi-class classification, the distribution over the space of all probability vectors \(\mathbf{p}=(p_{1},...,p_{c})\) is modeled by a Dirichlet distribution with \(c\) concentration parameters \(\alpha=(\alpha_{1},...,\alpha_{c})\):

\[D(\mathbf{p}\mid\alpha)=\frac{1}{B(\alpha)}\prod_{i=1}^{c}p_{i}^{\alpha_{i}-1}, \tag{2}\]

where \(\alpha_{i}\geq 1\) for all \(i\), \(B(\cdot)\) denotes the \(c\)-dimensional multinomial Beta function, and \(\mathbf{p}\) lies in the \(c\)-unit simplex \(\mathcal{S}_{c}\):

\[\mathcal{S}_{c}=\{\mathbf{v}\in\mathbb{R}^{c}:\sum_{i=1}^{c}v_{i}=1,\ 0\leq v_{i}\leq 1\}. \tag{3}\]

In contrast to deterministic classification neural networks that minimize the cross-entropy loss by predicting the class logits, evidential classification networks predict the concentration parameters \(\alpha=(\alpha_{1},...,\alpha_{c})\). The expected value of the \(k\)-th class probability under the distribution \(D(\mathbf{p}\mid\alpha)\) is then given analytically as

\[\hat{p}_{k}=\frac{\alpha_{k}}{S},\quad S=\sum_{i=1}^{c}\alpha_{i}. \tag{4}\]

To estimate the concentration parameters, several distinct loss functions are available as training criteria. The _marginal likelihood loss_ (MLL) is given by:

\[\mathcal{L}_{MLL}(\theta)=-\log\left(\int\prod_{i=1}^{c}p_{i}^{y_{i}}D(\mathbf{p}\mid\alpha)\ d\mathbf{p}\right). \tag{5}\]

The _Bayes risk_ (posterior expectation of the risk) of the _log-likelihood_ (BR-L) formulation yields:

\[\mathcal{L}_{BR}(\theta)=\int\left[\sum_{i=1}^{c}-y_{i}\log\left(p_{i}\right)\right]D(\mathbf{p}\mid\alpha)\ d\mathbf{p}. \tag{6}\]

The _Bayes risk_ of the _Brier score_ (BR-B) may also be used as an alternative optimization objective:

\[\mathcal{L}_{BS}(\theta)=\int\left\|\mathbf{y}-\mathbf{p}\right\|_{2}^{2}\ D(\mathbf{p}\mid\alpha)\ d\mathbf{p}. \tag{7}\]

As shown by Sensoy et al. [13], analytic integration of the aforementioned loss functions gives closed-form expressions that are suited for gradient-based optimization of the parameters \(\theta\). EDL methods have the immediate advantage of requiring only a single forward pass to access the full posterior distribution, at the price of restricting the space of posterior functions to the appropriate conjugate prior forms. Also, EDL methods only require one to modify the loss function and the final layer of the deterministic baseline (if necessary), which allows flexible integration with complex, hierarchical deep neural architectures similar to the full LArTPC reconstruction chain. However, due to the strong assumptions made on the analytical form of the posterior, EDL methods are limited to classification and regression tasks as of now. As we later observe, EDL methods generally fall short on various UQ evaluation metrics compared to ensembling and MCDropout, depending on task specifics.
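For illustration, a minimal PyTorch sketch of an evidential classification head is given below, using the standard closed form of the Bayes risk of the log-likelihood, Eq. (6), namely \(\sum_{i}y_{i}(\psi(S)-\psi(\alpha_{i}))\) with \(\psi\) the digamma function; the layer sizes and data are illustrative:

```python
import torch
import torch.nn as nn

# Evidential head: Softplus evidence e >= 0 gives concentrations alpha = e + 1 >= 1.
class EvidentialHead(nn.Module):
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(in_dim, num_classes)

    def forward(self, h):
        return nn.functional.softplus(self.fc(h)) + 1.0

def bayes_risk_log_loss(alpha, y_onehot):
    # Closed form of Eq. (6): sum_i y_i * (digamma(S) - digamma(alpha_i)).
    S = alpha.sum(dim=-1, keepdim=True)
    return (y_onehot * (torch.digamma(S) - torch.digamma(alpha))).sum(dim=-1).mean()

head = EvidentialHead(64, 5)
alpha = head(torch.randn(8, 64))                 # illustrative input features
p_hat = alpha / alpha.sum(dim=-1, keepdim=True)  # expected probabilities, Eq. (4)
y = nn.functional.one_hot(torch.randint(0, 5, (8,)), 5).float()
loss = bayes_risk_log_loss(alpha, y)
loss.backward()
print(p_hat[0].detach(), loss.item())
```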
## 3 Evaluating Uncertainty Quantification Methods

### Evaluation Metrics

As stated in [11], the goal of uncertainty quantification for deep learning models is two-fold: to achieve better alignment of predicted confidence probabilities with their long-run empirical accuracy, and to serve as mis-classification or out-of-distribution alarms that could be used for rejecting unconfident predictions. The first condition, which we term _calibration fidelity_, may be evaluated by plotting _reliability diagrams_ [18], which are constructed by binning the predicted probabilities (often termed _confidence_) into equal-sized bins and plotting the bin centers on the \(x\)-axis and the empirical accuracy of the bin members on the \(y\)-axis. The closer the reliability diagram is to the diagonal, the more desirable a given classifier is, in the sense of calibration fidelity. The deviation of a given classifier from the diagonal can be summarized by computing the _adaptive calibration error_ (ACE) [19]:

\[ACE=\frac{1}{K}\frac{1}{R}\sum_{k=1}^{K}\sum_{r=1}^{R}|acc(r,k)-conf(r,k)|. \tag{8}\]

Here, \(K\) denotes the number of unique classes and \(R\) denotes the number of equal-sample bins used to plot the reliability diagram for class \(k\), given by confidence \(conf(r,k)\) and corresponding empirical accuracy \(acc(r,k)\). Although the _expected calibration error_ (ECE) [20] is more widely known, we observed in practice that static binning schemes such as ECE are suboptimal for the highly skewed predictive probability distributions common to accurate models.

As calibration fidelity measurements using reliability diagrams were originally designed for binary classifiers, there have been numerous proposals for their extension to multi-class classifiers [21; 22; 23]. We consider two relatively simple methods. The first is the standard used in Guo et al. [21], where only the predicted probability of the most confident prediction of each sample is used to plot the reliability diagram; we refer to this mode of assessment as _max-confidence_ calibration fidelity. An alternative method is to evaluate calibration for each of the \(K\) classes separately, as in B. Zadrozny and C. Elkan [23]; we refer to this mode as _marginal_ calibration fidelity.

Another metric of uncertainty quantification measures the model's _discriminative capacity_ towards mis-classified or out-of-distribution samples. In practice, uncertainty quantification models have the capacity to reject predictions based on a numerical estimate of the trustworthiness of the prediction in question. For example, in a classification setting the entropy of the predicted softmax probability distribution (_predictive entropy_) can be used as a measure of confusion, as entropy is maximized when the predictive distribution reduces to a uniform distribution over the \(K\) classes. In this construction, it is desirable for the predictive entropy distributions of correctly and incorrectly classified samples to be as separated as possible. To compute the extent of distributional separation, we may use the first Wasserstein distance [24] between the predictive entropy distributions:

\[W_{1}(u,v)=\inf_{\pi\in\Gamma(u,v)}\int_{\mathbb{R}\times\mathbb{R}}|x-y|\;d\pi(x,y), \tag{10}\]

where \(u\) and \(v\) are two probability distributions and \(\Gamma(u,v)\) is the set of all joint probability measures on \(\mathbb{R}\times\mathbb{R}\) whose marginals are \(u\) and \(v\). We use the Wasserstein distance with the \(L_{1}\) metric due to its simple computational implementation [24].
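For concreteness, the following sketch computes the ACE defined above and the \(W_{1}\) separation of predictive entropies on synthetic softmax outputs; the equal-sample binning choice and the toy data are illustrative:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def adaptive_calibration_error(conf, correct, n_bins=15):
    """Per-class ACE term: equal-sample bins over confidence."""
    order = np.argsort(conf)
    return np.mean([abs(correct[idx].mean() - conf[idx].mean())
                    for idx in np.array_split(order, n_bins)])

# Synthetic softmax outputs for illustration only.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.full(5, 0.5), size=2000)
labels = rng.integers(0, 5, size=2000)
correct = (probs.argmax(1) == labels).astype(float)
conf = probs.max(1)                                   # max-confidence mode

entropy = -(probs * np.log(np.clip(probs, 1e-12, None))).sum(1)
print("ACE:", adaptive_calibration_error(conf, correct))
print("W1(correct, incorrect):",
      wasserstein_distance(entropy[correct == 1], entropy[correct == 0]))
```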
Sensitivity may also be measured by computing the area under the receiver operating characteristic curve (AUROC), also known as the concordance statistic (\(c\)-statistic) [25]. Using predictive entropy as the thresholding value, the ROC curve is constructed by plotting the false positive rate (incorrect predictions) on the \(x\)-axis and the true positive rate (correct predictions) on the \(y\)-axis at different threshold levels. In this setting, the AUROC is the probability that a randomly chosen correct prediction will have a lower predictive entropy than that of a randomly chosen incorrect prediction [26].

## 4 Datasets and Network Architectures

**Single Particle Classification**: We first implement and assess the different UQ models on the simpler task of single particle classification. The single particle dataset consists of 1024 3D images, each containing only one particle, where all voxels in a given image belong to the same particle ID. The 3D images have one feature dimension corresponding to the amount of energy deposited in a one-voxel equivalent region of the detector. We use a ResNet [27] type encoder with dropout [16] regularization, where convolution operations are substituted by sparse convolutions implemented in the _MinkowskiEngine_ library [28]; a minimal sketch of such an encoder is given after the semantic label list below. For standard deterministic models, ensembles, and MCDropout, the final prediction probabilities are given by softmax activations, whereas for evidential models the concentration parameters \(\alpha\) are computed from Softplus [29] activations. The single particle dataset contains five particle classes: photon showers (\(\gamma\)), electron showers (\(e\)), muons (\(\mu\)), pions (\(\pi\)), and protons (\(p\)).

Figure 1: Sparse-CNN layer definitions for architecture reference.
Figure 2: Sparse-CNN block definitions for architecture reference.
Figure 3: Sparse-CNN architecture for single particle classifiers.
Figure 4: Sparse-CNN architecture for semantic segmentation networks.
Figure 5: Architecture outline of multi-particle classification network. The geometric node encoder extracts hand-engineered features relevant to particle classification, such as orientation matrix and major PCA axes.

**Semantic Segmentation**: As segmentation is a classification task on individual pixels, the details of the implementation are mostly identical to those of single particle classification. We employ _Sparse-UResNet_ [30] with dropout layers in the deeper half of the network as the base architecture for semantic segmentation networks and use the 768px resolution PiLArNet [1] MultiPartRain (MPR) and MultiPartVertex (MPV) datasets as the multi-particle datasets. The five semantic labels provided by PiLArNet consist of the following:

* Shower Fragments: connected components of electromagnetic showers that are above a set voxel count and energy deposition threshold.
* Tracks: particle trajectories that resemble straight lines, mostly originating from muons, pions, and protons.
* Michel Electrons: electrons produced by muon decay at rest.
* Delta Rays: electrons produced from muon tracks via hard scattering.
* Low Energy Depositions: clouds of low energy depositions from electromagnetic showers which are not labeled as shower fragments.
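As a rough sketch of such a sparse classifier (referenced in the single particle paragraph above), the following assumes the MinkowskiEngine API; the depth, channel widths, and dropout placement are illustrative choices and not the exact architecture of Figure 3:

```python
import torch
import MinkowskiEngine as ME

# A small sparse 3D classifier in the spirit of the encoder described above.
class SparseClassifier(torch.nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.net = torch.nn.Sequential(
            ME.MinkowskiConvolution(1, 32, kernel_size=3, stride=2, dimension=3),
            ME.MinkowskiBatchNorm(32),
            ME.MinkowskiReLU(),
            ME.MinkowskiConvolution(32, 64, kernel_size=3, stride=2, dimension=3),
            ME.MinkowskiBatchNorm(64),
            ME.MinkowskiReLU(),
            ME.MinkowskiDropout(p=0.5),   # left stochastic at test time for MCDropout
            ME.MinkowskiGlobalMaxPooling(),
            ME.MinkowskiLinear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x).F              # dense (batch, num_classes) logits

# Coordinates are (batch_idx, x, y, z); the single feature is the voxel energy.
coords = torch.randint(0, 128, (100, 4)).int()
coords[:, 0] = 0                          # one event in the batch
feats = torch.rand(100, 1)
logits = SparseClassifier()(ME.SparseTensor(feats, coordinates=coords))
print(logits.shape)
```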
**Multi Particle Reconstruction**: The MPV/MPR dataset also contains particle type labels for each particle instance in a given image. For multi particle classification, we take each cluster of voxels that belongs to the same particle and reduce the resulting group of points to a 1-dimensional feature vector. The node embeddings of each particle consist of geometric features such as its principal component vectors. These feature vectors are then given as input node features to a graph neural network, which performs three message passing operations to incorporate inter-particle relational information.

## 5 Results

### Training Details

The training set consists of 80k images; the held-out data were separated into a 2k validation set used for model selection and an 18k test set used to evaluate the selected models with high statistics. All models were trained until the validation accuracy plateaued, and the network weights that achieved the highest validation accuracy were selected for further evaluation on the separate test set. To fully account for possible variations in model accuracy and uncertainty quantification quality due to randomized factors such as parameter initialization, the model selection procedure was repeated with five different random seeds for each model. This results in five independently trained models that share the same architecture but differ in parameter values. We used the Adam optimizer [31] with decoupled weight decay [32].

**Single Particle Classification**: Figure 6 shows the predictive entropy distribution, accuracy, and the \(W_{1}\) distance for single particle classification models. We observe that the distributional separation as measured in \(W_{1}\) is largest for the ensemble methods, while the evidential model trained on the Brier score is also competitive. In general, ensemble methods achieve the highest accuracy with better distributional separation compared to Monte Carlo dropout and evidential models. The AUROC values in Figure 8 also reflect the superior discriminative capacity of ensembling. The calibration curves for single particle classification are shown in the top row of Figure 15, and Figure 7 illustrates the adaptive calibration error (ACE) values across different subsets of the test set partitioned by true particle ID labels. While all UQ models, with the possible exception of EDL-BR-B, achieve better calibration compared to standard deterministic neural networks, ensembling methods have the smallest max-confidence and marginal ACE values.

Figure 6: Predictive entropy distribution, accuracy, and 1-Wasserstein distance for single particle classification.
Figure 7: Single particle classification adaptive calibration errors (ACEs) for each model and class.
Figure 8: Single particle ROC and percentage rejection curves.

**Semantic Segmentation**: For segmentation, the best distributional separation is achieved by evidential models, as is evident in Figure 9. The ensemble methods have the highest accuracy and AUROC scores, as shown in Figure 10. It is interesting to note that while the distributional separation measured in \(W_{1}\) is greatest for evidential models, their calibration fidelity falls short even with respect to standard deterministic models. As with single particle classification, the best calibration fidelity is realized by ensemble methods.

Figure 9: Predictive entropy distribution, accuracy, and 1-Wasserstein distance for semantic segmentation.

**Multi Particle Reconstruction**: Since contextual information useful in determining a given particle's ID can only be used in a multi-particle setting, we expect a gain in accuracy over the single particle dataset. This approach leads to an overall increase of approximately 5% in classification accuracy for all models.
Again, ensemble methods provide the highest \(W_{1}\) distance, overall accuracy, and AUROC values (Figures 12 and 14) and the best calibration fidelity (Figure 13). The full reliability plots used to calculate ACE values are provided in Figures 15 and 16. A tabular summary of results is available in Table 1.

Figure 12: Predictive entropy distribution, accuracy, and 1-Wasserstein distance for multi particle classification.
Figure 13: Multi particle classification adaptive calibration errors (ACEs) for each model and class.
Figure 14: Multi particle ROC and percentage rejection curves.
Figure 15: Reliability plots for single and multi-particle classification.
Figure 16: Reliability plots for semantic segmentation.
We include a brief summary of the required GPU memory and time complexity of each model in Tables 3 and 2. The batch size is denoted in the Train/Test subcolumn. The time information corresponds to the CPU time it takes for the model to run one iteration of the training or evaluation routine with the denoted batch size. For training, this includes the time required for both model forwarding and gradient backpropagation, while for inference we compute the sum of the evaluation-mode model forwarding time and other post-processing operations (for example, in MC Dropout we have a sample averaging procedure needed to obtain probability values). The memory value is computed by taking the average of the maximum required GPU memory across 5 different samples. Note that the values for deterministic models and naive ensembles are identical, since naive ensembles were constructed from trained deterministic models.

## 6 Categorization of Error Types

Calibration fidelity cannot be examined on a single-image basis, as calibration is a collective property that must be measured by appropriate binning of the test set predictions. However, it is possible to assess the discriminative capacity by observing samples with antipodal entropy values. With predictive entropy values in hand, the class predictions may be divided into four categories: 1) confident (low entropy) correct predictions, 2) uncertain (high entropy) correct predictions, 3) confident errors, and 4) uncertain errors. Among the four groups, confident errors are the most problematic for robust design of deep learning models. Some representative examples are shown in Figures 17, 18, and 19. Figure 17 is a high-entropy misclassification example, in which the network cannot confidently decide whether the set of voxels circled in red is an electron, muon, or pion, in contrast with the confident predictions it gives for the pair of photons and the vertex-attached proton. In Figures 18 and 19, the network predicts the vertex-attached shower as an electron with high probability, while for the \(\mu\) and \(\pi\) pair it retains some level of uncertainty. Hence, we observe that the network's mis-identification of the shower as an electron is partly justified, as it is difficult to distinguish a photon shower attached to an interaction vertex from an electron shower.

## 7 Discussion

We evaluated three different uncertainty quantification methods for deep neural networks on the tasks of single particle classification, multi-particle classification, and semantic segmentation using high resolution 3D LArTPC energy deposition images. The various metrics evaluating calibration fidelity and discriminative capacity lead to a notable conclusion: simple ensembling of a few independently trained neural networks generally achieves the highest accuracy and the best calibration of output probability values. Also, we observe that the quality of uncertainty quantification depends greatly on the type of the classifier's task, and it is often possible for Bayesian models to perform worse than deterministic networks in calibration.
Often, choices of hyperparameters and neural network architecture significantly affect a classifier's capacity to achieve the desired performance. It is important to note that the UQ methods presented in this paper do not account for the _structural_ and _hyperparameter_ uncertainty of our models. Extant deep learning uncertainty quantification approaches can only account for aleatoric uncertainty and the parameter uncertainty component of epistemic uncertainty. Thus, these methods are unable to account for structural (or model form) uncertainty, which is a component of epistemic uncertainty. While a complete description of epistemic uncertainty is often intractable in practice, it is desirable to assess how much of the variability in a deep classifier's predictions could be attributed to hyperparameter and structural diversity.

Figure 17: Example high-entropy error from a multi-particle evidential GNN (Bayes Risk). The particles that do not originate from a vertex are omitted and are colored in dark navy.
Figure 18: Example low-entropy prediction from a multi-particle evidential GNN (Bayes Risk).

While the out-of-distribution and mis-classification resilience of uncertainty quantifying neural nets may be used to reject unreliable predictions, obtaining calibrated probability estimates would provide further credibility in using deep learning techniques for the physical sciences. Post-hoc calibration methods such as temperature scaling [21] train a calibration model (for temperature scaling, a single parameter) after training to obtain calibrated probabilities for a deterministic neural network. As post-hoc methods do not require the classifier to be re-modeled and trained from its initial state, such methods may be better suited for ensuring proper calibration of classifiers with a lower computational budget. Future work will include evaluation of uncertainty quantifying neural nets and post-hoc calibration methods for a full neutrino physics signal/background classifier, which is built on top of the separate tasks of particle classification and segmentation.

## Acknowledgment

This work was supported in part by funding from Zoox, Inc. This work is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, and Early Career Research Program under Contract DE-AC02-76SF00515.
2307.09312
Multi-Modal Discussion Transformer: Integrating Text, Images and Graph Transformers to Detect Hate Speech on Social Media
We present the Multi-Modal Discussion Transformer (mDT), a novel method for detecting hate speech in online social networks such as Reddit discussions. In contrast to traditional comment-only methods, our approach to labelling a comment as hate speech involves a holistic analysis of text and images grounded in the discussion context. This is done by leveraging graph transformers to capture the contextual relationships in the discussion surrounding a comment and grounding the interwoven fusion layers that combine text and image embeddings instead of processing modalities separately. To evaluate our work, we present a new dataset, HatefulDiscussions, comprising complete multi-modal discussions from multiple online communities on Reddit. We compare the performance of our model to baselines that only process individual comments and conduct extensive ablation studies.
Liam Hebert, Gaurav Sahu, Yuxuan Guo, Nanda Kishore Sreenivas, Lukasz Golab, Robin Cohen
2023-07-18T14:57:12Z
http://arxiv.org/abs/2307.09312v4
Multi-Modal Discussion Transformer: Integrating Text, Images and Graph Transformers to Detect Hate Speech on Social Media

###### Abstract

We present the Multi-Modal Discussion Transformer (mDT), a novel multi-modal graph-based transformer model for detecting hate speech in online social networks, such as Reddit discussions. In contrast to traditional comment-only methods, our approach to labelling a comment as hate speech involves a holistic analysis of text and images grounded in the discussion context. This is done by leveraging graph transformers to capture the contextual relationships in the entire discussion surrounding a comment and grounding the interwoven fusion layers that combine individual comments' text and image embeddings instead of processing modalities separately. We compare the performance of our model to baselines that only process individual comments and conduct extensive ablation studies. To evaluate our work, we present a new dataset, HatefulDiscussions, comprising complete multi-modal discussions from multiple online communities on Reddit. We conclude with future work for multimodal solutions to deliver social value in online contexts, arguing that capturing a holistic view of a conversation significantly advances the effort to detect anti-social behaviour.

1 University of Waterloo, [email protected]

## Introduction

Social media have democratized public discourse, enabling users worldwide to freely express their opinions and thoughts. As of 2023, the social media giant Meta has reached 3 billion daily active users across its platforms [16]. While this level of connectivity and access to information is undeniably beneficial, it has also resulted in the alarming rise of hate speech [14]. This pervasive spread of hateful rhetoric has caused significant emotional harm to its targets [20], triggered social divisions and polarization [23], and has caused substantial harm to the mental health of users [21]. There is an urgent need for a comprehensive solution to automate the process of identifying hate speech, as a critical first step towards combatting this alarming practice.

Initially, automated hate speech detection models were limited to text-only approaches such as HateXplain [22], which classify the text of individual comments. Such methods have two significant weaknesses. First, social media comments have evolved to include images, which can influence the context of the accompanying text. For instance, a text comment may be innocuous when taken alone, but the inclusion of an image may transform it into a hateful remark. Second, hate speech is contextual. Social media comments are often conversational and are influenced by other comments within the discussion thread. For example, a seemingly innocuous comment such as "That's gross!" can become hateful in a discussion about immigration or minority issues.

Ongoing research to address these weaknesses includes multi-modal transformers such as ViLT [15] that combine images and text for a richer representation of comments, but they do not account for the contextual nature of hate speech. Hebert, Golab, and Cohen [23] address the concern of modelling discussion context utilizing graph neural networks, but they do not discuss how to integrate the interpretation of images within hateful social media discussions. Furthermore, the sequential nature of the proposed architecture prevents text embeddings from being grounded in relation to other comments in a graph.
That is, the initial semantic content encoded by a comment embedding may differ when considered with different sets of comments versus in isolation. To overcome the limitations of the existing methods, we propose the Multi-Modal Discussion Transformer (mDT), a method to holistically encode comments in relation to the multi-modal discussion context for hate speech detection. We also present a novel dataset, HatefulDiscussions, containing complete multi-modal discussion graphs from various Reddit communities and a diverse range of hateful behaviour, to evaluate our work. We compare mDT against comment-only and graph methods and conduct an ablation study on the various components of our architecture. We then conclude by discussing the potential for our model to deliver social value in online contexts by effectively identifying and combating anti-social behaviour in online communities. We also propose future work towards more advanced multi-modal solutions that can better capture the nuanced nature of online communication and behaviour, and that can adapt to the ever-changing landscape of social media.

To summarize our contributions: **1)** We propose a novel fusion mechanism as the core of mDT that interweaves multi-modal fusion layers with graph transformer layers, allowing for multi-modal comment representations that are actively grounded in the discussion context. **2)** We propose a novel graph structure encoding specific to the conversational structure of social media discussions. **3)** We introduce a new dataset of 8266 annotated discussions, totalling 18359 labelled comments, with complete discussion trees and images, to evaluate the effectiveness of mDT. We focus on the social platform Reddit in our work, which consists of branching tree discussions. Our codebase and datasets can be found in our supplemental material1 and will be open-sourced upon acceptance.

Footnote 1: Details regarding software licenses and hardware configuration can also be found in the supplemental material.

## Related Work

Transformer-based text encoding models such as BERT have brought significant improvements to natural language processing due to their ability to effectively capture textual semantics Devlin et al. (2019). Inspired by these developments, methods such as HateXplain Mathew et al. (2021) and HateBERT Caselli et al. (2021) have been introduced to discern hateful comments on social platforms, focusing on text alone. The effectiveness of these efforts is intrinsically tied to the diversity of the datasets they are trained on. For instance, HateXplain utilized a specialized dataset sourced from diverse social platforms like Twitter and Gab, emphasizing interpretable hate speech detection. Other noteworthy datasets include Gong et al. (2021), studying heterogeneous hate speech (comments containing mixed abusive and non-abusive language); Founta et al. (2018), which crowdsourced annotation of abusive Twitter content; and Zampieri et al. (2019), who collected hateful Twitter posts through a collaborative semi-supervised approach.

While text is essential, images also significantly contribute to grasping semantic context. CLIP introduced an approach to align text and image representations via contrastive pre-training Radford et al. (2021). ViLBERT Lu et al. (2019) conceptualized distinct transformers for each modality--images and text--which are then amalgamated through co-attentional transformer layers. Subsequent works like ViLT Kim et al. (2021) and Nagrani et al.
(2021) have devised novel inter-modality fusion mechanisms, unifying both modality transformers into one. This integration of multi-modal language grounding has also enriched hate speech detection, as evidenced by the Hateful Memes challenge Kiela et al. (2020). Additional works like Liang et al. (2022) employ graph convolutional networks to merge text and images, with a primary aim of sarcasm detection. Meanwhile, Sahu et al. (2021) leverage generative adversarial networks to encode these modalities, facilitating congruent representations of comments. Cao et al. (2022) pursue a unique strategy by mapping the paired image to text descriptors, appending the comment text, and then predicting with a generative language model. Finally, Singh et al. (2022) incorporate image and text representations of product reviews to accurately disambiguate complaints.

Despite the progress, many of these techniques overlook a vital modality: the context of discussions. The prevailing emphasis remains on formulating datasets and techniques that analyze singular comments, bypassing the contextual significance of the prior discussion. Drawing inspiration from Graphormer Ying et al. (2022)--a graph transformer network tailored for molecular modelling--Hebert et al. (2022) proposed an architecture that consolidates learned comment representations using Graphormer, trained to predict the trajectory of hateful discussions. However, this work has its limitations; it neglects the influence of images and, owing to the absence of complete discussion-focused hate speech datasets, resorts to approximating ground truth labels using a ready-made external classifier. Our work addresses both of these limitations, with the addition of interleaving comment and discussion layers, as well as human ground truth data.

## Methodology

### Multi-Modal Discussion Transformer (mDT)

The mDT architecture consists of three components: Initial Pre-Fusion, Modality Fusion, and Graph Transformer (Figure 1). The description below expands upon the operations that assist with hate detection and outlines the inherently holistic nature of our solution.

Figure 1: Multi-Modal Discussion Transformer.

**Initial Pre-Fusion**: Given a discussion \(D\) with comments \(c\in D\), each represented with text \(t_{c}\) and an optional image \(i_{c}\), we start by leveraging pre-trained BERT and ViT models to encode text and images, respectively. Both models consist of \(N\) layers with the same hidden dimension \(d\). In our experiments, we utilized BERT-base and ViT-base, which both have \(N=16\) layers and \(d=768\) hidden dimensions. Given these models, the Initial Pre-Fusion step consists of the first \(K\) layers of both models with gradients disabled (frozen), denoted as

\[t_{c}^{K}=BERT_{init}(t_{c}),\quad i_{c}^{K}=ViT_{init}(i_{c}),\]

where \(K<N\). This step encodes a foundational understanding of the images and text that make up each comment.

**Modality Fusion**: After creating initial text and image embeddings \(t_{c},i_{c}\) for all comments \(c\in D\) in the discussion, we move to the Modality Fusion step. To encode inter-modality information, we adopt the bottleneck mechanism proposed by Nagrani et al. (2021). We concatenate \(b\) shared modality bottleneck tokens \(B\in\mathbb{R}^{b\times d}\) to \(t_{c}\) and \(i_{c}\), transforming the input sequences to \([t_{c}^{K}\,||\,B]\) and \([i_{c}^{K}\,||\,B]\). We then define a modality fusion layer \(l\) as

\[[t_{c}^{l+1}\,||\,B_{t,c}^{l+1}]=BERT_{l}([t_{c}^{l}\,||\,B_{c}^{l}]),\]
\[[i_{c}^{l+1}\,||\,B_{i,c}^{l+1}]=ViT_{l}([i_{c}^{l}\,||\,B_{c}^{l}]),\]
\[B_{c}^{l+1}=Avg(B_{t,c}^{l+1},B_{i,c}^{l+1}),\]

where both modalities can only share information through the \(B\) bottleneck tokens. This design forces both modalities to compress information into a limited set of tokens, improving performance and efficiency. If there are no images attached to a comment, then \(B_{c}^{l+1}=B_{t,c}^{l+1}\).
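A minimal sketch of one such fusion step is given below, with generic transformer encoder layers standing in for the pretrained BERT and ViT layers; all dimensions and the toy inputs are illustrative:

```python
import torch
import torch.nn as nn

d, b = 768, 4  # hidden dimension and number of bottleneck tokens
text_layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
image_layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)

def fusion_layer(t, i, B):
    """t: (n_text, d) text tokens; i: (n_img, d) image patches; B: (b, d) bottleneck."""
    t_out = text_layer(torch.cat([t, B], dim=0).unsqueeze(0)).squeeze(0)
    i_out = image_layer(torch.cat([i, B], dim=0).unsqueeze(0)).squeeze(0)
    t_new, B_t = t_out[:-b], t_out[-b:]
    i_new, B_i = i_out[:-b], i_out[-b:]
    # Modalities exchange information only through the averaged bottleneck tokens.
    return t_new, i_new, 0.5 * (B_t + B_i)

t, i, B = torch.randn(32, d), torch.randn(196, d), torch.randn(b, d)
t, i, B = fusion_layer(t, i, B)
print(B.shape)  # B[0] (b_0) later represents this comment in the graph transformer
```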
**Graph Transformer**: Then, after \(Z\) \((<N-K)\) modality fusion layers, we deploy Graph Transformer layers to aggregate contextual information from the other comments in the discussion. Given that the tokens in \(B_{c}\) encode rich inter-modality information, we innovate by leveraging these representations to represent the nodes in our discussion graph. Using \(b_{c}^{0}\in B_{c}\) to represent each comment \(c\in D\), we aggregate each embedding using a transformer model to incorporate discussion context from other comments. Our novel utilization of bottleneck tokens to represent graph nodes allows the modality models to maintain a modality-specific pooler token ([CLS]) as well as a graph context representation (\(b_{0}\)).

Since transformer layers are position-independent, we include two learned structure encodings. The first is Centrality Encoding, denoted \(z\), which encodes the degree of nodes in the graph [22]. Since social media discussion graphs are directed, the degree of a comment is equivalent to the number of replies it receives plus one for the parent node. We implement this mechanism as

\[h_{c}^{(0)}=b_{c}^{0}+z_{deg(c)},\]

where \(h_{c}^{(0)}\) is the initial embedding of \(b_{c}^{0}\) in the graph and \(z_{deg(c)}\) is a learned embedding corresponding to the degree \(deg(c)\) of the comment.

The second structure encoding is Spatial Encoding, denoted \(s_{(c,v)}\), which encodes the structural relationship between two nodes \(c,v\) in the graph. This encoding is added as an attention bias term during the self-attention mechanism. That is, we compute the self-attention \(A_{(c,v)}\) between nodes \(c,v\) as

\[A_{(c,v)}=\frac{(h_{c}W_{Q})(h_{v}W_{K})^{\top}}{\sqrt{d}}+s_{(c,v)},\]

where \(W_{Q}\) and \(W_{K}\) are learned weight matrices and \(d\) is the hidden dimension of \(h\). In previous graph transformer networks, \(s_{(c,v)}\) is encoded as a learned embedding representing the shortest distance between \(c,v\) in the graph [22, 23]. However, this metric does not lend itself well to the hierarchical structure of discussions, where equivalent distances can represent different interactions. This is best seen in the example discussion illustrated in Figure 2. When utilizing the shortest distance to encode structure, the distance between nodes \(a\) and \(c\) is the same as the distance between nodes \(b\) and \(d\) in this graph. However, \(b\) and \(d\) represent direct replies to the same parent post, whereas \(a\) is two comments underneath \(c\).

Figure 2: Example Discussion Structure. Each node in the discussion tree represents a comment. The shortest distance between (a, c) and (b, d) is equivalent, demonstrating a lack of expressiveness towards hierarchy.

To account for this, we propose a novel hierarchical spatial encoding based on Cantor's pairing function, which uniquely maps pairs of numbers to a single number, \(\mathbb{N}\times\mathbb{N}\rightarrow\mathbb{N}\). We utilize this function to encode structure as follows: given comments \(a\) and \(b\), we first calculate the number of hops upward \(u_{(a,b)}\) and hops downward \(d_{(a,b)}\) to reach \(b\) from \(a\). In the example above, the distance between \(a\) and \(d\) is \(u_{(a,d)}=2,\ d_{(a,d)}=1\). We then compress both numbers into a single index using the proposed position-independent variant of Cantor's pairing:

\[s_{(c,v)}=s_{(v,c)}=Cantor(u,d)=\frac{(u+d)(u+d+1)}{2}+\min(u,d).\]

This maps \(\mathbb{N}\times\mathbb{N}\rightarrow\mathbb{N}\) such that \(s_{(c,v)}=s_{(v,c)}\), and we utilize the resulting index to look up learned spatial embeddings in the self-attention mechanism.
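A minimal sketch of this hierarchical spatial encoding is given below; the reply-tree representation and embedding size are illustrative:

```python
import torch
import torch.nn as nn

def hops(tree_parent, a, b):
    """Hops up (u) and down (d) from comment a to comment b in a reply tree.

    tree_parent[i] is the parent index of comment i (-1 for the root post).
    """
    def ancestors(n):
        path = [n]
        while tree_parent[n] != -1:
            n = tree_parent[n]
            path.append(n)
        return path
    pa, pb = ancestors(a), ancestors(b)
    common = next(x for x in pa if x in pb)       # lowest common ancestor
    return pa.index(common), pb.index(common)     # (u, d)

def cantor_symmetric(u, d):
    """Position-independent Cantor pairing: s_(c,v) = s_(v,c)."""
    return (u + d) * (u + d + 1) // 2 + min(u, d)

# Shape of the Figure 2 example: 0 is the post; 1 and 2 reply to 0; 3 replies to 2.
parent = [-1, 0, 0, 2]
spatial_emb = nn.Embedding(64, 1)                 # one learned bias per pair index
u, d = hops(parent, 3, 1)                         # two hops up, one hop down
bias = spatial_emb(torch.tensor(cantor_symmetric(u, d)))
print(u, d, bias.shape)
```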
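After \(G\) graph transformer layers, the final representation \(h_{c}^{G}\) replaces \(b_{c}^{0}\) for the next set of \(Z\) modality fusion layers. We denote the combination of \(Z\) Modality Fusion and \(G\) Graph Transformer layers as a Graph Multi-Modal Fusion module. Finally, after \((N-K)/Z\) Graph Multi-Modal Fusion modules, we predict logits using the final embedding of \(b_{c}^{0}\) and the [CLS] embedding of \(t_{c}\). This novel interweaving of graph transformer layers and fusion layers through modality bottleneck tokens ensures that the fusion models create representations that are grounded in the discussion context. Notably, this differs from previous approaches that utilize graph neural networks, which sequentially process individual comments before applying a set of graph layers.

### HatefulDiscussions Dataset

In order to train our proposed architecture, we require a dataset that comprises complete multi-modal discussion graphs. Furthermore, we wanted to train our model on large discussions comprising many comments from different communities. However, many previous datasets used by other works [14, 15, 16] consist only of individual labelled comments and are predominantly text-only. To address this issue, we curated an expansive novel benchmark comprising multiple human-annotated datasets, which we augment to include complete multi-modal discussion graphs. Our final dataset comprises 8266 discussions with 18359 labelled comments, originating from 850 different communities.

The first type of hate speech included in our benchmark is Identity-Directed and Affiliation-Directed Abuse. To capture this specific type of hate speech, we retrieved labelled examples from the Contextual Abuse Dataset (CAD) developed by Vidgen et al. (2021). According to the authors, Identity-Directed abuse refers to content containing negative statements against a social category, encompassing fundamental aspects of individuals' community and socio-demographics, such as religion, race, ethnicity, and sexuality, among others. On the other hand, Affiliation-Directed abuse is defined as content expressing negativity toward an affiliation, which is described as a voluntary association with a collective, such as political affiliation and occupations [13]. We selected both these forms of abuse from CAD due to the similarity in their definitions--abuse that is directed at aspects of a person's identity rather than at a specific individual. Next, the inclusion of slurs forms the second type of hateful content within our dataset, sampled from the Slurs corpus [14].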
After \(G\) graph transformer layers, the final representation \(h_{c}^{G}\) replaces \(b_{c}^{0}\) for the next set of \(Z\) modality fusion layers. We denote the combination of \(Z\) Modality Fusion and \(G\) Graph Transformer layers as a Graph Multi-Modal Fusion module. Finally, after \((N-K)/Z\) Graph Multi-Modal Fusion modules, we predict logits using the final embedding of \(b_{c}^{0}\) and the [CLS] embedding of \(t_{c}\). This interweaving of graph transformer layers and fusion layers through modality bottleneck tokens ensures that the fusion models create representations that are grounded in the discussion context. Notably, this differs from previous approaches that utilize graph neural networks, which sequentially process individual comments before applying a set of graph layers.

Figure 1: Multi-Modal Discussion Transformer.

Figure 2: Example Discussion Structure. Each node in the discussion tree represents a comment. The shortest distance between (a, c) and (b, d) is equivalent, demonstrating a lack of expressiveness towards hierarchy.

### HatefulDiscussions Dataset

In order to train our proposed architecture, we require a dataset that comprises complete multi-modal discussion graphs. Furthermore, we want to train our model on large discussions comprising many comments from different communities. However, many previous datasets used by other works [14, 15, 16] consist only of individual labelled comments and are predominantly text-only. To address this issue, we curated an expansive novel benchmark from multiple human-annotated datasets, which we augment to include complete multi-modal discussion graphs. Our final dataset comprises 8,266 discussions with 18,359 labelled comments, originating from 850 different communities.

The first type of hate speech included in our benchmark is Identity-Directed and Affiliation-Directed Abuse. To capture this type of hate speech, we retrieved labelled examples from the Contextual Abuse Dataset (CAD) developed by Vidgen et al. (2021). According to the authors, Identity-Directed abuse refers to content containing negative statements against a social category, encompassing fundamental aspects of individuals' community and socio-demographics, such as religion, race, ethnicity, and sexuality, among others. Affiliation-Directed abuse, on the other hand, is defined as content expressing negativity toward an affiliation, described as a voluntary association with a collective, such as political affiliation and occupations [13]. We selected both these forms of abuse from CAD due to the similarity in their definitions--abuse that is directed at aspects of a person's identity rather than at a specific individual directly.

Next, the inclusion of slurs forms the second type of hateful content within our dataset, sampled from the Slurs corpus [14]. It is crucial to acknowledge that historically derogatory slurs can undergo re-appropriation by certain communities, such as the n-slur in African American Vernacular, transforming them into non-derogatory terms. We therefore hypothesize that understanding the contextual nuances surrounding the use of slurs is essential for distinguishing between non-derogatory and derogatory instances. The last type of hateful content we include is person-directed abuse: hate speech or offensive content that specifically targets and attacks an individual or a group of individuals. To include examples of this kind of abuse that require context to understand, we source labelled examples from the Learning to Intervene (LTI) dataset by Qian et al. (2019).

For each labelled comment, we retrieved the corresponding complete discussion tree using the Pushshift Reddit API and downloaded all associated images2. To refine our dataset, we filtered out conversations without any images and constrained comments to a maximum degree of three and conversations to a maximum depth of five. By trimming the size of the discussion tree, we reduce computational complexity and focus the discussion on the most relevant parts of the conversation [15]. It is important to note that the majority of the images appear in the discussion context, such as the root post, rather than directly attached to labelled comments. In our case, only 424 labelled instances have an image attached, but all 8,266 discussions have an image in the prior context. Therefore, comment-only multi-modal models (only text + images) are unsuitable for evaluation on this task.

Footnote 2: At the time of writing, Reddit has suspended all access to the Pushshift API; however, our dataset contains complete metadata and graphs.

In order to train our models, we map each of the retrieved labels to either Hateful or Normal and treat the problem as binary classification. The final distribution of each label can be seen in Table 1.

Table 1: Label Distribution of the HatefulDiscussions Dataset

| Label | Count |
| --- | --- |
| Derogatory Slur (DEG) | 4297 |
| Not Derogatory Slur (NDG) | 2401 |
| Homonym (HOM) | 364 |
| LTI Normal | 4116 |
| LTI Hate | 1313 |
| CAD Neutral | 4892 |
| CAD Identity Directed Abuse | 701 |
| CAD Affiliation Directed Abuse | 275 |
| Normal | 11773 |
| Hateful | 6586 |

## Results

### Experimental Setup

In our experiments, we conduct a 7-fold stratified cross-validation (equivalent to a 14% test split) with a fixed seed (1) and report the average performance for each model. Using 7 folds rather than 10 allows for a larger diversity of labels within each fold. We report overall accuracy (Acc.) and class-weighted Precision (Pre.), Recall (Rec.) and F1 to account for label imbalance. In all results, * denotes statistical significance using Student's t-test with p-value \(<0.05\). Unless otherwise noted, the default model hyperparameter configuration and selection process we used for mDT can be found in our supplemental material.
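For reference, a minimal sketch of this evaluation protocol using scikit-learn; `train_and_predict` is a placeholder standing in for fitting mDT (or any baseline) on one fold:

```python
# 7-fold stratified cross-validation with class-weighted metrics,
# averaged across folds, as described in the experimental setup.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def cross_validate(X, y, train_and_predict, seed=1):
    skf = StratifiedKFold(n_splits=7, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        y_pred = train_and_predict(X[train_idx], y[train_idx], X[test_idx])
        pre, rec, f1, _ = precision_recall_fscore_support(
            y[test_idx], y_pred, average="weighted")
        scores.append((accuracy_score(y[test_idx], y_pred), pre, rec, f1))
    return np.mean(scores, axis=0)  # Acc., Pre., Rec., F1
```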
### Text-only Methods vs. Discussion Transformers

To assess the performance of mDT, we compared it against several state-of-the-art hate speech detection methods. For comment-only approaches, we evaluated BERT-HateXplain [10], Detoxify [1], and RoBERTa Dynabench [13]. We also compared mDT against a BERT model trained on the training set of HatefulDiscussions, referred to as BERT-HatefulDiscuss. To compare against previous graph-based approaches, we evaluated the text-only Graphormer model proposed by [13]. Our results (Table 2) show that mDT outperforms all evaluated methods across all metrics. Specifically, mDT achieves 14.5% higher accuracy and a 21% higher F1 score than Graphormer, indicating that our approach to including graph context is a significant improvement over the previous approach that incorporates this modality. Although the performance gap between BERT-HatefulDiscuss and mDT is narrower, we still achieve superior performance against all text-only methods. In particular, we observed F1 score improvements of 20%, 13%, and 6.3% over Detoxify, BERT-HateXplain, and RoBERTa Dynabench, respectively.

### Effect of Bottleneck Size

Next, we investigated the impact of increasing the number of bottleneck interaction tokens (\(B\)) in mDT, which are added during the modality fusion step. Adding more bottleneck tokens reduces the amount of compression required by the BERT and ViT models to exchange information. Table 3 presents the results, where we find that using four bottleneck tokens leads to the best performance. We also observe a slight drop in performance when the number of bottleneck tokens grows beyond four, indicating the importance of compression when exchanging modality encodings between models. We attribute this reduction to the importance of compressed information for representing comments in the graph transformer network.

### Effect of Constrained Graph Attention

A recent study by Hebert et al. explored the limitations of graph transformers for hate speech prediction, finding that discussion context can sometimes mislead graph models into making incorrect predictions [13]. In light of this, we explore the impact of constraining the attention mechanism of our graph transformer network to only attend to nodes within a maximum number of hops from a source node. We report the results in Table 4 and find that constraining the attention window to 5 hops achieves better performance. However, the gains from the 5-hop constraint were lost when we further constrained attention to only 2 hops. Our findings suggest that a balance is required when constraining graph attention for optimal performance.

### Effect of Fusion Layers

Next, we investigate the effect of increasing the number of Multi-Modal Fusion layers (\(Z\)) in our mDT model. To ensure full utilization of the 16 available layers, any unused layers were allocated to the Initial Pre-Fusion step (\(K\)). Our results, presented in Table 5, indicate that utilizing 12 fusion layers leads to the best performance. Interestingly, the performance gains did not follow a linear trend with the number of fusion layers: 8 fusion layers outperformed 10 layers, but were still inferior to 12 layers. We believe that further research in this area should explore the potential benefits of scaling beyond 12 fusion layers using larger modality models.
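As a concrete illustration of the attention-window constraint examined above, the following sketch builds an additive mask that blocks attention beyond a maximum hop distance; the pairwise `hops` matrix is assumed to be precomputed per discussion:

```python
# Hop-constrained graph attention: nodes farther than `max_hops` apart
# in the discussion tree receive -inf bias and are zeroed by the softmax.
import torch

def hop_mask(hops: torch.Tensor, max_hops: int = 5) -> torch.Tensor:
    # hops: (n, n) matrix of pairwise tree distances (u + d)
    mask = torch.zeros(hops.shape, dtype=torch.float)
    mask[hops > max_hops] = float("-inf")
    return mask  # added to attention scores before the softmax
```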
Table 2: Performance of mDT against Text-Only Methods. * denotes statistical significance (p-value \(<0.05\))

| Method | Acc. | Pre. | Rec. | F1 |
| --- | --- | --- | --- | --- |
| BERT-HateXplain | 0.742 | 0.763 | 0.742 | 0.747 |
| Detoxify | 0.687 | 0.679 | 0.696 | 0.677 |
| RoBERTa Dynabench | 0.811 | 0.822 | 0.811 | 0.814 |
| BERT-HatefulDiscuss | 0.858 | 0.858 | 0.858 | 0.858 |
| Graphormer | 0.735 | 0.594 | 0.759 | 0.667 |
| mDT (ours) | **0.880*** | **0.880*** | **0.880*** | **0.877*** |

Table 3: Effect of Bottleneck Size on mDT Performance

| Bottleneck Size | Acc. | Pre. | Rec. | F1 |
| --- | --- | --- | --- | --- |
| 4 | **0.880** | **0.880*** | **0.880** | **0.877** |
| 8 | 0.863 | 0.864 | 0.863 | 0.863 |
| 16 | 0.864 | 0.850 | 0.853 | 0.852 |
| 32 | 0.874 | 0.872 | 0.874 | 0.872 |

Table 4: Effect of Constraining Graph Attention

| Attention Window | Acc. | Pre. | Rec. | F1 |
| --- | --- | --- | --- | --- |
| 2 | 0.866 | 0.866 | 0.866 | 0.866 |
| 5 | **0.880*** | **0.880*** | **0.880*** | **0.877*** |
| \(\infty\) | 0.870 | 0.861 | 0.850 | 0.855 |

Table 5: Effect of Fusion Layers

| Total Fusion Layers | Acc. | Pre. | Rec. | F1 |
| --- | --- | --- | --- | --- |
| 6 | 0.868 | 0.856 | 0.854 | 0.855 |
| 8 | 0.872 | 0.871 | 0.844 | 0.855 |
| 10 | 0.866 | 0.867 | 0.866 | 0.862 |
| 12 | **0.880*** | **0.880*** | **0.880*** | **0.877*** |

### Effect of Images

We also investigated the impact of removing images in mDT. Our findings (Table 6) support the hypothesis that images provide crucial contextual information for detecting hateful content. Specifically, excluding images from mDT led to a 4.8% decrease in accuracy and a 4.9% decrease in the F1 score. It is worth noting that even without images, mDT outperformed Graphormer (Table 2), indicating that our approach provides substantial gains over previous graph-based methods for hate speech detection beyond just including images. The results of this experiment underscore the importance of considering multiple modalities for hate speech detection and suggest that future research should explore further improvements by leveraging additional types of contextual information.

### Qualitative Analysis: BERT vs. mDT

We next perform a qualitative comparison of the text-only BERT model and the proposed mDT architecture. The text-only BERT model misclassifies 385/2717 test instances. Upon passing those test instances through mDT, we found that it corrected BERT's labels in 161/385 instances. We further note that BERT and mDT predictions disagree on 264 test instances, of which mDT is correct on 161 (61%). Figure 3 shows a fine-grained distribution of misclassified test examples by class. Using mDT results in an overall decrease in misclassifications (385 \(\rightarrow\) 327), with a major reduction in false positives (fewer misclassifications for the 'Not Hateful' class). However, we notice that both BERT and mDT struggle to detect the presence of hate speech in derogatory slur (DEG) and identity-directed (IdentityDirectedAbuse) comments. Table 7 shows some hateful test instances misclassified by the two models. We note that the main text under consideration (an individual comment) may not exhibit hate speech on its own; however, considering it with the context (the rest of the discussion thread plus images) helps mDT correctly classify the test instances as hate speech.
Consider the first example in Table 7. The word "tranny" is common shorthand for "transmission" on social media, but considering the context, this is clearly an abusive discussion directed toward the transgender community. This is further contextualized by the accompanying image in the discussion, providing evidence for the hateful interpretation (Figure 4). Note that there are images present within the discussions for each example, likely providing similar contextual evidence. We also found some intriguing test examples where adding context proved misleading for the model, while BERT confidently classified the main text as hateful. For instance, in the last example in Table 7, the comments in the context are largely non-abusive, leading the model to misinterpret the primary text as non-abusive. This suggests that while adding context results in a net decrease in misclassifications, predominantly neutral context can also fool the model. This is likely because we emphasize the discussion context when we obtain the final classification by averaging the text embedding logit with the discussion node embedding (\(b_{c}^{0}\)).

Table 6: Effect of Excluding Images

| Usage of Images | Acc. | Pre. | Rec. | F1 |
| --- | --- | --- | --- | --- |
| With Images | **0.880*** | **0.880*** | **0.880*** | **0.877*** |
| Without Images | 0.832 | 0.835 | 0.822 | 0.828 |

Figure 3: Fine-grained distribution of BERT and mDT misclassifications. (Acronyms as in Table 1)

Figure 4: Example of an image present in the discussion context, seen only by mDT, contextualizing comments as potentially hateful.

## Future Work

While we find mDT to be an effective method for analyzing discussions on social media, we have pointed out how it is challenged when the discussion context contains comments that are predominantly neutral. To address this, one possible extension is a first-stage text ranker that computes semantic relevance between comments in order to filter unrelated messages. We also note that there are still many contextual signals in social media discussions beyond text, images, and discussion structure that remain untapped. Incorporating named entity recognition techniques to integrate deeper analysis of real-world knowledge is an avenue for adding important disambiguating signals [11] that we feel would be well supported by the contextual nature of mDT. Perhaps the most exciting step forward would be to expand our analysis of individual communities towards learning indicators of their propensity for hateful conduct. Notably, we would be especially interested in trying to effectively capture the culture of specific platforms containing diverse communities, including marginalized communities which exchange unique reclaimed vernacular that should not be misinterpreted as hate. In addition to the earlier example of the African American community, there are special usages as well that arise among platforms supporting LGBTQ users. The contextual nature of mDT captured by graph transformers provides much promise for advancing these extensions. Finally, the versatility of mDT's core mechanisms makes it a promising tool for a wide range of applications beyond hate speech detection. We feel that the approach could be applied to other domains such as online product reviews [1], political discourse analysis [13], and popularity analysis [14, 15], where understanding the discussion context is critical for accurate interpretation.
## Conclusion

In this paper, we presented a holistic approach to detecting hate speech in social media using our mDT model. Our model leverages graph transformers together with text and image transformers to reason about entire threads of discussion. Core to our approach is the introduction of hierarchical spatial encodings and the coupling of text, image, and graph transformers through a novel bottleneck mechanism, producing an integrated solution that considers all aspects of social discussions. We also present a new dataset of complete multi-modal discussions containing a wide spectrum of hateful content, enabling future work on robust graph-based solutions for hate speech detection. One significant contribution is demonstrating how discussion-oriented multi-modal analysis can improve the detection of anti-social behaviour online. Our experimental results, compared with several key competitors, demonstrate the quantitative improvements stemming from our method. Notably, we see a 21% improvement in F1 over previous methods that include discussion context, such as [1]. Furthermore, our initial qualitative analysis of multiple examples demonstrates the valuable impact of our holistic multi-modal approach. Our analysis also provides insights into the challenges of delivering social good in our current online environment from a multi-modal viewpoint. Beyond enhanced holistic discussion analysis, our work also enables a rich understanding of conversational dynamics, enabling community-centric prediction. This is largely powered by our improvements to graph transformers, a method gaining momentum in AI molecular modelling, revealing their potential to expressively capture the relationships in complex multi-modal discussions. We hypothesize that this expressiveness in capturing context can aid in disambiguating false positives, preventing further marginalization of communities, and can aid in the proactive mitigation of hateful behaviours. We also believe that our methods can aid researchers in studying the social dynamics of discussions and motivate further graph-based multi-modal approaches. Overall, we believe that our approach presents a promising path forward for addressing the issue of hate speech on social media and encourages the exploration of holistic graph-based multi-modal models to interpret online discussions. We believe that our research can help foster healthier and more inclusive environments, improving mental health for individuals online.

Table 7: Text instances misclassified by BERT and mDT. Note: the ground truth for all examples shown here is "Hateful". We have redacted chunks of text from the context in the interest of space; redacted content is shown by [...].

| Primary Text | Context (only seen by mDT) | BERT pred. | mDT pred. |
| --- | --- | --- | --- |
| Now imagine if virtuous keyboard jsws had their way? Their mascot should be Ralph Wiggum. | [...] Preferred pronouns: go/_slur_/yourself [...] If the Chinese in my corner of NZ only sold to Chinese they'd starve by Thursday. [...] They just wanna _b-slur_ about something because their own life sucks. | Not Hateful | Hateful |
| "That _n-slur_ was on PCP Johnson" Lmao | [...] Its' a common pattern when dealing with these shootings. * Kill black dude [...] * Wingnut welfare kicks in as racist _f-slur_ create gofundme of over half a million _f-slur_ dollars for cops family [...] | Not Hateful | Hateful |
| whoa brah.. leave my tranny out of this tardeh | [...] That is not even close to what feminism is. What you are talking about is radical Feminism [...] Got banned from my sexual minority sub-reddit (r/bisexual) for not believing that all bisexuals should actually be pansexuals [...] | Hateful | Not Hateful |
2302.13594
Leveraging Video Coding Knowledge for Deep Video Enhancement
Recent advancements in deep learning techniques have significantly improved the quality of compressed videos. However, previous approaches have not fully exploited the motion characteristics of compressed videos, such as the drastic change in motion between video contents and the hierarchical coding structure of the compressed video. This study proposes a novel framework that leverages the low-delay configuration of video compression to enhance the existing state-of-the-art method, BasicVSR++. We incorporate a context-adaptive video fusion method to enhance the final quality of compressed videos. The proposed approach has been evaluated in the NTIRE22 challenge, a benchmark for video restoration and enhancement, and achieved improvements in both quantitative metrics and visual quality compared to the previous method.
Thong Bach, Thuong Nguyen Canh, Van-Quang Nguyen
2023-02-27T09:00:29Z
http://arxiv.org/abs/2302.13594v1
# Leveraging Video Coding Knowledge for Deep Video Enhancement

###### Abstract

Recent advancements in deep learning techniques have significantly improved the quality of compressed videos. However, previous approaches have not fully exploited the motion characteristics of compressed videos, such as the drastic change in motion between video contents and the hierarchical coding structure of the compressed video. This study proposes a novel framework that leverages the low-delay configuration of video compression to enhance the existing state-of-the-art method, BasicVSR++. We incorporate a context-adaptive video fusion method to enhance the final quality of compressed videos. The proposed approach has been evaluated in the NTIRE22 challenge, a benchmark for video restoration and enhancement, and achieved improvements in both quantitative metrics and visual quality compared to the previous method.

## 1 Introduction

With the increasing demand for high-quality video transmission over the Internet, video compression has become essential for efficiently transmitting videos over limited bandwidth. This demand has driven the development of video compression standards such as H.265/HEVC [1] and beyond. However, compressed videos suffer from unavoidable compression artifacts. As a result, there is a growing interest in the research community in enhancing the quality of compressed videos. Several studies have proposed methods to improve the quality of individual frames in videos [2, 3] as well as to leverage temporal information between frames [4, 5, 6, 7, 8, 9]. Most existing methods focus on the architecture design of the model, which typically involves (i) designing the backbone extraction module using CNNs or Transformers, (ii) designing the propagation module to effectively capture information flow between frames, and (iii) designing the enhancement module as a post-processing step to improve the quality of the output video. However, there is often little emphasis on incorporating prior knowledge about the video content, such as motion information, or about the compression algorithm. This represents an untapped potential for improving the overall quality of the video compression process. In this study, we propose several methods to enhance the performance of BasicVSR++, a state-of-the-art video super-resolution method [10]. Our approach begins by examining BasicVSR++'s performance with varying numbers of input frames, taking into account the motion information of the content. As the compressed video uses the HEVC low-delay configuration, the first frame (also known as the Intra frame) has significantly higher quality than the others. To take advantage of this, we train a separate network, called Intra frame BasicVSR++, to improve the quality of the first frame. Finally, we introduce an adaptive mechanism that combines multiple reconstructed instances with different input sequence lengths to obtain the final enhanced output. The experiments demonstrate that the proposed framework not only leverages the low-delay configuration of video compression but also incorporates context-adaptive video fusion to enhance the final quality of compressed videos. These results demonstrate the potential of incorporating domain-specific knowledge into deep learning models for advancing the state-of-the-art in compressed video quality enhancement.
## 2 Performance Analysis of BasicVSR++

BasicVSR++ [10] is a state-of-the-art video super-resolution method that enhances video quality through a combination of frame propagation and alignment techniques. While BasicVSR++ has shown impressive results in enhancing video quality, it does not take into account the unique characteristics of compressed video. In a compressed video [1], a frame can be an intra or inter frame. Intra-frame compression only uses information from the current image, while inter-frame compression utilizes information from previously encoded frames to reduce redundancy. The NTIRE22 challenge encoded video using a low-delay configuration [1], as shown in Figure 1, with a group-of-pictures size of 4. As a result, the compressed video has only one intra frame, at significantly higher quality than the other inter frames. The quality of frames in the compressed video is therefore highly varied, but BasicVSR++ does not take this into account. In practice, the entire video is fed as input to the model during testing, which may not be the optimal choice.

We investigate the effect of varying input video frame lengths on the performance of BasicVSR++. As shown in Fig. 2-a, the pre-trained network's performance varies significantly depending on the frame length, with shorter frame lengths demonstrating higher performance for the first few frames. Interestingly, we also noticed that this phenomenon did not occur when the input of the trimmed video started from the 32nd frame, i.e., when the intra frame is not included. This finding suggests that the network is better able to exploit the high quality of the intra frame with a smaller frame length. For later frames, our experiments demonstrated that using the full frame length performed better thanks to the temporal dependency in both backward and forward directions. In contrast, trimmed video inputs have limited backward and forward dependency, resulting in lower performance for later frames. This observation suggests that the optimal choice of input frame length may vary depending on the temporal characteristics of the video content. Shorter frame lengths may be more effective for the early frames, while longer frame lengths may be more suitable for later frames that rely on both backward and forward dependencies. Additionally, the effectiveness of the backward and forward dependencies may be limited to a certain temporal range, as seen in Video 208 at shift 32. Examining the 40 test sequences 201-240 in the NTIRE22 dataset, we observe that, for the first 64 frames, enhanced outputs with video inputs trimmed to a length of 186 yield the best performance.

Figure 1: Low-delay configuration of HEVC with a group-of-pictures size of 4, where the configuration has only one intra frame at the 0-th index and a repeated group structure of 4 frames. Full references are only available from the 13-th frame.

Figure 2: Performance variation of BasicVSR++ with respect to the number of input frames. The per-frame PSNR difference between the outputs of trimmed videos with different gaps (\(90,122,154,186\)) and the output of the original video is shown as \(\Delta\) PSNR. The first and second rows display the results with start frame \(0\) and \(32\), respectively.
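A minimal sketch of the probe behind Fig. 2, assuming `enhance` stands in for the pre-trained BasicVSR++ and `frames`/`gt` are aligned arrays of compressed and ground-truth frames:

```python
# Per-frame delta-PSNR between outputs of trimmed and full-length inputs.
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def delta_psnr_curves(frames, gt, enhance, lengths=(90, 122, 154, 186)):
    full = enhance(frames)                    # output of the full video
    curves = {}
    for n in lengths:
        trimmed = enhance(frames[:n])         # output of the trimmed video
        curves[n] = [psnr(trimmed[t], gt[t]) - psnr(full[t], gt[t])
                     for t in range(len(trimmed))]
    return curves                             # delta-PSNR per frame
```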
## 3 Proposed Method

### Intra frame BasicVSR++

To further leverage the superior quality of the intra frame, we introduce a new network called Intra frame BasicVSR++. In order to do so, we created a new intra-frame video dataset by cutting the original videos into multiple non-overlapping 30-frame segments and encoding them using HEVC with the same low-delay configuration as the NTIRE22 dataset. The resulting dataset contains videos with only one intra frame and multiple inter frames. We then divided the compressed videos into training and testing datasets for network training. By utilizing this configuration, we ensured that the intra frame is of higher quality than the inter frames, reflecting the reality of compressed videos. During training, we fine-tuned the Intra frame BasicVSR++ network using the intra frame as the first frame of each segment. This allowed the network to learn to enhance the high-quality intra frame more effectively, resulting in a more accurate and efficient network. In this way, the Intra frame BasicVSR++ network is designed to specifically target the improvement of the first frame's quality, while BasicVSR++ improves the quality of the entire video. As shown in Fig. 3, we observed that the Intra frame BasicVSR++ network improves the performance on the intra frame in most cases, such as videos 201 and 202. However, this improvement is not universal and may not hold for sequences with high frame rates and slow motion, such as video 226. This may be due to the limited amount of information in the intra frame in such cases, where the network may struggle to extract and propagate useful features. Nonetheless, these results suggest that the Intra frame BasicVSR++ network can be an effective tool for improving the performance of compressed video enhancement for most types of content.

### Adaptive Context-Aware Fusion

From our analysis, it is evident that BasicVSR++ benefits from an adaptive input frame length and can be further improved by Intra frame BasicVSR++. However, it is also important to note that this improvement does not apply to all types of video content. To address this issue, we propose a heuristic that separates out cases where the frame rate is high and the motion is slow. This is achieved by comparing the gradient of the average frame with a given threshold. Using this threshold, we can determine whether the video contains a high frame rate and slow motion, and take appropriate measures to optimize the final performance. First, the average frame is obtained as follows:

\[\bar{f}=\sum_{i\,:\,i\times m<N}f_{i\times m}, \tag{1}\]

where \(m\) is a scaling factor proportional to the input video frame rate. Specifically, we set \(m=4\) for videos with a frame rate below 30 fps and \(m=8\) for those with a frame rate above 30 fps. Next, we compare the gradient of the average frame to a given threshold:

\[\nabla(\bar{f})=||\nabla_{x}(\bar{f})||+||\nabla_{y}(\bar{f})||<\tau, \tag{2}\]

where \(\nabla_{x}\) and \(\nabla_{y}\) denote the gradient in the horizontal and vertical directions of a given frame \(f\), and \(\tau\) is a threshold of value 2300. The value of \(\tau\) can be normalized based on the number of pixels, as shown in Fig. 4.
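A numpy sketch of this static-content test, assuming `frames` is an \((N,H,W)\) luma array and taking the gradient norms as L1 sums (the exact norm is not specified above, so this is an assumption):

```python
# Static-content heuristic of Eqs. (1)-(2): sample every m-th frame,
# sum them, and threshold the gradient magnitude of the result.
import numpy as np

def is_static(frames: np.ndarray, fps: float, tau: float = 2300.0) -> bool:
    m = 4 if fps < 30 else 8                   # stride from the frame rate
    f_bar = frames[::m].sum(axis=0)            # Eq. (1)
    gx = np.abs(np.diff(f_bar, axis=1)).sum()  # ||grad_x(f_bar)||
    gy = np.abs(np.diff(f_bar, axis=0)).sum()  # ||grad_y(f_bar)||
    # tau may need normalizing by the pixel count, as noted in the text.
    return (gx + gy) < tau                     # Eq. (2)
```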
We propose a novel video fusion mechanism called Adaptive Context-Aware Fusion, as shown in Fig. 4. The method involves enhancing an input video using three different approaches: full BasicVSR++, short BasicVSR++ on the first 154 frames, and Intra frame BasicVSR++ on the first 122 frames. Depending on the content of the video, an adaptive fusion step selects the first frame from either short BasicVSR++ or Intra frame BasicVSR++. For the subsequent 63 frames, we select frames from the short BasicVSR++ set, and the remaining frames are extracted from the full BasicVSR++ set.

Figure 3: Performance of the fine-tuned Intra frame BasicVSR++ without the Adaptive Context-Aware Fusion mechanism.

### Loss Function

To fine-tune the BasicVSR++ and Intra frame BasicVSR++ networks, we used a weighted sum of three loss components: (i) Charbonnier loss [11], (ii) total variation (TV) loss, and (iii) temporal gradient (TG) loss. The temporal gradient loss captures the difference between the ground-truth and output temporal sequences: we subtract two consecutive frames in each sequence to obtain the temporal gradient sequence and compute the loss between the resulting sequences. The loss weights were optimized using grid search. The final loss function is given by:

\[\mathcal{L}_{\mathrm{Final}}=\mathcal{L}_{\mathrm{Char}}+1\times 10^{-3}\,\mathcal{L}_{\mathrm{TG}}+1\times 10^{-4}\,\mathcal{L}_{\mathrm{TV}}, \tag{3}\]

where \(\mathcal{L}_{\mathrm{Char}}\) is the Charbonnier loss, \(\mathcal{L}_{\mathrm{TV}}\) is the total variation loss, and \(\mathcal{L}_{\mathrm{TG}}\) is the temporal gradient loss.

## 4 Experiments

### Datasets

For the NTIRE 2022 Challenge, we used the original LDV dataset [12], which consists of 240 videos, as our primary training set. To increase our training data, we also utilized the LDV 2.0 dataset, which contains an additional 90 videos. We split the LDV 2.0 videos into six sets, each containing 15 videos. Two of these sets were used as our validation and test sets, respectively. In splitting the videos, we aimed to keep the diversity of the videos in each set, in terms of content, frame rate, and other factors, as similar as possible. All videos in the LDV and LDV 2.0 datasets, as well as the splits for the NTIRE 2021 and NTIRE 2022 Challenges, are publicly available at [https://github.com/RenYang-home/LDV_dataset](https://github.com/RenYang-home/LDV_dataset).

### Training Details

We employed the Adam optimizer [13] with a learning rate of \(2\times 10^{-5}\) and utilized the Cosine Restart scheduler [14] with a period of 10,000 iterations. To ensure a stable optimization process, we linearly increased the learning rate for the first 10% of iterations.

#### 4.2.1 Fine-tuning BasicVSR++

Due to computational limitations, we fine-tuned only the upsample layer of the pre-trained BasicVSR++ network. We found that increasing the input frame length from 30 to 60 frames led to a 0.03 dB improvement in model performance. The network was fine-tuned for 50,000 iterations.

#### 4.2.2 Fine-tuning Intra frame BasicVSR++

A new dataset is created by trimming the original videos into multiple segments, each consisting of 30 frames without overlap. The video segments are encoded using HEVC with a low-delay profile, resulting in approximately 16,000 training samples. For the Intra frame BasicVSR++ network, only the segments from the first 200 videos are used to train the model, and the last 40 videos are used for testing.

### Ensembling with Test Time Augmentation

In our study, we perform ensembling with test-time augmentation (TTA) by generating eight input variations by flipping and rotating the input sequences in the spatial dimension. We then use our proposed framework shown in Fig. 4 to enhance each variation, and post-process the corresponding output by flipping and rotating it back to its original orientation. Finally, we obtain the final output by averaging the outputs of all variations. This approach helps to reduce the impact of input variability and improve the overall performance of the model.
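A sketch of this 8-way TTA loop; `enhance` stands in for the full adaptive fusion framework of Fig. 4, and the 90/270-degree rotations assume the model can handle the transposed resolution:

```python
# Test-time augmentation: enhance 8 flipped/rotated variants, map each
# output back to the original orientation, and average.
import torch

def tta_enhance(video: torch.Tensor, enhance) -> torch.Tensor:
    # video: (T, C, H, W)
    outputs = []
    for k in range(4):                          # 0/90/180/270 degree turns
        for flip in (False, True):
            x = torch.rot90(video, k, dims=(-2, -1))
            if flip:
                x = torch.flip(x, dims=(-1,))
            y = enhance(x)
            if flip:                            # undo the flip first,
                y = torch.flip(y, dims=(-1,))
            outputs.append(torch.rot90(y, -k, dims=(-2, -1)))  # then turn
    return torch.stack(outputs).mean(dim=0)
```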
Figure 4: General framework of the adaptive context-aware mechanism for video fusion. The mechanism detects static video content using the gradient of the average frame.

### Experimental Results

We participated in the NTIRE22 challenge as team **OCL-VCE** and submitted our results on the test set. In Table 1, we present the performance of our method and BasicVSR++ on the test set. Our method achieved a PSNR of 31.71 dB, higher than the 31.63 dB obtained by BasicVSR++. We further evaluated our framework on the validation set, which consists of 10 videos. Using the pre-trained BasicVSR++ and Intra frame BasicVSR++ models and applying test-time augmentation (TTA), our framework achieved a PSNR of 32.12 dB, surpassing the 31.84 dB achieved without TTA. In addition, without fine-tuning, our framework achieved PSNR scores of 32.02 dB with TTA and 31.86 dB without TTA, improving on the baseline by 0.06 dB and 0.02 dB, respectively. Our results demonstrate the effectiveness of the proposed Adaptive Context-Aware Fusion framework in enhancing the quality of low-delay compressed videos.

## 5 Conclusion

In conclusion, this paper proposes a novel method that leverages the unique characteristics of low-delay video compression algorithms to improve the quality of compressed videos using deep learning techniques. By incorporating this prior knowledge into the state-of-the-art method, BasicVSR++, we achieve a significant improvement in performance over existing methods. Our experimental results on the NTIRE22 challenge validate the effectiveness of our proposed method. This work underscores the importance of incorporating video compression knowledge into deep learning models to further enhance their performance and enable real-world applications.
2303.02968
DwinFormer: Dual Window Transformers for End-to-End Monocular Depth Estimation
Depth estimation from a single image is of paramount importance in the realm of computer vision, with a multitude of applications. Conventional methods suffer from the trade-off between consistency and fine-grained details due to the local-receptive field limiting their practicality. This lack of long-range dependency inherently comes from the convolutional neural network part of the architecture. In this paper, a dual window transformer-based network, namely DwinFormer, is proposed, which utilizes both local and global features for end-to-end monocular depth estimation. The DwinFormer consists of dual window self-attention and cross-attention transformers, Dwin-SAT and Dwin-CAT, respectively. The Dwin-SAT seamlessly extracts intricate, locally aware features while concurrently capturing global context. It harnesses the power of local and global window attention to adeptly capture both short-range and long-range dependencies, obviating the need for complex and computationally expensive operations, such as attention masking or window shifting. Moreover, Dwin-SAT introduces inductive biases which provide desirable properties, such as translational equivariance and less dependence on large-scale data. Furthermore, conventional decoding methods often rely on skip connections which may result in semantic discrepancies and a lack of global context when fusing encoder and decoder features. In contrast, the Dwin-CAT employs both local and global window cross-attention to seamlessly fuse encoder and decoder features with both fine-grained local and contextually aware global information, effectively amending the semantic gap. Empirical evidence obtained through extensive experimentation on the NYU-Depth-V2 and KITTI datasets demonstrates the superiority of the proposed method, consistently outperforming existing approaches across both indoor and outdoor environments.
Md Awsafur Rahman, Shaikh Anowarul Fattah
2023-03-06T08:53:22Z
http://arxiv.org/abs/2303.02968v2
# DwinFormer: Dual Window Transformers for End-to-End Monocular Depth Estimation

###### Abstract

Depth estimation from a single image is of paramount importance in the realm of computer vision, with a multitude of applications. Conventional methods suffer from the trade-off between consistency and fine-grained details due to the local-receptive field limiting their practicality. This lack of long-range dependency inherently comes from the convolutional neural network part of the architecture. In this paper, a dual window transformer-based network, namely DwinFormer, is proposed, which utilizes both local and global features for end-to-end monocular depth estimation. The DwinFormer consists of dual window self-attention and cross-attention transformers, Dwin-SAT and Dwin-CAT, respectively. The Dwin-SAT seamlessly extracts intricate, locally aware features while concurrently capturing global context. It harnesses the power of local and global window attention to adeptly capture both short-range and long-range dependencies, obviating the need for complex and computationally expensive operations, such as attention masking or window shifting. Moreover, Dwin-SAT introduces inductive biases which provide desirable properties, such as translational equivariance and less dependence on large-scale data. Furthermore, conventional decoding methods often rely on skip connections which may result in semantic discrepancies and a lack of global context when fusing encoder and decoder features. In contrast, the Dwin-CAT employs both local and global window cross-attention to seamlessly fuse encoder and decoder features with both fine-grained local and contextually aware global information, effectively amending the semantic gap. Empirical evidence obtained through extensive experimentation on the NYU-Depth-V2 and KITTI datasets demonstrates the superiority of the proposed method, consistently outperforming existing approaches across both indoor and outdoor environments.

Attention Mechanism, Computer Vision, Depth Estimation, Transformer

## I Introduction

Depth information has a wide variety of applications in different fields [1, 2, 3, 4], such as refocusing/bokeh, self-driving vehicles, robot motion, and augmented reality. Depth sensors, e.g., LiDAR and ToF, are expensive and can capture only low-resolution, sparse depth information, which demands the development of single-image (monocular) depth estimation for high-resolution, dense depth information. However, estimating depth from a single image is an ill-posed problem [5], as there is often not enough information to accurately determine the depth of objects in a scene, making it very challenging. In the past years, convolutional neural network (CNN) based deep learning methods have dominated monocular depth estimation [6]. However, CNNs fail to produce globally context-aware pixel-wise predictions due to their intrinsic locality, resulting in a trade-off between fine-detailed and consistent depth maps [7]. Several methods have attempted to mitigate these limitations [8], yet the above-mentioned issues persist. In recent years, transformers [9, 10, 11] have shown promising results for monocular depth estimation. They are notable for their ability to capture long-range dependencies in data. A few years back, the Vision Transformer [12] applied a pure transformer to images, surpassing CNNs and verifying the usability of transformers in computer vision.
Transformers, which have been applied to pixel-wise prediction, have addressed the issue of the local receptive field. Nevertheless, they have also introduced new challenges, including a lack of multi-scale features, the need for higher image resolution than text, and the fact that the computational complexity of self-attention scales quadratically with the image size. Several attempts have since been made to address these issues with hierarchical transformers [13], but they provide limited coverage of the global receptive field (cross-window connection) [14]. Recent studies such as [14] and [15] made progress towards addressing these issues. Nevertheless, these studies present their own challenges: the utilization of global sparse attention in [15] leads to a deterioration in the quality of global features as the image size increases, while the approach presented in [14] suffers from ineffective global attention, resulting in a loss of crucial global information and suboptimal performance. Despite the success of the recent transformer-based methods mentioned above in aligning depth edges with object boundaries, they often struggle to accurately assign depth labels to pixels due to problems in effectively fusing encoder and decoder features. Typically, a skip connection is used to fuse encoder and decoder features, applying a convolution to the features after concatenation. Due to convolution's intrinsic locality, the flow of semantic information from long ranges is restricted, affecting the model's ability to predict the correct depth label for a pixel.

Fig. 1: Graphical abstract of the proposed method.

To mitigate this issue, a skip-attention module is introduced by [16] that integrates encoder-decoder features contextually using local window-based cross-attention. Despite the benefits of a transformer-based fusion of encoder-decoder features, the skip-attention module is still restricted by the limited receptive field of local window attention: it can only incorporate information from a limited range of input pixels, leaving global information unused and potentially limiting its overall performance. In this paper, a transformer-based architecture is proposed for end-to-end depth estimation that addresses the above-mentioned issues of existing approaches. The main contributions of the proposed method can be summarized as follows:

1. A dual window transformer-based network, namely DwinFormer, is proposed for end-to-end monocular depth estimation. Here, dual window self-attention (Dwin-SAT) and cross-attention (Dwin-CAT) transformers are introduced to effectively capture long-range dependencies and local fine-grained details.
2. The proposed Dwin-SAT introduces an effective design of a transformer-based backbone, which utilizes both local and global window attention to mitigate the trade-off between fine details and consistency in depth maps by capturing both local and global contextual information.
3. To bridge the semantic gap between encoded and decoded features, the proposed Dwin-CAT decoder seamlessly fuses encoder and decoder features with both fine-grained local and contextually aware global information.

## II Related Work

Eigen et al. [6] first proposed a coarse-to-fine method to estimate depth. Zhou et al. [17] first introduced a method for predicting the camera's ego-motion and depth map from monocular video. Later on, Laina et al.
[18] proposed a fully CNN-based residual network, incorporating an up-projection block for improved performance. Ranjan et al. [19] trained depth estimation in conjunction with optical flow estimation and motion segmentation to achieve synergistic results. Yourun et al. [20] developed a novel multi-scale method that downsamples the predicted depth map and performs image synthesis at multiple resolutions for improved model performance. In recent years, transformers have been a popular choice of encoder for their ability to capture long-range dependencies. TransDepth [10] and DPT [9] were the first to utilize a vision transformer (ViT) encoder and a convolutional decoder for depth estimation. Later on, SwinDepth [21] utilized a hierarchical vision transformer, named Swin Transformer [13], as the encoder and a multi-scale convolutional block as the decoder. On the other hand, AdaBin [22] uses an additional transformer-based block to adaptively estimate depth values for an image by dividing the range of possible depths into bins and calculating the final depth estimates as linear combinations of the bin centers. Very few explorations have been made in this area. Recently, in PixelFormer [16], a window attention-based transformer was utilized to effectively fuse the features produced by the encoder and decoder, avoiding the limitations of simply concatenating these features, which can introduce a semantic gap [23]. Despite using a transformer-based fusion of encoder-decoder features, this method fails to reach its full potential, as its skip-attention module only incorporates information from a limited range of input pixels due to local window attention. In essence, CNNs struggle with a trade-off between fine details and consistency in depth maps, while transformers have shown promise in capturing global information, yet they are hindered by high computational complexity, a lack of multi-scale features, and a lack of inductive bias. Recent attempts to overcome these limitations through different types of window attention face new challenges, such as limited global coverage, impaired global context, and loss of global context. Meanwhile, the fusion of encoder and decoder features remains a challenge due to the semantic gap, and while the skip-attention module [16] has improved the fusion process, it is still limited by its local receptive fields.

## III Methodology

### _Problem Definition_

The proposed method models the depth map of an RGB image via the probability of maximum depth for all pixels. Given an image \(I\in\mathbb{R}^{H\times W\times 3}\), a model \(\mathcal{F}(\Theta)\) is developed to predict a probability mask \(\hat{p}\in[0,1]^{H\times W\times 1}\) indicating the likelihood of the maximum depth for all pixels in the image. The mask is transformed into the depth map \(\hat{y}=\hat{p}\odot\textit{max\_depth}\). The model's parameters \(\Theta\) are optimized by minimizing a chosen loss function \(\mathcal{L}(\Theta,y,\hat{y})\) with respect to the true depth map \(y\) through \(\Theta^{*}=\arg\min_{\Theta}\mathcal{L}(\Theta,y,\hat{y})\).

### _Overview of Proposed DwinFormer Architecture_

The proposed architecture, as shown in Fig. 2, introduces the Dual Window Self Attention Transformer (Dwin-SAT) backbone to process the input image \(I\). The Dwin-SAT backbone employs multiple layers of Dual Window Self Attention (Dwin-SA) to extract feature maps representing the image at different resolution scales.
These resolution scales are represented by \(i\) and are defined as \(\frac{1}{2^{i+1}}\) relative to the input image \(I\), where \(i\in\{1,2,3,4\}\), and \(i=5\) has the same resolution as \(i=4\). Dwin-SA leverages both Local Window Self Attention (Lwin-SA) and Global Window Self Attention (Gwin-SA) to extract both locally and globally contextualized features from the input image. These features are subsequently refined through the proposed Dual Window Cross Attention (Dwin-CA), which is employed at different stages to merge the encoder-decoder feature maps hierarchically. Dwin-CA exploits Local Window Cross Attention (Lwin-CA) and Global Window Cross Attention (Gwin-CA), respectively, to fuse encoder-decoder features with both local and global contexts. Finally, the resulting decoded features from the stack of Dwin-CA layers are processed by the Depth Head module to estimate the per-pixel probability of maximum depth. In what follows, the blocks of DwinFormer are explained.

### _Proposed Encoder Architecture: Dwin-SAT_

The proposed Dwin-SAT backbone, as depicted in Fig. 2, effectively extracts features at multiple resolutions by iteratively downgrading the spatial dimensions and upgrading the channel dimensions by factors of 2. The process begins with the Stem block, which generates overlapping patches from the input image \(I\) to facilitate the application of the transformer. These patches are projected into a \(C\)-dimensional embedding space through a \(3\times 3\) convolutional layer with a stride of \(2\times 2\) and \(1\times 1\) padding. Subsequently, the number of patches is reduced by a factor of 2 using a strided convolution, thus allowing the Dwin-SAT backbone to continuously extract features at different levels of detail. Finally, the resultant features from Dwin-SAT are operated on by a \(1\times 1\) convolution to precisely control the channels of the decoder. Specifically, Dwin-SAT utilizes the Dwin-SA module to extract spatial characteristics by interleaving between the Lwin-SA and Gwin-SA modules, with local and global contexts respectively. In contrast to Gwin-SA, which computes global attention using both global and local windows, Lwin-SA computes local attention using only local windows in the manner described in Swin Transformer [13]. The global window in Gwin-SA is generated by the Global Window Generator (GWG) from the whole input feature map and is shared across all local windows, resulting in a reduced number of parameters and FLOPs. It confines information across the entire input feature map for interaction with local query features. Specifically, as shown in Fig. 3, the generator consists of a series of Fused-MBConv [14] blocks, each followed by a max-pooling layer. Then, using a Downsample block at the end of each stage, the spatial dimension of the resulting features is decreased and the channel dimension is increased by a factor of \(2\). The Downsample block follows the same architecture as in [14], comprising a modified Fused-MBConv block followed by a \(2\times 2\) strided convolution with a kernel size of \(3\times 3\) and a layer normalization layer, which employs spatial feature contraction from CNNs, imposing inductive bias and inter-channel interactions. At the end of Dwin-SAT, a \(1\times 1\) convolution is used to control the number of output channels of each decoder layer at each scale.
For a given scale \(i\), the encoder can be expressed as

\[\begin{split}{}_{1}Q_{l}^{e}&=LayerNorm(E_{i-1})\\ K_{l}^{e}&=MLP(E_{i-1});\;V_{l}^{e}=MLP(E_{i-1})\\ L_{i}&=\text{Lwin-SA}({}_{1}Q_{l}^{e},K_{l}^{e},V_{l}^{e})+E_{i-1}\\ L_{i}&=L_{i}+MLP(LayerNorm(L_{i}))\\ {}_{2}Q_{l}^{e}&=LayerNorm(L_{i});\;E_{i}^{\prime}=GWG(E_{i-1})\\ K_{g}^{e}&=MLP(E_{i}^{\prime});\;V_{g}^{e}=MLP(E_{i}^{\prime})\\ E_{i}&=\text{Gwin-SA}({}_{2}Q_{l}^{e},K_{g}^{e},V_{g}^{e})+L_{i}\\ E_{i}&=E_{i}+MLP(LayerNorm(E_{i}))\\ E_{i}&=\text{Downsample}(E_{i})\end{split} \tag{1}\]

Fig. 2: Graphical overview of the DwinFormer Architecture. The input image is encoded into multiscale feature maps using a series of Dwin-SA blocks in the Dwin-SAT. The encoded features are then reconstructed into a depth map by the Dwin-CAT, which integrates features from the Dwin-SA of the same level and the Dwin-CA of the previous level. The final depth map is produced by the Depth Head, processing the decoded features.

### _Dual Window Self Attention (Dwin-SA)_

The primary computational operator of the Dwin-SAT backbone is Dwin-SA. In general, it operates through contextualization: the pairwise relations between each query position and all key positions are computed to form an attention map, which serves as a weighting function used to linearly combine the values and produce the output at each query position. In short, the output at each query position is a weighted sum of the values, where the weights are computed from the query and key through the attention mechanism. The Dwin-SA block employs two sub-mechanisms for feature extraction. First, the Lwin-SA mechanism partitions the image into smaller windows (referred to as local windows) and performs self-attention between patches within these windows to extract fine-grained local features. Second, Gwin-SA, a novel attention mechanism, exploits the global window to interact with patches located anywhere in the image, allowing it to consider information outside the local window. Gwin-SA accesses global information from the image through global keys and values generated by the GWG, which represent the entire image. This enables Gwin-SA to take the global context into account when making attention decisions. Fig. 4 provides a visual insight into these attentions, and they can also be expressed as follows:

\[\mathrm{Lwin-SA}\left(Q_{l}^{e},K_{l}^{e},V_{l}^{e}\right)=S\left(Q_{l}^{e}{K_{l}^{e}}^{T}/\sqrt{d}+B\right)V_{l}^{e} \tag{2}\]

\[\mathrm{Gwin-SA}\left(Q_{l}^{e},K_{g}^{e},V_{g}^{e}\right)=S\left(Q_{l}^{e}{K_{g}^{e}}^{T}/\sqrt{d}+B\right)V_{g}^{e} \tag{3}\]

where \(Q_{l}^{e},K_{l}^{e},V_{l}^{e}\) denote the query, key, and value in the local window from the encoder, whereas \(K_{g}^{e},V_{g}^{e}\) denote the key and value in the global window from the encoder, generated by the GWG. For both local and global windows in the encoder, \(Q,K,V\in\mathbb{R}^{M^{2}\times d}\), where \(d\) is the query/key dimension, \(M^{2}\) is the number of patches in a window, and \(S\) is the Softmax function. Following [13, 14], assuming the relative position along each axis lies in the range \([-M+1,M-1]\), the learnable relative position bias \(B\) is sampled from a bias matrix \(\hat{B}\in\mathbb{R}^{(2M-1)\times(2M-1)}\). The proposed Gwin-SA mechanism draws inspiration from GCViT [14] but utilizes a distinctive approach to calculate attention, as shown in Fig. 4.
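Before turning to that comparison, a minimal sketch of the two attention flavors in Eqs. (2) and (3); this is PyTorch-style pseudocode (the paper's implementation is in TensorFlow), with the relative position bias \(B\) omitted for brevity:

```python
# Lwin-SA vs. Gwin-SA: both attend per local window, but Gwin-SA draws
# its keys and values from one global window shared across all windows.
import torch

def window_attention(q, k, v):
    # q: (num_windows, M*M, d); k, v broadcast against q
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    return torch.softmax(scores, dim=-1) @ v

def lwin_sa(local_tokens):
    # Queries, keys and values all come from the same local window.
    return window_attention(local_tokens, local_tokens, local_tokens)

def gwin_sa(local_tokens, global_tokens):
    # local_tokens:  (num_windows, M*M, d) local queries
    # global_tokens: (1, G, d) keys/values from the GWG, broadcast so
    # every local window attends to the same global context.
    return window_attention(local_tokens, global_tokens, global_tokens)
```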
The global attention in GCViT utilizes a global query and local key-values to contextualize the global window tokens with respect to the local window tokens. However, this process can lead to a loss of non-locality for the global window tokens, as the global query only contributes to the attention weights while the local values provide the image features: the output at each query position is computed as a weighted average of the values, with the weights derived from the query and key through the attention mechanism. In contrast, the Gwin-SA mechanism employs the reverse approach, using a local query and global key-values to contextualize the local window tokens with respect to the global window tokens. This approach enables the local window tokens to acquire non-locality in attention through the global values' supply of image features. A further advantage of the Gwin-SA mechanism lies in its ability to attend to each local window individually while drawing on the full global context, thereby maximizing the use of global information. The key differences between the GCViT approach and the Gwin-SA mechanism are summarized as follows:

* Global Attention in GCViT: \(\Rightarrow\) Global query + local key-value \(\Rightarrow\) Contextualize global window w.r.t. local window
* Gwin-SA in DwinFormer: \(\Rightarrow\) Local query + global key-value \(\Rightarrow\) Contextualize local window w.r.t. global window

### _Proposed Decoder Architecture: Dwin-CAT_

The proposed Dual Window Cross Attention Transformer (Dwin-CAT) is depicted in Fig. 2. Here, the number of output channels of each decoder layer at each resolution scale \(i\) is represented as \(F_{i}=2^{i+4}\), where \(i\in\{1,2,3,4,5\}\). The decoder takes the lowest-resolution feature map from the encoder and gradually enhances it to produce a final depth representation. At each level, Dwin-CAT integrates details from the encoder's feature map through skip connections and upscaling. However, the encoded features, with their rich information, can sometimes mismatch with the decoded features, which hold more semantic information, making the optimization process challenging. To bridge this semantic gap, Dwin-CAT uses the Dwin-CA component, which integrates both local and global contexts, resulting in a more contextualized feature. The encoded features are first passed through a \(1\times 1\) convolution to harmonize their channels with the previous decoded features and facilitate attention. The Dwin-CA component in Dwin-CAT fuses the encoded and decoded features using Local Window Cross Attention (Lwin-CA) and Global Window Cross Attention (Gwin-CA). The resulting feature is passed through the Upsample layer to simultaneously upscale the spatial dimension and downscale the channel dimension. The decoder operates similarly to Dwin-SAT, but it uses cross-attention between the encoded and decoded features, with queries coming from the decoder and key-values from the encoder.

Fig. 3: The input feature map, with \(H,W\) dimensions, is fed into the Global Window Generator (GWG), which applies the Fused-MBConv and MaxPool layers repeatedly \(K\) times to generate a global window with \(h,w\) dimensions, which is reshaped and then repeated \(M\) times.
In mathematical terms, the decoder layer can be expressed as follows,

\[\begin{split}{}_{1}Q_{l}^{d}&=\mathrm{LayerNorm}(D_{i-1});\quad E_{i}^{\prime}=\mathrm{Conv}_{1\times 1}(E_{i})\\ K_{l}^{e}&=\mathrm{MLP}(E_{i}^{\prime});\quad V_{l}^{e}=\mathrm{MLP}(E_{i}^{\prime})\\ L_{i}&=\text{Lwin-CA}({}_{1}Q_{l}^{d},K_{l}^{e},V_{l}^{e})+D_{i-1}\\ L_{i}&=L_{i}+\mathrm{MLP}(\mathrm{LayerNorm}(L_{i}))\\ {}_{2}Q_{l}^{d}&=\mathrm{LayerNorm}(L_{i});\quad E_{i}^{\prime\prime}=\mathrm{GWG}(E_{i}^{\prime})\\ K_{g}^{e}&=\mathrm{MLP}(E_{i}^{\prime\prime});\quad V_{g}^{e}=\mathrm{MLP}(E_{i}^{\prime\prime})\\ D_{i}&=\text{Gwin-CA}({}_{2}Q_{l}^{d},K_{g}^{e},V_{g}^{e})+L_{i}\\ D_{i}&=D_{i}+\mathrm{MLP}(\mathrm{LayerNorm}(D_{i}))\\ D_{i}&=\mathrm{Upsample}(D_{i})\end{split} \tag{4}\]

### _Dual Window Cross Attention (Dwin-CA)_ The Dual Window Cross Attention (Dwin-CA) is an effective technique that seamlessly integrates encoder and decoder features while addressing semantic discrepancies between them. Similar to Dual Window Self Attention (Dwin-SA), it utilizes local and global window attention mechanisms. However, instead of self-attention within encoded features, it utilizes cross-attention between encoded and decoded features. This allows Dwin-CA to bridge the semantic gap between the rich and nuanced encoded feature map and the decoded feature map, which carries the semantic information about the final depth representation. The technique does this by incorporating both local and global contexts through Local Window Cross Attention (Lwin-CA) and Global Window Cross Attention (Gwin-CA), respectively. Lwin-CA extracts fine-grained local features by performing cross-attention between the encoder and decoder using local windows, while Gwin-CA captures long-range dependencies by performing cross-attention between the encoder and decoder using global windows. Fig. 5 illustrates the proposed cross-attention mechanisms, which can also be mathematically represented as

\[\text{Lwin-CA}\left(Q_{l}^{d},K_{l}^{e},V_{l}^{e}\right)=S\left(Q_{l}^{d}{K_{l}^{e}}^{T}/\sqrt{d}+B\right)V_{l}^{e} \tag{5}\]

Fig. 4: Comparison between Global Window Self Attention in the proposed Dwin-SAT and Global Attention in GCViT. While both methods use attention on encoded window tokens, Gwin-SA leverages global key-values with local queries, whereas GCViT uses global queries with local key-values. Here, the Global Window Generator is used to create the global query/key-values. Fig. 5: The Local and Global Window Cross Attention (Lwin-CA and Gwin-CA) blocks in the proposed Dwin-CAT utilize cross-attention between the encoded and decoded features. Both blocks employ decoded features as local queries and encoded features as key-values. However, Lwin-CA uses encoded features as local key-values, while Gwin-CA employs them as global key-values via the Global Window Generator.

\[\mathrm{Gwin-CA}\left(Q_{l}^{d},K_{g}^{e},V_{g}^{e}\right)=S\left(Q_{l}^{d}{K_{g}^{e}}^{T}/\sqrt{d}+B\right)V_{g}^{e} \tag{6}\]

The notations used in these equations, such as \(Q_{l}^{d}\), \(K_{l}^{e}\), \(V_{l}^{e}\), \(K_{g}^{e}\), \(V_{g}^{e}\), \(d\), \(B\), and \(S\), are consistent with those previously defined in Eqs. (2) and (3), with the exception of \(Q_{l}^{d}\), which represents the query in the local window from the decoder. ### _Depth Head_ The Depth Head of the proposed architecture utilizes the depth feature map of size \(\left(\frac{H}{4},\frac{W}{4},C\right)\) generated by the Dwin-CAT. The depth feature map is then passed through a \(3\times 3\) convolutional layer which transforms it into a probability map of maximum depth for each pixel.
Then the depth map is obtained by applying the sigmoid function on this probability map, resulting in a continuous estimate of the depth at each pixel. Finally, the resultant depth map is upsampled \(4\times\) to match the resolution of the ground truth. ### _Loss Function_ The Scale-Invariant loss (SI) [6] is utilized in a scaled form in this paper. The mathematical representation of the loss is given by the equation: \[\mathcal{L}_{\text{pixel}}\,=\alpha\sqrt{\frac{1}{T}\sum_{i}g_{i}^{2}-\frac{\lambda}{T^{2}}\left(\sum_{i}g_{i}\right)^{2}} \tag{7}\] where \(d_{i}\) is the ground truth depth, \(\hat{d}_{i}\) is the estimated depth, \(T\) represents the number of pixels with valid ground truth values, and \(g_{i}\) is computed as \(g_{i}=\log_{e}(\hat{d}_{i})-\log_{e}(d_{i})\). The parameters \(\lambda\) and \(\alpha\) used in the experiments are set to 0.85 and 10, respectively. ## IV Experiments Numerous experiments using the KITTI [29] and NYU Depth V2 [30] datasets are conducted to confirm the effectiveness of the proposed method. Through a rigorous analysis of both quantitative and qualitative criteria, the proposed method is evaluated and compared against current state-of-the-art techniques. Additionally, an ablation study is carried out to clearly show the importance of each individual component. ### _Datasets_ NYU Depth V2 is an indoor dataset of 120K RGB-depth pairs from 464 scenes, with an official training/testing split of 50K/654 images, and a depth upper bound of 10 meters. The proposed method is trained at a resolution of \(480\times 640\). KITTI is an outdoor dataset of stereo images from 61 scenes, with a training/testing split of 26K/697 images defined by [6], and a depth upper bound of 80 meters. The proposed method is trained at a resolution of \(704\times 352\). ### _Implementation Details_ The present study employs the TensorFlow framework to implement the proposed method. The Adam optimizer [31] with a batch size of 8 is utilized during training. Both the KITTI and NYUv2 datasets are trained for 30 epochs, with an initial learning rate of \(3\times 10^{-6}\) that is gradually increased linearly to \(2.5\times 10^{-5}\) and subsequently decreased with a cosine schedule. The proposed method utilizes 8\(\times\) NVIDIA V100 GPUs for training. To prevent overfitting, various data augmentation techniques such as random rotation, horizontal flipping, brightness-contrast-hue adjustments, grayscale, and noise are applied. The proposed architectural design allows for the utilization of pre-trained weights from GCViT for initializing the encoder backbone, thus avoiding the need for training the backbone on ImageNet from scratch. The number of output channels in each level for both the encoder and decoder depends on the size of the architecture, where for the XXTiny, XTiny, Tiny, Small, Base, and Large sizes, \(C\) is set to \(64\), \(64\), \(64\), \(96\), \(128\), and \(192\), respectively. The remaining architectural parameters are kept similar to those of GCViT [14]. ### _Evaluation Metrics_ To evaluate and compare the performance of the proposed method against existing methods, five metrics [6] are employed, namely Average Relative error (Abs Rel): \(\frac{1}{n}\sum_{i\in n}\frac{\left|d_{i}-\hat{d}_{i}\right|}{\hat{d}_{i}}\); Root Mean Squared error (RMS): \(\sqrt{\frac{1}{n}\sum_{i\in n}\left\|d_{i}-\hat{d}_{i}\right\|^{2}}\); Average \((\log_{10})\) error: \(\frac{1}{n}\sum_{i\in n}\left|\log_{10}\left(d_{i}\right)-\log_{10}\left(\hat{d}_{i}\right)\right|\); Threshold Accuracy \((\delta_{i})\): \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Abs Rel \(\downarrow\) & RMS \(\downarrow\) & \(\log_{10}\downarrow\) & \(\delta_{1}\uparrow\) \\ \hline Eigen et al. [6] & \(0.203\) & \(6.307\) & \(0.282\) & \(0.702\) \\ DORN [24] & \(0.072\) & \(2.727\) & \(0.120\) & \(0.932\) \\ Yin et al. [25] & \(0.072\) & \(3.258\) & \(0.117\) & \(0.938\) \\ BTS [26] & \(0.059\) & \(2.756\) & \(0.096\) & \(0.956\) \\ TransDepth [10] & \(0.064\) & \(2.755\) & \(0.098\) & \(0.956\) \\ Adabins [22] & \(0.058\) & \(2.360\) & \(0.088\) & \(0.964\) \\ DPT [9] & \(0.060\) & \(2.573\) & \(0.092\) & \(0.959\) \\ SwinDepth [21] & \(0.064\) & \(2.643\) & - & \(0.957\) \\ Depthformer [11] & \(0.058\) & \(2.285\) & - & \(0.967\) \\ NeWCRFs [28] & \(0.052\) & \(2.129\) & \(0.079\) & \(0.974\) \\ PixelFormer [16] & \(0.051\) & \(2.081\) & \(0.077\) & \(0.976\) \\ \hline \hline DwinFormer (Proposed) & \(\mathbf{0.047}\) & \(\mathbf{1.959}\) & \(\mathbf{0.073}\) & \(\mathbf{0.980}\) \\ \hline \hline \end{tabular} \end{table} TABLE II: Result on KITTI Data. The best result is indicated in **bold**, the second best is underlined, and the symbols \(\uparrow\) and \(\downarrow\) denote whether higher or lower values are preferable. \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Abs Rel \(\downarrow\) & RMS \(\downarrow\) & \(\log_{10}\downarrow\) & \(\delta_{1}\uparrow\) \\ \hline Eigen et al. [6] & \(0.158\) & \(0.641\) & - & \(0.769\) \\ DORN [24] & \(0.115\) & \(0.509\) & \(0.051\) & \(0.828\) \\ Yin et al. [25] & \(0.108\) & \(0.416\) & \(0.048\) & \(0.872\) \\ BTS [26] & \(0.110\) & \(0.392\) & \(0.047\) & \(0.885\) \\ TransDepth [10] & \(0.106\) & \(0.365\) & \(0.045\) & \(0.900\) \\ DPT [9] & \(0.110\) & \(0.367\) & \(0.045\) & \(0.904\) \\ Adabins [22] & \(0.103\) & \(0.364\) & \(0.044\) & \(0.903\) \\ P3Depth [27] & \(0.104\) & \(0.356\) & \(0.043\) & \(0.898\) \\ SwinDepth [21] & \(0.100\) & \(0.354\) & \(0.042\) & \(0.909\) \\ Depthformer [11] & \(0.100\) & \(0.345\) & - & \(0.911\) \\ NeWCRFs [28] & \(0.095\) & \(0.334\) & \(0.041\) & \(0.922\) \\ PixelFormer [16] & \(0.090\) & \(0.322\) & \(0.039\) & \(0.929\) \\ \hline DwinFormer (Proposed) & \(\mathbf{0.081}\) & \(\mathbf{0.280}\) & \(\mathbf{0.034}\) & \(\mathbf{0.951}\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Result on NYUv2 Data. The best result is indicated in **bold**, the second best is underlined, and the symbols \(\uparrow\) and \(\downarrow\) denote whether higher or lower values are preferable.
### _Evaluation Metrics_ To evaluate and compare the performance of the proposed method in comparison to existing methods five metrics [6] are employed, namely Average Relative error (Abs Rel): \(\frac{1}{n}\sum_{i\in n}\frac{\left|d_{i}-\hat{d}_{i}\right|}{\hat{d}_{i}}\); Root Mean Squared error (RMS): \(\sqrt{\frac{1}{n}\sum_{i\in n}\left\|d_{i}-\hat{d}_{i}\right\|^{2}}\); Average \((\log_{10})\) error: \(\frac{1}{n}\sum_{i\in n}\left|\log_{10}\left(d_{i}\right)-\log_{10}\left( \hat{d}_{i}\right)\right|;\) Threshold Accuracy \((\delta_{i}):\) \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Abs Rel \(\downarrow\) & RMS \(\downarrow\) & \(\log_{10}\downarrow\) & \(\delta_{1}\uparrow\) \\ \hline Eigen et al. [6] & \(0.203\) & \(6.307\) & \(0.282\) & \(0.702\) \\ DORN. [24] & \(0.072\) & \(2.727\) & \(0.120\) & \(0.932\) \\ Yin et al. [25] & \(0.072\) & \(3.258\) & \(0.117\) & \(0.938\) \\ BTS [26] & \(0.059\) & \(2.756\) & \(0.096\) & \(0.956\) \\ TransDepth [10] & \(0.064\) & \(2.755\) & \(0.098\) & \(0.956\) \\ Adabins [22] & \(0.058\) & \(2.360\) & \(0.088\) & \(0.964\) \\ DPT [9] & \(0.060\) & \(2.573\) & \(0.092\) & \(0.959\) \\ SwinDepth [21] & \(0.064\) & \(2.643\) & - & \(0.957\) \\ Depthformer [11] & \(0.058\) & \(2.285\) & - & \(0.967\) \\ NeWCRFs [28] & \(0.052\) & \(2.129\) & \(0.079\) & \(0.974\) \\ PixelFormer [16] & \(0.051\) & \(2.081\) & \(0.077\) & \(0.976\) \\ \hline \hline DwinFormer (Proposed) & \(\mathbf{0.047}\) & \(\mathbf{1.959}\) & \(\mathbf{0.073}\) & \(\mathbf{0.980}\) \\ \hline \hline \end{tabular} \end{table} TABLE II: Result on KITTI Data. The best result is indicated in **bold**, second best is underlined, and symbols \(\uparrow\) or \(\downarrow\) denote higher/lower values are preferable \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Abs Rel \(\downarrow\) & RMS \(\downarrow\) & \(\log_{10}\downarrow\) & \(\delta_{1}\uparrow\) \\ \hline Eigen et al. [6] & \(0.158\) & \(0.641\) & - & \(0.769\) \\ DORN [24] & \(0.115\) & \(0.509\) & \(0.051\) & \(0.828\) \\ Yin et al. [25] & \(0.108\) & \(0.416\) & \(0.048\) & \(0.872\) \\ BTS [26] & \(0.110\) & \(0.392\) & \(0.047\) & \(0.885\) \\ TransDepth [10] & \(0.106\) & \(0.365\) & \(0.045\) & \(0.900\) \\ DPT [9] & \(0.110\) & \(0.367\) & \(0.045\) & \(0.904\) \\ Adabins [22] & \(0.103\) & \(0.364\) & \(0.044\) & \(0.903\) \\ P3Depth [27] & \(0.104\) & \(0.356\) & \(0.043\) & \(0.898\) \\ SwinDepth [21] & \(0.100\) & \(0.354\) & 0.042 & \(0.909\) \\ DeeHDformer [11] & \(0.100\) & \(0.345\) & - & \(0.911\) \\ NeWCRFs [28] & \(0.095\) & \(0.334\) & \(0.041\) & \(0.922\) \\ PixelFormer [16] & \(0.090\) & \(0.322\) & \(0.039\) & \(0.929\) \\ \hline DwinFormer (Proposed) & \(\mathbf{0.081}\) & \(\mathbf{0.280}\) & \(\mathbf{0.034}\) & \(\mathbf{0.951}\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Result on NYUv2 Data. The best result is indicated in **bold**, second best is underlined, and symbols \(\uparrow\) or \(\downarrow\) denote higher/lower values are preferable \(\%\) of \(d_{i}\) s.t. \(\max\left(\frac{d_{i}}{d_{i}},\frac{d_{i}}{d_{i}}\right)=\delta<thr\) for \(thr=1.25\); Squared Relative difference (Sq Rel): \(\frac{1}{n}\sum_{i\in n}\frac{\left\|d_{i}-d_{i}\right\|^{2}}{d_{i}}\). Here \(n\) is the total number of pixels for each depth map, \(d_{i}\), and \(\hat{d}_{i}\) denotes ground truth, and predicted depth for \(i\)-th pixel respectively. 
### _Performance Comparison with Existing Methods_ #### IV-D1 Quantitative Analysis Tables I and II present a quantitative analysis, respectively on the indoor dataset NYUv2 and the outdoor dataset KITTI, comparing the proposed method to existing techniques. The results of the comparison reveal that the proposed method exhibits outstanding performance, surpassing the other methods by a considerable margin across a wide range of metrics. The only exception is the metric \(\delta_{3}\), where the results are highly saturated, but even in that case, the proposed method's results are competitive. These results conclusively demonstrate the exceptional performance of the proposed method in the field of monocular depth estimation, and its ability to produce highly accurate and precise depth maps. #### IV-D2 Qualitative Analysis Figures 6 and 7 present an insightful comparison between the proposed method and existing techniques, respectively, on the outdoor dataset KITTI and the indoor dataset NYUv2. As is plainly evident from the figures, the proposed method demonstrates an aptitude for effectively identifying depth cues such as object boundaries and sharp edges, while producing both consistent and detailed depth maps even for objects with missing information in the RGB image. This highlights the superiority of the proposed method. ### _Ablation Study_ #### IV-E1 Effectiveness of Dwin-SAT Table III presents a comprehensive examination of the performance of the proposed Dwin-SAT in comparison to other established backbones. In this ablation study, Dwin-CAT is utilized as the decoder across all encoders for comparison purposes. It is manifestly evident that the proposed Dwin-SAT outperforms the other methods significantly. This is attributed to its unique ability to effectively extract features with both local and global contexts, thus conclusively validating the efficacy of the proposed architecture. #### IV-E2 Effectiveness of Dwin-CAT Table IV presents the performance of our proposed Dwin-CAT module in comparison to existing techniques. In this ablation study, Dwin-SAT is utilized as the encoder across all decoders for comparison purposes. The results clearly demonstrate that the proposed Dwin-CAT module stands out as a clear winner, exhibiting a significant superiority over the other methods, thus validating the efficacy of the proposed module. ## V Conclusion In this paper, a transformer-based design, namely DwinFormer, is introduced for end-to-end monocular depth estimation. The proposed encoder and decoder incorporate both self-attention and cross-attention mechanisms, utilizing both Fig. 6: Qualitative comparison between previous SOTA and proposed method for KITTI dataset. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & Abs Rel \(\downarrow\) & Sq Rel \(\downarrow\) & \(\delta_{1}\uparrow\) \\ \hline EfficientNet-B7 [32] & \(0.056\) & \(0.165\) & \(0.968\) \\ SwinT-Large [13] & \(0.053\) & \(0.153\) & \(0.972\) \\ ConvNeXt-Large [33] & \(0.052\) & \(0.154\) & \(0.974\) \\ MaxViT-Large [15] & \(0.051\) & \(0.140\) & \(0.975\) \\ GCViT-Large [14] & \(0.050\) & \(0.138\) & \(0.977\) \\ \hline Dwin-SAT (Proposed) & \(\mathbf{0.047}\) & \(\mathbf{0.136}\) & \(\mathbf{0.980}\) \\ \hline \hline \end{tabular} \end{table} TABLE III: Ablation study of Dwin-SAT on KITTI Eigen Split. Results are denoted by \(\uparrow\) for better and \(\downarrow\) for worse. The best results are in **bold**. Fig. 7: Qualitative comparison between previous SOTA and proposed method for NYUv2 dataset.
local and global receptive fields. This approach effectively addresses challenges such as the trade-off between consistency and fine detail in the depth map, and bridges the semantic disparities between the encoded and decoded features. As a result, efficient feature extraction and improved reconstruction are achieved, leading to more accurate depth maps. The experimental results firmly establish the superiority of the proposed method over existing approaches, as seen in the remarkable performance improvement on the NYUv2 and KITTI datasets.
2310.02850
On the Atypical Solutions of the Symmetric Binary Perceptron
We study the random binary symmetric perceptron problem, focusing on the behavior of rare high-margin solutions. While most solutions are isolated, we demonstrate that these rare solutions are part of clusters of extensive entropy, heuristically corresponding to non-trivial fixed points of an approximate message-passing algorithm. We enumerate these clusters via a local entropy, defined as a Franz-Parisi potential, which we rigorously evaluate using the first and second moment methods in the limit of a small constraint density $\alpha$ (corresponding to vanishing margin $\kappa$) under a certain assumption on the concentration of the entropy. This examination unveils several intriguing phenomena: i) We demonstrate that these clusters have an entropic barrier in the sense that the entropy as a function of the distance from the reference high-margin solution is non-monotone when $\kappa \le 1.429 \sqrt{-\alpha/\log{\alpha}}$, while it is monotone otherwise, and that they have an energetic barrier in the sense that there are no solutions at an intermediate distance from the reference solution when $\kappa \le 1.239 \sqrt{-\alpha/ \log{\alpha}}$. The critical scaling of the margin $\kappa$ in $\sqrt{-\alpha/\log\alpha}$ corresponds to the one obtained from the earlier work of Gamarnik et al. (2022) for the overlap-gap property, a phenomenon known to present a barrier to certain efficient algorithms. ii) We establish using the replica method that the complexity (the logarithm of the number of clusters of such solutions) versus entropy (the logarithm of the number of solutions in the clusters) curves are partly non-concave and correspond to very large values of the Parisi parameter, with the equilibrium being reached when the Parisi parameter diverges.
Damien Barbier, Ahmed El Alaoui, Florent Krzakala, Lenka Zdeborová
2023-10-04T14:35:32Z
http://arxiv.org/abs/2310.02850v2
# On the Atypical Solutions of the Symmetric Binary Perceptron ###### Abstract We study the random binary symmetric perceptron problem, focusing on the behavior of rare high-margin solutions. While most solutions are isolated, we demonstrate that these rare solutions are part of clusters of extensive entropy, heuristically corresponding to non-trivial fixed points of an approximate message-passing algorithm. We enumerate these clusters via a local entropy, defined as a Franz-Parisi potential, which we rigorously evaluate using the first and second moment methods in the limit of a small constraint density \(\alpha\) (corresponding to vanishing margin \(\kappa\)) under a certain assumption on the concentration of the entropy. This examination unveils several intriguing phenomena: i) We demonstrate that these clusters have an entropic barrier in the sense that the entropy as a function of the distance from the reference high-margin solution is non-monotone when \(\kappa\leq 1.429\sqrt{-\alpha/\log\alpha}\), while it is monotone otherwise, and that they have an energetic barrier in the sense that there are no solutions at an intermediate distance from the reference solution when \(\kappa\leq 1.239\sqrt{-\alpha/\log\alpha}\). The critical scaling of the margin \(\kappa\) in \(\sqrt{-\alpha/\log\alpha}\) corresponds to the one obtained from the earlier work of Gamarnik et al. [20] for the overlap-gap property, a phenomenon known to present a barrier to certain efficient algorithms. ii) We establish using the replica method that the complexity (the logarithm of the number of clusters of such solutions) versus entropy (the logarithm of the number of solutions in the clusters) curves are partly non-concave and correspond to very large values of the Parisi parameter, with the equilibrium being reached when the Parisi parameter diverges. ## I Introduction ### Background and Motivation We consider the symmetric binary perceptron (SBP), introduced in [3], where we let \(G=(g_{a})_{a=1}^{M}\) be a collection of \(M\) i.i.d. standard Gaussian random vectors in \(\mathbb{R}^{N}\), with \(M=\lfloor\alpha N\rfloor\) for a fixed \(\alpha>0\). For \(\kappa>0\), we consider the set of binary solutions \(\mathbf{x}\in\{-1,+1\}^{N}\) to the system of linear inequalities \[\big|\big\langle g_{a},\mathbf{x}\big\rangle\big|\leq\kappa\sqrt{N}\quad\text{ for all }1\leq a\leq M\,. \tag{1}\] We denote the set of solutions by \(S(\mathbf{G},\kappa)\), and its cardinality by \[Z(\mathbf{G},\kappa)=\big|S(\mathbf{G},\kappa)\big|\,. \tag{2}\] It was shown by Aubin, Perkins and Zdeborova [3] that \(S(\mathbf{G},\kappa)\) is nonempty with high probability if and only if \(\kappa>\kappa_{sat}(\alpha)\) where \(\kappa_{sat}(\alpha)\) is defined by the equation \[\mathbb{P}\left(|Z|\leq\kappa\right)^{\alpha}=1/2\,,\quad Z\sim N(0,1)\,. \tag{3}\] Moreover, in the limit of small \(\alpha\) we have \[\kappa_{sat}(\alpha)\underset{\alpha\to 0}{\sim}\sqrt{\frac{\pi}{2}}2^{-1/\alpha}\,. \tag{4}\] Our main interest is in investigating the possibility of finding solutions efficiently when \(\kappa>\kappa_{sat}(\alpha)\). Mezard and Krauth [24] showed in their seminal work using the non-rigorous replica method [30] that the solution landscape of the one-sided perceptron (where there is no absolute value in the constraints (1)) is dominated by _isolated_ solutions lying at large mutual Hamming distances, a structure sometimes called "frozen replica symmetry breaking" [16; 21; 22; 28; 39].
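For orientation, Eqs. (3)-(4) are easy to check numerically; the following is a small sketch of ours (using only `scipy.special`), solving \(\mathbb{P}(|Z|\leq\kappa)^{\alpha}=1/2\) in closed form via \(\mathbb{P}(|Z|\leq\kappa)=\mathrm{erf}(\kappa/\sqrt{2})\).

```python
import numpy as np
from scipy.special import erfinv

def kappa_sat(alpha):
    # Solve erf(kappa / sqrt(2)) = 2**(-1/alpha), i.e. Eq. (3).
    return np.sqrt(2.0) * erfinv(2.0 ** (-1.0 / alpha))

for alpha in (1.0, 0.5, 0.2):
    exact = kappa_sat(alpha)
    asym = np.sqrt(np.pi / 2.0) * 2.0 ** (-1.0 / alpha)  # small-alpha Eq. (4)
    print(f"alpha={alpha}: kappa_sat={exact:.6f}, asymptotic={asym:.6f}")
```

As \(\alpha\) decreases, the exact value approaches the asymptotic expression of Eq. (4), since \(\mathrm{erfinv}(x)\sim(\sqrt{\pi}/2)x\) for small \(x\).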
From the mathematics point of view, the frozen replica symmetry breaking prediction was proven true for the SBP in works by Perkins and Xu [33] and Abbe, Li and Sly [1], who showed that for all \(\kappa>\kappa_{sat}(\alpha)\), a solution drawn uniformly at random from \(S(\mathbf{G},\kappa)\) is _isolated_ with high probability, in the sense that it is separated from any other solution by a Hamming distance linear in \(N\). This type of landscape property has been traditionally associated with algorithmic hardness, with the rationale that an algorithm performing local moves is unlikely to succeed in the face of such extreme clustering, as argued, for instance, by Zdeborova and Mezard [38], or Huang and Kabashima [21]. In some problems, this predicted algorithmic hardness was confirmed empirically, e.g. [38; 39]. In other problems, a prominent example being the binary perceptron (symmetric or not), it is known that certain efficient heuristics are able to find solutions for \(\alpha\) small enough as a function of \(\kappa\)[5; 6; 15; 17; 18; 22; 23]. Statistical physics studies of the neighborhood of the solutions returned by efficient heuristics have put forward the intriguing observation that in the binary perceptron problem, a dense region of other solutions surrounds the ones which are returned [4; 8; 9]. This means that efficient algorithms may be drawn to _rare_, _well connected_ subset(s) of \(S(G,\kappa)\). Moreover, these efficient algorithms fail to return a solution when \(\alpha\) becomes large, suggesting the existence of a _computational phase transition_ in the binary perceptron (symmetric or not). For the symmetric version of the problem, this state of affairs has been partially elucidated in two recent mathematical works: In [2], Abbe et al. show the existence of clusters of solutions of linear diameter for all \(\kappa>\kappa_{sat}(\alpha)\), and maximal diameter for \(\alpha\) small enough. In a different direction, Gamarnik et al. [20] established an almost sharp result in the regime of small \(\alpha\), stating the following: There exist constants \(c_{0},c_{1}>0\) such that for \(\alpha\) small enough, * if \(\kappa\geq c_{0}\sqrt{\alpha}\) then a certain online algorithm of Bansal and Spencer [11] finds a solution in \(S(G,\kappa)\), and * if \(\kappa\leq c_{1}\sqrt{-\alpha/\log(\alpha)}\) then \(S(G,\kappa)\) exhibits an _overlap gap property_ ruling out a wide class of efficient algorithms. We mention that the positive result which holds for \(\kappa\geq c_{0}\sqrt{\alpha}\) is established in the case where the constraint matrix \(G\) is Rademacher instead of Gaussian; nevertheless, the same result is expected in the Gaussian case. Baldassi et al. [7] suggest that this computational transition can be probed by studying the monotonicity properties of the _local entropy_ of solutions around an atypical solution \(\mathbf{x}_{0}\) as a function of the distance from this solution. One can interpret the results of [7] as evidence towards a conjecture that finding a solution is computationally easy precisely when there exist some rare solutions around which this local entropy is monotone in the distance, and that the problem becomes hard when this local entropy develops a local maximum at some distance \(r_{0}\) from the reference solution \(\mathbf{x}_{0}\). If such a conjecture is correct, then it must agree with the above-mentioned finding of Gamarnik et al. [20] in the regime of small \(\alpha\). This question motivated the present work.
Another gap in the physics literature we elucidate in this work relates to the fact that the replica method on the one-step replica symmetry breaking level so far has not managed to find clusters of solutions in the binary perceptron. Indeed, the method can count rare clusters as long as they correspond to fixed points of a corresponding message-passing algorithm, see e.g. [37]. Parallels between the 1RSB calculation and the analysis of solutions with a monotonic local entropy have been put forward in [4; 8; 9], but not in the form where one writes the standard 1RSB equations and shows that they have a solution corresponding to rare subdominant clusters. We show that the standard 1RSB framework actually does present such solutions which describe subdominant clusters of extensive entropy, and we give likely reasons why these solutions were missed in past investigations. ### Summary of our results Local entropy around high margin solutions. We define and study a notion of local entropy around solutions which are typical at some margin \(\kappa_{0}<\kappa\). While typical solutions at \(\kappa_{0}\) are isolated from each other, it was shown in [2] that they belong to connected components of solutions at margin \(\kappa\) having a linear diameter in \(N\). Here, we show that these solutions are surrounded by exponentially many solutions at margin \(\kappa\). Consistently with the statistical physics literature, we say that there is a cluster of extensive entropy around a reference solution \(\mathbf{x}_{0}\) when the local entropy as a function of the distance achieves a local maximum at some distance from \(\mathbf{x}_{0}\). We show that for a certain range of \(\kappa\), typical solutions at margin \(\kappa_{0}\) have extensive-entropy clusters around them. We define the entropy of these clusters as the value of the entropy at a local maximum. An analogous investigation of local entropy around large margin solutions was performed in [10] for the one-sided binary perceptron using the replica method. In our case, the symmetry of the constraints (1) allows us to derive simpler formulas for the local entropy in the regime of small \(\alpha\), essentially via a first moment method. This is due to the present model being contiguous to a corresponding simpler planted model in which the first and second moment computations can be conducted. We show that under a certain assumption on the concentration of the entropy of the SBP, while for any constant value of \(\alpha\) the second moment is exponentially larger than the square of the first moment, the exponent of the ratio of these quantities, when normalized by \(N\), tends to zero in the limit of small \(\alpha\). The resulting entropy of these clusters is plotted in Fig. 1 for various values of \(\kappa\) and \(\kappa_{0}\) in the \(\alpha\to 0\) limit. We observe that at a certain margin \(\kappa_{\text{entr}}(\kappa_{0})\) the entropy curve stops, because the local entropy curve becomes monotone in the distance for \(\kappa>\kappa_{\text{entr}}(\kappa_{0})\). As discussed above, the existence of reference solutions such that the local entropy curve is monotone was speculated to provoke the onset of a region of parameters where finding solutions is algorithmically easy. In this paper, we show the existence of solutions (those typical at \(\kappa_{0}\)) for which the local entropy is monotone, and hence we do not expect the problem to be computationally hard for \(\kappa>\kappa_{\text{entr}}(\kappa_{0})\).
In Fig. 1 we see that the smallest \(\kappa\) where this happens is \(\kappa_{\rm entr}\equiv\min_{\kappa_{0}}\kappa_{\rm entr}(\kappa_{0})=\kappa_{\rm entr}(\kappa_{0}=\kappa_{\rm sat})\). For this reason, a large part of this investigation is devoted to the case \(\kappa_{0}=\kappa_{\rm sat}(\alpha)\). Motivated by these findings, we then study the local entropy of solutions that are at a Hamming distance \(Nr\) from the solution planted at \(\kappa_{0}=\kappa_{\rm sat}(\alpha)\). This is akin to the Franz-Parisi potential as studied in the physics of spin glasses [19]. Here, we compute this potential around a typical solution at \(\kappa_{0}\). Our findings, again in the regime of small \(\alpha\), are summarized in Fig. 2 (left), where it is apparent that the local entropy as a function of the distance \(r\) from a reference solution is monotone when \(\kappa\geq\tilde{\kappa}_{\rm entr}\sqrt{-\alpha/\log(\alpha)}\) and has a local maximum at an intermediate distance \(r_{0}\) when \(\kappa<\tilde{\kappa}_{\rm entr}\sqrt{-\alpha/\log(\alpha)}\), with \(\tilde{\kappa}_{\rm entr}\approx 1.429\) given by the implicit equations (84) and (85). We also show that no solutions can be found in an interval of distances from the reference solution when \(\kappa<\tilde{\kappa}_{\rm ener}\sqrt{-\alpha/\log(\alpha)}\), with \(\tilde{\kappa}_{\rm ener}\approx 1.239\) given by the implicit equations (79) and (80). From these results, we note the existence of a logarithmic gap in \(1/\alpha\) between the value of \(\kappa\) at which the local entropy curve becomes monotone and the value at which the Bansal-Spencer algorithm is proved to succeed, in the regime of small \(\alpha\). It is an interesting open problem to close this gap, either by showing that efficient algorithms can find solutions for all \(\kappa\geq\tilde{\kappa}_{\text{entr}}\sqrt{-\alpha/\log(\alpha)}\) or by showing the local entropy approach is not indicative of algorithmic hardness. The 1RSB computation of the complexity curve: We note that in the statistical physics literature, clusters as defined above are also associated with a fixed point of the approximate message passing (AMP) algorithm or equivalently the Thouless-Anderson-Palmer (TAP) equations. The cluster entropy can thus be computed as the Bethe entropy corresponding to the AMP/TAP fixed point that is reached by AMP run at \(\kappa\) and initialized in one of the typical solutions at margin \(\kappa_{0}\). For \(\kappa>\kappa_{\text{entr}}(\kappa_{0})\) the AMP/TAP iteration converges to the same fixed point as would be reached from a random initialization, corresponding to an entropy covering the whole space of solutions. Using this relation, the onset of a region where algorithms may be able to find these solutions is then related to the existence of solutions such that an AMP/TAP iteration initialized at these points converges to the same fixed point as if the iteration was initialized uniformly at random from \(S(G,\kappa)\). Indeed, it was observed empirically that solutions found by efficient algorithms always have such a property of AMP/TAP or the belief propagation algorithm converging to the same fixed point as from a random initialization [14; 27]. In the existing statistical physics literature, using the replica method on the one-step replica symmetry breaking level, researchers so far have not found clusters of solutions of extensive entropy in the binary perceptron.
This is a point of concern as this method is supposed to count all clusters of solutions corresponding to the TAP/AMP fixed points, including the rare non-equilibrium ones [16; 25; 29; 31; 37]. This a priori casts doubt on the efficacy of the replica method and the validity of its predictions for the number of clusters of a given size, since the method misses a large part of the phase space (unless some explicit conditioning is done as in [8; 9]). We propose, based on the replica method, that the answer to this question lies in the properties of the complexity (the logarithm of the number of clusters) versus entropy (the logarithm of the number of solutions in the clusters) curve \(\Sigma(s)\). We observe that the numerical value of the complexity is rather large compared to the entropy. The slope of \(\Sigma(s)\) gives the value of the so-called Parisi parameter \(x\) that is therefore rather large: \(x\gg 1\). Since the value of \(x\) describing the equilibrium properties of the system is always between \(0\) and \(1\), it is not that surprising that the literature has not investigated solutions of the replica equations corresponding to \(x\gg 1\). When we consider a large range of values of \(x\) in the standard 1RSB equations for the SBP [3], we obtain the \(\Sigma(s)\) depicted in Fig. 2 (right). We then provide an argument that leads us to conjecture that in the small \(\alpha\) limit, the curve \(\Sigma(s)\) corresponds to the one we obtain via the approach of planting at \(\kappa_{0}\). Thus even though, in general, by planting we construct only some of the rare clusters, it seems that in fact we construct the most frequent ones in the limit of small \(\alpha\). Another property that we unveil is related to the fact that the curve \(\Sigma(s)\) is usually expected to be concave. The non-concave parts were so far considered "unphysical" in the literature (e.g. Fig. 8 in [13] or Fig. 5 in [37]). We show in our present work that the so-called "unphysical branch" of the replica/cavity prediction is actually not "unphysical" in the SBP and that it reproduces the curve \(\Sigma(s)\) obtained from the local entropy calculation at small \(\alpha\) and small internal cluster entropy. Moreover, we show that some of the relevant parts of the curve \(\Sigma(s)\) cannot be obtained in the usual iterative way of solving the 1RSB equations at a fixed value of the Parisi parameter \(x\). To access this part of the curve we need to adjust the value of \(x\) adaptively in every step when solving the 1RSB fixed point equations iteratively. ### Organization of the paper and the level of rigour The rest of the paper is organized as follows: Section II defines the local entropy and states the main Theorem 1 in the small \(\alpha\) limit. Section III introduces the planted model and its contiguity to the original model, a key element of the proof. Section IV contains the moment computations in the planted model, ending with the proof of Theorem 1. In Section V we use the result of Theorem 1 and study the properties of the asymptotic formula of the local entropy in the small \(\alpha\) limit. In Section VI we study the one-step replica symmetry breaking solution of the SBP and its relation to the local entropy. This section investigates general values of \(\alpha\), not only the small \(\alpha\) limit. Finally, we conclude in Section VII. Sections II to IV are fully mathematically rigorous.
In Section V we analyze the resulting local entropy formula heuristically, solving the corresponding fixed point equations numerically, and deriving the numerical values for the energetic and the entropic thresholds. In Section VI we rely on the replica method, which is well-accepted and widely used in theoretical statistical physics but not rigorously justified from the mathematical standpoint. ## II Definitions and main theorem In this paper, the local entropy is defined around a solution satisfying the SBP inequalities (1) with a _stricter margin_ \(\kappa_{0}\). More precisely, for \(\kappa_{0}\leq\kappa\), let \(\mathbf{x}_{0}\in S(G,\kappa_{0})\), and let \(Z(\mathbf{x}_{0},\kappa,r)\) be the number of solutions \(\mathbf{y}\in S(G,\kappa)\) which are at Hamming distance \(Nr\) from \(\mathbf{x}_{0}\): \[Z(\mathbf{x}_{0},\kappa,r):=\left|\left\{\mathbf{y}\in S(\mathbf{G},\kappa)\ :\ d_{H}(\mathbf{x}_{0},\mathbf{y})=Nr\right\}\right|\,. \tag{5}\] We then define the local entropy function as the (truncated) logarithm of \(Z\) averaged over the choice of \(\mathbf{x}_{0}\) and the disorder \(\mathbf{G}\): \[\phi_{N,\delta}(r):=\frac{1}{N}\,\mathbb{E}_{G}\left[\ \frac{1}{\big|S(\mathbf{G},\kappa_{0})\big|}\sum_{\mathbf{x}_{0}\in S(\mathbf{G},\kappa_{0})}\log_{N\delta}Z(\mathbf{x}_{0},\kappa,r)\ \Big|\ S(\mathbf{G},\kappa_{0})\neq\emptyset\right]\,, \tag{6}\] where \(\log_{N\delta}(x)=\max\{\log(x),N\delta\}\), \(\delta>0\). This truncation of the logarithm is technically convenient, following [32; 35]. Note that for \(\kappa_{0}=\kappa\), the fact that there are no solutions at a distance less than \(r_{0}N\) around \(\mathbf{x}_{0}\) for some \(r_{0}=r_{0}(\kappa,\alpha)\) with high probability [33] implies that \(\phi_{N,\delta}(r)=\delta+o_{N}(1)\) for all \(r<r_{0}\), and so \(\lim_{\delta\to 0}\lim_{N\to\infty}\phi_{N,\delta}(r)=0\) for \(r<r_{0}\). However, as we increase \(\kappa\) starting from \(\kappa_{0}\), new nearby solutions are expected to emerge. These are the solutions which are counted by \(\phi_{N,\delta}(r)\). This, of course, does not contradict the frozen-1RSB property of \(S(\mathbf{G},\kappa)\) since \(\mathbf{x}_{0}\) is not typical in \(S(\mathbf{G},\kappa)\). We show that the local entropy \(\phi_{N,\delta}(r)\) is given, in the limit \(N\to\infty\) followed by \(\alpha\to 0\) then \(\delta\to 0\), by a simple formula which corresponds to the first moment bound (i.e., annealed entropy) in the corresponding planted model of the SBP: We define the binary entropy function \[h(m)=-\Big(\frac{1+m}{2}\Big)\log\Big(\frac{1+m}{2}\Big)-\Big(\frac{1-m}{2}\Big)\log\Big(\frac{1-m}{2}\Big)\,,\quad m\in(-1,1)\,, \tag{7}\] and \[\varphi_{1}(m)=\mathbb{E}\log\mathbb{P}\left(\left|mZ_{0}+\sqrt{1-m^{2}}Z\right|\leq\kappa\,\big|\,Z_{0}\right),\quad m\in(-1,1)\,, \tag{8}\] where \(Z_{0}\sim N(0,1)\) conditioned on the event \(|Z_{0}|\leq\kappa_{0}\), and \(Z\sim N(0,1)\) independently of \(Z_{0}\). (\(Z_{0}\) has p.d.f. \(f\); Eq. (12).) **Theorem 1**.: _For \(m\in(-1,1)\), \(r=(1-m)/2\) and any sequence \(\kappa=\kappa(\alpha)\to 0\) as \(\alpha\to 0\) such that \(\alpha\log(1/\kappa)\to 0\), under Assumption 1 we have_ \[\lim_{\delta\to 0}\limsup_{\alpha\to 0}\limsup_{N\to\infty}\left|\phi_{N,\delta}(r)-\max\big\{h(m)+\alpha\varphi_{1}(m)\,,\,\delta\big\}\right|=0\,.
\tag{9}\] **Remark**.: _Observe that for \(\alpha\) small we have \(\kappa_{sat}(\alpha)=\Theta(2^{-1/\alpha})\), therefore the condition on \(\kappa\) and \(\alpha\) in the theorem can be interpreted as \(\kappa\gg\kappa_{sat}(\alpha)\)._ The proof of Theorem 1 can be found in Section IV. ## III The planted model and contiguity The analysis of the local entropy is achieved via a planted model where \(\mathbf{x}_{0}\) is drawn uniformly at random from the hypercube \(\{-1,+1\}^{N}\) and then the constraint vectors \(\mathbf{g}_{a}\) are drawn from the Gaussian distribution conditional on \(\mathbf{x}_{0}\) being a satisfying configuration, i.e., conditional on \(\mathbf{x}_{0}\in S(\mathbf{G},\kappa_{0})\). More precisely, we fix the reference (planted) vector \(\mathbf{x}_{0}\in\{-1,+1\}^{N}\) and for each \(a\in\{1,\cdots,M\}\) we independently draw Gaussian random vectors \(\mathbf{g}_{a}\) conditioned on the event that \[\left|\langle\mathbf{g}_{a},\mathbf{x}_{0}\rangle\right|\leq\kappa_{0}\sqrt{N}\,. \tag{10}\] Equivalently, we can write \[\mathbf{g}_{a}=\frac{1}{\sqrt{N}}w_{a}\,\mathbf{x}_{0}+\Big(\mathbf{I}-\frac{1}{N}\mathbf{x}_{0}\mathbf{x}_{0}^{\top}\Big)\tilde{\mathbf{g}}_{a}\,, \tag{11}\] where \((\tilde{\mathbf{g}}_{a})_{a=1}^{M}\) are independent \(N(0,\mathbf{I}_{N})\) random vectors and \(\mathbf{w}=(w_{a})_{a=1}^{M}\) has mutually independent coordinates, independent of \((\tilde{\mathbf{g}}_{a})_{a=1}^{M}\), and distributed as \(N(0,1)\) r.v.'s conditioned to be smaller than \(\kappa_{0}\) in absolute value, i.e., they have the p.d.f. \[f(w):=\Big(\frac{1}{\sqrt{2\pi}}e^{-w^{2}/2}\,\mathbf{1}\{|w|\leq\kappa_{0}\}\Big)\Big/\,\mathbb{P}(|Z|\leq\kappa_{0})\,. \tag{12}\] We let \(\mathbb{P}_{\mathrm{pl}}\) be the distribution of the pair \((\mathbf{G},\mathbf{x}_{0})\) as per the description above, Eq. (10), and \(\mathbb{P}_{\mathrm{un}}\) be their distribution according to the original (unplanted) model, where \(\mathbf{G}\in\mathbb{R}^{M\times N}\) is an array of standard Gaussian vectors and \(\mathbf{x}_{0}\) is drawn uniformly at random from \(S(\mathbf{G},\kappa_{0})\), conditional on the latter being non-empty. We denote by \(\mathbb{E}_{\mathrm{pl}}\) and \(\mathbb{E}_{\mathrm{un}}\) the associated expectations. A simple computation reveals that the density of \(\mathbb{P}_{\mathrm{pl}}\) with respect to \(\mathbb{P}_{\mathrm{un}}\) is given by \[\frac{\mathrm{d}\,\mathbb{P}_{\mathrm{pl}}}{\mathrm{d}\,\mathbb{P}_{\mathrm{un}}}(\mathbf{G},\mathbf{x}_{0})=\frac{\left|S(\mathbf{G},\kappa_{0})\right|}{\mathbb{E}\left|S(\mathbf{G},\kappa_{0})\right|}\mathbf{1}\{\mathbf{x}_{0}\in S(\mathbf{G},\kappa_{0})\}\,,\quad\forall\,\mathbf{G}\in\mathbb{R}^{M\times N}\,,\,\mathbf{x}_{0}\in\{-1,+1\}^{N}\,. \tag{13}\] By [2; 33], the above likelihood ratio has constant order log-normal fluctuations for all \(\kappa_{0}>\kappa_{sat}(\alpha)\); this implies in particular that \(\mathbb{P}_{\mathrm{pl}}\) and \(\mathbb{P}_{\mathrm{un}}\) are mutually contiguous, meaning that for any sequence of events \(E_{n}\) (in the common probability space of \(\mathbb{P}_{\mathrm{pl}}\) and \(\mathbb{P}_{\mathrm{un}}\)), \(\mathbb{P}_{\mathrm{pl}}(E_{n})\to 0\) if and only if \(\mathbb{P}_{\mathrm{un}}(E_{n})\to 0\); see for instance [36, Lemma 6.4]. In other words, any high-probability event under the planted distribution \(\mathbb{P}_{\mathrm{pl}}\) is also a high-probability event under the original distribution \(\mathbb{P}_{\mathrm{un}}\).
This will allow us to compute the local entropy in the planted model, where \(\mathbf{x}_{0}\) is uniformly distributed over \(\{-1,+1\}^{N}\) instead of \(S(\mathbf{G},\kappa_{0})\), and then transfer the result of this computation to the original model; see Lemma 2 below. In addition to contiguity, this argument will require a concentration property of the restricted partition function \(Z(\mathbf{x}_{0},\kappa,r)\) with respect to the disorder \(\mathbf{G}\), which we state in more general form as follows: Let \(a_{j}<b_{j}\), \(1\leq j\leq M\) be two sequences of real numbers, let \(m\in[-1,1]\) and consider the partition function \[Z_{N}=\left|\left\{\mathbf{x}\in\{-1,+1\}^{N}\,:\,\sum_{i=1}^{N}x_{i}=N\,m\,,\,\langle\mathbf{g}_{j},\mathbf{x}-m\mathbf{1}\rangle/\sqrt{N}\in[a_{j},b_{j}]\quad\forall 1\leq j\leq M\right\}\right|\,, \tag{14}\] where \(\mathbf{g}_{j}\) are i.i.d. standard Gaussian random vectors in \(\mathbb{R}^{N}\). **Assumption 1**.: _For any \(\delta>0\), \(m\in[-1,1]\) and sequences \((a_{j}),(b_{j})\) as above, there exists a constant \(C>0\) depending only on \(\delta\) and \(\Delta:=\max_{j}(b_{j}-a_{j})\) such that for all \(t>0\),_ \[\mathbb{P}\left(\Big|\log_{N\delta}Z_{N}-\mathbb{E}\log_{N\delta}Z_{N}\Big|\geq CNt\right)\leq\exp\Big(-N\min\{t^{2},t\}\Big)\,. \tag{15}\] In models of disordered systems where the free energy is a smooth function of the Gaussian disorder, this concentration follows from general principles of Gaussian concentration of Lipschitz functions, see e.g. [12]. In particular, a stronger version of the above assumption (with no truncation of the logarithm and where the decay on the right-hand side is sub-Gaussian for all \(t>0\)) holds for the SK and \(p\)-spin models at any positive temperature, and for the family of \(U\)-_perceptrons_ where the activation function \(U\) is positive and differentiable with bounded derivative. However, in our case the hard constraints defining the model make concentration far less obvious. Currently, exponential concentration of the truncated log-partition function is known for the half-space model, i.e., the one-sided perceptron [35], and for the more general family of \(U\)-perceptrons which includes the SBP model under study here, albeit with a non-optimal exponent in \(N\) on the right-hand side of Eq. (15), and with an additional slowly vanishing term on the right-hand side; see [32, Proposition 4.5]. (The latter paper also studies concentration and the sharp-threshold phenomenon for more general disorder distributions.) For our purposes, an essential feature is exponential decay in \(N\,\theta(t)\) where \(\theta:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is any increasing function with \(\theta(0)=0\). We assume \(\theta(t)=\min\{t^{2},t\}\) in the above since this is the sub-exponential tail which is expected, but this is not crucial to the proof. Establishing the above assumption is an interesting mathematical problem on its own and goes beyond the scope of this paper. In the planted model, the local entropy takes the simplified form \[\phi_{N,\delta}^{\mathrm{pl}}(r):=\frac{1}{N}\,\mathbb{E}_{\mathrm{pl}}\big[\log_{N\delta}Z(\mathbf{x}_{0},\kappa,r)\big]\,, \tag{16}\] where the expectation is with respect to \(\mathbf{x}_{0}\) taken uniformly in \(\{-1,+1\}^{N}\) and the conditional distribution \(\mathbf{G}|\mathbf{x}_{0}\) given by Eq. (11).
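For concreteness, the planted pair \((\mathbf{x}_{0},\mathbf{G})\) of Eqs. (10)-(12) can be generated as in the following NumPy sketch of ours; rejection sampling is used for the truncated Gaussians purely for simplicity.

```python
import numpy as np

def sample_planted(N, M, kappa0, rng):
    """Sample x0 uniformly on the hypercube and G as in Eq. (11)."""
    x0 = rng.choice([-1.0, 1.0], size=N)
    # w_a ~ N(0,1) conditioned on |w_a| <= kappa0, Eq. (12), by rejection.
    w = np.empty(M)
    for a in range(M):
        v = rng.standard_normal()
        while abs(v) > kappa0:
            v = rng.standard_normal()
        w[a] = v
    g_tilde = rng.standard_normal((M, N))
    # (I - x0 x0^T / N) applied to each row of g_tilde.
    proj = g_tilde - (g_tilde @ x0)[:, None] * x0[None, :] / N
    G = w[:, None] * x0[None, :] / np.sqrt(N) + proj
    # Sanity check: x0 satisfies the planted constraints with margin kappa0.
    assert np.all(np.abs(G @ x0) <= kappa0 * np.sqrt(N) + 1e-9)
    return x0, G

x0, G = sample_planted(N=200, M=60, kappa0=0.5, rng=np.random.default_rng(1))
```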
We now show that under Assumption 1, \(\phi_{N,\delta}(r)\) and \(\phi_{N,\delta}^{\mathrm{pl}}(r)\) are close: **Lemma 2**.: _Under Assumption 1 we have for all \(r\in(0,1)\),_ \[\lim_{N\to\infty}\left|\phi_{N,\delta}(r)-\phi_{N,\delta}^{\mathrm{pl}}(r)\right|=0\,. \tag{17}\] Proof.: We define the random variable \(X=(1/N)\log_{N\delta}Z(\mathbf{x}_{0},\kappa,r)\). We have \(\mathbb{E}_{\mathrm{pl}}[X]=\phi_{N,\delta}^{\mathrm{pl}}(r)\) and \(\mathbb{E}_{\mathrm{un}}[X]=\phi_{N,\delta}(r)\). Now for \(t>0\) fixed, we consider the event \(A=\left\{\left|X-\mathbb{E}_{\mathrm{pl}}[X]\right|\leq t\right\}\). Under the planted model \(\mathbb{P}_{\mathrm{pl}}\) we may assume that \(\mathbf{x}_{0}=\mathbf{1}\) by symmetry of the Gaussian distribution. Therefore by Assumption 1 (with \(\Delta=(1-2r)\kappa\)) we have \(\mathbb{P}_{\mathrm{pl}}(A^{c})=o_{N}(1)\). It follows that \(\mathbb{P}_{\mathrm{un}}(A^{c})=o_{N}(1)\) by contiguity. Further, observe that \(0\leq X\leq\log 2\) almost surely. Therefore we have \[\left|\mathbb{E}_{\mathrm{pl}}[X]-\mathbb{E}_{\mathrm{un}}[X]\right|\leq\mathbb{E}_{\mathrm{un}}\Big[\left|X-\mathbb{E}_{\mathrm{pl}}[X]\right|\Big]\leq t\,\mathbb{P}_{\mathrm{un}}(A)+(2\log 2)\,\mathbb{P}_{\mathrm{un}}(A^{c})\leq t+o_{N}(1)\,. \tag{18}\] The claim follows by letting \(t\to 0\) after \(N\to\infty\). ## IV Moment estimates in the planted model Now we aim to calculate the limit of \(\phi_{N,\delta}^{\mathrm{pl}}(r)\) as \(N\to\infty\) for small \(\alpha\). To this end we evaluate the first two moments of \(Z(\mathbf{x}_{0},\kappa,r)\) and show that the second moment is only larger than the square of the first moment by an exponential factor which shrinks as \(\alpha\to 0\). Then we show that \(\phi_{N,\delta}^{\mathrm{pl}}(r)\) is close to its annealed approximation using Assumption 1. We first need to define two auxiliary functions. For a jointly distributed pair of discrete random variables \((\theta_{1},\theta_{2})\) let \(h(\theta_{1},\theta_{2})\) be their Shannon entropy. For \(m,q\in(-1,1)\) we define the function \[\varphi_{2}(m,q)=\mathbb{E}\log\mathbb{P}\left(\left|mZ_{0}+Z_{1}\right|\leq\kappa\,,\left|mZ_{0}+Z_{2}\right|\leq\kappa\,\middle|\,Z_{0}\right), \tag{19}\] where \(Z_{0}\sim f\) and the pair \((Z_{1},Z_{2})\) is a centered bivariate Gaussian vector independent of \(Z_{0}\) with covariance \[\begin{bmatrix}1-m^{2}&q-m^{2}\\ q-m^{2}&1-m^{2}\end{bmatrix}. \tag{20}\] **Theorem 3**.: _Let \(\mathbf{w}=(w_{a})_{a=1}^{M}\) be as in Eq. (11). For \(m\in(-1,1)\), \(r=(1-m)/2\) we have_ \[\frac{1}{N}\log\mathbb{E}\left[Z(\mathbf{x}_{0},\kappa,r)\,\middle|\,\mathbf{w}\right]\xrightarrow[N\to\infty]{a.s.}h(m)+\alpha\varphi_{1}(m), \tag{21}\] \[\text{and}\quad\frac{1}{N}\log\mathbb{E}\left[Z(\mathbf{x}_{0},\kappa,r)^{2}\,\middle|\,\mathbf{w}\right]\xrightarrow[N\to\infty]{a.s.}\max_{q\in[-1,1]}\left\{\max_{(\theta_{1},\theta_{2})}h(\theta_{1},\theta_{2})+\alpha\varphi_{2}(m,q)\right\}, \tag{22}\] _where the inner maximization in Eq. (22) is over the joint distribution of two \(\{-1,+1\}\)-valued random variables \((\theta_{1},\theta_{2})\) such that \(\mathbb{E}[\theta_{1}]=\mathbb{E}[\theta_{2}]=m\) and \(\mathbb{E}[\theta_{1}\theta_{2}]=q\)._ The proof of the above theorem relies on a standard use of Stirling's formula, and is postponed to the end of this section. At this point, if the right-hand side of Eq. (22) were equal to twice the right-hand side of Eq. (21), a mild concentration argument would allow us to conclude that \(\phi_{N,\delta}^{\mathrm{pl}}(r)\) is given by Eq. (21) in the large \(N\) limit. This equality would follow if the value \(q=m^{2}\) is a maximizer in Eq.
(22). This does not appear to be the case for any values of \(\alpha,\kappa_{0},\kappa\). However, we show that the difference is vanishing when \(\alpha\to 0\). Let \[\phi_{1}(m)=h(m)+\alpha\varphi_{1}(m)\,, \tag{23}\] \[\phi_{2}(m)=\max_{q\in[-1,1]}\left\{\max_{(\theta_{1},\theta_{2})}h(\theta_{1},\theta_{2})+\alpha\varphi_{2}(m,q)\right\}. \tag{24}\] **Lemma 4**.: _Assume \(\kappa_{0}<1\) and \(\kappa^{2}\geq\kappa_{0}^{2}/(1-\kappa_{0}^{2})\). Then for all \(m\in(-1,1)\),_ \[0\leq\phi_{2}(m)-2\phi_{1}(m)\leq\alpha\log\left(1/p(\kappa)\right), \tag{25}\] _where \(p(\kappa)=\mathbb{P}\left(\left|Z\right|\leq\kappa\right)\), \(Z\sim N(0,1)\). Therefore the above difference tends to zero whenever \(\alpha\to 0,\kappa\to 0\) with \(\alpha\log(1/\kappa)\to 0\), and \(\kappa_{0}/\kappa\to 0\). The latter condition holds for \(\kappa_{0}=\kappa_{sat}(\alpha)\)._ Proof.: We first remark that by sub-additivity of the entropy, \[h(\theta_{1},\theta_{2})\leq 2h(m)\,, \tag{26}\] with equality if and only if the pair \((\theta_{1},\theta_{2})\) is independent, i.e., if \(q=m^{2}\). Moreover we remark that for all \(q\in[-1,1]\), \[2\varphi_{1}(m)\leq\varphi_{2}(m,q)\leq\varphi_{1}(m)\,, \tag{27}\] where the lower bound follows from the Gaussian correlation inequality [26; 34] (with equality if \((Z_{1},Z_{2})\) are independent, i.e., \(q=m^{2}\)) and the upper bound by Cauchy-Schwarz (with equality if \(Z_{1}=Z_{2}\), i.e., \(q=1\)). Using the bounds (26) and (27) we have \[\phi_{2}(m)\leq 2h(m)+\alpha\varphi_{1}(m)\,, \tag{28}\] whence, \[0\leq\phi_{2}(m)-2\phi_{1}(m)\leq-\alpha\varphi_{1}(m)\,. \tag{29}\] It remains to show that \(\varphi_{1}\) is a non-decreasing function so that \(\varphi_{1}(m)\geq\varphi_{1}(0)=\log p(\kappa)\). A simple computation of the derivative of \(\varphi_{1}\) reveals that \[\varphi_{1}^{\prime}(m)=\mathbb{E}\left[\frac{a_{+}^{\prime}(m)e^{-a_{+}^{2}(m)/2}-a_{-}^{\prime}(m)e^{-a_{-}^{2}(m)/2}}{\int_{a_{-}(m)}^{a_{+}(m)}e^{-u^{2}/2}\mathrm{d}u}\right],\ \ \text{with}\ \ \ a_{\pm}(m)=\frac{-mw_{0}\pm\kappa}{\sqrt{1-m^{2}}}\,. \tag{30}\] Using \(a_{\pm}^{\prime}(m)=\frac{-w_{0}\pm m\kappa}{(1-m^{2})^{3/2}}\), the numerator of the above expression can be written as follows: \[a_{+}^{\prime}(m)e^{-a_{+}^{2}(m)/2}-a_{-}^{\prime}(m)e^{-a_{-}^{2}(m)/2}=\frac{1}{(1-m^{2})^{3/2}}\Big((m\kappa-w_{0})e^{-a_{+}^{2}(m)/2}+(m\kappa+w_{0})e^{-a_{-}^{2}(m)/2}\Big)\,. \tag{31}\] We note that this expression is even in \(w_{0}\) so we assume \(w_{0}\geq 0\) without loss of generality. Now, since \(w_{0}\leq\kappa_{0}\) a.s. the above expression is nonnegative if \(m\geq\kappa_{0}/\kappa\). Now let us consider the remaining case \(m<\kappa_{0}/\kappa\). Processing the numerator further we obtain \[a_{+}^{\prime}(m)e^{-a_{+}^{2}(m)/2}-a_{-}^{\prime}(m)e^{-a_{-}^{2}(m)/2} \tag{32}\] \[=\frac{e^{-(\kappa^{2}+m^{2}w_{0}^{2})/(2(1-m^{2}))}}{(1-m^{2})^{3/2}}\Big((m\kappa-w_{0})e^{m\kappa w_{0}/(1-m^{2})}+(m\kappa+w_{0})e^{-m\kappa w_{0}/(1-m^{2})}\Big) \tag{33}\] \[=\frac{2\,e^{-(\kappa^{2}+m^{2}w_{0}^{2})/(2(1-m^{2}))}\cosh\big(m\kappa w_{0}/(1-m^{2})\big)}{(1-m^{2})^{3/2}}\Big(m\kappa-w_{0}\tanh\big(m\kappa w_{0}/(1-m^{2})\big)\Big)\,. \tag{34}\] From the bound \(\tanh(x)\leq x\) for \(x\geq 0\) we see that \[m\kappa-w_{0}\tanh\big(m\kappa w_{0}/(1-m^{2})\big)\geq m\kappa\Big(1-\frac{w_{0}^{2}}{1-m^{2}}\Big)\,. \tag{35}\] This is non-negative as long as \(1-\kappa_{0}^{2}/(1-m^{2})\geq 0\).
Since \(m<\kappa_{0}/\kappa\), this is verified when \(1-(\kappa_{0}/\kappa)^{2}\geq\kappa_{0}^{2}\), i.e., when \(\kappa^{2}\geq\kappa_{0}^{2}/(1-\kappa_{0}^{2})\). Next, we are ready to prove the main result of this section: **Theorem 5**.: _Under the assumptions of Theorem 1 we have_ \[\lim_{\delta\to 0}\limsup_{\alpha\to 0}\limsup_{N\to\infty}\left|\phi_{N,\delta}^{\mathrm{pl}}(r)-\max\big\{\phi_{1}(m)\,,\,\delta\big\}\right|=0\,,\qquad r=(1-m)/2\,, \tag{36}\] _where the limit in \(\alpha\) is such that \(\alpha\to 0,\kappa\to 0\) with \(\alpha\log(1/\kappa)\to 0\)._ We see that Theorem 1 follows from Theorem 5 and Lemma 2. Now we prove Theorem 5: Proof.: We write \(Z=Z(\mathbf{x}_{0},\kappa,r)\). All probabilities and expectations are taken under \(\mathbb{P}_{\mathrm{pl}}\). For fixed \(t,t^{\prime}>0\) to be chosen later we define the events \[A=\left\{\frac{1}{N}\log_{N\delta}\mathbb{E}_{\mathrm{pl}}\big[Z\,|\,\mathbf{w}\big]-\frac{1}{N}\log_{N\delta}Z\leq\frac{\log 2}{N}\right\}\,,\quad B=\left\{\frac{1}{N}\log_{N\delta}Z-\phi_{N,\delta}^{\mathrm{pl}}(r)\leq t^{\prime}\right\}\,, \tag{37}\] \[C=\left\{\max\big\{\phi_{1}(m)\,,\,\delta\big\}-\frac{1}{N}\log_{N\delta}\mathbb{E}_{\mathrm{pl}}\big[Z\,|\,\mathbf{w}\big]\leq t\right\}\,,\ \ \text{and}\ \ D=\left\{\frac{1}{N}\log\mathbb{E}_{\mathrm{pl}}\big[Z^{2}\,|\,\mathbf{w}\big]-\phi_{2}(m)\leq t\right\}\,. \tag{38}\] First, we note that by Jensen's inequality, \[\frac{1}{N}\,\mathbb{E}_{\mathrm{pl}}\big[\log_{N\delta}Z\,|\,\mathbf{w}\big]\leq\frac{1}{N}\log\mathbb{E}_{\mathrm{pl}}\big[\max\{e^{N\delta},Z\}\,|\,\mathbf{w}\big] \tag{39}\] \[\leq\frac{1}{N}\log\Big(e^{N\delta}+\mathbb{E}_{\mathrm{pl}}\big[Z\,|\,\mathbf{w}\big]\Big) \tag{40}\] \[\leq\frac{1}{N}\log\Big(2\max\big\{e^{N\delta},\mathbb{E}_{\mathrm{pl}}\big[Z\,|\,\mathbf{w}\big]\big\}\Big) \tag{41}\] \[=\frac{\log 2}{N}+\frac{1}{N}\log_{N\delta}\mathbb{E}_{\mathrm{pl}}\big[Z\,|\,\mathbf{w}\big]. \tag{42}\] Since by Theorem 3, \(\frac{1}{N}\log\mathbb{E}_{\mathrm{pl}}\big[Z\,|\,\mathbf{w}\big]\to\phi_{1}(m)\) almost surely as \(N\to\infty\), we have by dominated convergence, \[\limsup_{N\to\infty}\phi_{N,\delta}^{\mathrm{pl}}(r)\leq\max\big\{\phi_{1}(m),\,\delta\big\}\,. \tag{43}\] Next, under \(A\cap B\cap C\) we have \[\max\big\{\phi_{1}(m),\,\delta\big\}-\phi_{N,\delta}^{\mathrm{pl}}(r)\leq\frac{\log 2}{N}+t+t^{\prime}\,. \tag{44}\] Now the goal is to show that \(\mathbb{P}_{\mathrm{pl}}(A\cap B\cap C)>0\). Let \(\mathbf{w}\) be such that \(C\cap D\) holds. It follows by the Paley-Zygmund inequality that \[\mathbb{P}_{\mathrm{pl}}(A\,|\,\mathbf{w})=\mathbb{P}_{\mathrm{pl}}\Big(\max\big\{e^{N\delta},Z\big\}\geq\max\big\{e^{N\delta},\mathbb{E}_{\mathrm{pl}}[Z\,|\,\mathbf{w}]\big\}/2\,\big|\,\mathbf{w}\Big) \tag{45}\] \[\geq\mathbb{P}_{\mathrm{pl}}\Big(Z\geq\max\big\{e^{N\delta},\mathbb{E}_{\mathrm{pl}}[Z\,|\,\mathbf{w}]\big\}/2\,\big|\,\mathbf{w}\Big) \tag{46}\] \[\geq\frac{\max\big\{\mathbb{E}_{\mathrm{pl}}[Z\,|\,\mathbf{w}]^{2},e^{2N\delta}\big\}}{4\,\mathbb{E}_{\mathrm{pl}}[Z^{2}\,|\,\mathbf{w}]} \tag{47}\] \[\geq\frac{1}{4}\frac{\mathbb{E}_{\mathrm{pl}}[Z\,|\,\mathbf{w}]^{2}}{\mathbb{E}_{\mathrm{pl}}[Z^{2}\,|\,\mathbf{w}]} \tag{48}\] \[\geq\frac{1}{4}\exp\Big(-N\big(\phi_{2}(m)-2\phi_{1}(m)+3t+2\delta\big)\Big)\,, \tag{49}\] where the last inequality follows from \(C\cap D\). From Lemma 4 we have \(\phi_{2}(m)-2\phi_{1}(m)\leq\alpha\log(1/p(\kappa))\) when \(\kappa^{2}\geq\kappa_{0}^{2}/(1-\kappa_{0}^{2})\). Next, by Theorem 3 we have \(\mathbb{P}_{\mathrm{pl}}(C\cap D)\geq 1/2\) for \(N\) large enough (it is actually \(1-o_{N}(1)\)).
It follows that \[\mathbb{P}_{\mathrm{pl}}(A\cap C\cap D)\geq\frac{1}{8}\exp\Big(-N\big(\alpha\log(1/p(\kappa))+3t+2\delta\big)\Big)\,. \tag{50}\] On the other hand, by our concentration Assumption 1 we have \(\mathbb{P}_{\mathrm{pl}}(B^{c})\leq\exp(-N\min\{t^{\prime 2}/C^{2},t^{\prime}/C\})\), where \(C=C(\delta,\Delta)\), \(\Delta=(1-2r)\kappa\), is the constant appearing in the assumption. We then have by a union bound \[\mathbb{P}_{\mathrm{pl}}(A\cap B\cap C\cap D)\geq\mathbb{P}_{\mathrm{pl}}(A\cap C\cap D)-\mathbb{P}_{\mathrm{pl}}(B^{c})\geq\frac{1}{8}\exp\Big(-N\big(\alpha\log(1/p(\kappa))+3t+2\delta\big)\Big)-\exp\big(-N\min\{t^{\prime 2}/C^{2},t^{\prime}/C\}\big)\,. \tag{51}\] Now we choose \(t=\frac{1}{3}(2\delta+\alpha\log(1/p(\kappa)))\) and \(t^{\prime}=t^{\prime}_{N}\) such that \(\min\{t^{\prime 2}_{N}/C^{2},t^{\prime}_{N}/C\}=(\log 16)/N+2\alpha\log(1/p(\kappa))+4\delta\). We obtain \[\mathbb{P}_{\mathrm{pl}}(A\cap B\cap C\cap D)\geq\frac{1}{16}\exp\big(-2N\alpha\log(1/p(\kappa))-4N\delta\big)>0\,. \tag{52}\] Therefore the bound (44) holds with this choice of parameters: \[\max\big\{\phi_{1}(m),\,\delta\big\}-\phi_{N,\delta}^{\mathrm{pl}}(r)\leq\frac{\log 2}{N}+\frac{1}{3}\alpha\log(1/p(\kappa))+\frac{2}{3}\delta+t^{\prime}_{N}\,, \tag{53}\] and we obtain \[\liminf_{N\to\infty}\phi_{N,\delta}^{\mathrm{pl}}(r)\geq\max\big\{\phi_{1}(m),\,\delta\big\}-\frac{1}{3}\alpha\log(1/p(\kappa))-\frac{2}{3}\delta-t^{\prime}_{\infty}\,, \tag{54}\] where \(t^{\prime}_{\infty}:=\lim_{N\to\infty}t^{\prime}_{N}\). Letting \(\alpha\to 0,\kappa\to 0\) such that \(\alpha\log(1/\kappa)\to 0\) and then \(\delta\to 0\) concludes the proof. Proof of Theorem 3.: Let us start with the first moment. First, we have \[\mathbb{E}\Big[Z(\mathbf{x}_{0},\kappa,r)\,\Big|\,\mathbf{w}\Big]=\sum_{\mathbf{x}\in\{-1,+1\}^{N}}\mathbb{E}\Big[\mathbf{1}\big\{\mathbf{x}\in S(\mathbf{G},\kappa)\,,\langle\mathbf{x}_{0},\mathbf{x}\rangle=Nm\big\}\,\big|\,\mathbf{w}\Big] \tag{55}\] \[=\frac{1}{2^{N}}\sum_{\mathbf{x}_{0},\mathbf{x}\in\{-1,+1\}^{N}}\mathbf{1}\big\{\langle\mathbf{x}_{0},\mathbf{x}\rangle=Nm\big\}\prod_{a=1}^{M}\mathbb{P}\Big(\big|\langle\mathbf{g}_{a},\mathbf{x}\rangle\big|\leq\kappa\sqrt{N}\,\big|\,\mathbf{x}_{0},w_{a}\Big)\,. \tag{56}\] We further have \[\frac{1}{\sqrt{N}}\langle\mathbf{g}_{a},\mathbf{x}\rangle=\frac{w_{a}}{N}\langle\mathbf{x},\mathbf{x}_{0}\rangle+\frac{1}{\sqrt{N}}\langle\tilde{\mathbf{g}}_{a},\big(\mathbf{I}-\frac{1}{N}\mathbf{x}_{0}\mathbf{x}_{0}^{\top}\big)\mathbf{x}\rangle \tag{57}\] \[\overset{\mathrm{d}}{=}mw_{a}+\sqrt{1-m^{2}}Z\,, \tag{58}\] where \(Z\sim N(0,1)\) independently. Using Stirling's formula, we obtain \[\frac{1}{N}\log\mathbb{E}\Big[Z(\mathbf{x}_{0},\kappa,m)\big|\,\mathbf{w}\Big]=h(m)+\frac{1}{N}\sum_{a=1}^{M}\log\mathbb{P}\big(|mw_{a}+\sqrt{1-m^{2}}Z|\leq\kappa\,\big|\,w_{a}\big)+o_{N}(1)\,. \tag{59}\] An application of the strong law of large numbers yields the formula in Eq. (21). We now calculate the second moment: \[\mathbb{E}\Big[Z(\mathbf{x}_{0},\kappa,m)^{2}\,\big|\,\mathbf{w}\Big]=\sum_{\mathbf{x}^{1},\mathbf{x}^{2}\in\{-1,+1\}^{N}}\mathbb{E}\Big[\mathbf{1}\big\{\mathbf{x}^{i}\in S(G,\kappa)\,,\langle\mathbf{x}_{0},\mathbf{x}^{i}\rangle=Nm,i=1,2\big\}\,\big|\,\mathbf{w}\Big] \tag{60}\] \[=\frac{1}{2^{N}}\sum_{\mathbf{x}_{0},\mathbf{x}^{1},\mathbf{x}^{2}\in\{-1,+1\}^{N}}\mathbf{1}\big\{\langle\mathbf{x}_{0},\mathbf{x}^{i}\rangle=Nm,i=1,2\big\}\prod_{a=1}^{M}\mathbb{P}\Big(\big|\langle\mathbf{g}_{a},\mathbf{x}^{i}\rangle\big|\leq\kappa\sqrt{N}\,,i=1,2\,\big|\,\mathbf{x}_{0},w_{a}\Big)\,.
\tag{61}\] Fix \(m,q\in[-1,1]\) and three vectors \(\mathbf{x}_{0},\mathbf{x}^{1}\), \(\mathbf{x}^{2}\) such that \(\langle\mathbf{x}^{1},\mathbf{x}^{2}\rangle=Nq\), and \(\langle\mathbf{x}^{i},\mathbf{x}_{0}\rangle=Nm\), for \(i=1,2\). Then as before, \[\frac{1}{\sqrt{N}}\langle\mathbf{g}_{a},\mathbf{x}^{i}\rangle =\frac{w_{a}}{N}\langle\mathbf{x}^{i},\mathbf{x}_{0}\rangle+\frac{1}{ \sqrt{N}}\langle\tilde{\mathbf{g}}_{a},\big{(}\mathbf{I}-\frac{1}{N}\mathbf{x}_{0}\mathbf{x}_{ 0}^{\top}\big{)}\mathbf{x}^{i}\rangle \tag{62}\] \[\overset{\mathrm{d}}{=}mw_{a}+Z_{i}\,, \tag{63}\] where the pair \((Z_{1},Z_{2})\) is defined as in Eq. (20). Furthermore, by symmetry we can assume that \(\mathbf{x}_{0}=\mathbf{1}\), and we define the set \[C(m,q)=\left\{\mathbf{x}^{1},\mathbf{x}^{2}\in\{-1,+1\}^{N}\,:\,\langle\mathbf{x}^{1},\bm {x}^{2}\rangle=Nq\,,\langle\mathbf{1},\mathbf{x}^{i}\rangle=Nm\,,i=1,2\right\}. \tag{64}\] We have \[\mathbb{E}\Big{[}Z(\mathbf{x}_{0},\kappa,m)^{2}\,\big{|}\,\mathbf{w}\Big{]}=\sum_{q \in[-1,1]\cap\mathbb{Z}/N}\Big{|}C(m,q)\Big{|}\prod_{a=1}^{M}\mathbb{P}\Big{(}\big{|} mw_{a}+Z_{i}\big{|}\leq\kappa\,,i=1,2\,\big{|}\,w_{a}\Big{)}\,. \tag{65}\] Therefore \[\frac{1}{N}\log\mathbb{E}\Big{[}Z(\mathbf{x}_{0},\kappa,m)^{2}\,\big{|}\,\mathbf{w} \Big{]}=\max_{q\in[-1,1]\cap\mathbb{Z}/N}\Bigg{\{}\frac{1}{N}\log\big{|}C(m,q)\big{|} +\frac{1}{N}\sum_{a=1}^{M}\log\mathbb{P}\Big{(}\big{|}mw_{a}+Z_{i}\big{|}\leq \kappa\,,i=1,2\,\big{|}\,w_{a}\Big{)}\Bigg{\}}+o_{N}(1)\,. \tag{66}\] Next we compute the size of \(C(m,q)\). Using Stirling's formula we find \[\frac{1}{N}\log\big{|}C(m,q)\big{|}=\max_{(\theta_{1},\theta_{2})}h(\theta_{1},\theta_{2})+o_{N}(1)\,, \tag{67}\] where the maximization is as in Eq. (22). Moreover, letting \(\theta(w):=\log\mathbb{P}\left(\big{|}mw+Z_{i}\big{|}\leq\kappa\,,i=1,2\, \big{|}\,w\right)\), the average \(X_{N}=\frac{1}{N}\sum_{a=1}^{M}\theta(w_{a})\) has a subGaussian tail in \(N\), i.e., \(\mathbb{P}(|X_{N}-\mathbb{E}[X_{N}]|\geq t)\leq 2e^{-Nt^{2}/(2C)}\) for some constant \(C>0\), by the Azuma-Hoeffding inequality. Since the maximum in Eq. (66) is taken over no more than \(2N+1\) values, we can let \(t=t_{N}\to 0\) slowly with \(N\) such that \(\sum_{N}Ne^{-Nt_{N}^{2}/(2C)}<\infty\). The Borel-Cantelli lemma and continuity allow us to conclude the proof. ## V Analysing the local entropy and its thresholds In Theorem 1 we have shown that the local entropy \(\phi_{N,\delta}(r)\) is asymptotically given by the formula \(\max\{0,\phi_{1}(r)\}\) in the limit \(N\to\infty\), \(\alpha\to 0\) and then \(\delta\to 0\), where \[\phi_{1}\bigg{(}r=\frac{1-m}{2}\bigg{)}=h(m)+\alpha\varphi_{1}(m)\,. \tag{68}\] We will now focus on the analysis of this function. In App. A we derive the local entropy for generic values of these parameters and show a posteriori how we can recover the limit presented above. A first step to simplify our analysis is to rewrite \(\varphi_{1}(m)\) in the following fashion \[\varphi_{1}(m)=\int\mathcal{D}Z_{0}\frac{\mathbf{1}\{|Z_{0}|\leq\kappa_{0}\}}{ \mathcal{N}_{\kappa_{0}}}\,\log\left\{\frac{1}{2}\mathrm{erf}\left[\frac{ \kappa+Z_{0}}{\sqrt{2(1-m^{2})}}\right]+\frac{1}{2}\mathrm{erf}\left[\frac{ \kappa-Z_{0}}{\sqrt{2(1-m^{2})}}\right]\right\}\,, \tag{69}\] where \(\mathcal{N}_{\kappa_{0}}=\mathbb{P}(|Z_{0}|\leq\kappa_{0})\), and we let \(\mathcal{D}Z_{0}=\frac{e^{-Z_{0}^{2}/2}}{\sqrt{2\pi}}\mathrm{d}Z_{0}\) denote the Gaussian measure. We recall that the error function is \[\mathrm{erf}[x]=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}dt\,. 
\tag{70}\] In fact, when \(\alpha\ll 1\) the local entropy is a non-trivial function for only a restricted range of parameters \(\kappa\) and \(m\). For this to happen the entropic and energetic contributions have to be comparable. This leads us to introduce a rescaling of the form \[1-m^{2}=-\alpha\tilde{r}/\log(\alpha)\,,\quad\kappa_{0}=\tilde{\kappa}_{0}\sqrt{- \alpha/\log(\alpha)}\,,\quad\text{and}\quad\kappa=\tilde{\kappa}\sqrt{-\alpha /\log(\alpha)}\,, \tag{71}\] in order to have both \(\varphi_{1}(m)\) and \(h(m)\) contributing as a \(\mathcal{O}(\alpha)\) in the local entropy when \(\alpha\ll 1\). This first indicates that we can restrict our analysis to a regime where \(1-m\ll 1\). Consequently, the entropic term simplifies to \[h(m) =-\frac{1-m}{2}\log\left(\frac{1-m}{2}\right)+o(\alpha) \tag{72}\] \[=\frac{\alpha\tilde{r}}{4}+o(\alpha)\,.\] Then, using this rescaling we obtain the simplified form of the local entropy and the equation for its local maxima (at \(\tilde{r}\neq 0\)) \[\phi_{1}\bigg{(}r=\frac{-\alpha\tilde{r}}{4\log(\alpha)}\bigg{)} =\frac{\alpha\tilde{r}}{4}+\frac{\alpha}{\mathcal{N}_{\tilde{\kappa}_{0}} }\int\mathcal{D}Z_{0}\,\mathbf{1}\{|Z_{0}|\leq\tilde{\kappa}_{0}\}\,\log \left\{\frac{1}{2}\mathrm{erf}\left[\frac{\tilde{\kappa}+Z_{0}}{\sqrt{2\tilde {r}}}\right]+\frac{1}{2}\mathrm{erf}\left[\frac{\tilde{\kappa}-Z_{0}}{\sqrt{2 \tilde{r}}}\right]\right\}+o(\alpha)\,, \tag{73}\] \[1 =\frac{4}{\mathcal{N}_{\tilde{\kappa}_{0}}}\int\mathcal{D}Z_{0}\frac{ \mathbf{1}\{|Z_{0}|\leq\tilde{\kappa}_{0}\}}{\mathrm{erf}\left[\frac{\tilde{\kappa}+Z _{0}}{\sqrt{2\tilde{r}}}\right]+\mathrm{erf}\left[\frac{\tilde{\kappa}-Z_{0}} {\sqrt{2\tilde{r}}}\right]}\left\{\frac{(\tilde{\kappa}+Z_{0})e^{\frac{-( \tilde{\kappa}+Z_{0})^{2}}{2\tilde{r}}}}{\sqrt{2\pi}\tilde{r}^{3/2}}+\frac{( \tilde{\kappa}-Z_{0})e^{\frac{-(\tilde{\kappa}-Z_{0})^{2}}{2\tilde{r}}}}{ \sqrt{2\pi}\tilde{r}^{3/2}}\right\}+o(\alpha) \tag{74}\] with again \(\mathcal{N}_{\tilde{\kappa}_{0}}=\mathbb{P}(|Z_{0}|\leq\tilde{\kappa}_{0})\). The presence of this local maximum in the potential tells us that there is a cluster of atypical solutions with margin \(\kappa\) around each typical configuration with margin \(\kappa_{0}\). In the following, we will denote by \(s[\tilde{\kappa}_{0},\tilde{\kappa}]\) the local entropy evaluated at this maximum. In Fig. 1 we display the behavior of the local entropy \(s[\tilde{\kappa}_{0},\tilde{\kappa}]\) as a function of \(\tilde{\kappa}_{0}\) and \(\tilde{\kappa}\). As outlined by the dashed line, clusters exist only for a finite span of values of \(\tilde{\kappa}\), which depends on the margin \(\tilde{\kappa}_{0}\) of the reference vector \(\mathbf{x}_{0}\). Defining \(\tilde{\kappa}_{\mathrm{entr}}(\tilde{\kappa}_{0})\) as the critical value of \(\tilde{\kappa}\) at which clusters disappear, we see from the figure that \(\tilde{\kappa}_{\mathrm{entr}}\equiv\min_{\tilde{\kappa}_{0}}\tilde{\kappa}_ {\mathrm{entr}}(\tilde{\kappa}_{0})=\tilde{\kappa}_{\mathrm{entr}}(\tilde{ \kappa}_{0}=0)\). In other words, the first clusters to disappear are the ones formed around a reference vector at \(\tilde{\kappa}_{0}=0\). 
In particular, this corresponds to planting at \(\tilde{\kappa}_{0}=\tilde{\kappa}_{\mathrm{sat}}\), as we have \[\kappa_{\mathrm{sat}}(\alpha)\underset{\alpha\to 0}{\sim}\sqrt{\frac{\pi}{2}}\,e^{-\log(2)/\alpha}\,, \tag{75}\] so that the rescaled margin \(\tilde{\kappa}_{\mathrm{sat}}=\kappa_{\mathrm{sat}}\sqrt{-\log(\alpha)/\alpha}\) vanishes as \(\alpha\to 0\). In the two following sections, we focus our analysis on these two thresholds in the case where \(\tilde{\kappa}_{0}=0\). Again, this choice is justified by the fact that the _energetic_ and _entropic_ thresholds happen first when planting at \(\tilde{\kappa}_{0}=\tilde{\kappa}_{\mathrm{sat}}(\alpha)\rightarrow_{\alpha\to 0}0\), i.e. \[\tilde{\kappa}_{\text{ener}}(0) =\min_{\tilde{\kappa}_{0}}\tilde{\kappa}_{\text{ener}}(\tilde{ \kappa}_{0})\,, \tag{76}\] \[\tilde{\kappa}_{\text{entr}}(0) =\min_{\tilde{\kappa}_{0}}\tilde{\kappa}_{\text{entr}}(\tilde{ \kappa}_{0})\,. \tag{77}\] As for the entropic threshold, we will use in the following the shorthand \(\tilde{\kappa}_{\text{ener}}(0)=\tilde{\kappa}_{\text{ener}}\). 
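Before deriving the two thresholds analytically, the short Python sketch below evaluates the rescaled local entropy of Eq. (73) at \(\tilde{\kappa}_{0}=0\) (the form written as Eq. (78) in the next subsection) and locates, by bisection, the value of \(\tilde{\kappa}\) at which the entropy stops being negative and the value at which its non-trivial maximum disappears. Grid ranges and brackets are arbitrary choices, not taken from the paper.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

# Rescaled local entropy at kappa0_tilde = 0, divided by alpha:
# phi(r)/alpha = r/4 + log erf(kt / sqrt(2 r))
def phi(r, kt):
    return r / 4.0 + np.log(erf(kt / np.sqrt(2.0 * r)))

# phi'(r)/alpha = 1/4 - h(r), so a non-trivial local maximum exists
# iff max_r h(r) > 1/4 (the tangency condition of the entropic threshold).
def h(r, kt):
    u = kt ** 2 / (2.0 * r)
    return kt * np.exp(-u) / (np.sqrt(2.0 * np.pi) * r ** 1.5 * erf(np.sqrt(u)))

r = np.linspace(1e-3, 30.0, 200_000)   # arbitrary grid covering the dip and the hump

# Energetic threshold: smallest kt for which min_r phi(r) reaches zero.
kt_ener = brentq(lambda kt: phi(r, kt).min(), 1.0, 1.4)
# Entropic threshold: kt at which the local maximum disappears.
kt_entr = brentq(lambda kt: h(r, kt).max() - 0.25, 1.0, 2.0)

print(kt_ener, kt_entr)   # approximately 1.2385 and 1.4288
```

The two printed values agree with the asymptotic constants derived in the next two subsections.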
### Energetic threshold The _energetic_ threshold occurs when the local entropy \(\phi_{1}(r)\) is negative in a range of intermediate distances. This means that we want to find the exact point where the minimum of the entropy (excluding \(m=1\)) is zero. We start by setting \(\tilde{\kappa}_{0}=0\) in Eq. (73) to obtain the simplified form of the local entropy \[\phi_{1}\bigg{(}r=\frac{-\alpha\tilde{r}}{4\log(\alpha)}\bigg{)}=\frac{\alpha \tilde{r}}{4}+\alpha\log\bigg{[}\text{erf}\bigg{(}\frac{\tilde{\kappa}_{ \text{ener}}}{\sqrt{2\tilde{r}}}\bigg{)}\bigg{]}+o(\alpha)\,. \tag{78}\] The potential is then null when \[1=-4\frac{\log\Big{[}\text{erf}\Big{(}\frac{\tilde{\kappa}_{\text{ener}}}{ \sqrt{2\tilde{r}}}\Big{)}\Big{]}}{\tilde{r}}+o(1) \tag{79}\] and the r.h.s. of the equation above has a maximum for \[\frac{\log\Big{[}\text{erf}\Big{(}\frac{\tilde{\kappa}_{\text{ener}}}{\sqrt{2 \tilde{r}}}\Big{)}\Big{]}}{\tilde{r}^{2}}+\frac{\tilde{ \kappa}_{\text{ener}}e^{\frac{-\tilde{\kappa}_{\text{ener}}^{2}}{2\tilde{r}}} }{\sqrt{2\pi}\,\text{erf}\Big{(}\frac{\tilde{\kappa}_{\text{ener}}}{\sqrt{2\tilde{r}}} \Big{)}\tilde{r}^{5/2}}=o(1)\,. \tag{80}\] Finally, if we solve the two previous equations, we obtain the set of values \(\{\tilde{\kappa}_{\text{ener}},\tilde{r}\}\) for which the potential stops being negative for any value of the magnetization \(m\). Numerically we obtain \[\kappa_{\text{ener}} =\tilde{\kappa}_{\text{ener}}\sqrt{-\alpha/\log(\alpha)}+o\left( \frac{\alpha}{\log(\alpha)}\right)\approx 1.238518\sqrt{-\alpha/\log(\alpha)}\,, \tag{81}\] \[1-m^{2} =-\alpha\tilde{r}/\log(\alpha)+o\left(\frac{\alpha}{\log(\alpha) }\right)\approx-1.351180\,\alpha/\log(\alpha)\,. \tag{82}\] ### Entropic threshold The _entropic_ threshold occurs when the non-trivial local maximum (at \(\tilde{r}\neq 0\)) of the free entropy ceases to exist. We recall that the local entropy for \(\tilde{\kappa}_{0}=0\) reads \[\phi\bigg{(}r=\frac{-\alpha\tilde{r}}{4\log(\alpha)}\bigg{)}=\frac{\alpha \tilde{r}}{4}+\alpha\log\bigg{[}\text{erf}\bigg{(}\frac{\tilde{\kappa}_{\text {entr}}}{\sqrt{2\tilde{r}}}\bigg{)}\bigg{]}+o(\alpha) \tag{83}\] with a non-trivial local maximum obtained by solving the fixed-point equation \[\frac{\alpha}{4}=\alpha\frac{\tilde{\kappa}_{\text{entr}}e^{\frac{-\tilde{ \kappa}_{\text{entr}}^{2}}{2\tilde{r}}}}{\sqrt{2\pi}\tilde{r}^{3/2}\text{erf} \Big{(}\frac{\tilde{\kappa}_{\text{entr}}}{\sqrt{2\tilde{r}}}\Big{)}}+o(\alpha)\,. \tag{84}\] Again, the r.h.s. of the previous equation has a maximum for \[\frac{\tilde{\kappa}_{\text{entr}}^{2}}{2\tilde{r}^{2}}-\frac{3}{2\tilde{r}}+ \frac{\tilde{\kappa}_{\text{entr}}e^{\frac{-\tilde{\kappa}_{\text{entr}}^{2}}{2 \tilde{r}}}}{\sqrt{2\pi}\tilde{r}^{3/2}\text{erf}\Big{(}\frac{\tilde{\kappa}_ {\text{entr}}}{\sqrt{2\tilde{r}}}\Big{)}}=o(1)\,. \tag{85}\] Finally, we can solve numerically the two previous equations and we obtain \[\kappa_{\text{entr}} =\tilde{\kappa}_{\text{entr}}\sqrt{-\alpha/\log(\alpha)}+o\left( \frac{\alpha}{\log(\alpha)}\right)\approx 1.428754\sqrt{-\alpha/\log(\alpha)}\,, \tag{86}\] \[1-m^{2} =-\alpha\tilde{r}/\log(\alpha)+o\left(\frac{\alpha}{\log(\alpha) }\right)\approx-0.782487\,\alpha/\log(\alpha)\,. \tag{87}\] ### Complexity versus entropy In this section, we focus on the relation between the complexity of the clusters around the high-margin solutions and their local entropy. 
We define the complexity as the logarithm of the number of clusters around solutions at margin \(\kappa_{0}\), normalized by \(N\), and we recall that the local entropy of a cluster is the value of the local entropy \(\phi_{1}(r=\frac{1-m}{2})\) at the nearest local maximum to the reference solution. By contiguity to the planted model, the clusters of solutions with margin \(\kappa>\kappa_{0}\) living around two different planted configurations are distant, since the reference configurations are nearly orthogonal with high probability. Thus, heuristically, counting their exponential number (or complexity) simply consists of enumerating the number of typical solutions at \(\kappa_{0}\) we can plant. Taking these previous considerations into account, the obtained clusters have a complexity that depends solely on \(\kappa_{0}\), while their local entropy is a function of \(\kappa\) and \(\kappa_{0}\). Fixing \(\kappa\) while tuning \(\kappa_{0}\) enables us to scan across sets of clusters with different complexities and local entropies, all containing atypical solutions of the symmetric binary perceptron with margin \(\kappa\). More specifically, the complexity is \[\Sigma[\kappa_{0}]=\log 2+\alpha\log\mathbb{P}(|Z|\leq\kappa_{0})=\log 2+ \alpha\log\left\{\mathrm{erf}\!\left[\frac{\kappa_{0}}{\sqrt{2}}\right] \right\}\,, \tag{88}\] and the entropy of a cluster is \[\begin{split}s[\kappa_{0},\kappa]=&-\frac{1-m}{2} \log\left(\frac{1-m}{2}\right)+\frac{\alpha}{\mathcal{N}_{\kappa_{0}}}\int \mathcal{D}Z_{0}\,\mathbf{1}\{|Z_{0}|\leq\kappa_{0}\}\,\log\left\{\frac{1}{2 }\mathrm{erf}\!\left[\frac{\kappa+Z_{0}}{\sqrt{2(1-m^{2})}}\right]+\frac{1}{2 }\mathrm{erf}\!\left[\frac{\kappa-Z_{0}}{\sqrt{2(1-m^{2})}}\right]\right\}\\ &+o\left(\frac{1-m}{2}\log\!\left(\frac{1-m}{2}\right)\right), \end{split} \tag{89}\] in which \(m\) is evaluated with the fixed-point equation \[-\log\!\left(\frac{1-m}{2}\right)+o\left(\log\left[\frac{1-m}{2}\right]\right) =\frac{4\alpha m}{\mathcal{N}_{\kappa_{0}}}\int\mathcal{D}B\frac{\mathbf{1} \{|B|\leq\kappa_{0}\}}{\mathrm{erf}\!\left[\frac{\kappa+B}{\sqrt{2(1-m^{2})} }\right]+\mathrm{erf}\!\left[\frac{\kappa-B}{\sqrt{2(1-m^{2})}}\right]}\left\{ \frac{(\kappa+B)e^{\frac{-(\kappa+B)^{2}}{2(1-m^{2})}}}{\sqrt{2\pi}(1-m^{2}) ^{3/2}}+\frac{(\kappa-B)e^{\frac{-(\kappa-B)^{2}}{2(1-m^{2})}}}{\sqrt{2 \pi}(1-m^{2})^{3/2}}\right\}\,. 
\tag{90}\] Using the rescaling from the previous section we can finally write these two functions at leading order in \(\alpha\to 0\): \[s[\tilde{\kappa}_{0},\tilde{\kappa}]=\frac{\alpha\tilde{r}}{4}+\frac{\alpha}{\mathcal{N}_{\tilde{ \kappa}_{0}}}\int\mathcal{D}B\,\mathbf{1}\{|B|\leq\tilde{\kappa}_{0}\}\,\log\left\{ \frac{1}{2}\mathrm{erf}\!\left[\frac{\tilde{\kappa}+B}{\sqrt{2\tilde{r}}} \right]+\frac{1}{2}\mathrm{erf}\!\left[\frac{\tilde{\kappa}-B}{\sqrt{2\tilde{ r}}}\right]\right\}+o(\alpha)\,, \tag{91}\] \[\Sigma[\tilde{\kappa}_{0}] =\log(2)+\frac{\alpha}{2}\log\!\left(\frac{-\alpha}{\log\alpha} \right)+\alpha\log\left(\sqrt{\frac{2}{\pi}}\tilde{\kappa}_{0}\right)=\Sigma_{o}+\alpha\log\left(\sqrt{\frac {2}{\pi}}\tilde{\kappa}_{0}\right)\,, \tag{92}\] where we recall that \(\tilde{r}\) is evaluated with \[1=\frac{4}{\mathcal{N}_{\tilde{\kappa}_{0}}}\int\mathcal{D}B\frac{\mathbf{1} \{|B|\leq\tilde{\kappa}_{0}\}}{\mathrm{erf}\!\left[\frac{\tilde{\kappa}+B}{\sqrt{2 \tilde{r}}}\right]+\mathrm{erf}\!\left[\frac{\tilde{\kappa}-B}{\sqrt{2\tilde{ r}}}\right]}\left\{\frac{(\tilde{\kappa}+B)e^{\frac{-(\tilde{\kappa}+B)^{2}}{2\tilde{r}}}}{ \sqrt{2\pi}\tilde{r}^{3/2}}+\frac{(\tilde{\kappa}-B)e^{\frac{-(\tilde{\kappa}- B)^{2}}{2\tilde{r}}}}{\sqrt{2\pi}\tilde{r}^{3/2}}\right\}+o(\alpha) \tag{93}\] and \[1-m^{2}=-\alpha\tilde{r}/\log(\alpha),\;\kappa_{0}=\tilde{\kappa}_{0}\sqrt{- \alpha/\log(\alpha)}\,,\;\kappa=\tilde{\kappa}\sqrt{-\alpha/\log(\alpha)}\,, \;\Sigma_{o}=\log(2)+\frac{\alpha}{2}\log\left(\frac{-\alpha}{\log\alpha}\right). \tag{94}\] The right-hand panel of Fig. 2 displays several curves of the complexity \(\Sigma[\tilde{\kappa}_{0}]\) as a function of the local entropy \(s[\tilde{\kappa}_{0},\tilde{\kappa}]\) for fixed values of \(\tilde{\kappa}\). Three regimes can be outlined for \(\kappa<\kappa_{\mathrm{entr}}\). First, for \(s[\tilde{\kappa}_{0},\tilde{\kappa}]\approx 0\), we have locally convex curves (and \(\tilde{\kappa}_{0}-\tilde{\kappa}=o(1)\)). This result appears quite surprising as usually these \(\Sigma(s)\) curves are fully concave [13; 37]. Then, the curve becomes concave while having \(s[\tilde{\kappa}_{0},\tilde{\kappa}]=\mathcal{O}(\alpha)\) and \(\tilde{\kappa}_{0}-\tilde{\kappa}=\mathcal{O}(1)\). In this regime, the complexity continues to scale as \(\Sigma[\tilde{\kappa}_{0}]-\Sigma_{o}=\mathcal{O}(\alpha)\) and the local entropy is upper bounded by \(s[0,\tilde{\kappa}]\). Finally, if we set \(\tilde{\kappa}_{0}\ll\tilde{\kappa}\) (i.e. \(\kappa_{0}=o\left(\sqrt{-\alpha/\log(\alpha)}\right)\)) the complexity jumps from \(\Sigma[\tilde{\kappa}_{0}]\approx\Sigma_{o}\) to \(\Sigma[\tilde{\kappa}_{0}]=0\). In this case, the entropy remains fixed (at first order) at \(s[\tilde{\kappa}_{0},\tilde{\kappa}]=s[0,\tilde{\kappa}]\). We sketched these three regimes for the complexity versus entropy curves in Fig. 3. For \(\tilde{\kappa}>\tilde{\kappa}_{\mathrm{entr}}\) only the first regime exists since for small enough \(\kappa_{0}\) the local maximum of the potential disappears. ## VI Analysis of the clustered structure through the replica method ### The 1-RSB free energy In this section, we show how the clustered structures we obtained with the planting approach can also be observed via the ordinary 1-RSB computation [31]. For this we will consider the set of solutions \(S(\mathbf{G},\kappa)\) of the unbiased symmetric binary perceptron. 
In particular, we will consider its cardinality \[Z(\kappa):=\left\lvert S(\mathbf{G},\kappa)\right\rvert \tag{95}\] and its total entropy, defined as the logarithm of \(Z\) averaged over the disorder \(\mathbf{G}\), \[\phi_{N}:=\frac{1}{N}\operatorname{\mathbb{E}}_{\mathbf{G}}\left[\log Z( \kappa)\right]. \tag{96}\] So as to perform the average over the disorder we will use the replica trick [31]. This trick takes the form of \[\operatorname{\mathbb{E}}_{\mathbf{G}}\left[\log Z(\kappa)\right]=\lim_{n\to 0} \frac{\operatorname{\mathbb{E}}_{\mathbf{G}}\left[Z^{n}(\kappa)\right]-1}{n}\,, \tag{97}\] where each of the \(n\) introduced copies of the system is called a replica. This technique enables us to shift from a computation where the interactions are random and the replicas decoupled to a computation where the replicas interact through deterministic couplings. With this approach, the rest of the computation mainly consists in evaluating the quantity \(\operatorname{\mathbb{E}}_{\mathbf{G}}\left[Z^{n}(\kappa)\right]\) at a fixed point of the overlap matrix \(Q\in\operatorname{\mathbb{R}}^{n\times n}\), where \[Q^{a,b}=\operatorname{\mathbb{E}}_{\mathbf{G}}\left[\mathbf{y}^{a}\cdot\mathbf{y}^{b} \ \middle|\ \mathbf{y}^{a},\mathbf{y}^{b}\in S(\mathbf{G},\kappa)\right]\quad\text{and}\quad a,b \in\left[\!\left[1,n\right]\!\right]. \tag{98}\] Moreover, as the constraints on the overlaps are introduced in the following fashion \[\delta\left(\mathbf{y}^{a}\cdot\mathbf{y}^{b}-Q^{a,b}\right)=\int d\widehat{Q}^{a,b} \operatorname{e}^{i\widehat{Q}^{a,b}\left(\mathbf{y}^{a}\cdot\mathbf{y}^{b}-Q^{a,b} \right)}\mathop{=}_{N\to+\infty}\int d\widehat{Q}^{a,b}\operatorname{e}^{ \widehat{Q}^{a,b}\left(\mathbf{y}^{a}\cdot\mathbf{y}^{b}-Q^{a,b}\right)}, \tag{99}\] we will also have to evaluate \(\operatorname{\mathbb{E}}_{\mathbf{G}}\left[Z^{n}(\kappa)\right]\) at a fixed point of the matrix \(\widehat{Q}\). In more detail, the computation consists of evaluating \[\operatorname{\mathbb{E}}_{\mathbf{G}}\left[Z^{n}(\kappa)\right]=\sum_{\{\mathbf{y}^{a}\in\{-1,1\}^{N}\}_{a=1}^{n}}\int\prod_{a<b}dQ^{a,b}\,d\widehat{Q}^{a,b }\operatorname{e}^{\widehat{Q}^{a,b}\left(\mathbf{y}^{a}\cdot\mathbf{y}^{b}-Q^{a,b} \right)}\prod_{j=1}^{M}\int\prod_{a=1}^{n}\left[d\,v^{a,j}\,\Theta\left(\kappa-|v^{a,j}|\right) \right]\frac{e^{-\frac{1}{2}\sum_{a,b}v^{a,j}(\Sigma^{-1})^{a,b}v^{b,j}}}{\sqrt{\det(2\pi\Sigma)}}\,, \tag{100}\] with \[v^{a,j}=\mathbf{g}_{j}\cdot\mathbf{y}^{a}\quad\text{and}\quad\Sigma^{a,b}= \operatorname{\mathbb{E}}_{\mathbf{G}}\left[(\mathbf{g}_{j}\cdot\mathbf{y}^{a})(\mathbf{g}_{j }\cdot\mathbf{y}^{b})\right]=\operatorname{\mathbb{E}}_{\mathbf{G}}\left[\mathbf{y}^{a} \cdot\mathbf{y}^{b}\right]=Q^{a,b}\,. \tag{101}\] The computation of \(\mathbb{E}_{\mathbf{G}}\left[Z^{n}(\kappa)\right]\) with the \(1\)-step replica symmetry breaking (\(1\)-RSB) _ansatz_ implies the following form for the matrices \(Q\) and \(\widehat{Q}\) \[Q^{a,b}=\left\{\begin{array}{ll}1&\text{if $a=b$}\\ q_{1}&\lfloor\frac{a}{x}\rfloor=\lfloor\frac{b}{x}\rfloor\\ q_{0}&\text{otherwise}\end{array}\right.\quad\text{and}\quad\widehat{Q}^{a,b}= \left\{\begin{array}{ll}\widehat{Q}&\text{if $a=b$}\\ \widehat{q}_{1}&\lfloor\frac{a}{x}\rfloor=\lfloor\frac{b}{x}\rfloor\\ \widehat{q}_{0}&\text{otherwise}\end{array}\right.. \tag{102}\] With this ansatz Eq. 
(100) boils down to \[\phi^{1-\text{RSB}}=\lim_{n\to 0}\lim_{N\to\infty}\frac{\mathbb{E}_{\mathbf{G}} \left[Z^{n}(\kappa)\right]-1}{nN}= -\frac{x\widehat{q}_{1}}{2}+\frac{x(1-x)q_{1}\widehat{q}_{1}}{2} +\frac{x^{2}q_{0}\widehat{q}_{0}}{2}+\alpha\int\mathcal{D}t\,\log\left\{\int \mathcal{D}z\,e^{x\phi^{\kappa}_{\text{\tiny out}}[\sqrt{q_{1}-q_{ 0}}z+\sqrt{q_{0}}t,1-q_{1}]}\right\}\] \[+\int\mathcal{D}t\,\log\left\{\int\mathcal{D}z\,e^{x \phi_{\text{\tiny in}}[\sqrt{\widehat{q}_{1}-\widehat{q}_{0}}z+\sqrt{\widehat{q}_{0}}t ]}\right\} \tag{103}\] with \[\phi^{\kappa}_{\text{\tiny out}}[\omega,V] =\log\left[\int_{\frac{-\kappa-\omega}{\sqrt{V}}}^{\frac{\kappa- \omega}{\sqrt{V}}}Du\right]=\log\left[\frac{1}{2}\text{erf}\left(\frac{\kappa -\omega}{\sqrt{2V}}\right)+\frac{1}{2}\text{erf}\left(\frac{\kappa+\omega}{ \sqrt{2V}}\right)\right], \tag{104}\] \[\phi_{\text{\tiny in}}[B] =\log\left[\sum_{y=\pm 1}e^{By}\right]=\log\left[2\cosh \left(B\right)\right]\,. \tag{105}\] For more details on the calculation steps used to derive \(\phi^{1-\text{RSB}}\) we refer the interested readers to the first appendix of [3]. Before moving on with the analysis of the \(1\)-RSB potential, a first simplification consists in taking into account a symmetry of the in/out channels: \(\phi^{\kappa}_{\text{\tiny out}}[\omega,V]=\phi^{\kappa}_{\text{\tiny out}}[ -\omega,V]\) and \(\phi_{\text{\tiny in}}[B]=\phi_{\text{\tiny in}}[-B]\). Indeed, this symmetry implies that optimizing the potential yields the solution \(q_{0}=\widehat{q}_{0}=0\). Thus, in the following, we will always take this solution. Then, the remaining equations we have to verify for the fixed point are \[\widehat{q}_{1}=f(q_{1}) =-\frac{2\alpha}{(1-x)}\times\frac{\int\mathcal{D}z\, \partial_{q_{1}}\phi^{\kappa}_{\text{\tiny out}}[\sqrt{q_{1}}z,1-q_{1}]\,e^{x \phi^{\kappa}_{\text{\tiny out}}[\sqrt{q_{1}}z,1-q_{1}]}}{\int\mathcal{D} z\,e^{x\phi^{\kappa}_{\text{\tiny out}}[\sqrt{q_{1}}z,1-q_{1}]}}\,, \tag{106}\] \[q_{1}=g(\widehat{q}_{1}) =\frac{2}{1-x}\left[\frac{1}{2}-\frac{\int\mathcal{D}z\, \partial_{\widehat{q}_{1}}\phi_{\text{\tiny in}}[\sqrt{\widehat{q}_{1}}z]\,e^{x\phi^{ \text{\tiny in}}[\sqrt{\widehat{q}_{1}}z]}}{\int\mathcal{D}z\,e^{x \phi_{\text{\tiny in}}[\sqrt{\widehat{q}_{1}}z]}}\right]. \tag{107}\] With these definitions, the entropy and complexity of the clusters can be determined at the fixed point as \[s=\partial_{x}\phi^{1-\text{RSB}}\,,\quad\Sigma=\phi^{1-\text{RSB}}-xs\quad \text{and}\quad\frac{\partial\Sigma}{\partial s}=-x\,. \tag{108}\] ### The 1RSB solution at finite \(\alpha\) When it comes to solving the 1RSB equations, we focus in this subsection on \(\alpha=0.5\) as a representative value not close to zero; the corresponding satisfiability threshold is \(\kappa_{\text{sat}}(\alpha=0.5)=0.319\). We obtained four branches of solutions when solving the fixed-point equations (106, 107) with respect to \(q_{1}\) and \(\widehat{q}_{1}\) for the \(1\)-RSB potential (and browsing through values of the Parisi parameter \(x\)). Two of these solutions are unstable under the iteration scheme \[\widehat{q}_{1}^{t+1}=f(q_{1}^{t})\,,\quad q_{1}^{t+1}=g(\widehat{q}_{1}^{t})\,, \tag{109}\] while the remaining two are stable. When browsing different values of \(x\), we also observe a threshold value of \(\kappa\) at which the overall behavior of these fixed points changes. We will call this value \(\kappa_{\text{break}}(\alpha=0.5)\approx 0.455\). In Fig. 
4 (left panel) we plot the complexity \(\Sigma\) as a function of the entropy \(s\) for the four branches. When tuning \(x\) each solution describes a trajectory that we highlighted with either a dashed (unstable fixed point) or a full line (stable fixed point). One key question arising from these results is how we should select the fixed-point branch that corresponds to the actual clusters of solutions in the problem. First, we clearly need to restrict to non-negative \(\Sigma\) and non-negative \(s\). Moreover, we know that the correct equilibrium state is given by the solution where the total entropy \[s_{\text{total}}=\Sigma+s\mid\Sigma\geq 0\,,\;s\geq 0 \tag{110}\] is maximized. For the present model, this happens for \(s=0\) when the (negative) slope of the \(\Sigma(s)\) curve is infinite. This can be seen by realizing that the slope of the curve \(\Sigma(s)\) is much smaller than \(-1\). We recall that this slope is equal to \(-x\), where \(x\) is the Parisi parameter, as explained in Eq. (108). We highlighted this equilibrium point with a colored dot in the left panel of Fig. 4. In particular, this point \(\Sigma(0)\) corresponds to the equilibrium frozen 1RSB solution of the SBP problem, with a value matching the one computed in [3]. We note that this criterion for equilibrium is rather unusual among other models where the 1RSB solution was evaluated. Usually, either both \(\Sigma>0\) and \(s>0\) at the point where the negative slope is \(x=1\), corresponding to the so-called dynamical-1RSB phase, or the maximum is achieved when \(\Sigma=0\) at a (negative) slope strictly between \(0<x<1\), corresponding to the so-called static-1RSB phase. Here, we observe the equilibrium being achieved for \(x\rightarrow+\infty\), corresponding to frozen-1RSB at equilibrium. Finally, we observe that for \(\kappa>\kappa_{\text{break}}\approx 0.455\) (still considering \(\alpha=0.5\)) the curve \(\Sigma(s)\), for positive values of both \(s\) and \(\Sigma\), breaks into two branches. Consequently, there is a finite range of values of the entropy \(s\) where we do not obtain any fixed point. The meaning of such a gap is unclear, but it appears in other problems and their 1-RSB solutions [38]. In the right panel of Fig. 4, we plot the complexity \(\Sigma\) as a function of the entropy \(s\), selecting the branch that is an analytic continuation of the equilibrium \(\Sigma(0)\) point, and compare it to the one obtained via the planting approach. For this comparison, we need the local entropy in the planted model at finite values of \(\alpha\), which is derived in Appendix A. We note that the two complexities exactly agree at \(s=0\), as is expected because at \(s=0\) both complexities correspond to the total number of solutions at that \(\kappa\). For \(s>0\), the two complexities have a similar shape, being clearly convex for small values of \(s\). We note again that the overall values of \(\Sigma\) are larger than the values of \(s\), meaning that the slope actually takes a rather large value in the whole range of those curves. We recall that in the context of the 1-RSB computation this slope is equal to \(-x\), where \(x\) is the Parisi parameter. Taking its value much larger than one is not common in other models for which 1RSB was studied. This is likely the reason why these extensive size clusters were not described earlier in the literature for the binary perceptron. 
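For concreteness, the following is a minimal numerical sketch of the iteration scheme of Eq. (109), with the \(q\)-derivatives in Eqs. (106)-(107) taken by finite differences and the Gaussian averages by Gauss-Hermite quadrature. The quadrature size, damping factor, initialization and illustrative parameter values are arbitrary choices, and which of the four branches the iteration converges to (if any) depends on them.

```python
import numpy as np
from scipy.special import erf

# Gauss-Hermite quadrature for E_z[.] with z ~ N(0,1)
z, w = np.polynomial.hermite_e.hermegauss(121)
w = w / w.sum()

def phi_out(q1, zz, kappa):            # Eq. (104) with omega = sqrt(q1) z, V = 1 - q1
    V, om = 1.0 - q1, np.sqrt(q1) * zz
    return np.log(0.5 * erf((kappa - om) / np.sqrt(2 * V))
                  + 0.5 * erf((kappa + om) / np.sqrt(2 * V)))

def phi_in(qh1, zz):                   # Eq. (105): log 2cosh, overflow-safe
    B = np.sqrt(qh1) * zz
    return np.logaddexp(B, -B)

def tilted_avg(vals, logtilt):         # <vals e^{x phi}> / <e^{x phi}>
    p = w * np.exp(logtilt - logtilt.max())
    return np.sum(p * vals) / np.sum(p)

def f(q1, x, alpha, kappa, eps=1e-6):  # Eq. (106), derivative by central difference
    dphi = (phi_out(q1 + eps, z, kappa) - phi_out(q1 - eps, z, kappa)) / (2 * eps)
    return -2 * alpha / (1 - x) * tilted_avg(dphi, x * phi_out(q1, z, kappa))

def g(qh1, x, eps=1e-6):               # Eq. (107)
    dphi = (phi_in(qh1 + eps, z) - phi_in(qh1 - eps, z)) / (2 * eps)
    return 2 / (1 - x) * (0.5 - tilted_avg(dphi, x * phi_in(qh1, z)))

alpha, kappa, x = 0.5, 0.40, 3.0       # illustrative values only
q1, qh1 = 0.9, 2.0                     # arbitrary initialization
for _ in range(5000):                  # damped version of Eq. (109)
    q1 = 0.95 * q1 + 0.05 * np.clip(g(qh1, x), 1e-6, 1 - 1e-9)
    qh1 = 0.95 * qh1 + 0.05 * max(f(q1, x, alpha, kappa), 1e-6)
print(q1, qh1)
```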
We further see that the 1RSB complexity, when it exists, is strictly larger than the one obtained via planting, as again expected since via planting we obtain only some of the clusters of solutions whereas the 1RSB computation should be able to count all of them. Then, when \(\kappa>\kappa_{\text{break}}\), the planted model predicts the existence of clusters with an internal entropy that lies inside the fixed-point gap of the 1-RSB approach. This indicates that the 1RSB solution does not fully describe the space of solutions in this case. This may have many causes. For example, we may have missed a branch of fixed points in our analysis of the 1-RSB potential. Or, this region may involve a replica _ansatz_ with further symmetry breaking. Or perhaps these rare clusters simply cannot be obtained with a replica computation. Finally, when \(\kappa>\kappa_{\text{ener}}(\alpha=0.5)\approx 0.499\), the curves \(\Sigma(s)\) obtained with planting stop at some positive values of \(\Sigma\) and \(s\) and thus look again qualitatively similar to the portion of the curve \(\Sigma(s)\) that is obtained from 1RSB by analytically continuing from the equilibrium \(\Sigma(s=0)\) point. Overall, the 1RSB approach evaluated at sufficiently large values of the Parisi parameter \(x\) identified clusters of extensive size in parts of the solution space corresponding to a convex curve that is unstable under the iterations of the 1RSB fixed-point equations. Figure 4: We plot in these two panels the complexity \(\Sigma\) as a function of the entropy \(s\); in both cases \(\alpha=0.5\). On the left panel, we plot all the branches obtained when solving the saddle-point equations with respect to \(q_{1}\) and \(\widehat{q}_{1}\) for the 1-RSB potential (and browsing through values of the Parisi parameter \(x\)). As a guide to the eye we emphasize each of the four branches of solutions with either a full or dashed line. The full lines correspond to stable fixed points of the iteration scheme of Eq. (109), while dashed ones correspond to unstable fixed points. We highlighted with colored dots the fact that certain branches stop at \(s=0\) at a value of \(\Sigma\) corresponding to the equilibrium solutions. To reach this point, the Parisi parameter \(x\) has to be set to infinity. On the right panel, we compare the results obtained by the branches yielding this equilibrium complexity with the ones obtained with the previous planting method. We added colored dots when the fixed point (either of the planted or the 1-RSB saddle-point equations) with maximum entropy \(s\) was not obtained for \(\Sigma=0\). These curves are partly compatible with the complexity obtained from planting. Yet there are still regions of \(\kappa,s\) for which we obtain extensive clusters of solutions from the planting procedure but not from the 1RSB. The reason behind this paradox is left for future work. For small \(\alpha\), the situation actually becomes clearer. We discuss this case in the next section. ### The \(\alpha\to 0\) and \(x\to+\infty\) limit We now focus, as in the first part of the paper, on the regime of small \(\alpha\). Using our results from the planting computation, and anticipating a similar behaviour in the 1RSB, we can deduce the behavior of the Parisi parameter \(x\) in the low \(\alpha\) limit. Indeed, like the 1-RSB computation, the planting approach probes clustered solutions. It also allows for computing their complexity \(\Sigma\) and local entropy \(s\), see Eqs. (91) and (92). 
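A minimal sketch of this computation is given below: it solves Eq. (93) for \(\tilde{r}\), evaluates the planted pair \((s,\Sigma)\) from Eqs. (91)-(92), and differentiates numerically along \(\tilde{\kappa}_{0}\) at fixed \(\tilde{\kappa}\). The overall factors of \(\alpha\) and the constant \(\Sigma_{o}\) drop out of the slope, so they are omitted; the grid and the scanned range of \(\tilde{\kappa}_{0}\) are arbitrary choices. The analysis below explains the two regimes in which this slope estimate diverges.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad
from scipy.optimize import brentq

SQ2PI = np.sqrt(2.0 * np.pi)

def trunc_avg(func, k0):
    """E[func(B)] for B ~ N(0,1) conditioned on |B| <= k0."""
    val, _ = quad(lambda b: func(b) * np.exp(-b * b / 2) / SQ2PI, -k0, k0)
    return val / erf(k0 / np.sqrt(2.0))

def rhs_93(r, k, k0):                    # right-hand side of Eq. (93)
    def integrand(b):
        E = erf((k + b) / np.sqrt(2 * r)) + erf((k - b) / np.sqrt(2 * r))
        G = ((k + b) * np.exp(-(k + b) ** 2 / (2 * r))
             + (k - b) * np.exp(-(k - b) ** 2 / (2 * r))) / (SQ2PI * r ** 1.5)
        return G / E
    return 4.0 * trunc_avg(integrand, k0)

def entropy(k, k0):                      # s / alpha, Eq. (91), at the local maximum
    grid = np.geomspace(1e-3, 30.0, 300)
    vals = np.array([rhs_93(r, k, k0) for r in grid])
    i = int(np.argmax(vals >= 1.0))      # first crossing from below = local maximum
    assert vals.max() >= 1.0 and i > 0, "no cluster at these parameters"
    r_star = brentq(lambda r: rhs_93(r, k, k0) - 1.0, grid[i - 1], grid[i])
    log_term = trunc_avg(lambda b: np.log(0.5 * erf((k + b) / np.sqrt(2 * r_star))
                                          + 0.5 * erf((k - b) / np.sqrt(2 * r_star))), k0)
    return r_star / 4.0 + log_term

k = 1.0
k0s = np.linspace(0.3, 0.9, 13) * k
s = np.array([entropy(k, k0) for k0 in k0s])
sigma = np.log(np.sqrt(2 / np.pi) * k0s)  # (Sigma - Sigma_o) / alpha, Eq. (92)
x_est = -np.gradient(sigma, s)            # Parisi parameter estimate, Eq. (108)
print(x_est)                              # grows large toward both ends of the range
```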
As mentioned above, in the context of a 1-RSB computation we have \(\partial\Sigma/\partial s=-x\). Thus, if we plug in the entropy and complexity from Eqs. (91), (92) we can compute \(\partial\Sigma/\partial s\) and estimate the Parisi parameter. By doing so we obtain two regimes for which the slope \(\partial\Sigma/\partial s\) becomes infinite in the low \(\alpha\) limit. First, when \(\tilde{\kappa}_{0}\ll\tilde{\kappa}\) the entropy remains constant at first order in \(\alpha\) while the complexity roughly jumps from \(\Sigma_{o}\) to zero, see the left panel in Fig. 2. It indicates that to recover these states with the 1-RSB computation we should set \(x\gg 1\). The second regime for which we observe an infinite slope corresponds to \(\tilde{\kappa}_{0}\approx\tilde{\kappa}\). Indeed, we have \(|\partial\Sigma/\partial s|\sim(\tilde{\kappa}-\tilde{\kappa}_{0})^{-1}\) close to \(\tilde{\kappa}_{0}=\tilde{\kappa}\). Consequently, if we want to probe the clusters with almost zero local entropy we should also set \(x\gg 1\) to obtain this regime. A last piece of information given by the planted model is that these clusters (in the two regimes mentioned above) correspond to a limit where \(q=m^{2}\approx 1\). Thus, if we impose the same condition in the 1-RSB fixed-point equations, we have that setting \(q_{1}\approx 1\) in Eq. (107) implies \(\widehat{q}_{1}\gg 1\). Therefore, in order to find these clusters we will not only set \(x\gg 1\) but we will also take \(q_{1}\approx 1\) and \(\widehat{q}_{1}\gg 1\). The first regime we mentioned will be referred to as the _maximum entropy_ regime, while the second one will be referred to as the _minimum entropy_ regime. First, we see that the entropic contribution can be simplified identically in both regimes. Indeed, when setting \(\widehat{q}_{1}\gg 1\) we obtain \[\phi_{\text{in}}\Big{[}\sqrt{\widehat{q}_{1}}z\Big{]} \approx\sqrt{\widehat{q}_{1}}|z|+\log\Big{[}1+e^{-2\sqrt{\widehat{q}_{ 1}}|z|}\Big{]} \tag{111}\] \[\approx\sqrt{\widehat{q}_{1}}|z|+e^{-2\sqrt{\widehat{q}_{1}}|z|}\] which then yields \[\log\left\{\int\mathcal{D}z\,e^{x\phi_{\text{in}}\left[\sqrt{ \widehat{q}_{1}}z\right]}\right\} \approx\log\left\{2\int_{0}^{+\infty}\mathcal{D}z\,e^{x\sqrt{ \widehat{q}_{1}}z+xe^{-2\sqrt{\widehat{q}_{1}}z}}\right\} \tag{112}\] \[\approx\log\left\{2\int_{0}^{+\infty}\mathcal{D}z\,e^{x\sqrt{ \widehat{q}_{1}}z}\Big{(}1+xe^{-2\sqrt{\widehat{q}_{1}}z}\Big{)}\right\}\] \[\approx\log\left\{2\bigg{(}e^{\frac{x^{2}\widehat{q}_{1}}{2}}+xe^{ \frac{(x-2)^{2}\widehat{q}_{1}}{2}}\bigg{)}\right\}\] \[\approx\log 2+\frac{x^{2}\widehat{q}_{1}}{2}+xe^{-2(x-1)\widehat{q}_{1}}\,.\] As we will see in the following subsections, the simplification of the energetic term will be regime-dependent. #### Maximum entropy regime For the maximum entropy regime, we can again go back to the results from the planting model to help us make the correct approximation. We know, for example, that we should have \(1-q_{1}\sim\kappa^{2}\) in the low \(\alpha\) limit (as both quantities have the same scaling in \(\alpha\)). This implies that we should have \[\phi_{\text{out}}^{\kappa}[\sqrt{q_{1}}z,1-q_{1}]=\mathcal{O}(1)\quad\text{ with}\quad z\in[-\kappa,\kappa]. \tag{113}\] Therefore, with \(x\gg 1\), we will approximate the energetic term with a saddle-point method. In other words, we will compute \[\int\mathcal{D}z\,e^{x\phi_{\text{out}}^{\kappa}[\sqrt{q_{1}}z,1-q_{1}]} \approx e^{x\phi_{\text{out}}^{\kappa}[\sqrt{q_{1}}Z_{0},1-q_{1}]}\int \mathcal{D}z\,e^{\frac{xq_{1}(z-Z_{0})^{2}}{2}\,\partial_{\omega}^{2}\phi_{\text{out}}^{ \kappa}[\sqrt{q_{1}}Z_{0},1-q_{1}]} \tag{114}\] 
In other words, we will compute \[\int\mathcal{D}z\,e^{x\phi_{\text{out}}^{\kappa}[\sqrt{q_{1}}z,1-q_{1}]} \approx e^{x\phi_{\text{out}}^{\kappa}[\sqrt{q_{1}}z_{0},1-q_{1}]}\int \mathcal{D}z\,e^{x\frac{x\,\alpha(q_{1}-q_{2})^{2}}{2}\phi_{\text{out}}^{ \kappa}[\sqrt{q_{1}}z_{0},1-q_{1}]} \tag{114}\] where \(Z_{0}\) corresponds to the maxima of \(\phi_{\rm out}^{\kappa}[\sqrt{q_{1}}z,1-q_{1}]\). In particular, with this function we have \(Z_{0}=0\). By doing the saddle-point approximation in \(Z_{0}=0\) we obtain \[\alpha\log\left\{\int\mathcal{D}\bar{z}\,e^{x\phi_{\rm out}^{\kappa}[\sqrt{q_{1 }}z,1-q_{1}]}\right\}\approx\alpha x\log\left[\mathrm{erf}\Big{(}\frac{\kappa }{1-q_{1}}\Big{)}\right]-\frac{\alpha}{2}\log[1+\Delta x] \tag{115}\] with \[\Delta=\left|q_{1}\partial_{\alpha}^{2}\phi_{\rm out}^{\kappa}(0,1-q_{1}) \right|\,. \tag{116}\] Finally, if we combine the simplification of both the entropic and energetic contributions the total entropy becomes \[\phi^{1-\mathrm{RSB}}\approx -\frac{x\widehat{q}_{1}}{2}+\frac{x(1-x)q_{1}\widehat{q}_{1}}{2}+ \alpha x\log\left[\mathrm{erf}\Big{(}\frac{\kappa}{\sqrt{2(1-q_{1})}}\Big{)}\right] \tag{117}\] \[-\frac{\alpha}{2}\log(1+\Delta x)+\log(2)+\frac{x^{2}\widehat{q} _{1}}{2}+xe^{-2(x-1)\widehat{q}_{1}}\] and its fixed point equations are, at first order in \(x\), \[\frac{x(x-1)\widehat{q}_{1}}{2} =\frac{\alpha x\kappa e^{\frac{-x^{2}}{1(1-q)}}}{\sqrt{2\pi}(1-q_ {1})^{3/2}\mathrm{erf}\Big{(}\frac{\kappa}{\sqrt{2(1-q_{1})}}\Big{)}}\,, \tag{118}\] \[q_{1} =1-4e^{-2(x-1)\widehat{q}_{1}}\,. \tag{119}\] Now, we are able to draw a direct parallel with the planted system at low \(\alpha\) and \(\kappa_{0}\ll\kappa\). Indeed, if we use the correspondence \(q_{1}\equiv m^{2}\) and \((x-1)\widehat{q}_{1}\equiv\widehat{m}\), Eqs. (118,119) are nothing but the fixed-point equations for the planted model (see Eqs. 111, 119). This indicates a posteriori that the 1-RSB calculation enables us to recover the same clusters as in the planted model where we have set \(\kappa_{0}\ll\kappa\). To make the identification between the two approaches even more direct we can focus on the entropy and the complexity of this 1-RSB fixed-point. We obtain at first order in \(x\) that \[s =\partial_{x}\phi^{1-\mathrm{RSB}}=\frac{(1-q_{1})x\widehat{q}_{1 }}{2}+\alpha\log\left[\mathrm{erf}\Big{(}\frac{\kappa}{\sqrt{2(1-q_{1})}} \Big{)}\right]\] \[=\left.\phi\left(r=\frac{1-m}{2}\right)\right|_{\alpha\ll 1, \kappa_{0}\ll\kappa,\,m\ll 1} \tag{120}\] and \[\Sigma=\phi^{1-\mathrm{RSB}}-xs=0\,. \tag{121}\] Thus, at first order in \(x\) these clustered states have exactly the same entropy and complexity as the ones from the planted system with \(\kappa_{0}\ll\kappa\) (and \(\alpha\ll 1\)). #### vi.2.2 Minimum entropy regime In this section, we want to probe clusters with a very small entropy. Now, keeping \(\kappa\) fixed, this means that we will have to set \(q_{1}\) extremely close to one and eventually have \(1-q_{1}\ll\kappa\) in order to go up to zero local entropy. This scaling between \(q_{1}\) and \(\kappa\) is incompatible with the saddle-point approximation we performed for the maximum entropy regime, as Eq. (113) is not verified anymore. 
In fact, in this case, we have to introduce the asymptotic expansion \[\phi_{\rm out}^{\kappa}[\sqrt{q_{1}}z,1-q_{1}]\approx\log\left\{\Theta\Big{(} \kappa/\sqrt{q_{1}}-|z|\Big{)}-\sqrt{\frac{1-q_{1}}{2\pi}}\left[\frac{e^{\frac {-(\kappa+\sqrt{q_{1}}z)^{2}}{2(1-q_{1})}}}{\kappa+\sqrt{q_{1}}z }+\frac{e^{\frac{-(\kappa-\sqrt{q_{1}}z)^{2}}{2(1-q_{1})}}}{ \kappa-\sqrt{q_{1}}z}\right]\right\} \tag{122}\] where we used the identity \[\mathrm{erf}(x)\underset{x\to+\infty}{\approx}1-\frac{e^{-x^{2}}}{x\sqrt{\pi} }\,. \tag{123}\] We then estimate the interval of values of \(z\) (\(z\in[-Z_{0},Z_{0}]\)) for which \(e^{x\phi_{\rm out}^{\kappa}(\sqrt{q_{1}}z,1-q_{1})}\) remains finite. In other words, we compute the value of \(z\) for which the function \(e^{x\phi_{\rm out}^{\kappa}(\sqrt{q_{1}}z,1-q_{1})}\) is equal to an arbitrary value \(1/C\), \[e^{x\phi_{\rm out}^{\kappa}(\sqrt{q_{1}}z,1-q_{1})}=\frac{1}{C}\implies x\phi_{\rm out}^{\kappa}[\sqrt{q_{1}}z,1-q_{1}]=-\log C\implies\sqrt{\frac{1-q_{1}}{2\pi}}\times\frac{e^{\frac{-(\kappa-\sqrt{q_{1}}z)^{2}}{2(1-q_{1})}}}{\kappa-\sqrt{q_{1}}z}\underset{x\to+\infty}{\approx}\frac{\log C}{x}\implies\frac{\kappa-\sqrt{q_{1}}z}{\sqrt{2(1-q_{1})}}\underset{x\to+\infty}{\approx}\sqrt{\frac{W_{0}\left(\frac{2}{a^{2}}\right)}{2}}\quad\text{with}\quad a=\frac{2\log C}{x\sqrt{\pi}}\implies\frac{\kappa-\sqrt{q_{1}}z}{\sqrt{2(1-q_{1})}}\underset{x\to+\infty}{\approx}\sqrt{\frac{\log x^{2}}{2}}\implies z\underset{x\to+\infty}{\approx}\frac{\kappa-\sqrt{2(1-q_{1})\log x}}{\sqrt{q_{1}}} \tag{124}\] where we restricted ourselves to \(\kappa-\sqrt{q_{1}}z>0\), and \(W_{0}(.)\) is the Lambert function with branch index \(k=0\). This computation thus shows that \(e^{x\phi_{\rm out}^{\kappa}(\sqrt{q_{1}}z,1-q_{1})}\) jumps from \(1\) to any arbitrary fraction \(1/C\) exactly at \[Z_{0}\underset{x\to+\infty}{\approx}\frac{\kappa-\sqrt{2(1-q_{1})\log x}}{\sqrt{q _{1}}}\,. \tag{125}\] In this limit, we thus have for the energetic contribution \[\alpha\log\left\{\int\mathcal{D}z\,e^{x\phi_{\rm out}^{\kappa}(\sqrt{q_{1}}z,1-q _{1})}\right\}=\alpha\log\left\{\int_{-Z_{0}}^{Z_{0}}Dz\right\}=\alpha\log \left\{\operatorname{erf}\left(\frac{\kappa-\sqrt{2(1-q_{1})\log x}}{\sqrt{2q _{1}}}\right)\right\}\,. \tag{126}\] And finally, if we put together the simplified entropic and energetic contributions to the \(1\)-RSB potential we obtain \[\phi^{1-RSB}\approx -\frac{x\widehat{q}_{1}}{2}+\frac{x(1-x)q_{1}\widehat{q}_{1}}{2}+ \alpha\log\left\{\operatorname{erf}\left(\frac{\kappa-\sqrt{2(1-q_{1})\log x }}{\sqrt{2q_{1}}}\right)\right\} \tag{127}\] \[+\log(2)+\frac{x^{2}\widehat{q}_{1}}{2}+xe^{-2(x-1)\widehat{q}_{1}}\] and the corresponding fixed-point equations are (at first order in \(x\)) \[\frac{x^{2}\widehat{q}_{1}}{2} =\alpha\frac{e^{-\kappa^{2}/2}}{\operatorname{erf}\left(\frac{ \kappa}{\sqrt{2}}\right)}\sqrt{\frac{\log x}{\pi(1-q_{1})}}\,, \tag{128}\] \[q_{1} =1-4e^{-2(x-1)\widehat{q}_{1}}\,. \tag{129}\] The combination of these two fixed-point equations implies \(\kappa\gg\sqrt{2(1-q_{1})\log x}\). 
Consequently, the term \(x^{2}(1-q_{1})\widehat{q}_{1}\) can be neglected and the \(1\)-RSB free energy boils down to \[\phi^{1-RSB}\approx \alpha\log\left\{\operatorname{erf}\left(\frac{\kappa}{\sqrt{2}} \right)\right\}+\log(2)\,. \tag{130}\] We thus recover the case of the planted system at \(\kappa_{0}=\kappa\) as we have \[s =\partial_{x}\phi^{1-RSB}=0\,, \tag{131}\] \[\Sigma =\phi^{1-RSB}\approx\alpha\log\left\{\operatorname{erf}\left( \frac{\kappa}{\sqrt{2}}\right)\right\}+\log(2)\,. \tag{132}\] In [3] the authors showed in a more standard computation that these equilibrium configurations (verifying a frozen 1-RSB structure) can also be obtained by imposing \(q_{0}=0\), \(q_{1}=1\) and \(x=1\). In Fig. 5 we display, for several values of \(\tilde{\kappa}=\kappa\sqrt{-\log(\alpha)/\alpha}\), the complexity as a function of the entropy. The light-colored full lines correspond to the results obtained with the planted model. The dashed and dotted lines correspond respectively to the maximum and minimum entropy fixed-point branches of the 1-RSB free energy. To obtain them we solved the fixed-point equations in each regime for large but finite Parisi parameter \(x\). As shown by the previous computations, each end of the curve sees a close match between the planting approach and one of the \(x\rightarrow+\infty\) regimes. This leads us to conjecture that in the limit of small \(\alpha\), the planted and 1-RSB \(\Sigma(s)\) curves match exactly. In other words, the planting approach actually captures the dominant clusters for each size \(s\). ## VII Conclusion and discussion We study the local entropy in the SBP problem around solutions planted at a smaller margin. Our results are rigorous in the limit of small \(\alpha\), conditional on a concentration assumption for a certain entropy. We identify clusters of solutions of extensive entropy as local maximizers of this local entropy. We identify two thresholds \(\kappa_{\text{ener}}\) and \(\kappa_{\text{entr}}\) that we consider of particular interest. \(\kappa_{\text{entr}}\) is the smallest \(\kappa\) at which the planted clusters and the corresponding maximum disappear, thus presumably melting into an extended structure that may be accessible to efficient algorithms. \(\kappa_{\text{ener}}\) is a value above which there are solutions at all distances from the planted solutions, and as such it is an upper bound on the overlap gap property threshold. We then investigated the 1RSB solution of the symmetric binary perceptron problem and showed how it allows us to identify extensive clusters of solutions without introducing concepts that are not already present in the canonical 1RSB computation. It suffices to consider large values of the Parisi parameter \(x\) and both convex and concave parts of the \(\Sigma(s)\) curve. We discuss how the equilibrium frozen-1RSB is recovered in the \(x\rightarrow\infty\) limit. While this resolves some open questions about the 1RSB solution for binary perceptrons, we conclude that the 1RSB calculation is incomplete at finite \(\alpha\) as we did not find solutions corresponding to all the extensive clusters identified by the planting procedure. We further showed that while, in general, the planting procedure we study does not describe all the rare clusters, in the limit of small \(\alpha\) it seems that the \(\Sigma(s)\) obtained via planting is exactly the same as the one obtained from the 1RSB. 
This leads us to conjecture that in the limit of small \(\alpha\) the planting actually describes almost all clusters of a given size. ###### Acknowledgements. We acknowledge funding from the Swiss National Science Foundation grants OperaGOST (grant number 200390) and SMArtNet (grant number 212049). We also thank David Gamarnik, Carlo Lucibello and Riccardo Zecchina for enlightening discussions on these problems. Figure 5: We plot the complexity \(\Sigma\) as a function of the entropy \(s\) for several values of \(\tilde{\kappa}\). The dashed and dotted curves correspond to the maximum and minimum entropy regimes and are obtained by optimizing the potentials of Eqs. (117) and (127) over \(q_{1}\) and \(\widehat{q}_{1}\) with large but finite values of the Parisi parameter \(x\). For comparison with the 1-RSB computation, the full curves show the complexity and entropy obtained with the planting computation.
2308.09597
ChatHaruhi: Reviving Anime Character in Reality via Large Language Model
Role-playing chatbots built on large language models have drawn interest, but better techniques are needed to enable mimicking specific fictional characters. We propose an algorithm that controls language models via an improved prompt and memories of the character extracted from scripts. We construct ChatHaruhi, a dataset covering 32 Chinese / English TV / anime characters with over 54k simulated dialogues. Both automatic and human evaluations show our approach improves role-playing ability over baselines. Code and data are available at https://github.com/LC1332/Chat-Haruhi-Suzumiya .
Cheng Li, Ziang Leng, Chenxi Yan, Junyi Shen, Hao Wang, Weishi MI, Yaying Fei, Xiaoyang Feng, Song Yan, HaoSheng Wang, Linkang Zhan, Yaokai Jia, Pingyu Wu, Haozhen Sun
2023-08-18T14:50:25Z
http://arxiv.org/abs/2308.09597v1
# ChatHaruhi: Reviving Anime Character in Reality via Large Language Model ###### Abstract Role-playing chatbots built on large language models have drawn interest, but better techniques are needed to enable mimicking specific fictional characters. We propose an algorithm that controls language models via an improved prompt and memories of the character extracted from scripts. We construct ChatHaruhi, a dataset covering 32 Chinese / English TV / anime characters with over 54k simulated dialogues. Both automatic and human evaluations show our approach improves role-playing ability over baselines. Code and data are available at [https://github.com/LC1332/Chat-Haruhi-Suzumiya](https://github.com/LC1332/Chat-Haruhi-Suzumiya). ## 1 Introduction1 Footnote 1: This is an open-source work, and the original affiliations of all authors can be found in the Contributors Section 8.1. With the release of ChatGPT by OpenAI (OpenAI, 2023), large language models and their applications have received widespread attention. Role-playing is a novel and active application area. Users found that large language models have the ability to act as specific characters, and communities for sharing prompts have even emerged (e.g., AIPRM (AIPRM, 2023)). Many companies have also released role-playing products based on language models, such as Glow, Character.AI, etc. (Character.AI, 2023). These applications and experiments with getting language models to role-play have garnered great interest, and have potential applications in many areas like games, creative industries, etc. In open-source role-playing implementations, developers or users have employed similar prompts, inputting them continuously into ChatGPT or as a system whisper into the language model: _I want you to act like [character] from [series]. I want you to respond and answer like [character] using the tone, manner and vocabulary [character] would use. Do not write any explanations. Only answer like [character]. You must know all of the knowledge of [character]. My first sentence is "Hi [character]."_ With the intelligence exhibited by larger language models like ChatGPT or Claude (Anthropic, 2023), trained on many stories, users found that models can demonstrate a certain capability for role-playing under such prompts. However, while simple, such implementations have the following drawbacks: 1. They rely heavily on the language model's existing memories. If the language model's own memories about the work are fuzzy, it cannot mimic specific characters well. 2. The instruction to "know all of the knowledge of {character}" is vaguely defined, and does not guard well against hallucinations. 3. Even with such prompts, the chatbot's conversational style is still heavily influenced by the underlying language model. Adjusting the prompt may alleviate this, but finely tuning the prompt is needed for each character. These drawbacks clearly limit the utility of such role-playing chatbots. Another simple idea is to fine-tune the model on the character's dialogues. With sufficient data, language models can capture a character's tone, but this also introduces new problems. In a preliminary experiment, we found that fine-tuned chatbots produced more hallucinations. Also, for many minor characters, it is difficult to obtain enough data for fine-tuning. Figure 1: Our algorithm is role-playing Haruhi Suzumiya. Note that the user's questions are related but not identical to the original plot, while Chat Haruhi Suzumiya's answers can largely quote the original plot. In summary, better enabling language 
models to role-play and mimic classic characters remains an unsolved issue. The main goal of this project is to study whether natural language models can play real characters from anime, TV or other works during a conversation. In this process, we believe a virtual character consists of three core components: **Knowledge and background:** Each virtual character exists in their own background. Characters in Harry Potter exist in the magical world of Harry Potter. Haruhi Suzumiya is situated in a Japanese high school. Other anime characters also have their own worldbuilding. Therefore, in constructing the chatbot, we hope it can understand the setting of the corresponding story. This poses a major test of the language model's memory, often requiring external knowledge bases. **Personality:** The personality of a character is also a very important part of anime, TV and even game works. The personality must remain consistent throughout the work. Some literary works even define the personality first before writing the rest. Therefore, we hope the chatbot reflects the original personality. **Linguistic habits:** Language habits are easiest for language models to mimic. With large models in recent years, given suitable examples in context, language models can produce mimicking outputs. Here we hope that fans interacting with the chatbot can 'reproduce' classic excerpts, providing them with a better experience. The key idea of this project is to extract as much of the original script as possible to form a memory database for the character. When users ask new questions, the system searches for relevant classic plots. Combined with prompts about the character's setting, we attempt to better mimic the character by controlling the language model. Meanwhile, inspired by CAMEL (Li et al., 2023) and Baize (Xu et al., 2023), we designed a system to automatically generate dialogues fitting the character's personality, even for characters with fewer original dialogues. This allows us to generate sufficient data for fine-tuning a local model. The main contributions of this paper can be summarized as follows: 1. Based on large language models, we propose a complete role-playing algorithm system. This algorithm can effectively organize a character's memories, allowing language models to mimic the tone and knowledge of specific anime/TV characters during a conversation. This system can use pretrained models like ChatGPT and Claude, or a smaller 7B-size model. 2. We construct a role-playing dataset covering 32 different Chinese/English TV/anime characters. Figure 2: The statistics of the ChatHaruhi-54K dataset, showing 32 characters and 54,726 dialogues. The opaque bars indicate the original script data, while the translucent bars present simulated dialogues generated by models like Alpaca. By collecting and structurally extracting dialogues from movies, novels, and scripts, we have collected over 22,000 conversational exchanges. This data can be used to train and evaluate role-playing language models. Using our proposed algorithm, aided by GPT-3 and GPT-4, we additionally simulated over 31,000 dialogues for these characters. Combined, this forms the ChatHaruhi-54k dataset. 3. To evaluate and compare different role-playing chatbots, we use both automatic and human evaluations. For automatic evaluation, we test whether the chatbot can respond to classic plot points with similar answers to the original script. 
For human evaluation, we propose two metrics for raters to assess: **Alignment:** Whether the chatbot's answer aligns with the character's original setting. **Response quality:** Whether the chatbot's responses have good linguistic quality. Results show that given the same base language model, our algorithm yields improved role-playing performance. To support further research, all data and code are available at [https://github.com/LC1332/ChatHaruhi-Suzumiya](https://github.com/LC1332/ChatHaruhi-Suzumiya). We are also working to further modularize the project code for ease of use (see Appendix: API Design). ## 2 Related Work **In-context Learning** In the development of ChatGPT, starting from GPT-2 (Brown et al., 2020), it was proposed to eliminate special extraction tokens in language models and adopt the form of instructions + examples to enhance the natural language model's ability to handle various tasks. Since its introduction, in-context learning has been a focal point of research. Previous work has proposed better methods of posing questions (Zhao et al., 2021; Holtzman et al., 2021), better selection of token examples for demonstration (Liu et al., 2021; Lu et al., 2022; Rubin et al., 2022), meta-training with explicit contextual learning objectives (Chen et al., 2022), and a variant of in-context learning that follows instructions (Mishra et al., 2022; Efrat et al., 2021; Wei et al., 2022; Sanh et al., 2022). At the same time, some studies have reported issues of vulnerability and over-sensitivity in in-context learning (Lu et al., 2022; Zhao et al., 2021; Mishra et al., 2022). In our work, in-context learning is primarily used to generate user questions for our chatbot. Given a character's background and prior memories, our approach produces the subsequent dialogues responding to each question in-context. Compared to general conversational agents, our system focuses on tailoring the dialogues to a specific persona based on its given settings and history. The generated question-answer pairs provide valuable data for analyzing and learning the behaviors of a persona-based agent. Figure 3: A blueprint of the complete ChatHaruhi system is given. Dialogues are first extracted from novels, TV shows etc. as reference exchanges \(D\) for each character, forming the core chatbot. Simulated dialogues are further generated by Alpaca-like models, training a 7B model. Thus, large models like ChatGPT and Claude can be used, or fine-tuned 7B models. **Automatic Dialogue Generation** Recent advances in large language models (LLMs) have shown impressive capabilities in open-domain dialogues. Models like Meena (Adiwardana et al., 2020), LaMDA (Thoppilan et al., 2022) and ChatGPT (OpenAI, 2022) are pretrained on massive amounts of conversational data and can conduct human-like chitchat. Concurrently, there have been attempts to replicate such models with open-source LLMs. Alpaca (Taori et al., 2023) uses self-instruction to collect data from a proprietary LLM, and fine-tunes LLaMA (Touvron et al., 2023). Vicuna (Chiang et al., 2023) trains LLaMA on dialogues from ChatGPT. More related to our work, Baize (Xu et al., 2023) proposes a pipeline to automatically generate multi-turn dialogues by making ChatGPT converse with itself. The data is used to fine-tune LLaMA into Baize. CAMEL (Li et al., 2023) explores facilitating cooperation between chat agents using inception prompting, guiding them to complete tasks while maintaining human intentions. 
Our work similarly leverages large conversational models like ChatGPT to automatically generate dialogues between agents. However, different from prior work, we focus on dialogue generation for a specific character that the user wants to role-play. Our system incorporates substantial prompts about the character's background, personality and prior conversations, in order to produce in-character dialogues. The generated exchanges provide valuable data for learning the behaviors of a specific persona. ## 3 ChatBot Design Given a specific character \(R\) and a query question \(q\), we want to be able to generate an answer \(a\) based on the character's knowledge background, personality and language habits: \[a=\operatorname{argmax}_{a^{\prime}}P(a^{\prime}|R,q,\Theta) \tag{1}\] where \(\Theta\) represents the language model parameters, which are fixed during inference. After the release of ChatGPT, users found that they can specify a certain system prompt \(s_{R}=\) 'I want you to act like {character} from {series}...', so: \[a=\operatorname{argmax}_{a^{\prime}}P(a^{\prime}|s_{R},q,\Theta) \tag{2}\] This shows the language model has some role-playing ability. However, the character's memory relies completely on the parameters \(\Theta\). If the model's knowledge is limited and does not even cover the desired character \(R\), it often fails to achieve the ideal effect. Inspired by in-context learning, in addition to \(\Theta\) and \(s_{R}\), we can introduce a sequence of the character's previous dialogues, i.e., \[D(q,R)=(u_{1},v(u_{1};R)),...,(u_{M},v(u_{M};R)) \tag{3}\] where \(u_{m}\) is any question raised by characters other than \(R\), and \(v(u_{m};R)\) is character \(R\)'s reply to the question. We hope that by inputting the classic dialogues of the character into the context, the model will have a better ability to play the role of character \(R\), i.e., \[a=\operatorname{argmax}_{a^{\prime}}P(a^{\prime}|s_{R},D(q,R),q,\Theta) \tag{4}\] For characters with a larger worldview, in order to make the content of \(D(q,R)\) more relevant to the content of \(q\), we use sentence embeddings to search for the \(M\) most relevant Q&As from a larger memory bank \(U\). Here \(U\) is the set of all sentences where other characters interact with \(R\) throughout the novel/movie. Of course, in practice, we also need to additionally record a dialogue history \(H\) to ensure conversational continuity, since the context of previous dialogues needs to be considered as well. \[a=\operatorname{argmax}_{a^{\prime}}P(a^{\prime}|s_{R},D(q,R),q,H,\Theta) \tag{5}\] The overall construction of the chatbot is shown in Fig. 4. In the remaining subsections of this section, we introduce the details of the system prompt \(s_{R}\), the classic dialogues \(D\) from the story, and the mechanism for searching the relevant excerpts \(D(q,R)\). ### System Prompt The system prompt mentioned in the introduction can already achieve basic functionality when using ChatGPT as the base model. After initial experiments, we find two aspects of this system prompt that need improvement: **Won't repeat lines:** For models like ChatGPT and LLaMA2 that have gone through extensive reinforcement learning from human feedback (RLHF), the output tends not to repeat content from the context, since these language models often face tasks like "Give me m different options" or "Generate m titles". We also observed this phenomenon in preliminary experiments. 
Therefore, our proposed method is to emphasize in the prompt \(s_{R}\) that the model is cosplaying a specific character, and to stress that the language model may reuse classic lines from the novel or movie. **Character emphasis not prominent enough:** Due to RLHF, each language model has its own specific language preferences. Even when given \(D(q,R)\) to imitate, the model's output is still influenced by the language model itself. We find that supplementing the personality of the character at the end of \(s_{R}\) yields better results in this case. Based on the above two points, the character setting prompt template \(s_{R}\) we commonly use is as follows: I want you to act like {character} from {series}. You are now cosplay {character} If others' questions are related with the novel, please try to reuse the original lines from the novel. I want you to respond and answer like {character} using the tone, manner and vocabulary {character} would use. You must know all of the knowledge of {character}. {Supplementary explanation of the character's personality} Note that we have strengthened the requirement for the language model to reuse sentences from the story. We find the final output of the language model is quite sensitive to the supplementary explanation; even verbal tics added in the supplementary explanation will be reflected in the final output. ### Dialogues from each Character In order to better reproduce the character's behavior in novels/TV shows/movies, we have included a large number of classic script excerpts in \(D\). It should be noted here that, except for a few characters (such as crosstalk performer Yu Qian), the dialogues are generally not in a clean question-answer format. Here, the actual \(D\) we use is in story form, as shown in the figure, i.e., \[D(q,R)=\{d_{1},d_{2},...,d_{M}\} \tag{6}\] We ensure that in \(d_{m}\) there is at least one dialogue pair in the form of \((u_{m},v(u_{m};R))\). Between the information of \(u\) and \(v\), there may be narration, more dialogues from other characters, or action information of a character. We relax this condition so that each story \(d_{m}\) can better preserve the plot of the dialogue; sometimes the narration and actions around the dialogues are inevitable. Relaxing the condition is also conducive to preparing more script data, which will be discussed later in the novel text extraction subsection. ### Original Dialogue Searching In practice, the total number of tokens summed over all stories for a character \(R\) often far exceeds the context window of the language model. Here we use a search method to reduce the number of original dialogues input each time. Figure 4: The core dialogue system of ChatHaruhi, comprising the system prompt, character memories \(D(q,R)\) retrieved for the user query \(q\), and the dialogue history \(H\). For a query \(q\), we use a sentence embedding model \(f(\cdot)\) to extract embeddings \(f(d)\) for all \(d\in D\). After similarly extracting \(f(q)\) for the query \(q\), we extract the \(M\) samples from \(D\) that are closest (in cosine similarity) to \(f(q)\). This forms the reference context \(D(q,R)\) for this dialogue. The number of original dialogue excerpts cited per dialogue, \(M\), is adjusted dynamically based on the number of tokens retrieved. In the specific implementation, if using OpenAI's GPT-3.5-turbo model, we limit the total number of tokens in \(D\) to within 1500; a minimal sketch of this retrieval step is given below. 
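To make the retrieval step concrete, the following is a minimal Python sketch of it. It is illustrative only: `toy_embed` is a stand-in for the actual sentence encoder (text-embedding-ada-002 or Luotuo-Bert-Medium), the token estimate is a crude character-count heuristic rather than a real tokenizer, and the plain-text prompt assembly abstracts away the User/AI message tokens used in the real system.

```python
import numpy as np

def toy_embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for the real sentence encoder f() (text-embedding-ada-002
    or Luotuo-Bert-Medium): a hashed bag-of-characters, illustration only."""
    v = np.zeros(dim)
    for ch in text:
        v[hash(ch) % dim] += 1.0
    return v

def retrieve_memories(query, stories, embed=toy_embed, token_budget=1500):
    """Pick the stories d closest to the query in cosine similarity,
    keeping the total (roughly estimated) token count within the budget."""
    q = embed(query)
    q = q / (np.linalg.norm(q) + 1e-9)
    embs = np.stack([embed(d) for d in stories])
    embs = embs / (np.linalg.norm(embs, axis=1, keepdims=True) + 1e-9)
    order = np.argsort(-(embs @ q))          # most similar first
    selected, used = [], 0
    for i in order:
        cost = max(1, len(stories[i]) // 2)  # crude token estimate
        if used + cost > token_budget:
            break
        selected.append(stories[i])
        used += cost
    return selected

def build_prompt(system_prompt, memories, history, query):
    """Assemble the (s_R, D(q,R), H, q) input of Eq. (5)."""
    parts = [system_prompt, *memories]
    parts += [f"User: {uq}\nAI: {ua}" for uq, ua in history]
    parts.append(f"User: {query}")
    return "\n###\n".join(parts)
```

In production the embeddings of all stories would of course be precomputed once per character rather than on every query.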
Therefore, when building the dialogue memory bank, we suggest that the length of each story should not be too long, so as not to occupy the space of other stories during search. For the embedding model, we use OpenAI's text-embedding-ada-002 model (Neelakantan et al., 2022). At the same time, for Chinese questions, we use Luotuo-Bert-Medium (Siyi et al., 2023), because the latter is a model distilled from the former, with the same distribution. This allows us to seamlessly use Luotuo-Bert even when the stories are in English, enabling cross-lingual chatbots, which will be explained in the cross-lingual experiments section. It should be noted that we have noticed many other embedding models, such as instructor-large (Su et al., 2022), M3E (Wang Yuxin, 2023) and BGE (Shitao Xiao, 2023). In the reconstruction of ChatHaruhi 2.0, when there is sufficient time, we will replace the embedding model for further experiments. ### Chat Memory For the memory \(H\), we record each user query \(q\) and the chatbot's response \(a\), forming a sequence \(H=\{(q_{1},a_{1}),...,(q_{T},a_{T})\}\). The information in \(H\) is also input to the language model to ensure conversational coherence. In the actual implementation, starting from \(T\), we count the total number of tokens backward and limit the dialogue history input to the language model to within 1200 tokens. Therefore, in this work, we do not address the character's long-term memory; 1200 tokens can accommodate about 6-10 rounds of dialogue. As language models become able to handle longer contexts, how to encode and summarize a long-term memory will become an interesting problem. ## 4 Character Dataset Building For our project, we need classic stories \(D(q,R)\) related to character \(R\) as input when generating dialogues involving that character. Therefore, constructing the original dialogues \(D\) for each character is crucial. In addition, we need more data than just the \(\|D\|\) original dialogues to train local models. In this section, we first introduce how to build \(D\) for each character. ### Characters In the current version of our project, we selected 32 characters to form the dataset. Figure 5: ChatHaruhi-54K covers 32 different Chinese and English characters. **Haruhi Suzumiya:** When choosing the first character, we wanted someone who satisfies: 1. ChatGPT has some knowledge about them. 2. The character's fictional world is not too large for the first character. 3. The character has a distinctive personality. So we chose the character Haruhi Suzumiya, a famous anime character who represents the transition from light novels to animation. Many subsequent school-based light novel animes pay homage to Haruhi Suzumiya. **Li Yunlong** (from Drawing Sword) is the first character we added that ChatGPT knows little about. We found that with appropriate memory dialogues, the chatbot can also effectively mimic Commander Li's behaviors. Here we use the TV version of Drawing Sword, which has extensive dialogue writing compared to the novel, portraying a vivid, three-dimensional military character. **Harry Potter (novel, 8 characters):** After initial qualitative tests of the first few characters were successful, we started trying to build stories with larger fictional worlds. Harry Potter is a good choice with a large audience, and if we want to incorporate multimodal data in the future, there are also suitable datasets we can reference. For the Harry Potter novels, we used the novel extraction tool described later to extract passages. 
We then compiled the story collection \(D_{R}\) for each character separately. **Big Bang Theory (TV show, 3 characters):** The Big Bang Theory is also a TV show that researchers enjoy using to form datasets. The show itself depicts the stories of several Caltech researchers. From The Big Bang Theory, we extracted Sheldon, Penny and Raj for experiments. As with Harry Potter, The Big Bang Theory enables potential future incorporation of other multimodal datasets for multimodal research. **Demi-Gods and Semi-Devils (novel, 7 characters):** Demi-Gods and Semi-Devils is a grand wuxia novel by Jin Yong. With its multi-threaded narrative centered around the three protagonists Duan Yu, Qiao Feng and Xu Zhu, the intricate story unfolds. Having been adapted into TV dramas multiple times, the novel holds a special place in Chinese readers' hearts. We extracted Duan Yu, Xu Zhu, Jiumozhi, Murong Fu, and both Qiao Feng and Xiao Feng, the latter being the name Qiao Feng takes after finding out he is Khitan. Treating them as two separate characters allows us to observe any different behaviors in response to the same questions. **Wei Xiaobao:** is the protagonist in another novel by Jin Yong, The Deer and the Cauldron. He is a clever and cunning character, holding positions in the Qing government, anti-Qing organizations and the jianghu underworld, while also popular with female characters. **My Own Swordsman (TV show, 3 characters):** My Own Swordsman is a popular episodic comedy in China. We extracted Tong Xiangyu, Bai Zhantang and Guo Furong from it. **Genshin Impact (wiki, 5 characters):** Genshin Impact is an open-world RPG developed by miHoYo that is commercially successful with players worldwide. Its intricate story is set in the fictional world of Teyvat. Many characters are beloved by players - we extracted Ayaka, Raiden Shogun, Zhongli, Hutao and The Wanderer. **Wang Duoyu:** is the protagonist of the movie Hello Mr. Billionaire, a remake of the 1985 American movie Brewster's Millions. The script can also be found online. Interestingly, Wang Duoyu is supposed to keep the secret of "spending 100 million in a month". The current ChatBot cannot keep this secret well, so improving via constructed reasoning chains or constitutional additions could be an interesting future direction. **Counsellor Tang:** Counsellor Tang (Tang Shiye) is a frequently appearing minor character in the movie Let the Bullets Fly directed by Jiang Wen, a hugely popular movie among Bilibili users whose script can be found online. Thus we attempted constructing the character Counsellor Tang. **Yu Qian:** Yu Qian is the witty stock character in Guo Degang's comic crosstalk routines. Because witty stock characters have very consistent speaking styles in Chinese crosstalk, and the Crosstalk-Generation (Zhang, 2020) project has a large corpus of Yu Qian's material, we also included him in our project. ### Original Script Extraction In most cases, a character's dialogues do not naturally organize into the form shown in Figure 1. For this, we constructed different extraction tools for TV shows, movies and novels: #### 4.2.1 Quotes Data The scripts for Counsellor Tang from Let the Bullets Fly, Wang Duoyu from Hello Mr. Billionaire, and Yu Qian can be directly found online. For the first two characters, we manually split the scripts into segments and formatted them into the defined 'character: "dialogue"' form. Yu Qian's corpus in Crosstalk-Generation contains over 6,000 dialogue exchanges. 
All of these dialogues are in good question-answer form. We checked the length of each utterance \(u\) and \(v\), and within a continuous crosstalk routine we split before a sentence if it had locally maximal length. For Genshin Impact and The Big Bang Theory, enthusiastic netizens have posted quotes and plots on wikis and websites, and preliminary organization of these resources yields the target format. Here the Big Bang Theory data was split using the same finite state machine as for the novel extraction. #### 4.2.2 Extract from TV Series For characters like Haruhi Suzumiya and Li Yunlong, TV scriptwriters often create original dialogues. Comparing Drawing Sword's novel and TV drama, for example, the latter has more conversational information and three-dimensional characterization. Extracting character dialogues from the TV drama is needed in such cases. We first perform speech recognition using Whisper or directly utilize original subtitles. We further identify the speaker of each line using a 192-dimensional ECAPA-TDNN speaker verification embedding, trained on some labeled character dialogues. Finally, recognition errors are manually corrected and the scripts are split. Since manual organization is time-consuming (often requiring rewatching the show), we only processed Haruhi Suzumiya and Li Yunlong this way. We hope that, after we open-source the full TV processing toolkit, more enthusiasts can build characters. #### 4.2.3 Extract from Novel Thanks to progress in large language models, we can also use general language models to batch-process novels. For Demi-Gods and Semi-Devils, The Deer and the Cauldron, and Harry Potter, we utilize Kor's extraction mechanism (an in-context learning information extraction library) to extract 'character-action-dialogue' information sentence by sentence. In the extraction prompt, if a sentence contains dialogue, we want the language model to record the action as 'dialogue' together with the dialogue content. If a sentence does not contain dialogue, we want the language model to summarize the character's actions in the action field. The language model also has some ability to infer who is speaking in each dialogue from context. After bulk novel extraction, we use a finite state machine to split the dialogues. For each protagonist, we look for segments of appropriate, preferably limited, length, with few characters and minimal non-dialogue sentences. The automatic extraction is implemented with such a state machine. Extraction statistics are shown in the table. ## 5 Dialogue Synthesizing Given the system prompt \(s_{R}\) and the corresponding classic dialogues \(D(q,R)\) for each character, we find the character can already respond to user questions \(q\) in a certain style. However, at this point we need to leverage ChatGPT or Claude APIs to model \(p(a|s_{R},D(q,R),q)\). If we want to transfer the functionality of ChatHaruhi to a local model, we need to build a suitable \((R,q,a)\) dataset. In this section we discuss how to augment dialogue data for characters with limited data. ### Generate Dialogue from Question Note that the collected \(D\) data is not in strict \((q,a)\) form. This means we cannot simply fine-tune a language model to learn all \(\{D_{R}\}\) data. To address this, for any \(d\in D_{R}\), we take all dialogues before the protagonist \(R\)'s turn as \(q\), hoping to take this \(q\) as the first question \(q_{1}\) to generate a dialogue. 
In practice, we find the language model can sometimes output multiple dialogues, i.e., after giving one answer \(a_{1}\), it generates a new question \(q_{2}\) and subsequent replies. Moreover, all \(a\) in such generated dialogues conform to character \(R\)'s profile. Hence, we want to modify our ChatBot to further utilize this trait to produce more dialogues. For each dialogue \(d\), we locate the first utterance by character \(R\). Before this utterance, \(d\) is split into left and right parts \(d^{L}\) and \(d^{R}\). We insert User Message special tokens at both ends of \(d^{L}\) and AI Message special tokens at both ends of \(d^{R}\). We do this for all \(M\) stories, so that based on these \(M\) examples, given \(q_{1}\), the language model can simulate generating the corresponding dialogue \(d^{\prime}\). The resulting \(d^{\prime}\) becomes fine-tuning data for the language model. That is, \[d^{\prime}(q_{1})=\mathrm{LLM}(s_{R},(d^{L}_{1},d^{R}_{1}),\ldots,(d^{L}_{M},d^{R}_{M}),q_{1}) \tag{7}\] Often this method generates \(d^{\prime}\) with only one sentence, but it can also generate multi-sentence dialogues with close to 50% probability. When the given \(q\) overlaps with the original text, due to our \(s_{R}\) prompt design, the model tends to output according to the protagonist's original lines. Figure 6: The 22k original exchanges for fine-tuning are extracted from movie scripts, skit scripts, TV shows and novels. Figure 7: Massive simulated queries are generated with Alpaca-like prompting. Detailed prompts are given in the Appendix. ### Question Generating Note that some characters have very limited data, insufficient for fine-tuning language models. Thus we need to augment the questions \(q\) using existing data for each character. Fortunately, in a recent study, R. Taori et al. augmented fewer than 200 seed instructions into 52K instruction-following samples. Here we adapt their prompt (see Appendix for details). When using augmentation methods like Alpaca, we need to provide a clear \((q,a)\) pair, based on which the model generates around 10 heuristic outputs. Here we keep only the generated questions \(q\), then leverage the aforementioned technique to regenerate training dialogues with the character's ChatBot. We use a mix of ChatGPT, GPT-4 and Claude for question generation, with the latter two producing questions more relevant to the character, at a higher cost. Statistics of the final generation for each character are shown in Figure 2. Note that in the first version, we used Alpaca-like approaches to generate around 27k data points. Alpaca-generated questions are influenced by our examples, i.e., they tend to relate to the original scripts. We hope to further filter for real user questions in later versions for testing. We collected 22,752 original dialogues (\(D_{R}\)) and additionally simulated 31,974 questions with corresponding dialogues for the ChatHaruhi-v1 dataset. Note that each dialogue does not necessarily consist of only one QA pair. In total, we collected 54,726 dialogues. ## 6 Experiments Previous work often evaluates the quality of dialogues in role-playing chatbots by conducting pairwise human comparisons of dialogue outputs from different language models, and analyzing the results with TrueSkill or Elo ratings. This approach has a high cost due to the need for human evaluation, and different human raters may yield inconsistent results. Moreover, for role-playing, dialogue quality alone is insufficient for evaluation. 
For example, prompts like "Li Yunlong's language style is crude" or "Bai Zhantang's language has a Jianghu flavor" would significantly reduce dialogue-quality ratings despite accurately reflecting the role. Thus, human evaluations should judge 'role consistency' and 'dialogue quality' separately. ### Metrics for Automatic Evaluation Due to the limited dialogues and lack of continuity for the 5 Genshin Impact roles, we only consider the remaining 27 roles for evaluation. For each of the 27 roles, we select 30 stories containing long dialogues \(\hat{a}\) spoken by character \(R\) from their classic narratives \(D\). We test whether a model can produce a plausible response \(a\) to the previous utterance \(q\) before \(\hat{a}\). We judge plausibility by the similarity of \(\hat{a}\) and \(a\) using sentence embeddings. Specifically, we compute the cosine similarity between \(\hat{a}\) and \(a\) for each role. We use OpenAI's text-embedding-ada-002, a multilingual sentence embedding model, for evaluation. ### Metrics for User Study The user study is still in progress and will be included in a future version of this report. Figure 8: The LLM generates full dialogues conditioned on the first utterance and history. ### Language Model Tuning With the complete 54K ChatHaruhi dataset, we can fine-tune local language models. Approximately 15K dialogues are in English, with the remainder in Chinese. We fine-tune the ChatGLM2-6B model. Inputs follow the \(s_{R}\)-\(D\)-\(H\)-\(q\) format described earlier, with \(a\) as the target for the GPT loss calculation. We obtain three models: * model-A, fine-tuned on the 22,752 original dialogues. * model-B, fine-tuned on the full 54K dataset with original and simulated dialogues. * model-C, fine-tuned on original character utterances rather than ChatBot-generated dialogues. All models were fine-tuned for 3 epochs on 4 A100 GPUs. We will release versions A and B with this report, and version C in a future update. ### Qualitative Results We qualitatively compare five models: 1. GPT-3.5-turbo given just the system prompt \(s_{R}\). 2. GPT-3.5-turbo given the full \(s_{R}\)-\(D\)-\(H\)-\(q\) prompt. 3. ChatGLM2 given just the system prompt. 4. ChatGLM2 given the full prompt. 5. ChatGLM2 fine-tuned on ChatHaruhi with the full prompt. Figure 9: Experiments are conducted on three characters using: a) ChatGPT with the prompt, b) full ChatHaruhi + ChatGPT, c) ChatGLM2 with the prompt, d) full ChatHaruhi + ChatGLM2, and e) full ChatHaruhi + fine-tuned ChatGLM2. More characters are available on our Hugging Face demo. With classic dialogues and an improved prompt, models like ChatGPT can effectively adopt the speaking style of specific characters. Fine-tuning the 7B model also helps it internalize the full prompt (Fig. 9). ### Quantitative Results Quantitative experiments are still in progress and will be included in a future version of this report. ### User Study The user study is still in progress and will be included in a future update. ## 7 Conclusion, Discussion & Future Work In this tech report, we present an attempt at constructing a system capable of role-playing dialogues as different virtual characters. Leveraging the in-context learning abilities of language models and the growth of larger models, we show the possibility of mimicking distinctive conversational styles by providing appropriate system prompts and example passages of classic scenes featuring each character. 
We generate a dataset of 54K simulated dialogues and demonstrate the feasibility of fine-tuning multiple roles into a single 7B-parameter local model. As our first attempted character is the vividly characterized Haruhi Suzumiya, we name the project ChatHaruhi and the dataset ChatHaruhi-54K. Accompanying this report, we release model A trained on the 23K original transcripts and model B trained on the full 54K dataset, demos on Hugging Face, and the complete ChatHaruhi-54K dataset. In future iterations, we will refine the ChatHaruhi interface for easier reusability (see Appendix A for the ChatHaruhi 2.0 draft) and supplement the quantitative evaluations. ## 8 Acknowledgments This is an open source project originating from the June 2023 study group of the Datawhale community, where we tested an early Haruhi-only version of the chatbot and received highly enthusiastic feedback. We then recruited volunteers from the community (e.g., Yan Chenxi, Feng Xiaoyang) for collaborative development. Datawhale nominated the project for the ModelScope Hackathon in early July, bringing greater exposure and drawing more developers. We sincerely thank Datawhale and ModelScope for their support during this project. ChatHaruhi is also a sub-project of the open source community Luotuo, which has benefited from various donations including funding, computing resources, and OpenAI API credits. We thank Luotuo's sponsors for their support. ### Contributors This project is an open source work, with all personnel contributing and developing in their spare time. The developers of this project may belong to other institutions or be independent developers. Here we list the main contributions of each developer, as well as their affiliations. Cheng Li@SenseTime proposed the entire project and designed and implemented most of the functionality. Ziang Leng@SenseTime designed and implemented the overall training, data generation and backend architecture of ChatHaruhi 1.0. Chenxi Yan@Chengdu University of Information Technology implemented and maintained the backend of ChatHaruhi 1.0. Junyi Shen@Zhejiang University implemented the training code and participated in the generation of the training dataset. Hao Wang collected script data from My Own Swordsman and participated in the generation of augmented data. Weishi Mi@Tsinghua University participated in the generation of augmented data. Yaying Fei@Beijing University of Technology implemented the ASR function of the script tool and participated in the Openness-Aware Personality paper sub-project. Xiaoyang Feng@Nanjing Agricultural University integrated the functions of the script recognition tool and participated in the Openness-Aware Personality paper sub-project. Song Yan collected data from The Big Bang Theory and implemented the script format conversion functionality. HaoSheng Wang implemented the voiceprint recognition function in the script tool and the tts-vits speech synthesis function. Linkang Zhan@Case Western Reserve University collected system prompts and story data from Genshin Impact. Yaokai Jia implemented the Vue version of the front end and practiced GPU extraction of BERT in the psychology project. Pingyu Wu@Juncai Shuyun helped deploy the first version of the training code. Haozhen Sun@Tianjin University drew the mosaic of ChatHaruhi characters. 
If you have any suggestions for the project, such as the interface design of ChatHaruhi 2.0, or if you want to add references to a future version of this report, please go to our project [https://github.com/LC1332/Chat-Haruhi-Suzumiya](https://github.com/LC1332/Chat-Haruhi-Suzumiya) and submit an issue.
2308.06868
Camera Based mmWave Beam Prediction: Towards Multi-Candidate Real-World Scenarios
Leveraging sensory information to aid the millimeter-wave (mmWave) and sub-terahertz (sub-THz) beam selection process is attracting increasing interest. This sensory data, captured for example by cameras at the basestations, has the potential of significantly reducing the beam sweeping overhead and enabling highly-mobile applications. The solutions developed so far, however, have mainly considered single-candidate scenarios, i.e., scenarios with a single candidate user in the visual scene, and were evaluated using synthetic datasets. To address these limitations, this paper extensively investigates the sensing-aided beam prediction problem in a real-world multi-object vehicle-to-infrastructure (V2I) scenario and presents a comprehensive machine learning-based framework. In particular, this paper proposes to utilize visual and positional data to predict the optimal beam indices as an alternative to the conventional beam sweeping approaches. For this, a novel user (transmitter) identification solution has been developed, a key step in realizing sensing-aided multi-candidate and multi-user beam prediction solutions. The proposed solutions are evaluated on the large-scale real-world DeepSense $6$G dataset. Experimental results in realistic V2I communication scenarios indicate that the proposed solutions achieve close to $100\%$ top-5 beam prediction accuracy for single-user scenarios and close to $95\%$ top-5 beam prediction accuracy for multi-candidate scenarios. Furthermore, the proposed approach can identify the probable transmitting candidate with more than $93\%$ accuracy across the different scenarios. This highlights a promising approach for nearly eliminating the beam training overhead in mmWave/THz communication systems.
Gouranga Charan, Muhammad Alrabeiah, Tawfik Osman, Ahmed Alkhateeb
2023-08-14T00:15:01Z
http://arxiv.org/abs/2308.06868v1
# Camera Based mmWave Beam Prediction: Towards Multi-Candidate Real-World Scenarios ###### Abstract Leveraging sensory information to aid the millimeter-wave (mmWave) and sub-terahertz (sub-THz) beam selection process is attracting increasing interest. This sensory data, captured for example by cameras at the basestations, has the potential of significantly reducing the beam sweeping overhead and enabling highly-mobile applications. The solutions developed so far, however, have mainly considered single-candidate scenarios, i.e., scenarios with a single candidate user in the visual scene, and were evaluated using synthetic datasets. To address these limitations, this paper extensively investigates the sensing-aided beam prediction problem in a real-world multi-object vehicle-to-infrastructure (V2I) scenario and presents a comprehensive machine learning-based framework. In particular, this paper proposes to utilize visual and positional data to predict the optimal beam indices as an alternative to the conventional beam sweeping approaches. For this, a novel user (transmitter) identification solution has been developed, a key step in realizing sensing-aided multi-candidate and multi-user beam prediction solutions. The proposed solutions are evaluated on the large-scale real-world DeepSense \(6\)G dataset. Experimental results in realistic V2I communication scenarios indicate that the proposed solutions achieve close to \(100\%\) top-5 beam prediction accuracy for single-user scenarios and close to \(95\%\) top-5 beam prediction accuracy for multi-candidate scenarios. Furthermore, the proposed approach can identify the probable transmitting candidate with more than \(93\%\) accuracy across the different scenarios. This highlights a promising approach for nearly eliminating the beam training overhead in mmWave/THz communication systems. Deep learning, computer vision, mmWave communication, multi-user, beam prediction. ## I Introduction The promise that 5G and beyond hold for supporting revolutionary applications (such as autonomous vehicles, intelligent factories, and the Internet of Things (IoT)) is contingent on those systems meeting unprecedented performance requirements in terms of achievable rates, latency, and reliability [1, 2, 3]. Communication systems in high-frequency ranges, e.g., millimeter-wave (mmWave) and sub-terahertz (sub-THz), present a way to meet the first of those demands. This is primarily due to their abundance of bandwidth that helps achieve data rates in excess of tens of Gbps [2, 4]. However, high-frequency systems face challenges on many levels, and one of the most critical is the relatively large beam-training overhead. Signal propagation in the high-frequency domain is characterized by poor penetration ability and suffers from high power loss due to scattering [5]. Therefore, these systems need to periodically update the choice of beamforming vectors at both transmitters and receivers to maintain satisfactory Signal-to-Noise Ratios (SNRs). This need is a common source of strain for those systems, and it has driven much research in the wireless community for innovative solutions to reduce the training overhead. Some recently emerging approaches to deal with many high-frequency wireless communication challenges revolve around machine learning and computer vision [6, 7, 8, 9, 10, 11, 12, 13]. Those approaches collectively define the _Vision-Aided Wireless Communications_ (ViWiComm) framework. 
Within that framework, a wireless system utilizes computer vision, multimodal machine learning, and deep learning to develop an understanding of the wireless environment and its elements. The system then taps into that understanding to address some of the adversities it faces, like the beam-training overhead. A key advantage of the ViWiComm framework is the information-rich sensors it introduces to the wireless system: RGB cameras [12, 14], radars [15, 16], and LiDAR sensors [17], to name a few. These sensors provide much-needed information about the wireless environment that is commonly under-utilized or ignored altogether. Much of the work on addressing beam training [7, 18] with ViWiComm either assumes simplified wireless settings or is based on synthetic datasets like the ViWi dataset [6]. One may question the practicality of the developed framework, given that real wireless communication environments are characterized by two inherent properties: dynamism and visual diversity. Dynamism in wireless environments refers to the continuous change in the locations of radio transmitters and receivers, which naturally leads to time-varying wireless channels. This property is a serious and well-acknowledged challenge in wireless communications [1, 19], for it is the main reason that beam training needs to be performed frequently. Furthermore, dynamism partially contributes to the second property, visual diversity. The wireless environment is fairly visually complex; it is composed of visually diverse objects (e.g., trees, buildings, people, cars, buses, etc.), some of which are continuously on the move, causing the visual scene to vary. From a vision perspective, visual diversity is a serious challenge to ViWiComm as it leads to the multi-candidate dilemma: the case in which the composition of the wireless environment includes multiple objects that could constitute a possible wireless transmitter, see [11]. By considering these two properties, the practicality issue of ViWiComm could be summarized in the form of two questions: **Q.1:** _Could the encouraging results obtained with synthetic datasets (e.g., [7, 18]) be extended to datasets collected from real wireless environments?_ **Q.2:** _Could the vision-aided wireless communication framework tackle the beam-training challenge in wireless environments with multiple candidate transmitters?_ Answering these two questions constitutes the cornerstone of this study. More specifically, the study builds on top of the work in [11, 12, 20] and provides a detailed evaluation of the beam-training challenge with the ViWiComm framework. ### _Prior Work_ The beam-training challenge in high-frequency wireless systems has been investigated in several studies, e.g., [21, 22]. In recent years, a considerable amount of research has explored machine-learning-based approaches to tackle that challenge. For this study, that research is surveyed from the perspective of computer vision; this means the proposed approaches in the literature will be categorized based on whether or not they utilize visual data. **Beam prediction without visual data:** Many studies have considered developing machine learning algorithms for beam prediction using wireless and/or position sensory data. Good examples could be found in [1, 23, 24]. The work in [1] introduces a novel coordinated beamforming approach based on mmWave omni- or quasi-omni-channels. 
A set of coordinating mmWave base stations estimates their local mmWave channels using omni or quasi-omni beam patterns and then uses those channels to train a Deep Neural Network (DNN) to predict the beamforming vectors at each base station. The study in [23] takes a different look at what could be used as sensory data. It proposes a new approach based on sub-6 GHz channels and DNNs. It poses the beam prediction problem as a classification problem, for which a DNN is trained to observe the sub-6 GHz channels and predict the optimal beamforming vector. The use of position sensory data to tackle the beam-training challenge has been explored in [24]. It proposes to use the receiver position and the types of neighboring vehicles to handcraft a per-receiver feature vector, and it trains an ensemble classifier based on a random forest model to predict the optimal beamforming vectors using that feature vector. The existing work has shown promise in addressing the beam-training challenge. However, there is potential for further improvement by considering better sensory data and learning methods. Currently, the chosen sensory data is insufficient to capture the complexity and dynamics of the wireless environment; the wireless or position data only offer partial information about objects and their movements. Additionally, the previous research relies solely on unimodal machine learning. Future wireless systems, including base stations and user equipment, are expected to incorporate multiple sensors such as RGB cameras, sub-6 GHz transceivers, LiDARs, radars, and GPS [25, 26]. Utilizing all available sensory data instead of relying on a single modality would be more practical. Therefore, machine learning research should focus on developing algorithms that can effectively extract meaningful features by combining different modalities. **Beam prediction with visual data:** This category includes beam prediction approaches that address the two shortcomings mentioned above by utilizing visual sensory data and multimodal machine learning. Using computer vision to address the beam-training challenge in high-frequency wireless systems is rooted in the early work in [6, 7]. The ViWiComm framework, in which wireless systems are equipped with visual data sources, was first introduced in [6] along with the first Vision-Wireless (ViWi) dataset. Using ViWi, [7] presents the first case study for beam prediction using the ViWiComm framework. It proposes a beam prediction approach based on Convolutional Neural Networks (CNNs) for wireless settings with a single candidate user. The work in [18] extends that in [7] by utilizing object detectors. It takes a step closer to studying ViWiComm in realistic wireless settings by synthesizing images with multiple candidate users using a scenario from the ViWi dataset. Some common limitations of the early work include focusing on simple communication scenarios, such as settings with single-candidate users or artificially generated multi-candidate users, and not fully exploring the potential of multimodal machine learning. Wireless environments, as mentioned earlier, are characterized by dynamic changes and visual variations. This poses a significant challenge when predicting beams based solely on visual data, as visual information alone does not provide any insight into the identity of the object responsible for the radio signal. 
To address this issue, previous studies such as [11, 27] have proposed multimodal machine learning algorithms that combine both visual and wireless data to identify the radio transmitter in the environment. However, it is important to note that these studies do not specifically tackle the beam-training challenge in practical wireless settings. ### _Contribution_ Recognizing both the shortcomings of initial work on ViWiComm and its potential to deal with the beam-training challenge in high-frequency wireless systems, this paper presents a comprehensive study tackling the challenge in practical wireless settings. In particular, it addresses the two questions **Q.1** and **Q.2** by utilizing visual data and deep learning models. The main contributions of this paper could be summarized as follows: * **Beam prediction in single-candidate settings:** As a stepping stone, this paper addresses **Q.1** using a dataset constructed using DeepSense 6G scenarios [20], which represent real wireless communication environments. It shows that the encouraging results in [7, 18] could indeed be achieved in real wireless communication settings with a single-candidate user. * **Beam prediction in multi-candidate settings:** An answer to **Q.2** is provided in the form of a novel multimodal beam prediction DNN. The proposed solution utilizes visual and position data to identify the radio transmitter in the environment and predict its optimal beamforming vector. It represents the first demonstration of a multimodal ViWiComm DNN designed to tackle beam prediction in practical wireless communication settings. * **Large multimodal dataset and comprehensive evaluation:** By utilizing the different scenarios in DeepSense, two datasets of co-existing multimodal data points (i.e., visual, position, and wireless) are constructed for development and testing purposes. Those datasets constitute the seeds for the comprehensive evaluation experiments conducted to answer the questions **Q.1** and **Q.2** and to provide insights into the upsides and downsides of vision-aided beam prediction. ## II Vision-aided Beam Prediction Overcoming the challenge of beam training in high-frequency wireless communication systems (mmWave and sub-THz) is a cornerstone in realizing the full potential of those systems. Recent efforts to deal with this challenge have seen increasing interest in leveraging artificial intelligence, and deep learning in particular [18, 1, 23, 7]. Among those efforts, vision-aided beam prediction is a promising approach proposed to reduce the beam training overhead in high-frequency systems. As stated in Section I-B, this work provides a comprehensive and realistic evaluation of the potential of vision-aided beam prediction solutions. In the following three subsections, we (i) motivate the need for vision, (ii) introduce the key idea of vision-aided beam prediction, and (iii) present the overall flow of this paper. ### _Motivation_ Harvesting the large bandwidth available at the mmWave and sub-THz bands requires using narrow beams and overcoming their alignment challenge [2, 28]. The true challenge with beam alignment stems from the inherently dynamic nature of the wireless communication environment; transmitters, receivers, and scatterers could all be on the move. A direct implication of this dynamism is that transmitters and receivers need to periodically update their choice of beamforming vectors to maintain a satisfactory SNR level and, preferably, a LOS connection. 
This update comes in the form of beam-training, which is a recognized burden in high-frequency wireless communications [18, 23]. An interesting and novel approach for handling the beam-training burden could be found in embracing a striking resemblance between high-frequency communication and computer vision systems, which is their reliance on LOS [29, 8]. Since high-frequency signals struggle to penetrate objects in the wireless environment and lose a significant amount of power due to scattering [28], there is quite a large SNR margin between LOS and NLOS communication links that skews in favor of LOS. This makes LOS a preferable setting in high-frequency communications, and it draws a connection with computer vision, which is inherently LOS. The data usually captured and analyzed in a computer vision system depicts what is _visible_ in the scene, starting with simple patterns (e.g., edges, colors, etc.) up to abstract concepts (e.g., human, dog, tree, etc.). As such, the information contained in visual data could be as valuable to a high-frequency system as it is to a computer vision system, _begging the question of how computer vision could be used to mitigate the beam-training challenge_. ### _Key Idea_ The reliance on LOS is the key feature that links high-frequency communications to computer vision and is the bedrock of vision-aided beam prediction. To understand the connection and how it is used to overcome the large beam training overhead challenge, consider the example depicted in Fig. 1, where a high-frequency basestation serves some users in its surrounding environment. Without loss of generality, the base station is assumed to employ a well-calibrated high-frequency antenna array 1. Some users experience LOS connections with the base station in this environment, while others have NLOS links. In a classical system operation, the base station performs beam training to identify suitable beams for each user. Two examples are illustrated in the "top-view" image in Fig. 1, namely beams \(\mathbf{f}_{\text{LOS}}\) and \(\mathbf{f}_{\text{NLOS}}\). An important factor in the beam choice is the user's relative position with respect to the base station. For instance, \(\mathbf{f}_{\text{LOS}}\) is clearly a LOS beam, and it represents a straightforward path between the base station and the green SUV. The NLOS beam \(\mathbf{f}_{\text{NLOS}}\) does not point to the red car but to a visible building (i.e., a reflector) that leads to the red car. Although simple, these observations highlight an important property that could be utilized to address the beam-training challenge; both the SUV and the building are in the field of view of the base station and qualify as LOS objects. This suggests that detecting those objects and understanding their roles could be key to identifying the choice of beamforming vectors. Footnote 1: Well-calibrated here means the array geometry is known, and no hardware impairments are assumed; more on those issues could be found in [30, 31] A high-frequency wireless system could tap into the advances in the fields of computer vision and machine learning [32] to realize the notion of detecting objects and understanding their roles, and perform vision-aided beam prediction. Fig. 1: An illustration of a high-frequency wireless communication system and its environment. “Base station View” depicts the environment from the base station perspective, showing some LOS users and possible LOS reflectors. 
“Top View” shows the invisible part of that environment, i.e., NLOS users, and the beamforming vectors used to serve LOS and NLOS users. Abstractly, using a machine learning algorithm designed for ViWiComm, the task of predicting beams could be broken down into three stages: _scene analysis_, _object-role identification_, and _decision making_. The first stage is where the machine learning model directly operates on the visual data to extract contextual information. Referring back to Fig. 1, this is loosely equivalent to detecting various objects of importance in the scene, like cars, buses, pedestrians, trees, buildings, etc. Those objects are passed on to the second stage, in which the machine learning model attempts to identify the roles of those objects relative to the wireless system; this means labeling the objects in Fig. 1 as candidate users, candidate reflectors, or candidate signal blockages. Such labeling will be discussed under the concept of transmitter identification in Section V. This will highlight that visual data alone is not sufficient for identifying the transmitter in the scene. Hence, the object-role identification stage requires augmenting what has been learned from the previous stage with other sensory data, such as wireless, LiDAR, or GPS data. Recognizing the role of each object gets the machine learning model ready to go into the decision-making stage, where it selects the suitable beams to serve the LOS/NLOS users. ### _Flow_ This work presents a real-world study on realizing vision-aided beam prediction and demonstrating its ability to tackle the beam-training challenge. The study is conducted in two major phases: (i) beam prediction in single-candidate settings, where there is only one candidate user in the visual scene, and (ii) beam prediction in multi-candidate settings, where multiple objects/candidate users exist in the visual scene. The two phases represent the evolution of the ViWiComm framework first proposed in [7] from conception to real-world implementation in realistic multi-candidate user settings. This study takes advantage of the recently developed DeepSense 6G dataset [20], which reflects real wireless communication environments. The dataset is designed for multimodal machine learning research in wireless communication. It consists of various scenarios where multimodal sensing and communication data samples are collected using a multi-sensor testbed; see [20] for more information. ## III System Model This paper adopts a system model that consists of a basestation, deployed on the sidewalk, and a vehicular mobile user, similar to the system depicted in Fig. 1. The base station is equipped with a uniform linear array (ULA) with \(M\) elements, a standard-resolution RGB camera, and a GPS. For practicality [23], the base station is assumed to employ an analog-only architecture with a single RF chain and \(M\) phase shifters. The base station adopts a predefined local beamforming codebook \(\boldsymbol{\mathcal{F}}=\{\mathbf{f}_{q}\}_{q=1}^{Q}\), where \(\mathbf{f}_{q}\in\mathbb{C}^{M\times 1}\) and \(Q\) is the total number of beams. This codebook spans a ULA field of view of \(\gamma^{\circ}\) along the azimuth plane. 
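As an illustration of such a codebook, the following is a minimal NumPy sketch that builds \(Q\) beamsteering vectors uniformly spanning a field of view of \(\gamma^{\circ}\) for a half-wavelength-spaced ULA. This is one common construction and is only an assumption here; the codebook used in the actual testbed may be calibrated differently.

```python
import numpy as np

def ula_steering(theta, M, d=0.5):
    """Array response of an M-element ULA at azimuth angle theta (radians),
    with element spacing d in wavelengths (half-wavelength by default)."""
    m = np.arange(M)
    return np.exp(1j * 2 * np.pi * d * m * np.sin(theta)) / np.sqrt(M)

def beamsteering_codebook(M, Q, fov_deg):
    """Q beamforming vectors f_q uniformly spanning a field of view of
    fov_deg degrees (gamma in the text) around broadside; returns Q x M."""
    angles = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, Q))
    return np.stack([np.conj(ula_steering(a, M)) for a in angles])
```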
Adopting OFDM with a cyclic prefix of length \(D\) and a number of subcarriers \(K\), the received downlink signal at the mobile unit is given by \[y_{k}=\mathbf{h}_{k}^{T}\mathbf{f}x+\beta_{k}, \tag{1}\] where \(y_{k}\in\mathbb{C}\) is the received signal at the \(k\)th subcarrier, \(\mathbf{f}\in\boldsymbol{\mathcal{F}}\) is the selected beamforming vector, \(\mathbf{h}_{k}\in\mathbb{C}^{M\times 1}\) is the channel between the BS and the mobile unit at the \(k\)th subcarrier, \(x\in\mathbb{C}\) is a transmitted complex symbol that satisfies the constraint \(\mathbb{E}\left[|x|^{2}\right]=P\), where \(P\) is the power budget per symbol, and finally \(\beta_{k}\) is a noise sample drawn from a complex Gaussian distribution \(\mathcal{N}_{\mathbb{C}}(0,\sigma^{2})\). The choice of the beam is determined using classical beam training, in which the base station sweeps the codebook \(\boldsymbol{\mathcal{F}}\) looking for the optimal vector \(\mathbf{f}^{\star}\). Formally, that sweep could be expressed by \[\mathbf{f}^{\star}=\underset{\mathbf{f}_{q}\in\boldsymbol{\mathcal{F}}}{\text{argmax}}\frac{1}{K}\sum_{k=1}^{K}\log_{2}\left(1+\mathrm{SNR}|\mathbf{h}_{k}^{T}\mathbf{f}_{q}|^{2}\right), \tag{2}\] where \(\mathrm{SNR}\) is the signal-to-noise ratio. However, in LOS-dominated system operation (almost single-path channels), (2) could be approximated by \[\mathbf{f}^{\star}=\underset{\mathbf{f}_{q}\in\boldsymbol{\mathcal{F}}}{\text{argmax}}\frac{1}{K}\sum_{k=1}^{K}|\mathbf{h}_{k}^{T}\mathbf{f}_{q}|^{2}. \tag{3}\] The channel model adopted throughout this paper is the geometric mmWave channel model. This model choice comes as a result of two facts: (i) the model captures the limited scattering property of the mmWave band [19, 33], and (ii) the experimental results in this paper are based on real measurements, which are captured well by the geometric model. The channel vector \(\mathbf{h}_{k}\) in (1) is given by \[\mathbf{h}_{k}=\sum_{d=0}^{D-1}\sum_{\ell=1}^{L}\alpha_{\ell}e^{-j\frac{2\pi k}{K}d}p\left(dT_{\mathrm{S}}-\tau_{\ell}\right)\mathbf{a}\left(\theta_{\ell},\phi_{\ell}\right), \tag{4}\] where \(L\) is the number of channel paths, \(\alpha_{\ell},\tau_{\ell},\theta_{\ell},\phi_{\ell}\) are the path gain (including the path-loss), the delay, the azimuth angle of arrival, and the elevation angle of arrival, respectively, of the \(\ell\)th channel path, \(p(\cdot)\) is the pulse shaping function, and \(\mathbf{a}\left(\theta_{\ell},\phi_{\ell}\right)\) is the array response vector. \(T_{\mathrm{S}}\) represents the sampling time while \(D\) denotes the cyclic prefix length (assuming that the maximum delay is less than \(DT_{\mathrm{S}}\)). ## IV Single-Candidate Settings This section starts the first phase of this study, where vision-aided beam prediction is studied in a real wireless setting with a single candidate user. These settings are the immediate and natural extension to those in [7]. This phase is structured into two parts. The first part includes the formal definition of the single-candidate beam-prediction problem, the proposed deep learning solution to that problem, and a brief discussion of the practical challenges associated with that solution, motivating the transition to the second phase of this study. The second part presents a detailed evaluation of the proposed solution on the real-world dataset, and it will be discussed in Section VIII-A. 
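For reference, the classical sweep in (3) — the baseline that the learning-based solutions developed next aim to replace — amounts to a one-line search over the codebook. The sketch below assumes the channels \(\mathbf{h}_{k}\) are available; in practice they are not, and the sweep is performed over the air by measuring receive power per beam.

```python
import numpy as np

def beam_sweep(H, codebook):
    """Exhaustive beam training of Eq. (3).
    H        : K x M matrix whose k-th row is h_k^T (one row per subcarrier).
    codebook : Q x M matrix whose q-th row is f_q.
    Returns the index q* of the beam maximizing the average receive power."""
    power = np.abs(H @ codebook.T) ** 2      # (K, Q): |h_k^T f_q|^2
    return int(np.argmax(power.mean(axis=0)))
```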
### _Problem definition_ The beam prediction task in single-candidate wireless settings is defined as follows: _Given a wireless communication environment where there is only one possible high-frequency transmitter, vision-aided beam prediction at the infrastructure is the task of predicting the optimal beam indices from a pre-defined codebook by utilizing a machine learning model and the images captured by the camera installed at the basestation._ Formally, the problem can be defined as follows. A dataset of image-beam pairs (samples) is collected from real wireless environments, where each pair consists of an image with a single-candidate transmitter and its best beamforming vector. This dataset could be given by \(\mathcal{D}_{\text{task}_{1}}=\{(\mathbf{X}_{u},\mathbf{f}_{u}^{\star})\}_{u=1}^{U}\), where \(\mathbf{X}_{u}\in\mathbb{R}^{H\times W\times C}\) is the RGB image of the \(u\)th pair in \(\mathcal{D}_{\text{task}_{1}}\) with height \(H\), width \(W\), and number of channels \(C\), and \(U\) is the total number of pairs in the dataset. At any time instant \(t\), the objective of the single-candidate beam prediction task is to find a prediction/mapping function \(f_{\Theta_{1}}\) that utilizes the available sensory data \(\mathbf{X}[t]\) to predict (estimate) the optimal beam index \(\mathbf{\hat{f}}[t]\in\boldsymbol{\mathcal{F}}\) with high fidelity. The mapping function can be formally expressed as \[f_{\Theta_{1}}:\mathbf{X}[t]\rightarrow\mathbf{\hat{f}}[t]. \tag{5}\] In this work, we design a deep learning algorithm to learn a prediction function \(f_{\Theta_{1}}\) parameterized by a set of parameters \(\Theta_{1}\) from the dataset \(\mathcal{D}_{\text{task}_{1}}\). The objective of the learned function is to maximize the overall correct prediction probability over all samples in \(\mathcal{D}_{\text{task}_{1}}\), which could be expressed as follows \[f_{\Theta_{1}}^{\star}=\max_{f_{\Theta_{1}}}\prod_{u=1}^{U}\mathbb{P}\left(\mathbf{\hat{f}}_{u}=\mathbf{f}_{u}^{\star}\mid\mathbf{X}_{u}\right), \tag{6}\] where the product is the result of an implicit assumption that the samples of \(\mathcal{D}_{\text{task}_{1}}\) are drawn independently from an unknown distribution \(\mathbb{P}(\mathbf{X},\mathbf{f}^{\star})\) that models the relation between \(\mathbf{X}\) and \(\mathbf{f}^{\star}\)2. Footnote 2: A common assumption in the realm of machine learning, see [34] for example. ### _Proposed solution_ The proposed deep learning solution relies on the important parallel between vision and high-frequency communications, i.e., the dependency on LOS objects. It attempts to learn beam prediction by learning where the user (for LOS situations) or the reflecting surface (for NLOS situations) is in the environment. In general, a beam-steering codebook deployed at the basestation, especially with a well-calibrated mmWave phased array [30], induces a sectoring of the wireless environment across the azimuth plane. This sectoring could be projected onto the image plane, i.e., \(\mathbf{X}\), to result in visual sectors [11] that could be regarded as classes. Given the assumption of a single candidate in the environment, the location of the user (LOS situations) or reflector (NLOS situations) in the image defines the sector to which that user or reflector belongs, henceforth referred to as the object-sector assignment. Therefore, the prediction function \(f_{\Theta_{1}}(\mathbf{X})\) should learn such an assignment in order to predict the best beamforming vector. 
For the NLOS situations, recognizing the most likely reflector may need a sequence of images; they could help indicate the user direction before it gets blocked and provide some insight into its best reflector. Based on the intuition mentioned above about the single-candidate beam prediction task, this study proposes a modified residual neural network (ResNet) [35], more specifically ResNet-50. The architecture is pre-trained on the ImageNet dataset and modified to incorporate a new \(M\)-class classifier layer. The reason behind the choice is rooted in two main facts. The first one is that residual learning is an effective approach to building very deep architectures. The results in [35] have shown that residual blocks, the fundamental element of ResNets, prevent the performance degradation commonly associated with training deep architectures. The second reason is the good performance ResNets have registered in many computer vision tasks. They were originally designed for image classification; however, they have found their way into many deep architectures developed for object detection [36] and semantic segmentation [37].

### _Challenges_

The proposed ResNet architecture above is expected to face a critical challenge when deployed in real mmWave environments. The primary reason is the assumption of a single candidate user, i.e., the proposed solution searches for a single candidate user in the scene to assign to a sector. As a result, the proposed solution is incapable of handling a scenario with multiple users. For instance, when two vehicles are present in the image (the wireless environment), and they belong to different visual sectors, it is impossible to predict the beam without identifying the roles of each vehicle (transmitter, moving blockage, or possible reflector). This dilemma with multiple candidates is the building block for the second phase of this study, which is discussed in the following section.

Fig. 2: This figure illustrates the proposed machine learning-based vision-aided beam prediction model that leverages visual data captured at the base station for mmWave/sub-THz beam prediction in a single-candidate setting.

## V Multi-Candidate Settings

The interest in addressing the dilemma of beam prediction in environments with multi-candidate users is at the center of the second phase of this study. As discussed in Section IV-C, a vision-aided beam prediction algorithm needs, in some way or another, to realize the three-stage process described in Section II-B. The key challenge in doing so is the ability to identify the roles of every candidate in the environment and differentiate the connected users from the other objects in the environment. In order to deal with this challenge, we need to perform what we call _user identification_ in the visual scene by leveraging other user attributes (which could also be captured using other sensing modalities). In this section, we investigate this problem and propose a DNN architecture to perform the task of beam prediction in multi-candidate settings. This establishes the basis for enabling vision-aided multi-user communications.
### _Problem Definition_

The beam prediction task in multi-candidate wireless settings is defined as follows: _Given a wireless communication environment where there are multiple objects that could visually constitute wireless transmitters, the beam prediction task in multi-candidate settings is defined as the problem of predicting the optimal beamforming vector from a pre-defined beam codebook using a pool of multimodal data that includes vision._

The task is formally defined as follows: Let \(\mathcal{V}\) be a \((v+1)\)-tuple of multimodal data samples that includes vision, i.e., \(\mathcal{V}=(\mathbf{X},\mathbf{g}_{1},\ldots,\mathbf{g}_{v})\), where \(\mathbf{g}_{1}\) to \(\mathbf{g}_{v}\) are vectors containing the other modality data samples. Then, a dataset of pairs is collected from a real wireless environment, \(\mathcal{D}_{\text{task}_{2}}=\{(\mathcal{V}_{u},\mathbf{f}_{u}^{*})\}_{u=1}^{U}\), where \(U\) is the total number of samples in \(\mathcal{D}_{\text{task}_{2}}\) and \(\mathbf{f}_{u}^{*}\) is the optimal beam in \(\boldsymbol{\mathcal{F}}\) associated with the \(u\)th tuple \(\mathcal{V}_{u}\). At any time instant \(t\), the objective of the multi-candidate beam prediction task is to find a prediction/mapping function \(f_{\Theta_{2}}\) that utilizes the available sensory data \(\mathcal{V}[t]\) to predict (estimate) the optimal beam \(\mathbf{\hat{f}}[t]\in\boldsymbol{\mathcal{F}}\) with high fidelity. The mapping function can be formally expressed as

\[f_{\Theta_{2}}:\mathcal{V}[t]\rightarrow\mathbf{\hat{f}}[t]. \tag{7}\]

In this work, we design a deep learning algorithm to learn a prediction function \(f_{\Theta_{2}}\) parameterized by \(\Theta_{2}\). This function needs to maximize the probability of correct beam prediction given the multimodal data, i.e.,

\[f_{\Theta_{2}}^{*}=\underset{f_{\Theta_{2}}}{\text{argmax}}\prod_{u=1}^{U}\mathbb{P}\left(\mathbf{\hat{f}}_{u}=\mathbf{f}_{u}^{*}\ |\ \mathcal{V}_{u}\right). \tag{8}\]

Again, similar to the formulation of (6), the product in (8) is a result of an implicit assumption that the samples of \(\mathcal{D}_{\text{task}_{2}}\) are independent and identically distributed, i.e., follow the same unknown joint distribution \(\mathbb{P}(\mathcal{V},\mathbf{f}^{*})\).

### _The Choice of Data Modalities for User Identification_

From the definition above, the ultimate objective of beam prediction in multi-candidate settings is still the same as that in Section IV-A, developing a deep learning algorithm that predicts optimal beamforming vectors; however, performing the beam-prediction task requires more than visual data in those settings. This is a direct consequence of the fact that multiple candidate users share the same visual traits (see Section II-B), and therefore, their roles from the wireless system perspective _cannot be visually determined_. This calls for a secondary source of information that could augment visual data, and the choice for such a source in this study is GPS (i.e., \(\mathcal{V}=(\mathbf{X},\mathbf{g})\), where \(\mathbf{g}\in\mathbb{R}^{2}\) is a vector of latitude and longitude coordinates). This choice is motivated by two important observations. First, position information is intimately related to visual information; GPS provides coordinates for objects in the 3-dimensional world, and an image is a projection of that 3-dimensional world onto a 2-dimensional plane. Hence, in some sense, the position data complements what is missing in the visual data, which is the sense of distance.
The second reason is that position data are lightweight, making them easily exchangeable between a basestation and its candidate user. It is quite important to emphasize at this point that GPS (or position data) is _meant to augment visual data and not replace them_. As shown in [12], position data could be of help to beam prediction, yet they lack a sense of surrounding; position data only indicate where candidate users are, and they do not account for contextual information (shapes and relations between those candidates). For instance, visual data reflect information about the shape and type of candidates (e.g., large vehicle, small vehicle, pedestrians, etc.) and their relation to one another, which provides contextual information to the machine learning algorithm.

### _Proposed Solution_

This subsection presents the proposed solution for beam prediction in a real wireless environment with multiple transmitting candidates. It proposes a novel approach that utilizes bimodal visual and position data in \(\mathcal{D}_{\text{task}_{2}}\) to predict optimal beamforming vectors. The proposed solution follows the three-stage sequence outlined in Section II-B. More to the point, it breaks down the function \(f_{\Theta_{2}}(\mathbf{X},\mathbf{g})\) into two major components: user (transmitter) identification and beam prediction. The first component utilizes visual data to detect relevant objects in the environment (i.e., scene analysis), then identifies the radio transmitter among the objects using position data (i.e., object-role identification). The second component takes in the extracted information about the transmitter and its surrounding objects and predicts the optimal beamforming vector (i.e., decision-making). Fig. 3 shows a schematic of the proposed architecture.

#### V-C1 Transmitter Identification

The goal of the first component of the proposed multi-candidate beam prediction solution is to identify the candidate transmitter in the image, i.e., transmitter identification. For that task, a two-step architecture is proposed. The first step of the proposed architecture relies on DNNs to produce bounding boxes enclosing relevant objects in the scene. It is performed to detect all the probable transmitting objects in the environment. In the second step, the DNN uses position data to filter out detected candidates that are not the radio transmitter. A deeper look at the two-step DNN architecture is given below.

**Bounding box detection:** In order to detect the transmitting candidate in real wireless settings, the first step is to identify all the relevant objects in the scene (scene analysis). A pre-trained object detector is adopted for this purpose. The object detector is modified to detect two classes of objects in the scene, labeled as "Tx (transmitter)" and "No Tx (Distractors)". The former label encompasses all objects relevant to the wireless system in the scene. For example, in a scene depicting a city street, relevant objects include, but are not limited to, cars, trucks, buses, pedestrians, and cyclists. The other label covers the cases where no relevant objects are present in the scene. The modified object detector is fine-tuned in a supervised fashion using a subset of the manually labeled dataset described in Section VI-A. In this work, a YOLOv3 architecture is selected for the bounding box detection task. It provides accurate detection at a relatively high frame rate, reducing inference latency.
During inference, the fine-tuned YOLOv3 model generates bounding boxes for the detected candidates in the scene and their confidence scores. By using those output boxes, the relevant-object matrix \(\mathbf{B}\in\mathbb{R}^{N\times 2}\) is constructed such that each row has only the normalized coordinates of the center of a bounding box, see Fig. 3.

**Bounding box selection:** In this step, both the relevant-object matrix \(\mathbf{B}\) and position data are utilized to identify the probable transmitter in the scene. This process starts by learning a prediction function that estimates the bounding box center of a transmitter using its position information, encoding the relation between object position in the 3D world and object location in the image. The function is learned using a \(3^{rd}\)-degree polynomial regression model

\[\hat{\mathbf{b}}_{\text{Tx}}=\mathbf{W}^{T}\boldsymbol{\phi}, \tag{9}\]

where \(\hat{\mathbf{b}}_{\text{Tx}}\in\mathbb{R}^{2\times 1}\) is a vector with an initial prediction of the center of a transmitter's bounding box, \(\mathbf{W}\) is an \(A\times 2\) parameter matrix, and the \(A\times 1\) vector \(\boldsymbol{\phi}\) is a feature vector obtained from the \(3^{rd}\)-degree polynomial transformation of the predictors \(\mathbf{g}\), with \(A\) denoting the number of unique monomials in a bivariate \(3^{rd}\)-degree polynomial3, i.e., \(A=9\). The parameter matrix \(\mathbf{W}\) is learned from the dataset \(\mathcal{D}_{\text{task}_{2}}\) (more on the training of this model in Section VII), and once it is learned, the model can be used to get an initial estimate of the center of the transmitter bounding box. Since \(\hat{\mathbf{b}}_{\text{Tx}}\) is an initial estimate that solely relies on position data, it is not expected to be a final prediction but merely a guide. It is used in conjunction with the relevant-object matrix \(\mathbf{B}\) to identify (or select) the object responsible for the radio signal (transmitter). This is done using the nearest neighbor algorithm with a Euclidean distance metric. In other words, the row of \(\mathbf{B}\) with the shortest distance to \(\hat{\mathbf{b}}_{\text{Tx}}\) is picked as the nearest neighbor and, hence, the predicted user (transmitter).

Footnote 3: It is bivariate because the vector of predictors \(\mathbf{g}\) is 2 dimensional, see [34, 38] for more information on polynomial regression.

#### V-C2 Beam Prediction

The second component of the proposed solution is a feed-forward neural network that predicts the optimal beam given the identified transmitter. Since this component receives enough contextual information from the previous one, it is capable of realizing the third and last stage of the three-stage sequence, namely decision-making. A 2-layer feed-forward neural network is developed to perform that beam prediction task. In particular, the prediction task here is posed as a classification problem. The input to the network is the center coordinates of the identified transmitter, and the output is the best beam in the codebook \(\boldsymbol{\mathcal{F}}\).

Fig. 3: The figure presents the proposed multi-modal machine learning-based beam prediction model that leverages both visual and positional data to predict the optimal beam indices in multi-candidate settings.
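To make the two-step identification concrete, the sketch below chains a polynomial-feature estimate in the spirit of (9) with the nearest-neighbor selection over \(\mathbf{B}\). The exact monomial ordering, shapes, and values are illustrative assumptions, and the zero \(\mathbf{W}\) stands in for the learned parameters.

```python
import numpy as np

def poly_features(g):
    """The A = 9 monomials of a bivariate 3rd-degree polynomial of
    g = (latitude, longitude), excluding the constant term (one plausible
    ordering of phi in (9))."""
    x, y = g
    return np.array([x, y, x**2, x*y, y**2, x**3, x**2*y, x*y**2, y**3])

def identify_transmitter(B, g, W):
    """Return the row of B (a bounding-box center) nearest to the initial
    estimate b_hat = W^T phi(g), per the nearest-neighbor rule."""
    b_hat = W.T @ poly_features(g)             # (2,) initial center estimate
    dists = np.linalg.norm(B - b_hat, axis=1)  # Euclidean distance per row
    return B[np.argmin(dists)]

# Hypothetical inputs: N = 3 detected boxes, one GPS reading, placeholder W.
B = np.array([[0.2, 0.5], [0.6, 0.5], [0.8, 0.4]])  # normalized box centers
g = np.array([33.42, -111.93])
W = np.zeros((9, 2))                                # stands in for learned W
b_tx = identify_transmitter(B, g, W)  # center fed to the 2-layer beam network
```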
## VI Development Dataset

As the cornerstone of this study is to answer **Q.1** and **Q.2** in Section I, an experimental setup is built around multi-modal real-world sensor measurements collected from real wireless environments. This is done by utilizing the publicly available DeepSense dataset [20] and constructing a large development dataset with tuples of RGB images, mmWave beams, GPS positions, and bounding boxes for the transmitters and distractors. The considered scenarios from DeepSense and the constructed final development datasets are discussed below.

### _Communication Scenarios and Development Dataset_

The DeepSense 6G dataset provides a variety of outdoor wireless communication scenarios with different data modalities [20]. To evaluate the two beam prediction problems defined in Sections IV and V, we select scenarios \(1\) to \(8\) from the DeepSense dataset. They represent six outdoor wireless environments with vehicles as the main candidate transmitters. Furthermore, the scenarios were collected at different times of the day and in varied weather conditions to increase the overall diversity of the dataset. All eight scenarios contribute a total of \(\approx 18000\) data samples, each of which is a tuple of RGB image, mmWave received power, and GPS position, and across all of them, the wireless system deploys a beam-steering codebook of 64 beams. In Fig. 5, we present the data samples from the \(8\) different scenarios of the DeepSense dataset.

The proposed beam prediction tasks and their solutions mandate slightly different types of communication scenarios and data modalities. Therefore, the raw dataset passes through a processing pipeline to filter out data samples unrelated to a particular task. This leads to two different development datasets, one for each task. The details of this process are given below:

**Single-candidate dataset:** The task defined in Section IV requires data collected from the wireless environment with a single candidate in the FoV of the basestation. The first step in generating the development dataset is to filter out samples that do not fit the single-candidate setting from the raw dataset. This is done by manually examining the vision data of each DeepSense scenario. Doing so shows that scenarios 5, 6, 7, and 8 have some multimodal data that pertain to the single-candidate setting. Table I lists those scenarios and the number of samples they contribute to the development dataset. The selected samples undergo a second processing step, in which only visual and wireless data are retained. The task relies on visual data as inputs and optimal beams as targets (labels); therefore, only RGB images and mmWave received power data are picked from the modalities of each scenario. The power data go through one extra step to generate the optimal beams. Each received power vector is first downsampled to 32 elements by selecting every other element in the vector. Since the basestation receives the mmWave signal using an oversampled codebook of \(64\) pre-defined beams, the downsampling does not affect the total area covered by the beams. This implies that the effective size of the codebook in this paper is \(Q=32\). Then, out of the 32 elements per vector, the index of the beam with maximum received power is selected as the optimal beam (as described in Section III and given by (3)). The final outcome of this pipeline is a dataset \(\mathcal{D}_{\text{task}_{1}}=\{(\mathbf{X}_{u},\mathbf{f}_{u}^{\star})\}_{u=1}^{U_{1}}\), where \(U_{1}\approx 9000\).
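As a concrete illustration of this label-generation step, the following minimal sketch downsamples a 64-beam received-power vector to \(Q=32\) and picks the strongest beam as the label; the power vector is a random placeholder for a measurement.

```python
import numpy as np

power64 = np.random.rand(64)   # placeholder power over the oversampled codebook
power32 = power64[::2]         # keep every other element -> effective Q = 32
optimal_beam = int(np.argmax(power32))  # label f* per (3)
```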
**Multi-candidate dataset:** The multi-candidate task requires data samples collected from a wireless environment with multiple candidates. Thus, similar to the single-candidate dataset, the raw dataset is examined to identify the multi-candidate samples. This reveals that Scenarios 1, 2, 3, and 4 are more suited for multi-candidate settings, see Table I for more details. As described in Section V, the development dataset for this task requires the preparation of data tuples of RGB images, GPS position, and mmWave optimal beams. This means the extra step from the single-candidate dataset is also applied here to obtain the optimal beam indices for each data sample, i.e., downsampling and using (3). The final outcome of this simple pipeline is a dataset \(\mathcal{D}_{\text{task}_{2}}=\{(\mathcal{V}_{u},\mathbf{f}_{u}^{\star})\}_{u=1}^{U_{2}}\), where \(U_{2}\approx 9500\).

The two development datasets above are used to train and evaluate the performance of the proposed solutions. Both datasets are further divided into training and validation sets with a split of \(70-30\%\). The details of the experimental setup are provided in the next section, while Section VIII is devoted to discussing the performance of the proposed solutions.

## VII Experimental Setup

This section first presents an overview of the model training process and the hyper-parameters utilized to train the proposed machine learning model for both single-candidate and multi-candidate settings. Next, we discuss the metric used to evaluate the beam prediction performance of the proposed solution. All the experiments are performed on a single NVIDIA Quadro RTX 6000 GPU using the PyTorch deep learning framework.

Fig. 4: This figure presents the DeepSense 6G testbed \(1\) used during the data collection. It consists of a stationary and a mobile unit. The stationary basestation (unit 1) is equipped with a mmWave receiver (60 GHz band) and a sensory suite (RGB camera, GPS, radar, and LiDAR). The mobile unit, acting as a transmitter, is equipped with a 60 GHz quasi-omni antenna and a GPS receiver.

**Network Training:** In Section IV-B and Section V-C, we proposed two different machine learning-based models for the single-candidate and the multi-candidate settings, respectively. Both proposed solutions utilize the cross-entropy loss with the Adam optimizer to train the models. The detailed hyper-parameters used to fine-tune the models are presented in Table II. Next, we present the in-depth details of the model training.

**Single-Candidate:** As described in Section IV-B, the proposed vision-aided beam prediction solution adopts an ImageNet pre-trained ResNet-50 object classification model. The model is further modified by removing the last output layer and replacing it with a fully-connected layer with \(M=32\) neurons. The proposed model is trained in a supervised manner with the dataset \(\mathcal{D}_{\text{task}_{1}}\), comprising RGB images and their corresponding ground-truth beam indices.
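For concreteness, a minimal PyTorch sketch of such a supervised fine-tuning loop, under the ResNet-50 column of Table II (batch size 32, Adam with learning rate \(10^{-4}\) decayed by \(0.1\) at epochs 4 and 8, 15 epochs), is shown below; the random tensors and the linear model are lightweight stand-ins for \(\mathcal{D}_{\text{task}_{1}}\) and the actual backbone.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

M = 32
# Stand-in classifier; in practice this is the modified ResNet-50.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 224 * 224, M))
# Tiny random stand-in for the image-beam pairs (X_u, f_u*).
data = TensorDataset(torch.randn(64, 3, 224, 224), torch.randint(0, M, (64,)))
loader = DataLoader(data, batch_size=32, shuffle=True)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[4, 8], gamma=0.1)

for epoch in range(15):
    for images, beam_idx in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), beam_idx)
        loss.backward()
        optimizer.step()
    scheduler.step()  # learning-rate decay at epochs 4 and 8
```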
**Multi-Candidate:** The multi-candidate proposed solution consists of two major components, namely, (i) transmitter identification and (ii) beam prediction. The transmitter identification component consists of a pre-trained YOLOv3 model, which is further fine-tuned in a supervised fashion using a subset of the manually labeled dataset \(\mathcal{D}_{\text{task}_{2}}\). During inference, the YOLOv3 model is used to detect all the relevant objects and extract the bounding boxes of those objects. The proposed solution utilizes the user's positional data to select the most probable bounding box. After identifying the probable transmitter, the bounding box center coordinates, \(\hat{\mathbf{b}}_{\text{Tx}}\), are then provided as input to the proposed feed-forward neural network. The proposed machine learning model is then trained in a supervised fashion using the ground-truth bounding-box coordinates and the corresponding beam indices.

**Evaluation Metric:** Here, we present the details of the metric adopted to evaluate the efficacy of the proposed solution. The primary metric adopted for evaluation is the top-\(k\) accuracy. The top-\(k\) accuracy is defined as the percentage of the test samples where the ground-truth beam is within the top-\(k\) predicted beams. In this work, we utilize the top-1, top-2, top-3, and top-5 accuracies to compute the prediction performance of the proposed solution. In Section VIII, we present the in-depth evaluation of the proposed solution for both single-candidate and multi-candidate settings.

Fig. 5: This figure shows the image samples from the different scenarios in the DeepSense 6G dataset. As shown in this figure, scenarios \(1-4\) are multi-candidate scenarios, i.e., more than one object-of-interest (vehicles) is usually present in the FoV of the basestation, and they have been utilized to investigate the multi-candidate beam prediction problem statement. Scenarios \(5-8\) primarily consist of a single object and are useful for evaluating the performance of the proposed sensing-aided single-candidate solution.

\begin{table} \begin{tabular}{|l|l|c|c|c|} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Location**} & \multirow{2}{*}{**Time of Day**} & \multicolumn{2}{c|}{**Number of Samples**} \\ \cline{4-5} & & & **Training** & **Validation** \\ \hline \hline \multirow{4}{*}{**Single-Candidate**} & **Tyler Parking (Scenario 5)** & Night & 1817 & 736 \\ \cline{2-5} & **Lot-59 Parking (Scenario 6)** & Day & 812 & 348 \\ \cline{2-5} & **Downtown Chandler (Scenario 7)** & Day & 641 & 275 \\ \cline{2-5} & **Bio-design (Scenario 8)** & Day & 3077 & 1320 \\ \hline \multirow{4}{*}{**Multi-Candidate**} & **McAllister Ave. (Scenario 1)** & Day & 1904 & 816 \\ \cline{2-5} & **McAllister Ave. (Scenario 2)** & Night & 2197 & 942 \\ \cline{2-5} & **Rural Rd. (Scenario 3)** & Day & 1120 & 477 \\ \cline{2-5} & **Rural Rd. (Scenario 4)** & Night & 1363 & 592 \\ \hline \end{tabular} \end{table} TABLE I: Single-Candidate and Multi-Candidate Datasets

## VIII Experimental Results

The performance of the proposed ViWiComm beam prediction solutions is studied in this section. The discussion is divided into two parts. The first will discuss the performance of the proposed DNN in single-candidate settings. It will highlight the main advantages and shortcomings of ViWiComm for beam prediction, setting the stage for the discussion on the multi-candidate setting. The second part will focus on the multi-candidate settings and show how the proposed solution can handle the beam prediction task in those practical settings.

### _Single-candidate_

The beam prediction performance of the proposed solution is studied from two different standpoints, machine learning and wireless communication. The first perspective includes experiments that evaluate the performance of the proposed DNN architecture from different machine-learning angles, e.g., performance per location, number of data samples needed for training, etc. Those experiments use machine learning metrics such as top-\(k\) accuracy, where \(k\in\{1,2,3,5\}\) is the accuracy rank, and confusion matrices. The second perspective attempts to translate the results of the machine learning evaluation into results pertaining to the wireless system performance, such as studying the implications of beam-prediction failure on the wireless received power.
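For reference, the top-\(k\) accuracy used throughout this evaluation can be computed as in the following minimal sketch (the scores and labels are toy placeholders):

```python
import numpy as np

def topk_accuracy(scores, labels, k):
    """Fraction of samples whose ground-truth beam is among the k
    highest-scoring predicted beams."""
    topk = np.argsort(scores, axis=1)[:, -k:]       # k best beams per sample
    hits = (topk == labels[:, None]).any(axis=1)
    return hits.mean()

scores = np.random.rand(4, 32)          # toy prediction scores, Q = 32 beams
labels = np.array([3, 17, 17, 30])      # toy ground-truth beam indices
print([topk_accuracy(scores, labels, k) for k in (1, 2, 3, 5)])
```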
#### VIII-A1 Machine Learning Perspective

The proposed DNN architecture is evaluated on the single-candidate dataset. This evaluation investigates three important questions: (i) Does the proposed DNN perform consistently across different wireless environments? (ii) Are there any advantages to training the proposed DNN on several environments simultaneously? Furthermore, (iii) how many data samples are needed to learn beam prediction in single-candidate settings, in general?

**Could the DNN have consistent performance across different environments (scenarios)?** This question attempts to identify whether the proposed DNN can achieve similar single-candidate beam-prediction performance across different wireless environments or not. Fig. 6 addresses the question by training and testing the DNN on each scenario individually and on two combinations of scenarios. The first thing one might observe from the figure is how volatile the top-\(1\) performance looks across different environments, ranging from \(\approx 67\%\) to \(\approx 84\%\). This could be attributed to the difference between the wireless environments. The four scenarios represent four different physical locations and two different times of day, i.e., scenario \(5\) represents a wireless environment operating at night. The fluctuation might initially seem alarming, for it suggests a sense of uneven DNN performance. However, a closer look at the top-\(3\) and top-\(5\) performances negates that suggestion, for _it shows that the DNN could produce consistent or nearly uniform performance_. From the figure, the DNN registers almost the same accuracies across locations and times of day, ranging from \(\approx 92\%\) to \(\approx 99\%\) for top-3; the implications of this will be further explored in Section VIII-A2.

**What is the gain of learning from multiple scenarios?** Even more interesting than the consistency of the DNN performance is the impact of combining scenarios on that performance. The bars of "Day Combined" and "Total" in Fig. 6 indicate two intertwined facts: (i) a model can be shared across environments, and (ii) combining data samples could help improve the top-1 performance on each scenario. When the DNN is trained on a combined dataset (whether combining scenarios having the same time of day or combining all), it achieves a top-\(1\) performance that is _better_ than the top-1 performance of three of the individual scenarios. It is observed that in the "Total" case, the top-\(1\) accuracy is higher than those of individual scenarios, i.e., scenarios \(5\), \(6\), and \(7\). _This not only indicates that a single model could be trained for multiple scenarios simultaneously but that combining scenarios can even help in improving the learning process_. At first glance, one could be tempted to attribute this improvement to the unbalanced data contribution of each scenario, see Table I; scenario \(8\) with its \(1320\) validation samples may bias the top-1 performance of the combined dataset. However, this is not the case, and the improvement is a result of the improved learning process for the DNN. Fig. 7(a) corroborates this conclusion. It compares the top-1 accuracy of the DNN when it is trained on each scenario individually and on all four scenarios combined, i.e., training on individual scenarios separately and testing on individual scenarios, or training on all scenarios together and testing on individual scenarios.
**How many training samples are needed?** This interesting question could be seen as a natural follow-up to the previous discussion, for it ponders the computational cost of that performance. Fig. 7(b) provides an answer to that question. It plots the top-1 and top-5 accuracies versus the number of training samples used for two scenarios. An obvious observation from the figure is that approximately \(30\%\) of the total training samples is enough to achieve the reported top-1 performances in Fig. 6, and even less than \(30\%\) is needed for top-5. Under the surface of this observation lies a more interesting takeaway. The figure consolidates the earlier conclusion on the role of combining scenarios in achieving improved performance. Adding more data samples (more than the \(30\%\)) does not have much of an impact on the performance of the DNN. This means the improvement observed after combining the scenarios is actually a consequence of an improved learning process and not an increased number of data samples.

#### VIII-A2 Wireless Communication Perspective

This perspective focuses on the implications of the performance of the proposed DNN on the wireless system. More specifically, it attempts to answer two critical questions: (i) What are the implications of predicting the wrong beam? Moreover, (ii) how much of an impact does mis-prediction have on the wireless system?

**What are the implications of mis-predictions?** The previous results in Fig. 6 indicate that the prediction of the proposed solution deviates from the optimal beam between \(\approx 16\%\) and \(\approx 33\%\) of the time (based on top-1 accuracy). This may seem concerning at first, yet, as discussed earlier, a closer look at top-3 or 5 shows a much slimmer margin of error, \(\approx 8\%\) to \(\approx 1\%\). This means that a wireless system may not need to rely exclusively on the proposed DNN, obsoleting classical beam training. Instead, _the system could use the top-3 or 5 beams in conjunction with some lightweight beam training_; it trains the wireless user on the predicted top-3 or 5 beams to determine the optimal one, which adds a level of robustness to the system operation.

\begin{table} \begin{tabular}{l|c c} \hline \hline **Parameters** & **ResNet-50** & **MLP** \\ \hline \hline **Batch Size** & 32 & 32 \\ **Learning Rate** & \(1\times 10^{-4}\) & \(1\times 10^{-2}\) \\ **Learning Rate Decay** & epochs \(4\) and \(8\) & epochs \(20\) and \(40\) \\ **Learning Rate Reduction Factor** & \(0.1\) & \(0.1\) \\ **Dropout** & \(0.3\) & \(0.3\) \\ **Total Training Epochs** & \(15\) & \(50\) \\ **Number of Output Nodes (\(M\))** & \(32\) & \(32\) \\ \hline \hline \end{tabular} \end{table} TABLE II: Design and Training Hyper-parameters

**What if only the top-1 prediction is used?** This is an interesting question as it encourages a dive into the effect of mis-prediction on the wireless system. Fig. 8 is one way to address that question. Its subfigures are obtained for the case of training and testing the DNN on all scenarios at the same time (case of "Total" in Fig. 6).
The confusion matrix, Fig. 8(a), suggests that even when the DNN misses, its top-1 prediction is likely to be one of the neighboring beams, e.g., if the optimal beam is 15, the DNN is most likely to predict beams between 14 and 16. Below the surface, such mis-prediction is not costly; neighboring beams are expected to achieve reasonable wireless performance. This is verified by Fig. 8(b). It shows a scatter plot for the top-1 received power versus the ground-truth received power. The figure indicates that the neighborhood of the optimal beam registers a similar level of received power. Hence, predicting a neighboring beam achieves almost the same received power as the optimal beam. This is quantified on the figure using the R\({}^{2}\)-score, which reflects the compactness of the received power compared to the ground-truth.

Fig. 6: Top-k accuracies (k \(\in\{1,2,3,5\}\)) of the proposed DNN across all four single-candidate scenarios and two combination choices: combining the daytime scenarios and combining all four scenarios.

Fig. 7: A deeper dive into the performance of the proposed DNN. (a) highlights the value of combining data samples from different scenarios. (b) sheds some light on how many training samples are required to get the performances in Fig. 6.

Fig. 8: (a) shows the confusion matrix of beam prediction. (b) illustrates the relation between the ground-truth received power and the received power using the top-1 predicted beam.

### _Multi-Candidate Beam Prediction_

This section focuses on the practical case of beam prediction in multi-candidate settings. As mentioned in Section V-C, the proposed solution has two components, transmitter identification and beam prediction. Hence, the performance evaluation here is divided into three subsections. The first one evaluates the performance of the transmitter identification component, and the other two evaluate the beam prediction performance in a similar way to that presented in Section VIII-A, namely from machine learning and wireless communication perspectives.

#### VIII-B1 Transmitter Identification

As described in Section V-C, this component performs scene analysis and object-role identification. In particular, it attempts to identify the target user in the visual scene (from the other objects/distractors). The performance of transmitter identification is evaluated on the multi-candidate dataset before the component is integrated with beam prediction. Fig. 9 shows seven confusion matrices for transmitter identification, four for individual scenarios and three for different combined scenarios. Overall, the matrices display high true positive and negative rates, which do not go below \(\approx 90\%\). This indicates a good precision-recall performance regardless of the location or time of day a scenario represents. For instance, when all four scenarios are combined (case of "All" in the figure), the proposed DNN achieves a precision of \(\approx 91\%\) at a recall of \(93\%\). Such good precision-recall performance translates to high confidence in the proposed DNN to identify transmitters in various wireless environments and lighting conditions.

#### VIII-B2 Machine Learning Perspective

The two-component DNN is evaluated on the multi-candidate dataset, and the focus of this evaluation is still the same as that of Section VIII-A1.

**Could the DNN have consistent performance across scenarios?** Addressing this question is even more interesting in multi-candidate settings than it is in single-candidate settings, as they are the epitome of practical wireless environments.
Fig. 10 shows the top-\(k\) beam prediction accuracies over all four scenarios and three different combinations of them. The first thing one could observe there is a slight performance degradation across all variants of the top-\(k\) metric. The primary reason behind the reduction in model performance is _the multi-candidate nature of these scenarios_ (i.e., various relevant objects appear in the RGB image). It makes the beam-prediction task much more challenging than in the single-candidate case. Identifying the user (transmitter), in these cases, is no longer straightforward, as evident from the confusion matrices in Fig. 9. However, a closer look at the top-\(3\) and top-\(5\) accuracies highlights the following. Compared to the single-candidate settings, top-\(3\) accuracies fluctuate within a range of \(10\%\) (a \(3\%\) increase over that in single-candidate), and top-\(5\) accuracies register a range of \(\approx 5\%\) (a \(4\%\) increase over that of single-candidate). _These numbers, overall, are quite encouraging because they indicate a fairly consistent performance considering the increased challenge in multi-candidate settings_.

**What is the gain of learning from multiple scenarios?** Combining scenarios and training the proposed DNN does not result in the same improvement in performance seen in the single-candidate case. In fact, combining seems to yield the same or slightly degraded performance as opposed to individual scenarios, as shown in the bars "Day Comb.", "Night Comb.", and "Total" of Fig. 10. The reason for this could be traced back to the bottleneck of the proposed DNN, which is the transmitter identification component. Owing to its reliance on position data, which are inherently noisy, and the presence of multiple candidate transmitters in an image, a transmitter might not be correctly identified. Despite how rarely this happens, when it does, it leads to significant mis-prediction. This clarifies that mis-identifying the transmitter often leads to critical performance loss, as the predicted beam is not in the neighborhood of the optimal one.

Fig. 9: The confusion matrices on all case studies in the multi-candidate settings. Each matrix quantifies the likelihood of identifying a transmitter in a group of candidates with similar visual traits.

Fig. 10: Top-k accuracies (k \(\in\{1,2,3,5\}\)) of the proposed DNN across all four multi-candidate scenarios and three combination choices: combining the daytime scenarios, combining the nighttime scenarios, and combining all four.

**How many training samples are needed?** The previous discussion has extended the findings from single-candidate beam prediction to multi-candidate cases. More specifically, the proposed DNN displays consistent performance across different scenarios, and the model can be trained on multiple scenarios simultaneously. In parallel to the single-candidate discussion, it is important to establish how many training samples are required to get those results. Fig. 11(b) presents two examples for training on scenarios 1 and 4. The top-1 accuracies for both examples require approximately \(50\%\) of the training samples in each scenario to achieve the reported performance in Fig. 10. Compared with the need for \(30\%\) of the samples in the single-candidate settings, both examples highlight the difficulty of the beam prediction task in multi-candidate settings.

#### VIII-B3 Wireless Communication Perspective

Next, we investigate the performance from the wireless perspective and draw some insights about the system operation.
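These wireless-perspective comparisons relate the received power of the top-1 predicted beam to that of the ground-truth beam. One plausible way to compute them, including an R\({}^{2}\)-style compactness score (the paper does not spell out its exact formula), is sketched below with placeholder data:

```python
import numpy as np

rng = np.random.default_rng(0)
power = rng.random((100, 32))            # placeholder per-sample beam powers
truth = power.argmax(axis=1)             # ground-truth beams, as in (3)
pred = np.clip(truth + rng.integers(-2, 3, 100), 0, 31)  # noisy top-1 beams

p_true = power[np.arange(100), truth]
p_pred = power[np.arange(100), pred]
ratio = p_pred / p_true                  # fraction of optimal power achieved
r2 = 1 - np.sum((p_true - p_pred) ** 2) / np.sum((p_true - p_true.mean()) ** 2)
print(f"mean power ratio: {ratio.mean():.2f}, R^2: {r2:.2f}")
```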
**What are the implications of mis-predictions?** The findings in Section VIII-B2 indicate a slight dip in the beam prediction accuracies, especially top-1 accuracy. The implications of that on the wireless system performance could be explored further with the confusion matrix in Fig. 12(a). The first thing to observe from the figure is that most predictions identify the optimal beam or a beam in its neighborhood. _This means that robust beam prediction could still be achieved with lightweight beam training and the top-5 predictions of the DNN_, as in the single-candidate settings. However, in contrast to the confusion matrix in Fig. 8(a), there is a higher likelihood of seeing predictions far from the ground truth. This is evident from the number of bright spots scattered around the diagonal of the matrix. They are a direct consequence of the catastrophic mis-predictions resulting from misidentifying the transmitter.

**What if only the top-1 prediction is used?** The impact of mis-predictions in multi-candidate settings becomes clear when a wireless system relies only on top-1 predictions. Fig. 10 and Fig. 12(a) have hinted at the phenomenon, but neither has explored its direct impact on the wireless system performance, and as such, Fig. 12(b) attempts to bridge that gap and round out this analysis. It shows a power scatter plot similar to that presented in Fig. 8(b). The plot shows a wider cluster of blue points, resulting in a smaller R\({}^{2}\)-score compared to the single-candidate settings. This score directly reflects the impact of catastrophic mis-prediction; the phenomenon produces beam predictions with very low received power. For instance, the lower half of the y-axis (Top-1 Power) shows some examples where the received power of a top-1 predicted beam is \(\approx 0.1\). In contrast, the ground truth beam produces received power in the range of \(0.3\) to \(0.7\), i.e., the ground truth power is \(3\) to \(7\) times higher than that of the predicted beam. _Such an observation emphasizes the importance of augmenting a ViWiComm system with lightweight beam training in practical wireless environments_.

Fig. 11: A deeper dive into the performance of the proposed DNN. (a) explores the impact of combining data samples from different scenarios. (b) sheds some light on how many training samples are required to get the performances in Fig. 10.

Fig. 12: (a) shows the confusion matrix of beam prediction. (b) illustrates the relation between the ground-truth received power and the received power using the top-1 predicted beam.

## IX Conclusion

This paper presents a machine learning framework specifically designed to address the challenges of realistic scenarios in highly mobile mmWave/sub-THz wireless communication systems with multiple probable transmitting candidates. The proposed solution utilizes visual and positional data to predict optimal beam indices, effectively reducing the beam training overhead associated with adjusting narrow beams of large antenna arrays. Experimental evaluation on the DeepSense \(6\)G dataset demonstrates that the proposed solution can achieve close to \(100\%\) top-5 beam prediction accuracy for single-user scenarios and approximately \(95\%\) for multi-object scenarios while accurately identifying the probable transmitting candidate with over \(93\%\) accuracy. An important area for future work is ensuring the generalizability of the trained model to unseen scenarios, further enhancing its applicability in real-world mmWave/sub-THz wireless communication networks.
2307.08018
Real-Time Analytics by Coordinating Reuse and Work Sharing
Analytical tools often require real-time responses for highly concurrent parameterized workloads. A common solution is to answer queries using materialized subexpressions, hence reducing processing at runtime. However, as queries are still processed individually, concurrent outstanding computations accumulate and increase response times. By contrast, shared execution mitigates the effect of concurrency and improves scalability by exploiting overlapping work between queries but does so using heavyweight shared operators that result in high response times. Thus, on their own, both reuse and work sharing fail to provide real-time responses for large batches. Furthermore, naively combining the two approaches is ineffective and can deteriorate performance due to increased filtering costs, reduced marginal benefits, and lower reusability. In this work, we present ParCuR, a framework that harmonizes reuse with work sharing. ParCuR adapts reuse to work sharing in four aspects: i) to reduce filtering costs, it builds access methods on materialized results, ii) to resolve the conflict between benefits from work sharing and materialization, it introduces a sharing-aware materialization policy, iii) to incorporate reuse into sharing-aware optimization, it introduces a two-phase optimization strategy, and iv) to improve reusability and to avoid performance cliffs when queries are partially covered, especially during workload shifts, it combines partial reuse with data clustering based on historical batches. ParCuR outperforms a state-of-the-art work-sharing database by 6.4x and 2x in the SSB and TPC-H benchmarks, respectively.
Panagiotis Sioulas, Ioannis Mytilinis, Anastasia Ailamaki
2023-07-16T12:06:00Z
http://arxiv.org/abs/2307.08018v1
# Real-Time Analytics by Coordinating Reuse and Work Sharing

###### Abstract.

Analytical tools often require real-time responses for highly concurrent parameterized workloads. A common solution is to answer queries using materialized subexpressions, hence reducing processing at runtime. However, as queries are still processed individually, concurrent outstanding computations accumulate and increase response times. By contrast, shared execution mitigates the effect of concurrency and improves scalability by exploiting overlapping work between queries but does so using heavyweight shared operators that result in high response times. Thus, on their own, both reuse and work sharing fail to provide real-time responses for large batches. Furthermore, naively combining the two approaches is ineffective and can deteriorate performance due to increased filtering costs, reduced marginal benefits, and lower reusability. In this work, we present ParCuR, a framework that harmonizes reuse with work sharing. ParCuR adapts reuse to work sharing in four aspects: i) to reduce filtering costs, it builds access methods on materialized results, ii) to resolve the conflict between benefits from work sharing and materialization, it introduces a sharing-aware materialization policy, iii) to incorporate reuse into sharing-aware optimization, it introduces a two-phase optimization strategy, and iv) to improve reusability and to avoid performance cliffs when queries are partially covered, especially during workload shifts, it combines partial reuse with data clustering based on historical batches. ParCuR outperforms a state-of-the-art work-sharing database by 6.4x and 2x in the SSB and TPC-H benchmarks, respectively.

+ Footnote †: This work was done while the author was at EPFL.

## 1. Introduction

Reusability is a driving factor for many analytical tools, such as dashboards, notebooks, and pipelines. Often, such reusable workloads consist of highly concurrent parameterized queries. Dashboards, for example, produce visualizations by processing several canned queries that are parameterized through UI interactions or other queries (Sioulas, 2016; Sioulas, 2016). Similarly, analysts rerun data-science notebooks for reproducibility and exploration, often with different parameters (Bauer et al., 2015; Bauer et al., 2015; Sioulas, 2016; Binder et al., 2016); hence, multiple queries that transform and analyze data recur. While such applications process large numbers of queries, they are interactive in nature and require low response times for all queries. However, under high concurrency, backend databases struggle to produce responses within a tight timeframe.

Traditionally, there are two approaches to accelerate processing for large batches of recurring queries. On the one hand, we can optimize individual queries. To do so, both commercial and open-source databases can _reuse_ materialized results; databases avoid full recomputation and drastically reduce processing time. Often, optimizing for reuse opportunities is automated in the form of caching, recycling, and materialized views and subexpressions (Sioulas, 2016; Sioulas, 2016; Sioulas, 2016; Oracle, 2016; Zurich, Switzerland, 2016).
Nevertheless, materialization is subject to a storage budget and thus leaves outstanding computations. Moreover, as the outstanding computations for different queries are still processed individually, response time increases with concurrency. On the other hand, we can optimize the scalability of batch processing using _work sharing_. Work-sharing databases reduce the total processing time by exploiting overlapping computations across the queries in the batch. However, large numbers of heavyweight shared operators and the fact that everything is recomputed from scratch can violate stringent response time requirements.

Figure 1 depicts processing time for a large query batch\({}^{1}\). Both query-at-a-time (QaT) reuse and work sharing fail to provide fast responses. Reuse eliminates computations by precomputing joins, but suffers from concurrent outstanding processing (i.e., filters on materialized results, non-materialized joins). By contrast, work sharing mitigates the impact of concurrency and reduces the response time but suffers from processing heavy shared joins at runtime.

Footnote 1: The setup corresponds to Figure 7a with 50% budget (presented in Section 6.2)

Figure 1. ParCuR harmonizes reuse and work sharing to speed up recurring batches

Individually, both reuse and work sharing fail to process large workloads interactively but still make complementary contributions. Thus, it is attractive to combine the two approaches to exploit their cumulative benefit. However, naively reusing materialized results in a work-sharing database as we would in a query-at-a-time database brings limited benefit and can even degrade performance ("Work-sharing + Reuse" in Figure 1). Reuse in a work-sharing environment is ineffective because i) it eliminates upstream shared operators only when their results are not required by any downstream computation, ii) as it rewrites only queries that the used materialized results subsume, mismatching (i.e., non-subsumed) queries may recompute, fully or partially, the reused results, hence decreasing benefit (mismatches become increasingly likely as concurrency is increased, especially during workload shifts), and iii) it severely amplifies processing for shared filters.

To enable interactive responses for large parameterized batches, we introduce ParCuR (Partition-Cut-Reuse), a novel framework that harmonizes reuse with work sharing. To address the limited effectiveness of reuse in work-sharing environments, ParCuR adapts materialization and reuse techniques across three axes:

_Cut:_ Work sharing violates the assumptions of traditional subexpression selection (Han et al., 2017; Krizhevsky et al., 2017); thus, existing solutions fail to minimize processing. To increase the impact of reuse, ParCuR introduces novel materialization and reuse policies that make decisions based on the eliminated shared operators in the work-sharing setup. As eliminating each shared operator depends on downstream decisions, ParCuR introduces the concept of _cuts_. Cuts represent sets of materialized subexpressions that act synergistically in eliminating more upstream operators. The policies use cuts when evaluating which results to materialize or reuse. ParCuR proposes approximation algorithms for materialization as well as a cost-based reuse algorithm that maximizes processing-time savings.

_Reuse:_ ParCuR focuses on making reuse efficient, and thus it is imperative to reduce the high processing time for shared filters.
To this end, it builds and uses access methods on materialized subexpressions. By building access methods on materialized subexpressions based on frequent predicates, and by using the access methods at runtime, ParCuR evaluates frequent filters for one batch of tuples at a time, thus amortizing the required processing. _Partition:_ To increase the usability of materialized results in case of mismatches, e.g., during workload shifts, ParCuR uses partial reuse. It uses the fragments of materialized results that are relevant for each query batch at hand and performs any additional recomputation only as needed, thus relaxing the subsumption constraint; it eliminates shared operators for all queries for the data ranges that materialized results cover. To efficiently identify and access the relevant fragments and the base data for the recomputation, ParCuR uses partitioning. Nevertheless, by materializing and reusing at the partition-granularity, it creates a dependency between the storage footprint and the partitioning scheme: such materializations may include tuples that are rarely useful if the partition is misaligned with the predicates of the corresponding queries. Hence, to maximize reuse while minimizing footprint, ParCuR introduces a novel partitioning algorithm that clusters together data that are accessed by similar subexpressions and hence aligns partitions with predicate-subexpression combinations. ParCuR incorporates the above techniques in a two-phase framework: i) an offline tuner that optimizes ParCuR's state (i.e., partitions, materialized results, access methods) for a target workload and ii) an online executor that, by exploiting the available state, minimizes the processing time for query batches arriving at runtime. By adapting and exploiting the available state, ParCuR makes reuse efficient and effective in work-sharing environments. As Figure 1 demonstrates, ParCuR drastically reduces batch response time. The experiments show that ParCuR outperforms work sharing by 6.4\(\times\) and 2\(\times\) in the SSBM and TPC-H benchmarks, respectively. We make the following contributions: * Choosing materializations using QaT heuristics is ineffective and uses the storage budget suboptimally. We propose a family of materialization policies that, by adapting to the workload's sharing opportunities, improve time savings for the same budget. * Work-sharing decisions and access patterns affect the benefit of reuse. We propose a cost-based optimization strategy that chooses when and which materializations to inject into each batch's plan such that response time is minimized. * Naively reusing materializations in work-sharing databases can increase response time considerably. Instead, we propose that materialization should be accompanied by access methods that enable data skipping and filter skipping. * Increasing the usability of materialized results in case of mismatches requires partial reuse. Partition-level materialization and execution enable efficient partial reuse at the expense of storage overhead. We propose a novel partitioning scheme that maximizes reuse while minimizing redundant materialization by aligning partitions to workload patterns. ## 2. Reuse in Shared Execution We provide an overview of the challenges in reusing materializations during shared execution. 
We first briefly present work-sharing concepts and motivate reusing materializations to reduce recomputation, then highlight the performance pitfalls that reuse introduces when combined with work sharing, and finally outline our solutions. For ease of presentation, we use the following batch as a running example:

Q1: SELECT SUM(X) FROM A,B,C,D WHERE expr1

Q2: SELECT SUM(X) FROM A,B,E WHERE expr2

### Shared Execution

Work-sharing databases accelerate query batches by exploiting overlapping work across queries. To do so, they rely on i) the _global plan_ and ii) the _Data-Query model_.

**Global plan:** The global plan expresses sharing opportunities among different queries. It is a directed acyclic graph (DAG) of relational operators that process tuples for one or more queries, and multi-cast their results to one or more parent operators. Figure 2 shows the global plan for Q1 and Q2. For ease of reference, each operator is labeled with a number. Operator 2 processes \(A\bowtie B\) for both queries, and sends results to operators 3 and 8, which serve Q1 and Q2, respectively. At the two roots, the global plan produces the results for Q1 and Q2. By processing each operator of the global plan only once, the database shares work across queries and reduces the overall processing time.

Figure 2. Motivational example: work sharing introduces novel challenges for reuse

**Data-Query model:** The Data-Query model enables efficient sharing between queries with different selection predicates. Sharing through query re-writing that uses standard relational operators and filters the union of the predicates is expensive as i) it produces and processes redundant tuples, and ii) it filters data several times within the plan (Krizhevsky et al., 2017). For example, operator 2 can join a "probe" tuple belonging only to Q1 with a "build" tuple only belonging to Q2 and filter it out afterwards. The Data-Query model addresses these two inefficiencies: it annotates each tuple with a _query-set_ that indicates to which queries the tuple contributes. Then, specialized _shared_ operators process both the actual tuples and the query-sets. This way, the database tracks membership for intermediate results, and eliminates redundant tuples early. Therefore, with the Data-Query model: i) the global plan shares work on tuples that are common across some but not all queries, and ii) the operators can immediately drop tuples that do not belong to any query.

### Recomputation Bottleneck

When using work sharing, processing more queries increases the response time sublinearly, and thus, the total processing time is reduced compared to QaT execution. However, for each submitted query batch, global plan execution always starts from a clean slate. Data flows from the input tables to each query's output, and all shared operators of the global plan are fully processed from scratch. Recomputation of previously "seen" expressions can be critical, as the additional processing for handling query-sets renders shared operators particularly time-consuming. For example, shared filters and joins require one or more query-set intersections, the cost of which increases as a function of the number of queries. Furthermore, shared filters are not simple comparisons but are implemented as joins with predicates using predicate indices. All in all, as global plans often consist of tens of operators, processing accumulates and prevents providing results within a tight time window. Therefore, to offer interactivity, we need to reduce the required computations for each batch.
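Before turning to the pitfalls, the Data-Query model above can be made concrete with a toy Python sketch (our illustration, not the engine's implementation): query-sets are encoded as bitmasks, and a shared join intersects them, dropping tuples whose query-set becomes empty.

```python
def shared_join(build, probe):
    """Toy shared hash join on key; each tuple is (key, payload, query_set).
    The output query-set is the intersection of the inputs' query-sets."""
    index = {}
    for key, payload, qs in build:
        index.setdefault(key, []).append((payload, qs))
    out = []
    for key, payload, qs in probe:
        for b_payload, b_qs in index.get(key, []):
            shared = qs & b_qs          # query-set intersection
            if shared:                  # drop tuples belonging to no query
                out.append((key, (b_payload, payload), shared))
    return out

Q1, Q2 = 0b01, 0b10                     # bit 0 -> Q1, bit 1 -> Q2
A = [(1, "a1", Q1 | Q2), (2, "a2", Q1)]
B = [(1, "b1", Q2), (2, "b2", Q1 | Q2)]
print(shared_join(A, B))  # key 1 survives only for Q2; key 2 only for Q1
```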
### Pitfalls of Combining Reuse and Work Sharing

Analytical databases reduce runtime computations by reusing precomputed results. However, we observe that using materializations in work-sharing environments exhibits a set of properties that have not been studied before and which make reuse inefficient. Namely, these properties are: i) _shared cost_, ii) _synergy_, iii) _filter amplification_, and iv) _risk of miss_. We elaborate on each of these properties.

**Shared cost:** _QaT cost models are inaccurate in work-sharing environments_. Work sharing affects both which operators reuse eliminates and their relative importance. On the one hand, reuse eliminates upstream operators only as long as their results are not required by other remaining downstream operators. For example, reusing the results of operator 4 eliminates operators 3 and 4, but operator 2 is still required for Q2. On the other hand, work sharing across queries diminishes the importance of frequency of occurrence for operators; the savings depend on the total number of Data-Query tuples processed by the shared operator rather than on the number of participating queries. This is contrary to the assumptions of traditional cost models for materializing intermediates, which assume that reuse eliminates all upstream costs and which simply add up the benefit for each affected query.

**Synergy:** _The benefit of individual materializations is amplified_. Materialization decisions affect each other differently than they do in single-query plans. Reuse in single-query plans results in diminishing returns. For example, reusing the results of operator 4 in the original plan eliminates joins 3 and 4, whereas reusing the same results in a rewritten plan that already uses operator 3 only eliminates join 4. This observation is critical for the design of heuristic materialization algorithms that are based on submodularity. However, diminishing returns do not necessarily hold in global plans. Consider the example where we reuse results for operators 3 and 8. We observe a counter-intuitive effect: individually, they eliminate one join each, but together the benefit is amplified, and they eliminate 3 joins. This effect, which we refer to as _synergy_, marks a departure from traditional materialization and reuse.

**Filter amplification:** _Shared filters over materializations dominate the total processing time_. When injecting a materialization into a global plan, the work-sharing database needs to process filters from all the tables participating in the computation. For example, if the database reuses the results of the subquery corresponding to operator 4, the global plan needs to process 4 shared filters from tables \(A\), \(B\), \(C\), and \(D\). Then, the processing time for filters is amplified for two reasons: i) materializations can have a significantly larger cardinality than small dimension tables, and ii) filters must process every materialization where the corresponding table participates (e.g., filters from \(A\) are processed on the materializations of both 4 and 8). In some cases, reuse deteriorates performance compared to processing the batch from scratch using work sharing.

**Risk of miss:** _The probability that the materialization covers all accessed data decreases with the number of queries_. Reuse typically requires that the materialization fully subsumes the subquery that it eliminates. Similarly, the materialization needs to subsume all participating queries to eliminate subplans in global plans.
For example, eliminating operator 2 by reusing its result requires that both Q1 and Q2 can be answered using the materialization. Assume that the materialization only covers _expr1_ and _expr1_ defines a subset of _expr2_: then, even if Q1 is answered using the materialized subexpression, Q2 fully recomputes the shared operator's result anyway, and thus reuse brings no benefit compared to shared execution. Requiring full subsumption for materialized subexpressions has a high risk of mismatch, especially in case of workload shifts.

Figure 2. Motivational example: work sharing introduces novel challenges for reuse

### Harmonizing Reuse and Work Sharing

To significantly reduce their runtime computations, work-sharing databases need to address inefficiency in reuse. In this work, we harmonize work sharing and reuse: we redesign, based on the above-mentioned properties, the techniques for materializing and reusing precomputed results such that we maximize eliminated computations and minimize reuse overhead. Harmonization takes place across three axes: i) materialization and reuse policies, which address shared cost and synergy, ii) access methods for materializations, which address filter amplification, and iii) partial reuse, which addresses the risk of miss.

**Materialization and reuse policies:** Due to shared cost and synergy, algorithms for selecting materializations or injecting materializations into plans make suboptimal decisions. Work sharing renders their cost models inaccurate and violates common submodularity assumptions. Hence, harmonization requires novel materialization and reuse policies that, by taking into account both shared cost and synergy, select materializations that bring higher processing time reduction, given the same budget. We introduce a methodology that evaluates cost reduction using i) the eliminated shared cost in global plans and ii) the novel concept of cuts, that is, sets of materializations that exhibit synergy. We formulate the problem of choosing materializations for a target workload as a variant of the subexpression selection problem (Kang et al., 2017; Wang et al., 2018). We show that the materialization problem can be reduced, using cuts, into a Submodular Cover Submodular Knapsack (SCSK) problem (Kang et al., 2017), for which there exists a family of approximation algorithms. Afterward, we address selecting which materializations to reuse and when in shared execution. We propose a reuse optimization pass that, at runtime, injects into a global plan the materialized subexpressions that maximize cost savings (i.e., eliminated computation minus filtering overhead).

**Access methods:** Filter amplification limits the applicability of reuse as it shrinks the net benefit and may even deteriorate performance. Efficient reuse requires that the processing time for shared filters over materializations is decreased. We reduce processing time for filters using suitable access methods for the workload at hand. By building and using access methods, ParCuR enables shared execution to evaluate shared filters over one block of tuples at a time instead of processing them on a tuple-by-tuple basis, and thus to amortize the overhead. We build access methods for the target workload through partitioning and then use the created access methods to eliminate filters at runtime (Section 4.1).

**Partial reuse:** Strict subsumption limits the applicability of reuse.
For this reason, ParCuR opts for partial reuse: it exploits available materializations for the parts of the data that they cover. During execution, ParCuR can answer each query by combining computations from parts of different materializations and even from parts of the base data. Computations on disjoint parts of the data that consist of filters, projections, join probes, and aggregations can be combined to produce the full result (Sundundar et al., 2017). Our insight is that, to enable composable computations from different parts of data, planning and execution need to take place at partition-granularity. In addition, the reusability of materializations is maximum when they fully cover the data for a set of partitions. For those partitions, they always subsume the matching partition-local computations and can eliminate the corresponding processing. Hence, ParCuR performs materialization and reuse at partition-granularity. The materialization policy selects for each materialization a set of partitions to fully cover, and ParCuR injects materializations into each partition's global plan at runtime. However, this scheme creates a dependency between partitioning and the storage overhead for covering the target workload; storage overhead is minimum when partition boundaries are aligned with the queries that the materializations subsume. Thus, due to this dependency, data needs to be partitioned such that each partition's tuples are required by the same computations, which, in turn, require the same materializations. We propose the metric of homogeneity to capture the similarity of computations across each partition's tuples. ParCuR introduces a partitioning scheme that, by splitting data such that homogeneity is maximized, maps each computation to the data that it concerns and reduces wasteful materialization. ParCuR uses the selected partitions at runtime in a partition-oriented execution model to enable partial reuse and achieves cost savings that are proportional to the overlap between the runtime and tuning workloads.

### Putting It All Together

We present ParCuR, a framework that enables shared execution to effectively take advantage of materialized subexpressions by combining the proposed solutions. ParCuR's architecture comprises two parts: the _tuner_ and the _executor_. The tuner operates offline. It analyzes a target workload made of historical query batches and adapts the framework's state by employing ParCuR's offline mechanisms: i) it partitions the data based on the access patterns of the target workload, ii) it materializes a set of subexpressions for the given partitions, and iii) it builds new access methods for the materialized subexpressions using finer-grained partitioning. Then, given the available partitioning, materialized subexpressions, and access methods, the executor processes each query batch arriving at runtime: i) it performs shared execution at the level of the available partitions, ii) it decides when and where to reuse materialized subexpressions for each partition, and iii) it uses the available access methods to reduce filter costs using data- and filter-skipping. Figure 3 illustrates the end-to-end workflow for both the offline tuner and the online executor. We elaborate on each of these mechanisms in Sections 3 and 4. Note that the query batches processed at runtime can be arbitrarily different from historical batches both in terms of access patterns and global plans.
In all cases, ParCuR opportunistically uses the existing state to reduce the response time of runtime batches.

## 3. Tuning ParCuR's State

By analyzing a target workload that consists of a sequence of query batches, the tuner repartitions the data, materializes a set of selected subexpressions, and builds access methods on the materialized subexpressions. Tuning takes place offline. After tuning is done, the partitions, the materialized subexpressions, and the access methods are exposed to the executor at runtime, which uses them to eliminate recurring computation in subsequent query batches.

Table 1. Challenges (columns) and mechanisms (rows) that ParCuR uses to harmonize reuse and work sharing. The first three rows are offline mechanisms, the last three are online mechanisms.

| Mechanism | Shared cost | Synergy | Filter amplification | Risk of miss |
| --- | --- | --- | --- | --- |
| Materialization policy (offline) | ✓ | ✓ | | |
| Access methods (offline) | | | ✓ | |
| Partitioning (offline) | | | | ✓ |
| Reuse policy (online) | ✓ | ✓ | | |
| Data & filter skipping (online) | | | ✓ | |
| Partition-oriented execution (online) | | | | ✓ |

Figure 3. ParCuR's workflow in a) the offline tuner and b) the online executor

In this section, we present each of the steps in the tuner's workflow. Each step's output is the input for the next step in line: partitioning chooses the boundaries for materializing subexpressions, and the materialization policy selects the subexpressions on which to build access methods. We first present the partitioning algorithm (Section 3.1), then introduce the materialization policy (Section 3.2), and finally discuss access method construction (Section 3.3).

### Workload-driven Partitioning

The first step of ParCuR's tuner is to partition the data in a way that maximizes the utility of subsequent materializations. To differentiate between this partitioning and any additional data reorganization for building access methods, we call the first step's partitioning _1st-level partitioning_ and any further partitioning _2nd-level_. For partition-granularity materialization to be budget-efficient, all tuples should be processed by similar query patterns, i.e., most of their downstream computation should be the same. Therefore, ParCuR employs a partitioning scheme that clusters tuples according to query patterns and materializes subexpressions for each partition independently. Such a partitioning scheme offers three benefits: i) if the query patterns remain the same, materialized subexpressions are almost fully reused, and space is not wasted, ii) materialization is specialized for the sharing decisions of each partition's query pattern, and iii) for the case of _partial reuse_ during a workload shift, performance degradation becomes proportional to the magnitude of the shift. To cluster together data that is processed by similar query patterns, we keep track of processing history for a sample of tuples by maintaining a _subquery-vector_ for each tuple. We consider all possible subqueries \(e_{1},e_{2},\ldots,e_{m}\) that appear in a set of historical batches, and mark to which of them each tuple belongs. By _subqueries_, we mean all the join subexpressions (and their reorderings) that exist in each batch and involve the fact table. For example, a batch with two queries, \(A\bowtie B\bowtie C\) and \(A\bowtie B\bowtie D\), with \(A\) as the fact table, leads to the subqueries depicted in Figure 4(a). We represent subexpressions in different batches as separate subqueries because they do not actually co-occur.
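As an illustration, the following hypothetical sketch builds subquery-vectors for the two-query example; the chosen subqueries, the access sets, and the weighting (which anticipates the choice \(w(e_{j})=|e_{j}|\) discussed below) are assumptions made for the example, not ParCuR's code:

```python
# Illustrative subquery-vectors for the two-query example (fact table A):
# a few of the join subexpressions that involve A, weighted by the number
# of participating tables, w(e_j) = |e_j|.
import numpy as np

subqueries = {"A⋈B": 2, "A⋈B⋈C": 3, "A⋈B⋈D": 3}   # e_j -> w(e_j)
w = np.array(list(subqueries.values()), dtype=float)

# One row per sampled tuple of A: W[i, j] = w(e_j) if some query with
# subquery e_j accesses tuple i, else 0 (access sets invented for the example).
accessed = np.array([
    [1, 1, 0],   # tuple 0: accessed via A⋈B and A⋈B⋈C
    [1, 1, 0],   # tuple 1: same pattern as tuple 0
    [1, 0, 1],   # tuple 2: accessed via A⋈B and A⋈B⋈D
], dtype=float)
W = accessed * w  # query pattern matrix for the sample
```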
Using subqueries is advantageous as it exposes similarities that do not depend on a specific execution plan and naturally represents co-occurrence in the same batch. We then use the subquery-vectors in order to formulate a tuple-clustering problem based on homogeneity. We assume a matrix \(W\) whose \(i\)-th row corresponds to the subquery-vector of the \(i\)-th tuple: if at least one query with subquery \(e_{j}\) accesses the \(i\)-th tuple, \(W_{i,j}=w(e_{j})\), where \(w(e_{j})\) is a weight assigned to \(e_{j}\); otherwise, \(W_{i,j}=0\). In our implementation, to increase the relative importance of larger subqueries to homogeneity, we set \(w(e_{j})=|e_{j}|\), where \(|e_{j}|\) is the number of tables participating in \(e_{j}\). Alternative assignments can also achieve a similar result. Given a set of tuples \(T\), we formally define homogeneity as:

\[H(T,W)=\sum_{t\in T}\frac{\sum_{j=1}^{m}W_{t,j}}{\max\left(\sum_{j=1}^{m}w(e_{j})\times u\left(\sum_{t^{\prime}\in T}W_{t^{\prime},j}\right),1\right)}\]

where \(u(x)\) is the step function with \(u(x)=1\) when \(x>0\) and \(0\) otherwise. Homogeneity assigns a score to each tuple in \(T\) based on the subqueries that access the tuple and is defined as the sum of these scores. Each tuple \(t\)'s score is the sum of weights for the subqueries that access tuple \(t\) over the sum of weights for the subqueries that access at least one tuple in \(T\). Hence, the complexity for computing \(H(T,W)\) is \(O(m\times|T|)\). The score is maximum (i.e., equals \(1\)) if all the subqueries that access at least one tuple in \(T\) also access \(t\). The intuition is that homogeneity is maximum when all tuples in \(T\) are accessed by the exact same subqueries. In that case, the utilization of materializations is also maximum; assuming that a subquery's results are materialized and that the historical batches recur as is, reuse exploits all the tuples in the materialization, and no tuple is redundant. Homogeneity-based partitioning is defined as finding the partitions \(\{p_{1}^{*},p_{2}^{*},\ldots,p_{n}^{*}\}\) that maximize the aggregate homogeneity:

\[\{p_{1}^{*},p_{2}^{*},\ldots,p_{n}^{*}\}=\operatorname*{arg\,max}_{\{p_{1},\ldots,p_{n}\}}\sum_{i=1}^{n}H(p_{i},W)\ \text{s.t.}\ \forall p_{i}\ |p_{i}|\geq PS_{min}\]

where \(PS_{min}\) is the minimum allowed partition size. Homogeneity-based partitioning finds partitions such that, in each partition, the tuples are accessed by almost the same set of subqueries, and thus, barring a workload shift, the utilization of materializations is high. The partition size constraint ensures that the solution avoids the trivial optimal solution where each tuple forms its own partition. To efficiently compute a solution to homogeneity-based partitioning, we use a space-cutting approach that, similar to (Sang et al., 2018), forms a tree of cuts in the space of table attributes. Each internal node corresponds to a logical subspace of the table and contains a predicate based on which this subspace is further split: the left child corresponds to the data that satisfies the predicate, whereas the right child to the data that does not. Finally, the leaves of the tree correspond to data partitions, which are the quanta for materialization. The advantage of the space-cutting approach is that it enables routing queries to required partitions based on the predicates of the splits and the queries. To solve the partitioning problem, we use the greedy Algorithm 1. The algorithm runs on a uniform sample of the tuples to keep runtime monitoring overhead low.
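Continuing the sketch above, a direct transcription of \(H(T,W)\) (again illustrative, not ParCuR's code) reads:

```python
# Direct transcription of H(T, W), reusing W and w from the previous sketch.
# The denominator sums the weights of subqueries that access at least one
# tuple of T (the step function u); max(..., 1) guards against dividing by 0.
def homogeneity(T, W, w):
    active = W[T].sum(axis=0) > 0          # u(sum_{t' in T} W[t', j]) per j
    denom = max(float(w[active].sum()), 1.0)
    return float(W[T].sum() / denom)       # sum of per-tuple scores

print(homogeneity([0, 1], W, w))     # 2.0 = |T|: every tuple scores 1
print(homogeneity([0, 1, 2], W, w))  # 1.875 < 3: mixed patterns lower H
```

Splitting tuples \(\{0,1\}\) apart from tuple \(2\) raises the aggregate homogeneity, which is exactly the kind of cut that Algorithm 1 below searches for, subject to the \(PS_{min}\) constraint.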
ParCuR computes the sample's query pattern matrix by monitoring data accesses across batches and by recording the vector of subqueries for the sample's tuples. When triggered, the greedy algorithm computes the change in the objective function for each candidate cut, that is, a predicate that intersects with the partition at hand (lines 5-9), and finds the locally optimal cut that maximizes the aggregate homogeneity (lines 10-11). Then, the space is partitioned based on the locally optimal cut, and the greedy algorithm is recursively invoked for the two children subspaces and the respective sample tuples (lines 13-16).

Figure 4. Two-query example for subquery vectors

```
 1 Function PARTITION(partition, W, cuts, PS_min):
 2   output = null ; best = null ; bestScore = null
 3   score = H(partition.sample, W)
 4   for cut ∈ cuts do
 5     if intersects(partition, cut) then
 6       tp, fp = getPartitions(partition, cut)
 7       if tp.size < PS_min or fp.size < PS_min then
 8         continue
 9       curr = H(tp.sample, W) + H(fp.sample, W)
10       if best == null or curr > bestScore then
11         best = cut ; bestScore = curr
12
13   if best != null and bestScore > 1.01 × score then
14     tp, fp = getPartitions(partition, best)
15     ttree = PARTITION(tp, W, cuts, PS_min)
16     ftree = PARTITION(fp, W, cuts, PS_min)
17     output = Node(best, ttree, ftree)
18   else
19     output = Leaf()
20
21   return output
```

**Algorithm 1** Homogeneity-based Partitioning

The greedy algorithm stops when either the relative improvement from the locally optimal cut drops below a threshold, which we set at 1% (line 13), or all candidate cuts violate the minimum partition size for the resulting partitions (lines 7-8). Let \(S_{p}\) be the sample per partition and \(S\) the entire sample. The algorithm's complexity depends on i) the number of recursive invocations, ii) the complexity of \(H(S_{p},W)\), which is \(O(m\times|S_{p}|)\), and iii) the number \(|F|\) of distinct filters in the tuning workload. As the minimum partition size is \(PS_{min}\), we can have at most \(\frac{|S|}{PS_{min}}\) leaf-partitions, and hence \(\frac{2\times|S|}{PS_{min}}-1=O(\frac{|S|}{PS_{min}})\) invocations. Thus, the complexity of Algorithm 1 is \(O(\frac{|S|}{PS_{min}}\times m\times|S|\times|F|)\). Homogeneity-based partitioning results in more efficient use of the storage budget compared to data access-based partitioning schemes, such as Qd-tree (Shen et al., 2017).

### Materialization Policy

1st-level partitioning assumes that query patterns represent the overall workload and thus recur in future batches. To eliminate recomputation in such cases, ParCuR materializes subexpressions on a per-1st-level-partition basis. Due to the interference between reuse and work sharing, a global-plan-aware materialization policy is required. Also, in ParCuR the policy should consider that partitions process different query patterns. Hence, ParCuR relies on a new formulation of the subexpression selection problem, which i) is sharing-aware, and ii) works on _partition-wise global plans_. The optimal solution differs from the one of the classical problem. We call this new problem _Multi-Partition Subexpression Selection for Sharing (MS3)_. We define the _Historical Workload Graph_, which is the input of MS3, and then MS3 itself.
**Definition 3.1** (Historical Workload Graph). Given a fact table \(T\), a partitioning \(\{p_{1},p_{2},\ldots,p_{n}\}\) of \(T\), and batches \(\{Q_{1},Q_{2},\ldots,Q_{m}\}\), the historical workload graph \(G\) is a graph composed of connected components \(G_{i,j}\), \(i\in\{1,\ldots,n\}\), \(j\in\{1,\ldots,m\}\), where \(G_{i,j}\) is the global plan for \(Q_{j}\) over \(p_{i}\). In the global plan, nodes represent operators (including a pseudo-operator for \(T\)) and edges represent producer-consumer relationships.

**Definition 3.2** (MS3). Let \(R(c)\) be the maximum cost reduction that reuse can incur when executing the global plans of the historical workload graph \(G\) with an available set of materialized subexpressions \(c\), and \(B(c)\) the budget required for materializing \(c\). If \(\mathcal{B}\) is the total memory budget, MS3 is defined as:

\[\max_{c}R(c),\ \text{s.t.:}\ B(c)\leq\mathcal{B}\]

MS3 is a hard problem, and computing an exact solution is intractable. To solve it, we first prove a reduction to the _Submodular Cover Submodular Knapsack (SCSK)_ problem (Kolmogorov, 1999) and then show how we can use approximate algorithms for SCSK to choose a set of expressions to materialize that achieves a high cost reduction with approximation guarantees.

#### 3.2.1. Reduction to SCSK

Let \(U\) be a set and \(f,g:2^{U}\rightarrow\mathbb{R}\) be two submodular functions², then SCSK is the optimization problem

Footnote 2: Submodularity formalizes diminishing returns. Specifically, a function \(h\) is defined as submodular if \(S\subseteq S^{\prime}\Rightarrow h(S\cup\{s\})-h(S)\geq h(S^{\prime}\cup\{s\})-h(S^{\prime})\).

\[\max_{S\subseteq U}\,g(S),\ \text{s.t.}\ f(S)\leq B\]

To reduce MS3 to SCSK, cost savings in MS3 should be submodular, i.e., adding more materialized subexpressions should result in diminishing returns. While this holds in QaT execution, where each materialization reduces the marginal benefit of other conflicting materializations, it does not hold in shared execution. We observe that shared execution benefits more from materializations in the same path of the global plan, where synergy increases cost savings. The key idea for reducing MS3 to a submodular optimization problem is to materialize subexpressions in groups. We notice that computing cost savings for groups gives us more accurate estimates for the eliminated upstream computations. In addition, synergy between groups is always captured by their _super-group_, i.e., a group that contains their union. We formulate useful groups of materializations by introducing the concept of _cuts_. Intuitively, in a given global plan, a cut is a set of subexpressions that, if materialized, eliminate all upstream operators between (inclusive) the operators that produce them and a common ancestor, the _anchor_. For example, in Figure 2, the cut composed of operators 3 and 8 also eliminates the upstream operators 1 and 2, which are also anchors. Formally, we define cuts and anchors as follows:

**Definition 3.3** (Cuts and anchors). _Let \(G\) be the historical workload graph._
A set of nodes \(c\subset V\) is defined as a cut with respect to anchor \(a\in V\) if:

* \(a\) is an ancestor of every \(v\in c\).
* Every descendant of \(a\) is either i) an ancestor of at least one node \(v\in c\), or ii) a descendant of exactly one node \(v\in c\).

We represent the set of all cuts in \(G\) as \(CUTS(G)\), and for all \(c\in CUTS(G)\) we define \(BC(c,a)\) as the nodes between (inclusive) the cut's nodes and anchor \(a\). The shorthand \(BC(c)\) implies using the minimal anchor (i.e., an anchor whose predecessor is not an anchor for \(c\)).

Choosing cuts to materialize so as to maximize the eliminated cost in their \(BC\) sets is related but not identical to MS3. The cost in \(BC\) sets is not always equal to the cost reduction from the same materializations in MS3, because MS3 implicitly includes the savings from super-cuts, that is, the unions of smaller materialized cuts. However, we prove that solutions to the _cut selection_ problem can be enriched such that they are both solutions to _cut selection_ and MS3 with equal savings. Furthermore, we prove that _cut selection_ is an SCSK problem, and thus we can solve it using approximate algorithms. Based on these two properties, _cut selection_ gives a solution to MS3 with a better or equal approximation factor than the one given for SCSK. In the following paragraphs, we formally define _cut selection_ and prove the mentioned properties. First, we introduce some required notation:

**Definition 3.4** (Domain and Enrichment). Let \(S\) be a set of cuts to materialize. We define the domain of \(S\) as the set

\[d(S)=\{o.subquery\mid o\in\bigcup_{c\in S}c\}\]

and the enrichment of \(S\) as the set

\[e(S)=\{c\mid c\in CUTS(G)\text{ and }\forall o\in c:o.subquery\in d(S)\}\]

The domain represents which results \(S\) materializes, and the enrichment represents all cuts that are materialized by materializing \(S\).

**Definition 3.5** (Cost Reduction and Budget). Let \(S\) be a set of cuts. Also, let \(cost(op)\) be the processing cost for an operator \(op\) in the global plan. We model the cost reduction due to materializing \(S\) as

\[\tilde{R}(S)=\sum_{op\in O}cost(op),\text{ where }O=\bigcup_{c\in S}BC(c)\]

and the required materialization budget as \(\tilde{B}(S)=\sum_{v\in d(S)}B(\{v\})\). \(\tilde{R}(S)\) is equal to the cost of the operators \(O\) that are eliminated by materializing the cuts in \(S\). Each cut eliminates the shared operators between the cut and the minimal anchor and, by definition, computing \(O\) as the union of operators accounts for overlaps between the operators that are eliminated by different cuts. \(\tilde{B}(S)\) equals the total budget required for materializing the results of the cuts in \(S\); \(d(S)\) is by definition the set of results that \(S\) materializes.

**Definition 3.6** (Reduced Workload Graph). Let \(S\) be a set of cuts.
We define the reduced workload graph of \(S\) as

\[G(\emptyset)=\langle V(\emptyset),E(\emptyset)\rangle=G\]
\[G(S)=\langle V(S),E(S)\rangle=G[V-\bigcup_{c\in S}BC(c)]\]

where \(G[V^{\prime}]\) is the induced subgraph of \(G\) for vertices \(V^{\prime}\). The reduced workload graph represents the global plans for the historical query batches after materializing and reusing the cuts in \(S\). We define the _cut selection_ problem as follows:

**Definition 3.7** (Cut Selection). Cut selection is defined as the optimization problem of finding a set of cuts \(S\) such that:

\[\max_{S}\tilde{R}(S),\ \text{s.t.:}\ \tilde{B}(S)\leq B\]

Using the above definitions, we prove the following theorems:

**Theorem 1**. Cut selection is an SCSK problem.

Proof. We prove that \(\tilde{R}\) and \(\tilde{B}\) are submodular. For a set of cuts \(S\) and a cut \(c\), it holds that:

\[\tilde{R}(S\cup\{c\})-\tilde{R}(S)=\sum_{op\in O(S)}cost(op)\ \text{ s.t. }O(S)=BC(c)\cap V(S)\]

and

\[\tilde{B}(S\cup\{c\})-\tilde{B}(S)=\sum_{m\in M(S)}B(\{m\})\ \text{ s.t. }M(S)=d(\{c\})-d(S)\]

Let \(S\), \(S^{\prime}\) be two sets of cuts such that \(S\subset S^{\prime}\). Then,

\[\tilde{R}(S\cup\{c\})-\tilde{R}(S)=\sum_{op\in O(S)}cost(op)\]

and

\[\tilde{R}(S^{\prime}\cup\{c\})-\tilde{R}(S^{\prime})=\sum_{op\in O(S^{\prime})}cost(op)\]

However, \(V(S^{\prime})\subset V(S)\) and thus \(O(S^{\prime})\subset O(S)\). Therefore,

\[\tilde{R}(S\cup\{c\})-\tilde{R}(S)\geq\tilde{R}(S^{\prime}\cup\{c\})-\tilde{R}(S^{\prime})\]

Similarly,

\[\tilde{B}(S\cup\{c\})-\tilde{B}(S)=\sum_{m\in M(S)}B(\{m\})\]

and

\[\tilde{B}(S^{\prime}\cup\{c\})-\tilde{B}(S^{\prime})=\sum_{m\in M(S^{\prime})}B(\{m\})\]

Then, \(d(S)\subset d(S^{\prime})\) and thus \(M(S^{\prime})\subset M(S)\). Therefore,

\[\tilde{B}(S\cup\{c\})-\tilde{B}(S)\geq\tilde{B}(S^{\prime}\cup\{c\})-\tilde{B}(S^{\prime})\]

Therefore, both \(\tilde{R}\) and \(\tilde{B}\) are submodular.

**Theorem 2**. If \(S\) is a solution to cut selection, then \(e(S)\) is also a solution to cut selection with \(\tilde{R}(e(S))\geq\tilde{R}(S)\).

Proof. By definition, \(S\subset e(S)\). It also holds: \(\tilde{B}(e(S))=\sum_{m\in d(S)}B(\{m\})=\tilde{B}(S)\). Therefore \(\tilde{B}(e(S))\leq B\) and \(e(S)\) is a solution to cut selection. Furthermore, \(\tilde{R}(e(S))=\sum_{op\in\bigcup_{c\in e(S)}BC(c)}cost(op)\). However, \(S\subset e(S)\Rightarrow\bigcup_{c\in S}BC(c)\subset\bigcup_{c\in e(S)}BC(c)\). So, \(\tilde{R}(e(S))\geq\tilde{R}(S)\).

**Theorem 3**. For every set of cuts \(S\), it holds that \(R(d(S))=\tilde{R}(e(S))\).

Proof. Let \(S\) be a set of cuts. We represent the eliminated operators in the original MS3 problem when \(S\) is materialized as \(t(S)\). Formally, \(t(S)\) is the set of all nodes that either produce results in \(d(S)\) or whose successors all belong to \(t(S)\). Then:

\[R(d(S))=\sum_{op\in t(S)}cost(op)\]

We now prove that \(t(S)=\bigcup_{c\in e(S)}BC(c)\). Let \(c^{\prime}\in e(S)\). Then, \(c^{\prime}\subset d(S)\), and \(\forall o\in BC(c^{\prime})\) it holds that \(o\in t(S)\), and thus \(BC(c^{\prime})\subset t(S)\). It follows that \(\bigcup_{c^{\prime}\in e(S)}BC(c^{\prime})\subset t(S)\). Also, let \(a\in t(S)\) and \(c_{a}\) be all the descendants of \(a\) that belong to \(d(S)\). Then, \(c_{a}\) is a cut with anchor \(a\), as the two conditions in the definition of cuts are true: i) \(a\) is an ancestor for all nodes in \(c_{a}\), and ii) assume there is a descendant of \(a\), \(a^{\prime}\), that is not a descendant of any node in \(c_{a}\).
Then, \(a^{\prime}\) is an ancestor of at least one node in \(c_{a}\) because \(a\in t(S)\) (otherwise, the nodes in the path from \(a\) to \(a^{\prime}\) would not be in \(t(S)\)). Therefore, \(c_{a}\) is a cut, \(a\in BC(c_{a})\), and \(t(S)\subset\bigcup_{c^{\prime}\in e(S)}BC(c^{\prime})\). Thus \(t(S)=\bigcup_{c\in e(S)}BC(c)\) and:

\[R(d(S))=\sum_{op\in t(S)}cost(op)=\sum_{op\in\bigcup_{c\in e(S)}BC(c)}cost(op)=\tilde{R}(e(S))\]

#### 3.2.2. Approximating MS3

ParCuR's tuner chooses subexpressions to materialize by solving cut selection for historical batches. The selection process has two steps: i) the tuner constructs the workload graph and computes the cuts and their corresponding \(BC\) sets, ii) the tuner runs an algorithm for solving the cut selection instance for the computed cuts. The subexpressions in the selected cuts are then materialized and used in subsequent batches. The tuner currently implements two approximate algorithms for solving SCSK, greedy (Gr) and iterative submodular knapsack (ISK) (Kalalal and Triggs, 2009). We briefly present the properties of the two algorithms as presented in the work of Iyer et al. (Kal and Triggs, 2009).

**Gr**: Gr is a greedy algorithm. At each step, it chooses the cut with the highest marginal benefit that can fit in the budget and adds it to the solution. Gr's complexity is \(O(|CUTS(G)|^{2})\), and in practice it takes a few msecs. Gr provides an approximation factor \(1-(\frac{K_{f}-1}{K_{f}})^{k_{f}}\), where \(K_{f}=\max_{S\subseteq U}\{|S|:f(S)\leq B\}\) and \(k_{f}=\min_{S\subseteq U}\{|S|:f(S)\leq B\wedge f(S\cup\{j\})>B\}\). Indeed, Gr is inefficient when few cuts can saturate the budget.

**ISK**: ISK is a fixed-point algorithm. In each iteration, it combines partial enumeration with greedy expansion; it chooses between \(\binom{|CUTS(G)|}{3}\) candidate solutions, where each candidate fixes the first three cuts and chooses the rest using a greedy algorithm. At each step, the greedy algorithm chooses the cut with the highest ratio of marginal benefit to required budget. The solution in each iteration affects the budget calculation for the next iteration. ISK's complexity is \(O(|CUTS(G)|^{5})\), and it can run for hundreds of seconds for a few hundred cuts. ISK provides a constant factor \(1-e^{-1}\) for the solution of \(\max_{S\subseteq U}\{g(S):f(S)\leq\frac{B}{K_{f}}\}\) and a bicriterion guarantee if we run it with a larger budget constraint (Kal and Triggs, 2009).

### Building Access Methods

At runtime, materialized subexpressions are accessed at a per-partition level. Nevertheless, they still need to be scanned and filtered based on the predicates of the running queries. The processing time for shared access and filtering of base and cached data can dominate the total processing time. ParCuR further reduces both data-access and filtering costs by reorganizing data within each partition using multidimensional range partitioning. We refer to this finer-grained partitioning as _2nd-level partitioning_. Multidimensional range partitioning enables efficient data access that reduces accesses during scans, as it enables data skipping. Furthermore, by cutting data across values that are frequently used in predicates, it can be used to statically evaluate frequent filters for a whole partition. To build the partitions, we iteratively subpartition data across the predicate values of one attribute at a time.
The resulting subpartitions inherit query homogeneity from the _1st-level_ partitioning and also reduce data-access costs. From this point on, we differentiate the partitions derived from the _2nd-level_ partitioning by calling them _blocks_.

## 4. Reuse-Aware Shared Execution

At execution time, ParCuR takes advantage of the constructed partitions and materialized subexpressions and optimizes query processing at three levels: First, it uses data and filter skipping to identify the queries that access each partition and reduces filtering costs. Second, it adopts a partition-oriented execution paradigm that plans and optimizes each partition independently, thus exposing different opportunities per partition. Third, ParCuR introduces a cost-based optimization framework that chooses which materializations to inject into each partition's plan.

### Data and Filter Skipping

ParCuR uses 2nd-level partitioning to reduce data-access and filtering costs. To do so, for each block, it identifies i) which queries process the block, and ii) which predicates have the same value for all tuples in the block. Then, during execution, it skips 2nd-level partitions that are not processed by any query, and eliminates filters whose predicates are invariant across the block. Both optimizations apply to both the fact table and the materializations, and can drastically reduce batch response time. As the data is organized by cutting the data space, each block's boundaries are defined by a range along each attribute. Then, if the range is known, the above analysis can be done statically. Concretely, a query's predicate is invariant when its value range either subsumes the block's range (always true) or does not overlap with it (always false). Moreover, a query _skips a block_ if at least one of its predicates always evaluates to false (no overlap). For example, the query SELECT COUNT(*) FROM T WHERE x > 8 skips block \(5\leq x<7\), as the two ranges do not overlap. Similarly, for the same block, the predicate of query SELECT COUNT(*) FROM T WHERE x > 4 is true across the whole block and, thus, it is redundant to evaluate it for every tuple. The above logic is implemented by maintaining zone-maps (Kal and Triggs, 2009): a lightweight index that stores min-max statistics for each attribute. During the table scan, for each block, ParCuR compares the corresponding ranges against the shared filter predicates to identify which queries do not overlap with this block (data skipping), and which are satisfied by the entire block (filter skipping). The remaining ambivalent filters are processed using the global plan.

### Partitioned Execution

ParCuR optimizes each 1st-level partition independently to i) exploit partition-specific materializations, and ii) enable partial reuse by decoupling planning between partitions. To do so, it introduces a two-phase partition-oriented execution model. First, it computes the shared state between partitions, such as hash tables on dimensions and data structures for aggregation. Next, it executes each partition independently. For each partition, ParCuR identifies which queries process the partition using the same data-skipping mechanism as above. Then, it chooses a global plan that is specialized for the queries and materializations of the partition at hand. Finally, partial results from each partition are merged together in the output operators such as projections, aggregations, and GROUP-BYs. To reduce aggregation overheads, our implementation preaggregates partial results at the thread-level.
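Partitioned execution reuses the block-level analysis of Section 4.1, so it is worth making that analysis concrete. The following minimal sketch classifies a range predicate against a block's zone map; the representation assumes integer attributes and half-open predicate ranges, and all names are illustrative rather than ParCuR's actual code:

```python
# Zone-map reasoning per block: a range predicate is invariant over a block
# when the block's [min, max] range is subsumed (always true) or disjoint
# (always false) with respect to the predicate's range.
from dataclasses import dataclass

@dataclass
class ZoneMap:
    lo: int  # minimum of attribute x within the block
    hi: int  # maximum of attribute x within the block

def classify(pred_lo, pred_hi, zm):
    """Classify predicate pred_lo <= x < pred_hi over a block."""
    if zm.hi < pred_lo or zm.lo >= pred_hi:
        return "always_false"          # disjoint -> data skipping
    if pred_lo <= zm.lo and zm.hi < pred_hi:
        return "always_true"           # subsumed -> filter skipping
    return "ambivalent"                # must be evaluated tuple-by-tuple

# Block 5 <= x < 7 from the example in the text (10**9 stands in for infinity):
zm = ZoneMap(lo=5, hi=6)
print(classify(9, 10**9, zm))  # 'always_false': query x > 8 skips the block
print(classify(5, 10**9, zm))  # 'always_true':  query x > 4 drops its filter
```

An "always_false" outcome drives data skipping, an "always_true" outcome drives filter skipping, and only the ambivalent case falls through to per-tuple evaluation in the global plan.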
Since shared execution processes subqueries that comprise selection, projection, join probe, and potentially aggregation operators, combining partial results produces the final output (Srivastava et al., 2017). Partial reuse is feasible because the output operators are oblivious to each partition's planning decisions. When query patterns recur with minor shifts, they mostly process their designated 1st-level partitions and spill over only to a few neighboring partitions. Then, ParCuR handles the bulk of the processing using materializations and addresses spill-overs with selective computations. Hence, in case of a workload shift, performance degradation becomes proportional to the magnitude of the shift, and thus ParCuR avoids suffering a performance cliff.

### Injecting Materializations in Global Plans

For each partition, ParCuR optimizes and processes a global plan that exploits the available materializations and access methods as well as sharing opportunities. However, making all planning decisions in a unified optimization framework scales poorly. To this end, ParCuR adopts a two-phase optimizer: it first finds the best possible global plan that only uses work sharing, and then it improves that plan by optimally substituting shared operators with materialized views.

#### 4.3.1. Two-phase optimizer

Benefits from reuse and work sharing are interdependent: the marginal benefit from reuse, if any, depends on the available sharing opportunities, and, conversely, the opportunities for downstream work sharing between queries are contingent on answering them using the same materialization. Thus, it is tempting to formulate a unified optimization problem in order to find a globally optimal plan. However, sharing-aware optimization already has a very large search space, and thus enriching it with reuse planning decisions is prohibitive. To incorporate work sharing and reuse in a scalable and practical manner, the optimizer needs to restrict the search space. ParCuR's optimizer focuses on ensuring better performance than pure work sharing and on avoiding performance regression. Thus, the optimizer uses two phases. In the first phase, the optimizer chooses a _baseline_ global plan that uses work sharing. Then, in the second phase, the optimizer improves on the baseline plan by rewriting it to reuse materializations. Finally, ParCuR processes the resulting plan, which combines reuse and work sharing.

#### 4.3.2. Reuse phase

The reuse phase is based on the observation that reuse replaces operators from the baseline plan with filters on materializations. Hence, the goal is to find which subexpressions, if reused, maximize the difference between eliminated computations and filtering costs. For each cut \(c\), we can estimate this difference, which we call _benefit_, as:

\[benefit(c,a)=\sum_{op\in BC(c,a)}cost(op)-\sum_{v\in c}\left(c_{f}\times|RF(v)|\times v.size\right)\]

where \(cost(op)\) is the cost of operator \(op\) in the baseline plan, \(RF(v)\) are the runtime filters on subexpression \(v\) after filter-skipping in the current partition, \(v.size\) is the number of tuples for the subexpression in the current partition, and \(c_{f}\) is a constant for estimating filtering costs per tuple as a linear function of the number of runtime filters \(|RF(v)|\). \(benefit(c,a)\) represents the net benefit of reusing \(c\) with respect to anchor \(a\) as the difference between the cost of the operators eliminated between \(c\) and \(a\) and the overhead for accessing and filtering \(c\)'s materializations.
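As a sketch, the estimate can be computed directly from the cost model's inputs; the function signature below is hypothetical, with \(c_{f}\) defaulting to the constant reported in Section 5:

```python
# Hypothetical transcription of benefit(c, a). `cost` maps operators of the
# baseline plan to their estimated cost, `bc` enumerates the operators between
# cut `c` and anchor `a` (the BC set), `rf` gives the runtime filters left on
# a subexpression after filter skipping, and `size` its per-partition tuples.
def benefit(c, a, cost, bc, rf, size, c_f=139.45):
    eliminated = sum(cost[op] for op in bc(c, a))
    filtering = sum(c_f * len(rf(v)) * size[v] for v in c)
    return eliminated - filtering
```

A positive value indicates that rewriting pays off; the reuse phase applies exactly this test at each candidate node.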
The optimizer has all this information at the time of running the reuse phase. In order to choose which subexpressions to reuse, the reuse phase, which we show in Algorithm 2, performs a post-order traversal of the baseline plan and transforms the plan. When visiting a node, the traversal first processes the node's successors and merges their rewrite decisions (lines 4-6). Then, the algorithm finds the best cut (i.e., the cut with the highest benefit) that can eliminate the current node. If all of the node's successors are eliminated or are anchors for cuts, then the algorithm computes the best cut of downstream subexpressions by merging the cuts of the remaining successors (lines 9-12). If the node corresponds to a materialized subexpression, the algorithm also considers the cut that consists of the node's results (lines 15-16). Finally, if the best cut provides net gain, the rewrite is applied immediately (lines 17-19); otherwise, the best cut is propagated to upstream nodes.

**Theorem 4**. Given a plan, Algorithm 2 makes optimal view-injection decisions.

Proof. By induction on the plan size.

**Base step**: Single-node plan. If reuse is beneficial, the plan is rewritten. Otherwise, it is optimal and stays as is.

**Induction step**: If the claim holds for plans of size \(\leq k\), it also holds for \(k+1\). We assume a single root in the plan. If the plan consists of multiple connected components, then separately solving for each component is trivially optimal. The first visited node is the root. Each downstream subplan has \(\leq k\) nodes, so the algorithm minimizes its cost. Let each node have an attribute \(optPlan\) that represents the optimal downstream plan and \(DC(plan)\) be the cost of an optimized downstream plan. Before line 17, we have:

\[\Delta=\sum_{s\in succ}(DC(s.optPlan)-DC(s.bestPlan))+(x-y)cost(v)\]

where \(x,y\in\{0,1\}\) indicate whether \(v\) is part of \(optPlan\) and \(bestPlan\), respectively. Then, there are the following two cases:

Case 1: if \(x=1\) or \(y=0\), then \(\Delta\geq 0\). Thus, \(bestPlan\) is optimal.

Case 2: if \(x=0\) and \(y=1\), we prove that the algorithm eliminates \(v\) in lines 17-18 and \(\Delta\geq 0\) for the new \(bestPlan\). Since \(x=0\), there exists a cut \(c\) with anchor \(v\).

- If \(benefit(\{v\},v)>0\), then \(\Delta\geq 0\) for the new \(bestPlan\).
- Otherwise, REWRITE needs to happen at a downstream cut. Let \(s_{1},s_{2},\ldots,s_{p}\) be \(v\)'s successors and \(c_{1},c_{2},\ldots,c_{p}\) the corresponding sub-cuts. Since \(optPlan\) is optimal: \(\sum benefit(c_{i},s_{i})+cost(v)\geq 0\), where \(i\in\{i\mid benefit(c_{i},s_{i})<0\}\). Thus, the merged cuts from the successors can eliminate \(v\), and the new \(bestPlan\) is optimal.

#### 4.3.3. Handling adaptive optimization

We implement ParCuR by extending RouLette, which uses adaptive sharing-aware optimization. RouLette splits batch execution into episodes, which last for the duration of processing one small base-table vector each, and potentially uses a different global plan in each episode. RouLette learns the cost of different subplans across episodes and eventually converges to an efficient global plan.
The episode-oriented design conflicts with two-phase optimization: the reuse phase chooses subexpressions to reuse based on the baseline plan at partition granularity, whereas the global plan may switch across episodes; hence, ParCuR runs the reuse phase for a partition only once it decides that the partition's plan is stable.

```
 1 Function REUSE_OPT_REC(v):
 2   v.bestPlan = ∅ ; v.bestCut = (v.succ.empty()) ? ∅ : null
 3   atLeastOne = (v.succ.empty())
 4   for s ∈ v.succ do
 5     REUSE_OPT_REC(s)
 6     v.bestPlan = v.bestPlan ∪ s.bestPlan
 7     if s.bestPlan.contains(s) then
 8       atLeastOne = true
 9     if v.bestCut != null then
10       if s.bestCut == null then
11         v.bestCut = null
12       else v.bestCut = v.bestCut ∪ s.bestCut
13   if atLeastOne then
14     v.bestPlan = v.bestPlan ∪ {v}
15   if v.materialized then
16     if v.bestCut == null or benefit(v.bestCut, v) < benefit({v}, v) then v.bestCut = {v}
17   if benefit(v.bestCut, v) ≥ 0 then
18     v.bestPlan = REWRITE(v.bestPlan, v.bestCut)
19     v.bestCut = null
```

**Algorithm 2** Reuse Optimization Phase

## 5. Implementation

We implement ParCuR on RouLette (Sundundar et al., 2017). We modify the _policy_ and _ingestion_ components, introduce a materialization operator, and add the tuner's utilities. In general, as shown in Figure 5, ParCuR interacts i) with the optimizer to receive a plan and rewrite it, ii) with the executor in order to achieve partition-at-a-time execution and use the available access methods, and iii) with the storage manager so that it selects the right views for materialization.

**Tuning the Cost Model**. The presented techniques rely on RouLette's cost models. We use the same constant factors and also introduce the new constant \(c_{f}\) (Section 4.3.2), which is set to \(c_{f}=139.45\) after using regression to fit filtering cost estimates.

**Tuning Partitioning**. To tune the parameters for the two levels of partitioning, we use the workload of Figure 6(a), and we find the minimum values for mini-partition size and block size, and the maximum sampling rate, such that the overhead is less than 10% compared to the optimal value. We set the minimum size of mini-partitions to \(2^{16}\) to maintain low overhead for the reuse phase and set \(PS_{min}=2^{16}\), as 1st-level partitions contain at least one mini-partition. To keep the overhead for data- and filter-skipping low, we select the size of mini-partitions to be greater than or equal to \(2^{16}\) and at least large enough that the blocks contain at least 256 tuples each on average. Finally, to avoid significant overhead for tracking historical accesses, we set the sampling rate to 1%.

**Limitations**. To combine reuse with adaptive optimization, ParCuR's implementation over RouLette aligns the mini-partitions of materializations with the mini-partitions of a base table. For this reason, tuning revolves around one main table that defines the partitioning schemes and the materializations. Our implementation is applicable to common workloads such as queries on star and snowflake schemas. Also, our prototype optimizes execution, but tuning is single-threaded. Deciding on the frequency of tuning, the amount of resources, and how synchronization with execution should happen are well-known problems but orthogonal to ours.

**In-memory vs disk-based**. While ParCuR relies on an in-memory system, the performance trends are not expected to change if we transition to a disk-based implementation. With modern SSDs and large query batches, data access would still be fast, whereas shared filtering of materialized results would continue to be expensive.
Thus, we expect different speedups due to different tradeoffs, but the main insights would still be valid.

## 6. Experimental Evaluation

The experiments evaluate ParCuR and show how materialization and reuse enable it to significantly outperform pure work sharing and achieve lower batch response times. Specifically, they demonstrate the following: i) Filtering costs when accessing materializations can deteriorate the performance of work sharing, and thus building access methods for materializations is necessary. ii) Query-at-a-time materialization policies make suboptimal materialization decisions. Cut selection improves budget utilization by prioritizing materializations with higher marginal benefits. iii) Homogeneity-based partitioning reduces the required budget for workloads with selective and correlated patterns. iv) Even though filters change, the reuse phase reduces work sharing's response time when possible and falls back to vanilla work sharing otherwise. v) Using partial reuse, the response time is proportional to the required computation, and performance degrades gracefully. vi) End-to-end, ParCuR reduces the response time for the full SSBM and TPC-H by 6.4x and 2x, respectively.

**Hardware**. All experiments took place on a single server that features an Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz with 2 sockets, 12(x2) threads per socket, 376GB of DRAM, 32KB L1 cache, 1MB L2 cache, and 16MB L3 cache. All experiments took place in memory, in a single NUMA node, and use 12 threads.

Figure 5. DBMS components that ParCuR modifies

**Data & Workload.** We run both macro- and micro-benchmarks. First, we perform a sensitivity analysis. We evaluate ParCuR by varying different workload properties: i) the number of filtering attributes, ii) the selectivity of predicates, iii) the number of joins and the overlap between queries, iv) the available budget, and v) the workload shift in filter attributes and predicate correlations. To control the experiment variables, we generate synthetic data in a star schema as well as appropriate queries. We use a fact table of \(100M\) rows and \(27\) columns (\(24\) are foreign keys), \(8\) dimensions with \(10k\) rows and \(9\) columns each, and \(16\) dimensions with \(10k\) rows and \(2\) columns each. All columns are \(4\)-byte integers. We describe the queries in the presentation of each micro-benchmark. Next, we show that ParCuR accelerates the queries of the widely used SSBM (Zhu et al., 2017) and TPC-H benchmarks. We use SF10 for both, which is the largest data size for which the optimal materialization fits in memory. We randomize the order of tuples for both datasets.

**Methodology**. The experiments measure batch response time, which is the end-to-end time for processing the full batch. All measurements are the average of \(10\) runs.

### Impact of Reuse in Global Plans

We evaluate the benefit of reuse to shared execution's response time. We assume that the tuner's workload is the same as the runtime workload and that the materializations that minimize response time are available (i.e., the top-level joins). Sections 6.2 and 6.3 lift the two assumptions. We compare ParCuR against RouLette, naive reuse, which eagerly injects materializations and has no access methods, and QaT execution using RouLette, which is on par with the QaT performance of state-of-the-art in-memory databases.

**Filter processing**. We examine the impact of filters and the need for building and using access methods for materializations.
We use \(64\) queries generated from \(4\) different templates. The templates have \(4\) dimension joins each, and all templates share \(3\) dimension joins. The queries have \(10\%\) selectivity and filter on the non-shared dimension. We vary the number of filter attributes (which is equal to the number of shared filter operators) from \(1\) to \(8\). Figure 6(a) shows that access methods are necessary for accelerating work sharing. When using access methods, ParCuR's response time is \(2.07\)-\(4.57\times\) lower than RouLette's, as it eliminates join processing. RouLette is almost unaffected by increasing filter operators, as it processes filters on the dimension. ParCuR and QaT are affected because they require more 2nd-level partitions and hence both more zone-map operations and larger mini-partitions, and thus a longer time until ParCuR decides that the plan is stable. However, this effect just reduces ParCuR's benefit over RouLette. By contrast, the performance of naive reuse deteriorates drastically: it computes filters over the materialization, and thus their processing time is amplified. The response time increases with the number of filters and is up to \(3.34\times\) higher than RouLette's.

**Takeaway**: Reuse drastically improves performance only if the filtering cost is low. Building appropriate access methods is necessary for injecting materializations into global plans.

**Number of joins**. We examine the impact of reuse in queries with different join costs. We use two variants of the previous workload, one where all templates share all but one join (share n-1) and another where they share all but three joins (share n-3). We vary the total number of joins per query. All queries use \(1\) dimension filter. Figure 6(b) shows larger benefits for global plans with more joins. Reuse-based approaches are insensitive to the number of joins, whereas RouLette's response time increases. ParCuR achieves a maximum speedup of \(6.33\times\) for share n-1 and \(8.60\times\) for share n-3. Also, there is a crossover point for naive reuse, where processing filters becomes preferable to large joins.

**Takeaway**: The benefit from reuse is proportional to the eliminated computation. Hence, the speedup is higher when the eliminated computation is significant, such as in join-heavy queries.

**Selectivity**. We examine the impact of reuse for queries with different selectivity. We use the same workload as in the first experiment, use one filter attribute, and vary the selectivity (\(1\%\), \(2\%\), \(5\%\), \(10\%\), \(20\%\), \(50\%\)). The experiment models the impact of downstream processing. Figure 6(c) shows larger benefits when each query's selectivity is low. As aggregations are not affected by reuse, they close the gap between approaches for larger selectivities, where they are expensive. Also, it is noteworthy that when aggregations are heavy enough, QaT is more expensive than RouLette due to concurrency.

**Takeaway**: Reuse has a higher benefit when it eliminates the most expensive part of the global plan. Low selectivity results in low-cost final aggregation, and thus the relative benefit is more pronounced. Nevertheless, reuse is the best approach across all selectivities.

### Sharing-aware Materialization Policy

We demonstrate that cut selection solutions outperform sharing-oblivious and simple sharing-aware policies.
We compare four different algorithms: a) **SCSK-Gr** solves cut selection using Gr, b) **SCSK-ISK** solves cut selection using ISK, c) **Greedy Shared** solves a submodular knapsack problem for individual materializations, and d) **Frequency** solves the submodular knapsack problem where benefits are weighted by frequency, which is commonly used for query-at-a-time materialization. The evaluation uses four different workloads with \(512\) queries with \(10\%\) selectivity each. The queries use filters on a column with domain \([0,100)\). At the end of each workload, we mention the amount of DRAM it requires to minimize response time.

- _Workload A_ shows the impact of frequency. It uses \(8\) query templates \((t_{1},\ldots,t_{8})\). \(t_{1},\ldots,t_{4}\) have \(1\) join each, whereas \(t_{5},\ldots,t_{8}\) have \(4\). Template \(t_{i}\) shares its join with \(t_{i+4}\). The workload contains \(112\) queries from each of \(t_{1},\ldots,t_{4}\) and \(16\) queries from each of \(t_{5},\ldots,t_{8}\). Requires at least \(40\)GB.
- _Workload B_ shows the impact of synergy. It uses \(8\) query templates \((t_{5},\ldots,t_{12})\). \(t_{9},\ldots,t_{12}\) also have \(4\) joins each. Template \(t_{i}\) shares \(2\) joins with template \(t_{i+4}\). The workload contains \(64\) queries from each of the templates. Requires at least \(32\)GB.
- _Workload B-P1_ uses workload B's templates. However, the filters for \(t_{5}\) and \(t_{6}\) are subranges of \([0,40)\), for \(t_{9}\) and \(t_{11}\) subranges of \([20,60)\), for \(t_{7}\) and \(t_{8}\) subranges of \([40,80)\), and for \(t_{10}\) and \(t_{12}\) subranges of \([60,100)\). Requires at least \(14.9\)GB.
- _Workload B-P2_ is similar to workload B-P1, but uses \(2\)-D ranges. The filters for \(t_{5}\) and \(t_{6}\) are subranges of \([0,66)\times[0,66)\), for \(t_{9}\) and \(t_{11}\) subranges of \([0,66)\times[34,100)\), for \(t_{7}\) and \(t_{8}\) subranges of \([34,100)\times[0,66)\), and for \(t_{10}\) and \(t_{12}\) subranges of \([34,100)\times[34,100)\). Requires at least \(12.9\)GB.

In each experiment, we vary the storage budget up to the minimum that minimizes response time. We present the used budget normalized by the one that minimizes response time (i.e., 100%). Vanilla RouLette corresponds to 0% budget for all policies.

**Sharing-awareness:** Figure 7(a) shows that sharing-aware policies outperform Frequency in workload A because they factor out the frequency of occurrence for subqueries and decide based on shared costs. Frequency results in up to 2.03\(\times\) higher response time for the same budget because it prioritizes templates \(t_{1},\ldots,t_{4}\).

**Synergy-awareness:** Figure 7(b) shows that exploiting the synergy between the materializations that compose cuts in workload B improves the effectiveness of materializations. Both Greedy Shared and Frequency preferentially materialize the shared subqueries because they miss the synergy between the larger cuts. Thus, they both waste budget on materializing subexpressions that are later covered by the larger cuts, and consequently, 100% is not sufficient for minimizing response times. At 100%, they are slower by 1.87\(\times\) and 1.68\(\times\), respectively.

**Partition-awareness:** For both workloads B-P1 and B-P2, partitioning reduces the required budget for minimizing response times, by 2.4\(\times\) and 2.5\(\times\) respectively. Figure 8 shows that all algorithms achieve comparable performance because partitioning simplifies the global plans for each partition.
**Gr vs ISK:** Across all experiments, ISK performs better than Gr as it enumerates more materializations and normalizes marginal benefit by the required budget. By contrast, Gr suffers from suboptimal solutions when it uses up the budget on a few materializations. Still, ISK requires significant processing time to run, e.g., 217sec in workload B-P1, and thus Gr is preferable for real-time analysis as it takes up to 4msec in all experiments.

**Takeaway:** Both sharing-awareness and partitioning improve budget utilization. Incorporating both shared costs and synergy permits spending the budget on materializing only the subexpressions that actually reduce response times. Furthermore, partitioning enables materializing results just for the data ranges where they are needed and thus reduces budget requirements.

### Effect of workload shift in reuse

We evaluate ParCuR under workload shift. We materialize subexpressions that minimize the response time for the original workload. The experiments shift the workload across two axes: a) by adding new filtering attributes, and b) by shifting query pattern predicates. We compare ParCuR against RouLette, naive reuse (for which we enable access methods), and QaT.

**Filtering attributes**. Figure 8(a) shows that the reuse phase judiciously chooses between reuse and recomputation based on filtering costs. The experiment uses the same workload as Figure 7(a). We assume that the original workload is the batch with one filtering attribute, hence we only build an access method for that attribute. Naive reuse improves response time when there is no shift and deteriorates performance otherwise. QaT's performance depends on the percentage of queries that use the materializations. Finally, ParCuR improves performance when there is no shift and achieves the same performance as work sharing when reuse is detrimental.

**Query patterns' predicates**. Figure 8(b) shows that partial reuse enables response times to degrade gracefully under workload shift. The experiment uses workload B-P1 to build materializations. The shifted workload slides the ranges for the filters of each template; the slide controls the percentage of the shifted workload's input that cannot reuse materializations and is processed from base data (miss rate). ParCuR's response time increases proportionally to the miss rate. Thus, when partitioning captures query patterns and isolates misses, partial reuse improves performance against all-or-nothing approaches that fall back to full processing (same performance as 100% miss rate).

Figure 6. Impact of reuse based on workload parameters in a) filters, b) joins, and c) selectivity.
Figure 7. Impact of budget for workloads A and B.
Figure 8. Impact of budget for workloads B-P1 and B-P2.
Figure 9. Impact of workload shift in a) filtering attributes, and b) query patterns' predicates.

**Takeaway**: The reuse phase, as well as partitioned execution, enable ParCuR to benefit from materializations despite workload shifts. ParCuR exploits materializations for the partitions where they are available and beneficial to reducing the global plan's cost.

### Macro-benchmarks

We evaluate ParCuR using the SSBM and TPC-H benchmarks, which contain 13 and 22 queries respectively. For each benchmark, we compare the four materialization algorithms and vary the storage budgets. We omit ISK for TPC-H, because it takes a very long time to choose a materialization.
**SSBM**: Figure 10a shows that ParCuR achieves a maximum speedup of 6.4\(\times\) over RouLette, which corresponds to 0% budget, and 5.4\(\times\) over QaT, and requires around 1GB for the optimal materialization. The speedup is high because the queries are mostly selective, and thus aggregations make up a small percentage of processing time; the vast majority is filters and joins. An interesting observation is that even a small budget, at 20%, brings about a 37% decrease in response time because bottom joins are significantly more expensive, whereas upper joins are more selective and less time-consuming.

**TPC-H**: Figure 10b shows that ParCuR achieves a maximum speedup of 2\(\times\) over RouLette and 1.37\(\times\) over QaT, and requires 69GB for the optimal materialization. The speedup is lower compared to SSBM for two reasons: i) TPC-H also contains less selective queries with heavier aggregations. When using 100% budget, aggregation takes up around 40% of the processing time. ii) TPC-H contains LIKE predicates that filter skipping cannot eliminate using zone-maps. Even so, despite the shortcomings in our implementation, ParCuR eliminates significant join costs.

**Discussion**: For the two benchmarks, ParCuR requires large materializations because our homogeneity-based partitioning does not exploit filters on dimensions. This limitation can be addressed by: i) partitioning using the denormalized table (Shi et al., 2017), or ii) using data-induced predicates on the fact table's foreign keys (Kang et al., 2018). Both techniques are straightforward to integrate with ParCuR. Another limitation is that ParCuR cannot eliminate predicates such as LIKE, multi-attribute expressions, or UDFs using zone-maps. To eliminate such predicates, partitions require additional metadata. Sun et al. (Sun et al., 2019) handle such predicates by maintaining a feature vector that encodes whether complex predicates are satisfied.

## 7. Related Work

We compare ParCuR with related work in (i) sharing, (ii) materialization, and (iii) partitioning.

**Work sharing:** Work sharing exploits overlapping work between queries in order to reduce the total cost of processing. Despite using diverse execution models and optimization strategies, recent work-sharing databases use global plans (Bajaj et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020) and the Data-Query model (Baj et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020). Existing work-sharing databases do not support reuse; they always recompute global plans from scratch. ParCuR is compatible with such databases, and therefore this work's insights are valuable for reducing their response time for recurring workloads.

**Reuse:** Reuse occurs in different forms, such as semantic caching (Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020), recycling (Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020), view selection (Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020), and subexpression selection (Chen et al., 2020; Chen et al., 2020; Chen et al., 2020). ParCuR addresses subexpression selection in the context of sharing environments. Sharing affects the data layout, the materialization policy, and the reuse policy for the selected subexpressions. This is the first work that studies the effect of work sharing on reuse.
Extending semantic caching, recycling, and view selection for shared execution is a significant direction for future work. ParCuR also supports partial reuse. Similar approaches include chunk-based semantic caching (Chen et al., 2020; Chen et al., 2020), partially materialized views (Chen et al., 2020), partially-stateful dataflow (Chen et al., 2020), and separable operators (Chen et al., 2020). However, in all of these approaches, the concurrent outstanding computation can deteriorate performance. ParCuR both reuses available materializations and uses sharing to improve scalability.

**Partitioning:** In modern scan-oriented analytical systems, partitioning is an indispensable tool for accelerating selective queries using data skipping (Sun et al., 2019; Sun et al., 2019). Existing partitioning strategies focus on minimizing data access. By contrast, ParCuR chooses a partitioning scheme to maximize reuse while minimizing the space overhead for partition-granularity materialization. Doing so requires that partitioning captures both access and computation patterns.

## 8. Conclusions

To provide real-time responses for large recurring workloads, we propose ParCuR, a novel paradigm that combines the reuse of materialized results with work sharing. ParCuR addresses the performance pitfalls of incorporating materialized results into shared global plans i) by proposing a multi-level partitioning design that simultaneously improves the utilization of the storage budget, partial reuse, and filtering costs, ii) by proposing a novel sharing-aware caching policy that improves materialization decisions, and iii) by enhancing the sharing-aware optimizer with a phase that performs reuse-oriented rewrites in order to minimize runtime processing. In our experiments, ParCuR outperformed RouLette by 6.4\(\times\) and 2\(\times\) in the widely-used SSB and TPC-H benchmarks respectively.
2304.09656
Optimizations of Autoencoders for Analysis and Classification of Microscopic In Situ Hybridization Images
Currently, analysis of microscopic In Situ Hybridization images is done manually by experts. Precise evaluation and classification of such microscopic images can ease experts' work and reveal further insights about the data. In this work, we propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression. The data we analyze requires an unsupervised learning model for which we employ a type of Artificial Neural Network - Deep Learning Autoencoders. The model's performance is optimized by balancing the latent layers' length and complexity and fine-tuning hyperparameters. The results are validated by adapting the mean-squared error (MSE) metric, and comparison to expert's evaluation.
Aleksandar A. Yanev, Galina D. Momcheva, Stoyan P. Pavlov
2023-04-19T13:45:28Z
http://arxiv.org/abs/2304.09656v1
Optimizations of Autoencoders for Analysis and Classification of Microscopic In Situ Hybridization Images ###### Abstract Currently, analysis of microscopic In Situ Hybridization images is done manually by experts. Precise evaluation and classification of such microscopic images can ease experts' work and reveal further insights about the data. In this work, we propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression. The data we analyze requires an unsupervised learning model for which we employ a type of Artificial Neural Network -- Deep Learning Autoencoders. The model's performance is optimized by balancing the latent layers' length and complexity and fine-tuning hyperparameters. The results are validated by adapting the mean-squared error (MSE) metric, and comparison to expert's evaluation. Artificial Neural Networks, Deep Learning Autoencoders, Image Analysis, Unsupervised Learning, Fuzzy Clustering ## 1 Introduction In Situ Hybridization (ISH) is a method for the recognition and localization of specific nucleotide sequences in the nucleic acids (DNA and RNA) in cells and tissues [1]. Applications of this technique on microscopic images include the identification of infectious diseases (such as human immunodeficiency virus (HIV), herpes simplex virus (HSV), hepatitis B virus (HBV)), diagnosis and grading of cancer, cytogenetics, and analysis of gene expression. In the chromogenic variant of the reaction (chromogenic in situ hybridization - CISH), the hybridization is revealed by the production of a colored precipitate that can be easily observed, recognized, and documented using a standard bright-field microscopic imaging system [2]. The positive signals correspond to cells that actively produce (aka express) the product of a gene under investigation. [2, 3] In this study, we develop a workflow for automated, fast, reliable, and reproducible analysis of Chromogenic In-Situ Hybridization Images (CISH images). Currently, the substantial dependency of the method on tissue preparation, the conditions of hybridization and development of the color reaction, and the imaging parameters hinder the standardized analysis of large batches of such data. Thus, the "gold standard" for gene expression grading in CISH-stained tissue slides is expert assessment. This approach involves visual inspection of the slides or their images at various scales (Fig. 1) and manual labelling (as "positive" or "negative") or grading of the strength, density and/or distribution of the staining using more or less arbitrary ordinal scales for strength (e.g. "negative", "low", "moderate" and "strong"; or "-", "1+", "2+" and "3+") and patterns of expression ("ubiquitous", "regional" or "scattered") [2, 4] As mentioned above, we attempt to develop a reliable workflow using autoencoders. Autoencoders are a type of Artificial Neural Network used to learn efficient coding of unlabeled data. This unsupervised learning approach matches the profile of the examined data. The autoencoder consists of two main parts --the encoder maps the input into a code of representative features, while the decoder tries to recreate the original from this latent representation. As copying the input to the output is a common concern because it is indeed a valid solution (although not a useful one), autoencoders are usually forced to find and preserve the most representative features differently. 
In this paper, we explore the properties of our workflow, such as the methods used for extracting meaningful data for the algorithm's training, the length and complexity of the autoencoder used, its limitations and prospects for future improvements.

## 2 Extraction of meaningful data

Whole-slide CISH images' (Fig. 2A) sizes vary, but the microscopic image's dimensions usually span tens of thousands of pixels. As our goal is to classify areas of the image with similar gene expression levels, we must train the autoencoder on smaller regions (aka tiles) cut from the original. The choice of tile size depends on the goal and scale of the evaluation -- e.g., entire slice, brain area, subregional evaluation, _et cetera_. In this study, we focused on the supracellular subregional level and thus have chosen square tiles with side 150 µm (at the scale of the original image -- 0.5 µm/px, this corresponds to 300 px). Dividing the image into smaller tiles increases the possibility of introducing border artifacts due to random splitting of cells and areas with similar properties into different images (for example, cells may be cut in two, or the border of a highly expressing region may be included in an image of a neighbouring area with lower levels of expression). Choosing an overlap between the tiles of 75 µm (150 px), we have ensured that each tile also represents the transition between its adjacent tiles, which reduces the possibility of border phenomena skewing the results.

Whole-slide CISH microscopic images usually include a lot of background (areas without tissue). These areas must be programmatically cut beforehand to reduce bias in the training/evaluation process. Furthermore, the precise removal of the background reduces the computational time and the memory footprint of the autoencoder. In order to achieve this, we created masks of the whole-slide microscopic images beforehand that provide information about whether a certain pixel is part of the tissue or the background (Fig. 2). For mask creation, we processed a lower-scale version of the whole-slide image -- Gaussian blur, followed by an automatic triangle threshold and morphological hole filling. Usually, there are low-intensity imaging artifacts located outside of the tissue slice. To avoid the inclusion of the latter in the mask, we used a reconstruction from a seed created by a large-radius erosion of the mask image. In the final stage, we resize the thresholded image to the original scale, extract the coordinates where tissue is present, and define the tiles for these coordinates.

Figure 1: CISH stained tissues are graded and evaluated at different scales.
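A minimal sketch of the tiling step, assuming scikit-image and SciPy; it simplifies the pipeline above (the erosion-seeded reconstruction is omitted, and the blur sigma and centre-based tissue test are our own assumptions, not the exact parameters of the workflow):

```python
from scipy.ndimage import binary_fill_holes
from skimage.filters import gaussian, threshold_triangle

TILE = 300  # 150 µm at 0.5 µm/px
STEP = 150  # tiles overlap their neighbours by 75 µm

def tissue_tile_corners(slide_shape, downscaled_gray, scale):
    """Yield (y, x) top-left corners of tiles whose centre lies on tissue.

    slide_shape:     (height, width) of the full-resolution slide
    downscaled_gray: low-resolution grayscale copy used for cheap masking
    scale:           full-resolution pixels per downscaled pixel
    """
    blurred = gaussian(downscaled_gray, sigma=2)
    tissue = blurred < threshold_triangle(blurred)  # stained tissue is darker
    tissue = binary_fill_holes(tissue)
    h, w = slide_shape
    for y in range(0, h - TILE + 1, STEP):
        for x in range(0, w - TILE + 1, STEP):
            # map the tile centre back onto the low-resolution mask
            cy = min(tissue.shape[0] - 1, int((y + TILE // 2) / scale))
            cx = min(tissue.shape[1] - 1, int((x + TILE // 2) / scale))
            if tissue[cy, cx]:
                yield y, x
```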
## 3 The Autoencoder

### Size of the encoder's output

One of the most important hyperparameters is the encoder's output size (the number of extracted features). Its dimensions are one of the deciding factors for the time/memory usage and the model's accuracy [5]. As we will see, we have chosen to reduce the 300x300px tiles to just 2 floating point numbers, which comes with its benefits and limitations. In our workflow, we have prioritized reducing computational time without introducing major memory usage disadvantages. The current best model uses less than 2 hours to complete its training, classify all data, and reconstruct the initial images in a format that experts can evaluate. Another significant advantage is that with just two floating point numbers, we can visualize thousands of tiles on 2D planes and search for meaningful correlations in the data. In this way, biology experts without a computer science background can easily look for new relations in available data.

However, we must note some of the limitations of the model. The major one is the inability of the model to actually "recreate" a given photo (Fig. 3). As the data we use for reconstruction (the latent layer of the autoencoder) is extremely limited, the model seems not to remember any spatial information, and the general space orientation is distorted. Thus, the decoder returns images that, despite representing similar intensity features, do not resemble the original picture and cannot be easily understood by a human observer.

Figure 2: Original CISH image (left) and its mask (right)
Figure 3: Three original tiles and their respective reconstructions from the autoencoder

### Evaluation of loss and choice of an optimizer

We are performing unsupervised classification with the freedom to vary the number of assigned classes. Testing with two of the most common loss functions for image analysis, MSE (mean-squared error) and MAE (mean absolute error), shows that MSE performs better (Fig. 4). We must acknowledge that the current mask generation is not perfect, and some background tiles are indeed fed into the autoencoder. As significant outliers, they increase the value of the loss function -- as we see in Fig. 4, the neural network converges around the value of 3. To demonstrate that this loss does not impede the correct classification of the tiles, we have performed two tests -- a real-world and a programmatic one. The real-world test includes expert evaluation of the resulting whole CISH images and approval that the labels match the manual evaluation -- areas of matching levels of gene expression belong in the same clusters appointed by the neural network. The programmatic test used the preceding work of Pavlov S., Atanasov S., Momcheva G. [6, 7], who extracted the exact coordinates for the two CISH images on which the autoencoder is trained. Evaluation of the autoencoder using this perfect dataset shows a convergence of the loss function at a similar value of 3, thus proving that the outlier background tiles do not change the result drastically. Furthermore, a comparison of the results reveals that after being trained on each dataset, the resulting classification outlines the same regions as having similar levels of gene expression. These tests demonstrate the masking algorithm's ability to remove enough background data, as well as the ability of our autoencoder to withstand small amounts of background data without bias in the results. We must acknowledge that such robustness is necessary because of the aforementioned differences between CISH images due to uncontrollable outside factors, such as the dependency on tissue preparation and the conditions of the microscopic image acquisition.

The measurement of loss and the actual accuracy of the model are also closely related to the optimizer function. Qualities of our data, such as the presence of rare features, strongly point to the usage of adaptive optimizers. In choosing an optimizer function, we have considered benchmarks of optimizers [8] with respect to our data. After testing, although with a difference of less than 0.02, the Adam optimizer [9] outperforms RMSprop [10], Adagrad [11], and Adadelta [12].

### Batch sizes and epochs

Batch sizes are usually a predefined parameter that sets the number of data points -- in our case, tiles -- needed before an update of the neural network's parameters.
Larger batch sizes reduce computational time as parameters update less frequently, but if they are not balanced against the number of epochs, the model may underfit. We have found no significant difference in convergence when batch sizes are between 30 and 60, combined with 70 epochs for the autoencoder training.

Figure 4: Plot of the loss function with regard to epoch number

### Layers

One of the main problems encountered was the incapability of the model to extract meaningful features. The solution was the continued deepening of the model -- 14 blocks of layers in total (Fig. 5). By looking at results from an autoencoder with similar architecture but only 8 layers (4 for the encoder and 4 for the decoder -- 2 convolutional and 2 linear layers each), we observed that the network still manages to extract features, but not as complex or as well represented as needed. In conclusion, the model we have decided on has an encoder that chronologically uses 4 blocks of Convolution and MaxPooling with ReLU activation function, followed by 3 blocks of Linear with LeakyReLU activation function. The decoder uses 3 blocks of Linear with LeakyReLU activation function, followed by 3 blocks of Transpose Convolution with ReLU activation function and one with Sigmoid activation function. As the 'flatten' and the 'unflatten' functions just ease our work by changing the data dimensions, we represent them in the figure but do not consider them in counting the neural network depth. In the following paragraphs, we will discuss the purpose and properties of these blocks.

#### 3.4.1 Linear blocks

Linear blocks are the simplest hidden layer. The purpose of this block is mainly to reshape the data while still preserving features by performing the linear transformation from (1). The activation functions [14] used afterward enable the linear blocks to learn. Activation functions are predefined mathematical expressions that consider the result of a specific neuron in the network and its bias and decide whether or not to fire the said neuron.

\[y=xA^{T}+b \tag{1}\]

The problem we encountered was the effectiveness of the activation functions. Initially, a compromise between ReLU and Sigmoid functions was used and provided semi-correct results, but after some testing, we encountered the so-called "Dying ReLU" problem [15], in which a number of neurons never fire and thus act as dead weight. The replacement of normal ReLUs with Leaky ReLUs seems to resolve the issue. In the encoder, we use three linear layers that gradually reduce the values remaining from one tile from 800 to just 2 -- the gradual steps are, in order, from 800 to 100, from 100 to 25, and from 25 to 2. Distributing the size reduction into three layers allows for better features, as each neuron's significance is calculated on multiple levels and therefore is more evident. The decoder uses the same three linear layer blocks with reversed dimensions -- from 2 to 25, from 25 to 100, and from 100 to 800. In conclusion of the linear layers, we must mention the usage of "flatten" and "unflatten" functions. The flatten function transforms the data from the initial two-dimensional tile to a one-dimensional array. Similarly, the unflatten layer uses the one-dimensional output provided after the linear transformations of the decoder to present the data as a tile again.
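A minimal PyTorch sketch of the linear blocks described above; the 800-element input length is the value quoted in the text, while the unflatten shape is our own assumption (the surrounding convolutional blocks, which determine it, are omitted):

```python
import torch.nn as nn

# The three linear reduction steps of the encoder (800 -> 100 -> 25 -> 2)
# and their mirrored counterparts in the decoder, each followed by a LeakyReLU.
encoder_linear = nn.Sequential(
    nn.Flatten(),                  # 2-D feature map -> 1-D vector of length 800
    nn.Linear(800, 100), nn.LeakyReLU(),
    nn.Linear(100, 25),  nn.LeakyReLU(),
    nn.Linear(25, 2),    nn.LeakyReLU(),
)
decoder_linear = nn.Sequential(
    nn.Linear(2, 25),    nn.LeakyReLU(),
    nn.Linear(25, 100),  nn.LeakyReLU(),
    nn.Linear(100, 800), nn.LeakyReLU(),
    nn.Unflatten(1, (8, 10, 10)),  # back to a (channels, H, W) map; shape assumed
)
```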
#### 3.4.2 Convolution and max pooling

Convolution [16] is a popular technique used for image recognition and processing that is specifically designed to process pixel data. Convolutional layers are quite complex because the output shape is affected by the shape of the input and the choice of parameters such as kernel shape, zero padding, and strides. The relationship between these properties is not trivial to infer. This difficulty contrasts with fully-connected or linear layers, whose output size is independent of the input size. Additionally, convolutional layers also feature a pooling stage, adding yet another level of complexity with respect to fully connected/linear layers. As convolution creates fields with very similar pixel values, we use a method called pooling to extract sharper features and scale down the image to reach a usable encoder output. As we want to extract the most significant levels of gene expression in a particular tile, max-pooling is suitable to pass down the information. The pixels most significant for estimating the tile's features may be disregarded if the convolution kernels are too big -- a single high-value cell may influence the categorization of an entire region. That is why, in order to have balance and retain the true values even after convolution, we have decided to use four layers with smaller kernel sizes, each followed by a ReLU function (in this case, we have not seen problems with the standard ReLU activation function as above) and a max pooling layer. After continuous testing and tweaking of the parameters of the model, the convolution blocks are as summarized in Table 1.

Table 1. Convolution block layers (columns: block number; convolution kernel size as in_channels, out_channels, kernel size, padding=1; activation function (ReLU); max pooling layer size)

Figure 5: Scheme of the final autoencoder created with Graphviz [13] -- Encoder with four layers of Convolution combined with Max Pooling, Flatten layer, three Linear layers and decoder with three linear layers, Unflatten layer and four Transpose Convolution layers

## 4 Fuzzy clustering

To group tiles with similar levels of gene expression, we cluster the two-dimensional features produced by the encoder with the fuzzy c-means algorithm:

* Fix the number of clusters, let it be \(c\), select a fuzziness parameter, let it be \(m\) (generally \(1.25<m<2\)), and initialize a partition matrix \(U=[u_{ij}]\).
* Then repeatedly do the following:
1. Calculate the cluster centers (centroids) from (2)
\[c_{j}=\frac{\sum_{i=1}^{n}u_{ij}^{m}x_{i}}{\sum_{i=1}^{n}u_{ij}^{m}} \tag{2}\]
2. Update \(U\) from (3)
\[u_{ij}=\frac{1}{\sum_{k=1}^{c}\left(\frac{\|x_{i}-c_{j}\|}{\|x_{i}-c_{k}\|}\right)^{\frac{2}{m-1}}} \tag{3}\]
3. Stop the process when \(U\) stops changing significantly or when desired.

Fig. 7 shows the number of centers and the value of the associated FPC (fuzzy partition coefficient; a metric for the model performance describing the data). In our results, FPC decreases with the number of clusters. This behavior is expected as there are no apparent clusters in our data -- the tiles form a cloud that demonstrates an almost continuous gradient in the calculated features.
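A compact sketch of this clustering step, assuming the scikit-fuzzy package; the file name and the parameter choices (\(m=1.5\), tolerance, iteration cap) are illustrative placeholders, not the exact settings of the workflow:

```python
import numpy as np
import skfuzzy as fuzz

# latent: (n_tiles, 2) array of encoder outputs; fuzzy c-means in
# scikit-fuzzy expects the transposed (n_features, n_samples) layout.
latent = np.load("latent_features.npy")  # hypothetical file name

for n_clusters in range(2, 11):
    cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(
        latent.T, c=n_clusters, m=1.5, error=1e-5, maxiter=1000)
    labels = np.argmax(u, axis=0)        # hard labels from the memberships
    print(f"{n_clusters} clusters: FPC = {fpc:.3f}")
```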
## 5 Analysis of the results

In order to manually inspect the results, we examine the 7-cluster model (Fig. 8) and reconstruct the starting images by returning each tile (color-coded by class) to its place in the original microscopic image (Fig. 9). We have chosen seven clusters to recreate a real-life evaluation -- as the number of clusters increases, it becomes much harder to classify the tiles correctly by hand. The apparent linearity in the two-dimensional feature space (Fig. 8) transfers into the well-visible correspondence between the color-coded classes and regions with particular gene expression patterns in the unprocessed images (Fig. 9). The reconstructions outline and demonstrate that the algorithm classifies regions with similar staining properties in the two images to the same class.

Figure 7: The distribution of tiles on the 2D plane and their clustering depending on the number of centroids
Figure 8: Clustering of tiles with seven centroids
Figure 9: Original images (A,C) and their color-coded reconstruction (B,D) with classified tiles

## 6 Future plans and prospects

In this paper, we present a concept that an autoencoder's latent layer can be used as a feature space for unsupervised classification of the staining patterns in microscopic CISH images. For more consistent results in the future, the autoencoder's behavior must be studied while training with more diverse and representative batches of data. To generalize the algorithm and to account for differences in the conditions of hybridization and image exposure, the autoencoder must be trained on a larger number of random tiles from more images. Furthermore, the masking algorithm may be improved by implementing a more complex algorithm to reduce the tiles that do not contain part of the microscopic image, thus reducing computational time. Also, we plan to develop an interface for the definition of either a random selection of training tiles or predefined coordinates of a region of interest for analysis; an interface for the definition of scale, i.e., the size of the processed tiles; strategies for the deployment of the training algorithm and the trained network; and a user-friendly interface for practical application. We plan to gradually increase the number of features and look for any improvements and changes in the usefulness of the described approach. For example, the decoder portion of a similar autoencoder with a feature-rich latent layer may be combined with a model for the random generation of feature values and included in a GAN for data augmentation. As the evaluation method for other microscopic images, when boiled down to its essence, is very similar, it will be fascinating to see how the model behaves when applied to different microscopic and macroscopic images.
2305.17999
Structure and composition tunable superconductivity, band topology and elastic response of hard binary niobium nitrides Nb$_2$N, Nb$_4$N$_3$ and Nb$_4$N$_5$
We perform a systematic \textit{ab initio} density functional study of the superconductivity, electronic and phononic band structures, electron-phonon coupling and elastic constants of all four possible structures of niobium nitride $\beta$-Nb$_2$N as well as Nb-rich $\gamma$-Nb$_4$N$_3$ and N-rich $\beta^\prime$-Nb$_4$N$_5$. First of all, we find that all four structures of $\beta$-Nb$_2$N are superconductors with superconducting transition temperatures ($T_c$) ranging from 0.6 K to 6.1 K, depending on the structure. This explains why previous experiments reported contradicting $T_c$ values for $\beta$-Nb$_2$N. Furthermore, both $\gamma$-Nb$_4$N$_3$ and $\beta^\prime$-Nb$_4$N$_5$ are predicted to be superconductors with rather high $T_c$ of 8.5 K and 15.3 K, respectively. Second, the calculated elastic constants and phonon dispersion relations show that all the considered niobium nitride structures are mechanically and dynamically stable. Moreover, the calculated elastic moduli demonstrate that all the niobium nitrides are hard materials with bulk moduli and hardness being comparable to or larger than the well-known hard sapphire. Third, the calculated band structures reveal that the nitrides possess both type I and type II Dirac nodal points and are thus topological metals. Finally, the calculated electron-phonon coupling strength, superconductivity and mechanical property of the niobium nitrides are discussed in terms of their underlying electronic structures and also Debye temperatures. The present \textit{ab initio} study thus indicates that $\beta$-Nb$_2$N, $\gamma$-Nb$_4$N$_3$ and $\beta^\prime$-Nb$_4$N$_5$ are hard superconductors with nontrivial band topology and are promising materials for exploring exotic phenomena due to the interplay of hardness, superconductivity and nontrivial band topology.
K. Ramesh Babu, Guang-Yu Guo
2023-05-29T10:22:06Z
http://arxiv.org/abs/2305.17999v1
Structure and composition tunable superconductivity, band topology and elastic response of hard binary niobium nitrides Nb\({}_{2}\)N, Nb\({}_{4}\)N\({}_{3}\) and Nb\({}_{4}\)N\({}_{5}\) ###### Abstract We perform a systematic _ab initio_ density functional study of the superconductivity, electronic and phononic band structures, electron-phonon coupling and elastic constants of all four possible structures of niobium nitride \(\beta\)-Nb\({}_{2}\)N as well as Nb-rich \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and N-rich \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\). First of all, we find that all four structures of \(\beta\)-Nb\({}_{2}\)N are superconductors with superconducting transition temperatures (\(T_{c}\)) ranging from 0.6 K to 6.1 K, depending on the structure. This explains why previous experiments reported contradicting \(T_{c}\) values for \(\beta\)-Nb\({}_{2}\)N. Furthermore, both \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) are predicted to be superconductors with rather high \(T_{c}\) of 8.5 K and 15.3 K, respectively. Second, the calculated elastic constants and phonon dispersion relations show that all the considered niobium nitride structures are mechanically and dynamically stable. Moreover, the calculated elastic moduli demonstrate that all the niobium nitrides are hard materials with bulk moduli and hardness being comparable to or larger than the well-known hard sapphire. Third, the calculated band structures reveal that the nitrides possess both type I and type II Dirac nodal points and are thus topological metals. Finally, the calculated electron-phonon coupling strength, superconductivity and mechanical property of the niobium nitrides are discussed in terms of their underlying electronic structures and also Debye temperatures. The present _ab initio_ study thus indicates that \(\beta\)-Nb\({}_{2}\)N, \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) are hard superconductors with nontrivial band topology and are promising materials for exploring exotic phenomena due to the interplay of hardness, superconductivity and nontrivial band topology. pacs: 75.10.Jm, 75.40.-a, 75.30.-k ## I Introduction Transition metal nitrides (TMNs) are well known for their refractory characteristics such as high mechanical strength, hardness, high melting point, excellent thermal stability and resistance to corrosion and oxidation. These superior properties make them as promising materials for many practical applications, such as wear-resistance surfaces, high pressure and magnetic storage devices, and cutting tools [1; 2]. Furthermore, TMNs are good metallic conductors and some of them exhibit superconductivity [3; 4; 5; 6; 7; 8]. Interestingly, these materials were also found to possess nontrivial band topology [9; 10; 11]. Among all the TMNs, the binary niobium nitride systems are of particular interest because they exist in a variety of crystal structures with outstanding electronic and superconducting properties [12; 13; 14; 15]. At ambient pressure, the following crystalline structures of the niobium nitrides (see Table 1) are known to exist: (i) cubic \(\alpha\)-NbN, (ii) hexagonal \(\beta\)-Nb\({}_{2}\)N, (iii) tetragonal \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\), (iv) cubic \(\delta\)-NbN, (v) hexagonal \(\varepsilon\)-NbN, (vi) hexagonal WC-NbN, (vii) tetragonal \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\), (viii) hexagonal \(\delta^{\prime}\)-NbN and (ix) hexagonal \(\varepsilon^{\prime}\)-Nb\({}_{5}\)N\({}_{6}\). 
One interesting feature of these nitride systems is that the Nb atoms are connected with N atoms through strong covalent bonds, thus resulting in superior mechanical properties compared to the metal carbides and oxides [16]. The superconductivity of these niobium nitrides depends on both the Nb/N ratio and the crystal structure [17] (see, e.g., Table 1). For example, \(\delta\)-NbN, \(\beta\)-Nb\({}_{2}\)N, \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) are known to be superconductors, while the hexagonal \(\delta^{\prime}\)-NbN and \(\varepsilon^{\prime}\)-Nb\({}_{5}\)N\({}_{6}\) structures do not exhibit superconductivity down to 1.8 K [11; 17]. Because of their relatively high superconducting transition temperatures and high hardness, the \(\delta\) and \(\gamma\) phases of NbN have found applications in superconducting radio frequency circuits [18; 19], Josephson junction qubits [20; 21], terahertz-wave-detection hot-electron bolometers [22], superconducting nanowire single-photon detectors [23] and also in the fabrication of superconducting quantum interference devices (SQUIDs) [24; 25; 26]. In addition, the nitrogen-rich structures \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) and \(\varepsilon^{\prime}\)-Nb\({}_{5}\)N\({}_{6}\) are candidates for supercapacitor applications [27]. However, the superconductivity as well as the mechanical and electronic properties of many niobium nitrides have been rather poorly understood. In particular, a wide range of superconducting transition temperatures (\(T_{c}\)) have been reported for the \(\beta\)-phase Nb\({}_{2}\)N (\(\beta\)-Nb\({}_{2}\)N) [4; 28; 29; 5; 30]. For example, Gavaler _et al._ reported that \(\beta\)-Nb\({}_{2}\)N has a \(T_{c}\) value between 8.6 K and 12.1 K [4]. Skokan _et al._ [5] reported that thin films of mixed phases of cubic NbN and hexagonal \(\beta\)-Nb\({}_{2}\)N exhibit a two-step resistance drop at 9 K and at 2 K. Gajar _et al._ [28] reported the transformation of Nb into hexagonal \(\beta\)-Nb\({}_{2}\)N which is superconducting only below 1.0 K. Very recently, Kalal _et al._ [30] claimed that hexagonal \(\beta\)-Nb\({}_{2}\)N (P6\({}_{3}\)/mmc) films have electron-phonon-interaction-dominated superconductivity with a \(T_{c}\) of 4.74 K. Clearly, all these experimental studies on the superconductivity of \(\beta\)-Nb\({}_{2}\)N contradict each other. On the other hand, we note that at least four crystalline structures (see Table 1 and Fig. 1) have been reported for \(\beta\)-Nb\({}_{2}\)N\({}^{33,35,38-42}\). Guard _et al._ [33] reported that \(\beta\)-Nb\({}_{2}\)N adopts a W\({}_{2}\)C type structure with space group P\(\bar{3}\)m1. However, Christensen [35] reported that \(\beta\)-Nb\({}_{2}\)N has an \(\varepsilon\)-Fe\({}_{2}\)N type structure with space group P\(\bar{3}\)1m. Besides, \(\beta\)-Nb\({}_{2}\)N also exists in the P6\({}_{3}\)/mmc space group [42]. A recent _ab initio_ random structure search also predicted that \(\beta\)-Nb\({}_{2}\)N can exist in an orthorhombic structure with the Pnnm space group [41]. Unfortunately, unlike niobium nitrides with other Nb/N ratios such as NbN, where one structure is labelled as one phase (Table 1), all the structures of Nb\({}_{2}\)N have been labelled as \(\beta\)-Nb\({}_{2}\)N. It is well-known that the superconductivity and physical properties of a solid are determined by its crystal structure, as we have recently demonstrated for NbN [11].
Consequently, we believe that the contradicting superconducting properties reported for \(\beta\)-Nb\({}_{2}\)N are caused by the fact that it has several different structures, as for NbN (Table 1). In this work, therefore, we perform a systematic _ab initio_ study of the superconducting and also other physical properties of \(\beta\)-Nb\({}_{2}\)N in all the four possible structures. Furthermore, to study how the superconductivity depends on the Nb/N ratio, we also consider Nb-rich \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and N-rich \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\). Both \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) crystallize in the tetragonal NaCl-type \(\delta\)-NbN structure, respectively, by removal of half of either nitrogen or niobium atoms in alternating planes along the \(c\)-axis [32]. They are also superconductors with quite high \(T_{c}\) values (\(7.8\sim 16.0\) K) [36; 37; 17]. Materials that exhibit both superconductivity and nontrivial band topology are excellent candidates to study the fascinating phenomena such as topological superconductivity and Majorana Fermions [43]. In recent years, there is indeed a growing interest in the search for materials where superconductivity coexists with nontrivial band topology [44; 45]. In the binary Nb-N systems, the electronic structure [46; 47; 48; 49], mechanical [35; 50], phonon and superconducting properties [51; 52] of niobium mononitride (Nb/N = 1) have recently been extensively studied, and as a result, Dirac and Weyl nodal points have been predicted in several structures of NbN such as cubic \(\delta\)-NbN, hexagonal \(\varepsilon\)-NbN, \(\delta^{\prime}\)-NbN, and WC-NbN by the _ab initio_ calculations [9; 10; 11]. However, for either Nb-rich or N-rich niobium nitrides (i.e., niobium nitrides with Nb/N ratios different from 1), no theoretical studies on the band topology and superconductivity have been reported. Finally, the mechanical properties of either Nb-rich or N-rich niobium nitrides have been much less investigated and consequently remain poorly understood. This would certainly hinder their technological applications as hard superconductors. The rest of this paper is organized as follows. In section II, we introduce the crystal structures of the considered nitrides, theory of superconductivity, _ab initio_ calculation methods and computational details used in the present study. In section III, the calculated physical properties of the niobium nitrides are presented. In particular, the theoretical elastic constants, moduli and hardness of the nitrides are reported in Section III A. In section III B, the calculated electronic band structures are presented and Dirac nodal points are identified. In section III C, the calculated phonon dispersion relations as well as the contributions from the lattice vibrations and conduction electrons to the specific heat and Debye temperatures are presented. Finally, the calculated electron-phonon coupling strengths and estimated superconducting transition temperatures are reported in section III D. In section IV, we summarize the conclusions drawn from this work. ## II Crystal structures and computational methods The crystal structures and the corresponding Brillouin zones of all the considered niobium nitrides are shown in Fig. 1. 
Four crystalline structures have been reported for \(\beta\)-Nb\({}_{2}\)N, namely, trigonal P\(\bar{3}\)1m (No. 162) [35] (\(\beta_{1}\)-Nb\({}_{2}\)N) and P\(\bar{3}\)m1 (No. 164) [33] (\(\beta_{2}\)-Nb\({}_{2}\)N), hexagonal P6\({}_{3}\)/mmc (No. 194) [42] (\(\beta_{3}\)-Nb\({}_{2}\)N) and orthorhombic Pnnm (No. 58) [41] (\(\beta_{4}\)-Nb\({}_{2}\)N). The crystal structure of \(\beta_{1}\)-Nb\({}_{2}\)N contains three formula units (f.u.) per unit cell [35]. Nb occupies the Wyckoff site \(6k\) (\(\frac{1}{3}\), 0, \(\frac{1}{4}\)), N is at \(1e\) (0, 0, 0) and \(2d\) (\(\frac{1}{3}\), \(\frac{2}{3}\), \(\frac{1}{2}\)). In \(\beta_{2}\)-Nb\({}_{2}\)N, Nb is at \(2c\) (\(\frac{1}{3}\), \(\frac{2}{3}\), \(\frac{1}{4}\)) and N at \(2a\) (0, 0, 0), whereas in \(\beta_{3}\)-Nb\({}_{2}\)N, Nb occupies \(2c\) (\(\frac{1}{3}\), \(\frac{2}{3}\), \(\frac{1}{4}\)) and N is at \(2a\) (0, 0, 0). The crystal structure of \(\beta_{4}\)-Nb\({}_{2}\)N has two f.u. per unit cell [41]. Nb occupies \(4g\) (0.2572, 0.3390, 0) and N \(2d\) (\(\frac{1}{2}\), 0, 0). Both \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) crystallize in the tetragonal structure with space group I4/mmm (No. 139) [53] and I4/m (No. 87) [54], respectively. The unit cell of \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) contains two f.u. with Nb at \(4c\) (0, \(\frac{1}{2}\), 0) and \(4e\) (0, 0, 0.2521) and N atoms at \(2a\) (0, 0, 0) and \(4d\) (0, \(\frac{1}{2}\), \(\frac{1}{4}\)). The unit cell of \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) has two f.u. with Nb at \(8h\) (0.4, 0.2, 0) and N at \(2b\) (0, 0, \(\frac{1}{2}\)) and \(8h\) (0.1, 0.3, 0). It is worth mentioning that all the crystal structures possess inversion (\(\mathcal{P}\)) symmetry.

\begin{table} \begin{tabular}{c c c c} \hline \hline Phase & Structure & Space group & \(T_{c}\) (K) \\ \hline \(\alpha\)-NbN & Cubic & Pm\(\bar{3}\)m & \(16^{\rm a}\) \\ \(\delta\)-NbN & Cubic & Fm\(\bar{3}\)m & \(17.3^{\rm b}\) \\ \(\delta^{\prime}\)-NbN & Hexagonal & P6\({}_{3}\)/mmc & \(<1.77^{\rm c}\) \\ \(\varepsilon\)-NbN & Hexagonal & P6\({}_{3}\)/mmc & \(11.6^{\rm d}\), \(<1.77^{\rm c}\) \\ WC-NbN & Hexagonal & P\(\bar{6}\)m2 & \\ \(\beta_{1}\)-Nb\({}_{2}\)N & Trigonal & P\(\bar{3}\)1m & \(8.6\)-\(12.1^{\rm e}\), \(<1^{\rm f}\), \(4.74^{\rm g}\) \\ \(\beta_{2}\)-Nb\({}_{2}\)N & Trigonal & P\(\bar{3}\)m1 & \\ \(\beta_{3}\)-Nb\({}_{2}\)N & Hexagonal & P6\({}_{3}\)/mmc & \\ \(\beta_{4}\)-Nb\({}_{2}\)N & Orthorhombic & Pnnm & \\ \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) & Tetragonal & I4/mmm & \(7.8\)-\(12.2^{\rm h}\) \\ \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) & Tetragonal & I4/m & \(10^{\rm i}\), \(8\)-\(16^{\rm c}\) \\ \hline \hline \end{tabular} \({}^{\rm a}\)References [31; 32] (experiment); \({}^{\rm b}\)Reference [34] (experiment); \({}^{\rm c}\)Reference [17] (experiment); \({}^{\rm d}\)Reference [14] (experiment); \({}^{\rm e}\)Reference [4] (experiment); \({}^{\rm f}\)Reference [29] (experiment); \({}^{\rm g}\)Reference [30] (experiment); \({}^{\rm h}\)Reference [36] (experiment); \({}^{\rm i}\)Reference [37] (experiment). \end{table} Table 1: Crystal structure, space group, and superconducting transition temperature \(T_{c}\) of some niobium nitrides.
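For readers who wish to reproduce these geometries, a small illustrative sketch that builds one of the cells from its Wyckoff data using pymatgen -- a tooling choice of ours, not of the authors; the lattice constants are the theoretical values quoted later in Table 2:

```python
from pymatgen.core import Lattice, Structure

# beta2-Nb2N (space group P-3m1, No. 164) from its Wyckoff positions.
lattice = Lattice.hexagonal(a=3.157, c=4.858)  # theoretical GGA values (Table 2)
nb2n = Structure.from_spacegroup(
    "P-3m1", lattice,
    species=["Nb", "N"],
    coords=[[1 / 3, 2 / 3, 1 / 4],   # Nb at 2c
            [0.0, 0.0, 0.0]],        # N at 2a
)
print(nb2n.composition.reduced_formula, len(nb2n))  # expect: Nb2N 3
```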
The _ab initio_ structural optimizations, elastic constants, electronic band structures and density of states (DOS) calculations are based on density functional theory (DFT) with the generalized gradient approximation (GGA) [55]. The calculations are performed by using the accurate projector-augmented wave method [56; 57; 58], as implemented in the Vienna _Ab initio_ Simulation Package (VASP). For the Brillouin zone integration, the tetrahedron method is used with \(\Gamma\)-centered \(k\)-point meshes of 8\(\times\)8\(\times\)10, 8\(\times\)8\(\times\)6, 8\(\times\)8\(\times\)6, 8\(\times\)6\(\times\)10, 8\(\times\)8\(\times\)4 and 8\(\times\)8\(\times\)10, respectively, for \(\beta_{1}\)-Nb\({}_{2}\)N, \(\beta_{2}\)-Nb\({}_{2}\)N, \(\beta_{3}\)-Nb\({}_{2}\)N, \(\beta_{4}\)-Nb\({}_{2}\)N, \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\). A large plane-wave cut-off energy of 500 eV is used throughout. The DOS are calculated by using denser \(k\)-point meshes of 16\(\times\)16\(\times\)20 for \(\beta_{1}\)-Nb\({}_{2}\)N, 16\(\times\)16\(\times\)12 for \(\beta_{2}\)-Nb\({}_{2}\)N and \(\beta_{3}\)-Nb\({}_{2}\)N, 16\(\times\)12\(\times\)20 for \(\beta_{4}\)-Nb\({}_{2}\)N, 16\(\times\)16\(\times\)8 for \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and 16\(\times\)16\(\times\)20 for \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\). In the crystal structure optimizations, the structures are relaxed until the atomic forces are less than 0.0001 eV/Å. A small total energy convergence criterion of 10\({}^{-8}\) eV is used for all the calculations. The calculated lattice constants and total energies for all the considered structures are listed in Table 2. We notice that the calculated lattice constants of all the structures are in good accord with the available experimental data [33; 34; 42; 53; 54] and previous theoretical calculations based on GGA [38; 39; 40]. Among the four structures of \(\beta\)-Nb\({}_{2}\)N, \(\beta_{1}\)-Nb\({}_{2}\)N is found to be the ground state structure, with the \(\beta_{4}\)-Nb\({}_{2}\)N, \(\beta_{2}\)-Nb\({}_{2}\)N and \(\beta_{3}\)-Nb\({}_{2}\)N structures being, respectively, 0.062 eV/f.u., 0.305 eV/f.u. and 0.381 eV/f.u. higher in total energy.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Phase & \(a\) (Å) & \(b\) (Å) & \(c\) (Å) & \(c/a\) & \(V\) (Å\({}^{3}\)/at) & \(E_{t}\) (eV/at) \\ \hline \(\beta_{1}\)-Nb\({}_{2}\)N & 5.341 & & 5.009 & 0.937 & 13.75 & -10.3743 \\ Expt. & 5.267 & & 4.987 & 0.946 & & \\ \(\beta_{2}\)-Nb\({}_{2}\)N & 3.157 & & 4.858 & 1.538 & 13.98 & -10.2725 \\ Expt. & 3.058 & & 4.961 & 1.622 & & \\ \(\beta_{3}\)-Nb\({}_{2}\)N & 2.999 & & 5.605 & 1.868 & 14.56 & -10.2470 \\ Expt. & 3.055 & & 4.994 & 1.634 & & \\ \(\beta_{4}\)-Nb\({}_{2}\)N & 4.931 & 5.455 & 3.066 & 0.562 & 13.75 & -10.3536 \\ \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) & 4.427 & & 8.707 & 1.966 & 12.19 & -10.2478 \\ Expt. & 4.382 & & 8.632 & 1.969 & & \\ \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) & 6.933 & & 4.324 & 0.624 & 11.55 & -10.0132 \\ Expt. & 6.873 & & 4.298 & 0.625 & & \\ \hline \hline \end{tabular} \end{table} Table 2: Theoretical lattice constants (\(a,b,c,c/a\)), volume (\(V\)) and total energy (\(E_{t}\)) of all the studied niobium nitrides, compared with the available experimental data (Expt).

The elastic constants of the niobium nitrides are calculated by using the linear-response stress-strain method, as implemented in the VASP code [59]. Under a small strain (\(\varepsilon_{kl}\)), according to Hooke's law, the corresponding stress (\(\sigma_{ij}\)) can be written as \(\sigma_{ij}=C_{ijkl}\varepsilon_{kl}\), where \(C_{ijkl}\) is the elastic stiffness tensor that consists of the elastic constants of the crystal. The total number of elastic constants depends on the crystal symmetry. The calculated nonzero elastic constants for all the considered structures are listed in Table 3. For the hexagonal and trigonal crystals, the bulk modulus \(B\) and shear modulus \(G\) are given by \(B=\frac{2}{9}\left(C_{11}+C_{12}+2C_{13}+\frac{1}{2}C_{33}\right)\) and \(G=\frac{1}{30}\left(12C_{44}+7C_{11}-5C_{12}+2C_{33}-4C_{13}\right)\). For the tetragonal structures, \(B=\frac{1}{9}\left[2(C_{11}+C_{12})+C_{33}+4C_{13}\right]\) and \(G=\frac{1}{30}\left(4C_{11}-2C_{12}-4C_{13}+2C_{33}+12C_{44}+6C_{66}\right)\). For the orthorhombic crystals, \(B=\frac{1}{9}\left(C_{11}+C_{22}+C_{33}+2C_{12}+2C_{13}+2C_{23}\right)\) and \(G=\frac{1}{15}\left(C_{11}+C_{22}+C_{33}-C_{12}-C_{13}-C_{23}\right)+\frac{1}{5}\left(C_{44}+C_{55}+C_{66}\right)\). Young's modulus \(Y\) and Poisson's ratio \(\nu\) are related to \(B\) and \(G\) by \(Y=9BG/(3B+G)\) and \(\nu=(3B-2G)/[2(3B+G)]\). The hardness \(H\) can be estimated by \(H=0.1769G-2.899\) [60].
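As a self-contained check of these expressions, a short Python sketch for the hexagonal/trigonal case; the input constants used in the usage example are the \(\beta_{3}\)-Nb\({}_{2}\)N values from Table 3, and the printed output reproduces the tabulated moduli to within rounding:

```python
# Voigt-average elastic moduli for a hexagonal/trigonal crystal,
# following the expressions above; elastic constants in GPa.

def hexagonal_moduli(c11, c12, c13, c33, c44):
    B = (2.0 / 9.0) * (c11 + c12 + 2.0 * c13 + 0.5 * c33)
    G = (12.0 * c44 + 7.0 * c11 - 5.0 * c12 + 2.0 * c33 - 4.0 * c13) / 30.0
    Y = 9.0 * B * G / (3.0 * B + G)
    nu = (3.0 * B - 2.0 * G) / (2.0 * (3.0 * B + G))
    H = 0.1769 * G - 2.899  # empirical hardness model of Ref. [60]
    return B, G, Y, nu, H

# beta3-Nb2N from Table 3: C11=555, C12=229, C13=168, C33=619, C44=163
print(hexagonal_moduli(555, 229, 168, 619, 163))
# -> approximately (318, 175, 444, 0.27, 28.1), in line with Table 3
```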
For superconductors with dominant electron-phonon interaction, the superconducting properties can be analyzed by calculating the Eliashberg spectral function \(\alpha^{2}F(\omega)\). Hence, we calculate the phonon dispersion relations, phonon DOS and electron-phonon interactions using the _ab initio_ density functional perturbation theory (DFPT) [61], as implemented in the Quantum Espresso code [62]. The calculations are performed with the scalar-relativistic optimized norm-conserving Vanderbilt pseudopotentials [63; 64]. The plane-wave cut-off energy is set to 42 Ry and the electronic charge density is expanded up to 168 Ry. A Gaussian broadening of 0.02 Ry is used for all the calculations. All the phonon and electron-phonon coupling calculations are performed with \(q\)-grids of 4\(\times\)4\(\times\)5, 4\(\times\)4\(\times\)3, 4\(\times\)4\(\times\)3, 4\(\times\)3\(\times\)5, 4\(\times\)4\(\times\)2, and 3\(\times\)3\(\times\)4 for \(\beta_{1}\)-Nb\({}_{2}\)N, \(\beta_{2}\)-Nb\({}_{2}\)N, \(\beta_{3}\)-Nb\({}_{2}\)N, \(\beta_{4}\)-Nb\({}_{2}\)N, \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\), respectively. The strength of the electron-phonon coupling in a crystal is measured by the electron-phonon coupling constant (\(\lambda\)), which can be extracted from the Eliashberg spectral function [\(\alpha^{2}F(\omega)\)] via the Allen-Dynes formula [65; 66] \[\lambda=2\int\frac{\alpha^{2}F(\omega)}{\omega}d\omega. \tag{1}\] The Eliashberg spectral function is given by \[\alpha^{2}F(\omega)=\frac{1}{2\pi N(E_{F})}\sum_{qj}\frac{\gamma_{qj}}{\omega_{qj}}\delta(\hbar\omega-\hbar\omega_{qj}), \tag{2}\] where \(N(E_{F})\) is the electronic DOS at the Fermi level (\(E_{F}\)), \(\gamma_{qj}\) is the phonon linewidth due to electron-phonon scattering, and \(\omega_{qj}\) is the phonon frequency of branch index \(j\) at wave vector \(q\). Using the calculated \(\lambda\), one can estimate the superconducting transition temperature \(T_{c}\) via the McMillan-Allen-Dynes formula [65; 66] \[T_{c}=\frac{\omega_{log}}{1.2}\exp\Big{[}\frac{-1.04(1+\lambda)}{\lambda-\mu^{*}(1+0.62\lambda)}\Big{]}, \tag{3}\] where \(\omega_{log}\) is the logarithmically averaged phonon frequency and \(\mu^{*}\) is the averaged screened electron-electron interaction.
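A minimal numerical sketch of Eqs. (1) and (3); the \(\lambda\), \(\omega_{log}\) and \(\mu^{*}\) values in the usage line are illustrative placeholders, not the computed results of this work:

```python
import numpy as np

def lambda_from_a2f(omega, a2f):
    """Electron-phonon coupling constant lambda, Eq. (1); omega > 0."""
    return 2.0 * np.trapz(a2f / omega, omega)

def allen_dynes_tc(lam, omega_log, mu_star=0.1):
    """McMillan-Allen-Dynes Tc, Eq. (3); omega_log given in kelvin."""
    return (omega_log / 1.2) * np.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam)))

# illustrative numbers only:
print(allen_dynes_tc(lam=0.8, omega_log=300.0, mu_star=0.1))  # ~14 K
```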
## III Results and Discussion

### Mechanical properties

Elastic constants of a solid provide insight into the mechanical stability and bonding characteristics of the material. In Table 3, we list the calculated elastic constants of all the considered niobium nitrides along with the reported values of the well-known hard material sapphire (\(\alpha\)-Al\({}_{2}\)O\({}_{3}\), space group R\(\bar{3}\)c) [67; 68]. Table 3 shows that all the elastic constants are positive, thereby indicating that all the considered nitride structures are mechanically stable against the corresponding specific deformations [69]. Table 3 also shows that for \(\beta_{1}\)-Nb\({}_{2}\)N, \(\beta_{3}\)-Nb\({}_{2}\)N, \(\beta_{4}\)-Nb\({}_{2}\)N and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\), \(C_{33}\) is larger than \(C_{11}\), indicating that these materials are harder to compress along the \(c\)-axis, whereas compression along the \(c\)-axis is softer for both \(\beta_{2}\)-Nb\({}_{2}\)N and \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\). The calculated elastic moduli suggest that all the nitrides are hard materials. In particular, the calculated bulk modulus (\(B\)) of the niobium nitrides is either comparable to or larger than that of hard sapphire [68]. For example, for \(\beta_{3}\)-Nb\({}_{2}\)N, the calculated \(B\) value is about 30% larger than the corresponding value of sapphire [68]. Interestingly, when compared to the niobium mononitride structures [11], e.g., \(\delta\)-NbN (\(B\) = 327 GPa), both Nb-rich \(\beta\)-Nb\({}_{2}\)N and \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) as well as N-rich \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) possess up to about 20% lower \(B\) values. This indicates that both Nb-rich and N-rich niobium nitrides are softer materials compared to the niobium mononitride. Furthermore, the \(B\) of all the nitride structures is almost twice the shear modulus \(G\), suggesting that \(G\) is the limiting parameter for the mechanical stability. Young's modulus (\(Y\)) of a solid is the ratio of linear stress to strain and tells us about the stiffness of the material. The calculated \(Y\) of \(\beta_{3}\)-Nb\({}_{2}\)N and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) is about 25% larger than that of the other nitrides, indicating their higher stiffness. According to Pugh's criterion [70], a value of \(B/G\) greater than (less than) 1.75 indicates ductile (brittle) character of the material. Table 3 thus shows that \(\beta_{2}\)-Nb\({}_{2}\)N (\(B/G\) = 1.66) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) (\(B/G\) = 1.61) are brittle materials, while the rest (\(B/G\) > 1.75) are ductile materials. In particular, \(\beta_{1}\)-Nb\({}_{2}\)N (\(B/G\) = 2.10) is more ductile than all the other nitrides. Poisson's ratio \(\nu\) measures the stability of a material against shear strain. Among the studied nitrides, \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) has the smallest value (\(\nu\) = 0.24), indicating that it is relatively stable against shear strain compared to the other nitrides. Hardness (\(H\)) is an important elastic property which is responsible for the wear behaviour of materials [60]. It is clear from Table 3 that \(\beta_{3}\)-Nb\({}_{2}\)N has the highest hardness, followed by \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\), \(\beta_{2}\)-Nb\({}_{2}\)N, \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\), \(\beta_{4}\)-Nb\({}_{2}\)N, and \(\beta_{1}\)-Nb\({}_{2}\)N.
Importantly, Table 3 shows that the hardness \(H\) of the nitrides \(\beta_{1}\)-Nb\({}_{2}\)N, \(\beta_{2}\)-Nb\({}_{2}\)N, \(\beta_{4}\)-Nb\({}_{2}\)N and \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) is close to that of hard sapphire [68]. Both \(\beta_{3}\)-Nb\({}_{2}\)N and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) are harder than sapphire because of almost 40 % larger \(H\) values (Table 3). ### Band structure and Dirac nodal points The energy bands and DOS spectra calculated without including the spin-orbit coupling (SOC), of all the studied structures are displayed in Figs. 2 and 3, respectively. Figures 2 and 3 show that all the considered Nb nitrides are metallic with many bands crossing \(E_{F}\) and also have relatively large DOS at \(E_{F}\) (see Table 4). Interestingly, Figs. 3(a) and 3(d) show that the DOS spectra of \(\beta_{1}\)-Nb\({}_{2}\)N and \(\beta_{4}\)-Nb\({}_{2}\)N are very similar, although their structures are quite different (Table 1 and Fig. 1). This implies that they have similar bonding character-sites. Indeed, this explain why their elastic moduli (\(B\), \(G\), \(Y\) and \(H\)) are very similar (see Table 3). In particular, in the lower valence band region below -4.0 eV, Nb \(d\) DOS and N \(p\) DOS spectra in both cases have nearly the same magnitudes, indicating a strong covalent bonding in these two structures [Figs. 3(a) and 3(d)]. This is also the case for \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) [Fig. 3(e)] and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) in the region below -2.0 eV [Fig. 3(f)]. Nevertheless, the weight of Nb \(d\) states in \(\beta_{2}\)-Nb\({}_{2}\)N and \(\beta_{3}\)-Nb\({}_{2}\)N becomes significantly smaller than that of N \(p\) states, indicating that the covalency in these nitrides decreases. This explains that the tetragonal \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) has superior mechanical properties compared to \(\beta_{1}\)-Nb\({}_{2}\)N. On the other hand, Fig. 3 indicates that the upper valence bands and lower conduction bands from -4.0 to 2.0 eV of \(\beta_{1}\)-Nb\({}_{2}\)N, \(\beta_{4}\)-Nb\({}_{2}\)N and \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) are Nb \(d\) dominated states. This is also the case for \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) from -2.0 to 2.0 eV (see Fig. 3). Thus, the Nb \(d\) states are important for governing the superconducting and other transport properties of these nitrides. Furthermore, for \(\beta_{1}\)-Nb\({}_{2}\)N, \(\beta_{2}\)-Nb\({}_{2}\)N, and \(\beta_{4}\)-Nb\({}_{2}\)N, the DOS in the vicinity of the \(E_{F}\) is nearly constant and takes value of 0.585 states/eV/Nb, 0.705 states/eV/Nb and 0.755 states/eV/Nb at \(E_{F}\), respectively (see Fig. 3 and Table 4). In contrast, in \(\beta_{3}\)-Nb\({}_{2}\)N [Fig. 3(c)], \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) [Fig. 3(e)] and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) [Fig. 3(f)], the DOS monotonically decreases with energy and at \(E_{F}\) has the value of 0.610 states/eV/Nb, 0.698 states/eV/Nb, and 0.813 states/eV/Nb, respectively. Interestingly, when the SOC is neglected, the band structure exhibits symmetry protected band crossings in the vicinity of the Fermi level along \(k\)-paths K-\(\Gamma\), \(\Gamma\)-A and \(\Gamma\)-Z for \(\beta_{2}\)-Nb\({}_{2}\)N, \(\beta_{3}\)-Nb\({}_{2}\)N and \(\beta_{4}\)-Nb\({}_{2}\)N, respectively (see Fig. 2). Such band crossings also occur along \(\Gamma\)-X for \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\). 
For \(\beta_{4}\)-Nb\({}_{2}\)N, the \(k\)-path \(\Gamma\)-Z belongs to the \(C_{2v}\) point group and the bands have two different irreducible representations (IRs), D\({}_{1}\) and D\({}_{3}\).

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline\hline
 & \(\beta_{1}\)-Nb\({}_{2}\)N & \(\beta_{2}\)-Nb\({}_{2}\)N & \(\beta_{3}\)-Nb\({}_{2}\)N & \(\beta_{4}\)-Nb\({}_{2}\)N & \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) & \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) & \(\delta\)-NbN\({}^{\mathrm{a}}\) & \(\alpha\)-Al\({}_{2}\)O\({}_{3}\) \\
\hline
\(C_{11}\) & 417 & 402 & 555 & 399 & 597 & 508 & 692 (608\({}^{\mathrm{b}}\)) & 497\({}^{\mathrm{c}}\) \\
\(C_{12}\) & 167 & 102 & 229 & 177 & 89 & 133 & 145 (134\({}^{\mathrm{b}}\)) & 163\({}^{\mathrm{c}}\) \\
\(C_{13}\) & 184 & 173 & 168 & 181 & 187 & 134 & & 116\({}^{\mathrm{c}}\) \\
\(C_{14}\) & & & & & & & & 22\({}^{\mathrm{c}}\) \\
\(C_{22}\) & & & & 420 & & & & \\
\(C_{23}\) & & & & 158 & & & & \\
\(C_{33}\) & 421 & 386 & 619 & 406 & 392 & 643 & & 501\({}^{\mathrm{c}}\) \\
\(C_{44}\) & 125 & 150 & 163 & 140 & 107 & 154 & 65 (117\({}^{\mathrm{b}}\)) & 147\({}^{\mathrm{c}}\) \\
\(C_{55}\) & & & & 126 & & & & \\
\(C_{66}\) & & & & 116 & 91 & 125 & & \\
\(B\) & 258 & 232 & 318 & 257 & 280 & 273 & 327 (292\({}^{\mathrm{b}}\)) & 246\({}^{\mathrm{d}}\) \\
\(G\) & 123 & 140 & 175 & 124 & 136 & 170 & 148 (165\({}^{\mathrm{b}}\)) & 162\({}^{\mathrm{d}}\) \\
\(Y\) & 318 & 348 & 443 & 319 & 351 & 422 & 385 & \\
\(H\) & 18.8 & 21.9 & 28.1 (35\({}^{\mathrm{e}}\), 30.9\({}^{\mathrm{f}}\)) & 19.1 & 21.2 & 27.2 & 23 & 22\({}^{\mathrm{d}}\) \\
\(\nu\) & 0.29 & 0.25 & 0.27 & 0.28 & 0.29 & 0.24 & & \\
\(B/G\) & 2.10 & 1.66 & 1.82 & 2.07 & 2.06 & 1.61 & & \\
\hline\hline
\end{tabular}
\end{table}
Table 3: Calculated elastic constants (\(C_{ij}\)), bulk modulus (\(B\)), shear modulus (\(G\)), Young's modulus (\(Y\)), hardness (\(H\)), Poisson's ratio (\(\nu\)) and \(B/G\) ratio (all moduli in GPa) of the studied niobium nitrides, compared with \(\delta\)-NbN and sapphire.

In \(\beta_{3}\)-Nb\({}_{2}\)N, the \(\Gamma\)-A line has the \(C_{6v}\) point group symmetry and there exist two band crossings within 1.0 eV of the \(E_{F}\) [Fig. 2(d)]. At about 0.2 eV above the \(E_{F}\), the band crossing is between a nondegenerate band with IR B\({}_{1}\) and a doubly degenerate band with IR E\({}_{2}\). The other band crossing, at \(\sim\) 0.6 eV, involves two different bands with IRs B\({}_{1}\) and E\({}_{1}\), respectively. These two band crossings are protected by the \(\mathcal{C}_{3z}\) rotational symmetry. Another symmetry-protected band crossing, with IRs D\({}_{1}\) (A\({}_{1}\)) and D\({}_{4}\) (B\({}_{2}\)), is visible at the high-symmetry \(k\)-point K, which belongs to the \(C_{2v}\) point group. Further, the band structure of \(\beta_{2}\)-Nb\({}_{2}\)N shows a band crossing along K-\(\Gamma\) between bands with the different IRs A and B, which belong to the \(C_{2}\) point group symmetry and hence are forbidden to mix. For tetragonal \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\), the linear band crossing between IRs B\({}_{1}\) and E is located along the \(\Gamma\)-X direction and is protected by the \(C_{4v}\) point group symmetry. In \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\), the bands with IRs B and E cross each other along the \(\Gamma\)-X path and are protected by the \(\mathcal{C}_{4}\) rotational symmetry of the \(C_{4v}\) point group. Fully relativistic band structures of the nitrides are shown in Fig. 4. When the SOC is included, significant changes in the band structure occur.
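The tabulated polycrystalline moduli can be cross-checked against the single-crystal \(C_{ij}\). The sketch below (our own consistency check, not code from the paper) applies the standard Voigt averages for a hexagonal crystal to the \(\beta_{3}\)-Nb\({}_{2}\)N constants in Table 3.

```python
# Voigt averages for a hexagonal crystal, applied to beta3-Nb2N (GPa, Table 3).
C11, C12, C13, C33, C44 = 555.0, 229.0, 168.0, 619.0, 163.0
C66 = (C11 - C12) / 2                         # hexagonal symmetry relation

B_V = (2 * (C11 + C12) + C33 + 4 * C13) / 9   # Voigt bulk modulus
M = C11 + C12 + 2 * C33 - 4 * C13
G_V = (M + 12 * C44 + 12 * C66) / 30          # Voigt shear modulus

# Isotropic relations for Young's modulus and Poisson's ratio:
Y = 9 * B_V * G_V / (3 * B_V + G_V)
nu = (3 * B_V - 2 * G_V) / (2 * (3 * B_V + G_V))

print(f"B = {B_V:.0f}, G = {G_V:.0f}, Y = {Y:.0f}, nu = {nu:.2f}")
# -> B = 318, G = 175, Y = 444, nu = 0.27, matching Table 3 to within rounding.
```

That the Voigt branch alone already reproduces the table suggests that the quoted \(B\) and \(G\) are Voigt (or very nearly Hill) averages, though the paper does not state the averaging scheme in this section.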
Among other things, the single point group symmetry changes to the double point group symmetry and hence the IRs of the bands change as well. Importantly, since the SOC breaks the SU(2) symmetry, some band crossings become gapped. For example, the IRs D\({}_{1}\) and D\({}_{3}\) of the bands crossing along \(\Gamma\)-Z in \(\beta_{4}\)-Nb\({}_{2}\)N as well as the IRs A and B of the bands crossing along the \(\Gamma\)-K direction in \(\beta_{2}\)-Nb\({}_{2}\)N now become \(\Gamma_{5}\) [Figs. 4(d) and 4(j)] and \(\Gamma_{3}\) [Figs. 4(b) and 4(g)], respectively. Both band crossings are now gapped [see Figs. 4(g) and 4(j)]. The band crossing at the M point in \(\beta_{3}\)-Nb\({}_{2}\)N is now represented by \(\Gamma_{5}\) (D\({}_{5}\)) [Fig. 4(d)] and is gapped by \(\sim\) 0.05 eV. Remarkably, several band crossings remain intact after the SOC is turned on. These surviving band crossings include those along the \(\Gamma\)-A line in hexagonal \(\beta_{3}\)-Nb\({}_{2}\)N [Fig. 4(c)] and along the \(\Gamma\)-X line in tetragonal \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) [Fig. 4(e)] and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) [Fig. 4(f)]. The two band crossings along the \(\Gamma\)-A line in \(\beta_{3}\)-Nb\({}_{2}\)N, previously between the bands with IRs B\({}_{1}\) and E\({}_{2}\), and B\({}_{1}\) and E\({}_{1}\), are now transformed: B\({}_{1}\) becomes \(\Gamma_{7}\), while E\({}_{1}\) and E\({}_{2}\) become \(\Gamma_{8}\) and \(\Gamma_{9}\) [Figs. 4(h) and 4(i)]. Consequently, there are now three band crossings which are protected by the mirror plane and the \(\mathcal{C}_{3z}\) rotational symmetry. The band crossings in \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) along \(\Gamma\)-X are represented by \(\Gamma_{6}\) and \(\Gamma_{7}\) [Figs. 4(k) and 4(l)] and are protected by the \(\mathcal{C}_{4}\) rotational symmetry. Furthermore, the band crossings along \(\Gamma\)-X in \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) belong to the IRs \(\Gamma_{5}\) and \(\Gamma_{7}\) [Figs. 4(f) and 4(l)] and are also protected by the \(\mathcal{C}_{4}\) symmetry. All the other band crossings become gapped when the SOC is included. Overall, there exist ungapped band crossings in the relativistic band structures along \(\Gamma\)-A for \(\beta_{3}\)-Nb\({}_{2}\)N [Figs. 4(d), 4(i) and 4(j)] and along \(\Gamma\)-X for both \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) [Figs. 4(e) and 4(k)] and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) [Figs. 4(f) and 4(l)]. This demonstrates that Nb-rich \(\beta_{3}\)-Nb\({}_{2}\)N and \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) as well as N-rich \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) are topological metals. Importantly, all three structures have both time-reversal (\(\mathcal{T}\)) symmetry and inversion (\(\mathcal{P}\)) symmetry, and hence each energy band is two-fold degenerate. Therefore, the band crossings are four-fold Dirac points (DPs). In particular, the DPs in Nb-rich \(\beta_{3}\)-Nb\({}_{2}\)N and \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) are of the conventional type I, whereas in N-rich \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) the DPs are of type II because the slopes of the two crossing bands have the same sign. ### Lattice dynamics and specific heat The calculated phonon dispersion relations and phonon DOS spectra of all the considered niobium nitrides are presented in Figs. 5 and 6. First, the absence of any imaginary frequencies in the phonon dispersion relations throughout the Brillouin zone shows the dynamical stability of the niobium nitride structures, even though some of them are not the ground state (Table 2).
There are no experimental data available on the phonon dispersion relations of the considered nitrides. Second, Figs. 5 and 6 indicate that the phonon dispersions exhibit a large gap between the Nb-atom-dominated low-energy modes and the N-atom-dominated high-energy modes. This is due to the large mass difference between the light N atoms and the heavier Nb atoms. Furthermore, a significant mixing of the low-lying optical modes with the acoustic modes exists, suggesting strong bonding between the Nb and the N atoms. The calculated phonon DOS is used to obtain the specific heat [\(C_{v}(T)\)] via the formula [73] \[C_{v}(T)=\gamma T+\int d\omega\,\frac{(\hbar\omega)^{2}}{k_{B}T^{2}}\,\frac{g(\omega)e^{\hbar\omega/k_{B}T}}{(e^{\hbar\omega/k_{B}T}-1)^{2}}=\gamma T+\beta T^{3}, \tag{4}\] where the first and second terms are, respectively, the electron and phonon contributions to the specific heat, and the \(\beta T^{3}\) form of the phonon term holds at low temperatures. Here \(\gamma=\frac{\pi^{2}}{3}k_{B}^{2}N(E_{F})\) is the Sommerfeld coefficient [74], which is proportional to the electron DOS at \(E_{F}\), \(k_{B}\) is the Boltzmann constant and \(g(\omega)\) is the phonon DOS. To estimate the coefficient \(\beta\) of the phonon contribution at low temperatures, we first calculate \(C_{v}(T)\) as a function of temperature between 4 K and 9 K. The calculated \(C_{v}(T)\) is then plotted as \(\frac{C_{v}}{T}\) vs \(T^{2}\) and fitted to \(\frac{C_{v}}{T}=\gamma+\beta T^{2}\). The resulting values of \(\gamma\) and \(\beta\) for the considered niobium nitrides are listed in Table 4. Since \(\gamma\) is proportional to \(N(E_{F})\), \(\beta_{1}\)-Nb\({}_{2}\)N and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) possess the lowest and highest values of \(\gamma\) among the niobium nitrides, respectively. There are no experimental \(\gamma\) data available to compare with. However, if we incorporate the electron-phonon coupling (\(\lambda\)), the electron specific heat coefficient is enhanced as \(\frac{\gamma_{exp}}{\gamma_{th}}=1+\lambda\). Thus, we can expect that \(\frac{\gamma_{exp}}{\gamma_{th}}>1\). Finally, the values of \(\beta\) are used to calculate the Debye temperature \(\Theta_{D}\) by using the relation [73] \(\Theta_{D}=\left(\frac{12\pi^{4}nN_{A}k_{B}}{5\beta}\right)^{\frac{1}{3}}\), where \(N_{A}\) is Avogadro's number and \(n\) is the number of atoms per formula unit. The calculated values of \(\Theta_{D}\) for the niobium nitride structures are listed in Table 4.
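The two-step procedure just described (compute \(C_{v}(T)\) between 4 K and 9 K, then fit \(C_{v}/T\) against \(T^{2}\)) is easy to reproduce. The sketch below mimics it on synthetic data; the input \(\gamma\) and \(\beta\) are illustrative values of our own choosing (here matching the \(\beta_{2}\)-Nb\({}_{2}\)N entries in Table 4), not the paper's raw \(C_{v}(T)\) output.

```python
# Fit C_v/T = gamma + beta*T^2 on 4-9 K data, then get the Debye temperature.
import numpy as np

# Synthetic low-T specific heat with assumed gamma (mJ/mol K^2) and beta (mJ/mol K^4):
gamma_true, beta_true = 3.32, 0.220        # illustrative, beta2-Nb2N-like values
T = np.linspace(4.0, 9.0, 25)
Cv = gamma_true * T + beta_true * T**3

# A linear fit of C_v/T versus T^2 recovers the two coefficients:
slope, intercept = np.polyfit(T**2, Cv / T, 1)
gamma_fit, beta_fit = intercept, slope

# Debye temperature: Theta_D = (12 pi^4 n N_A k_B / (5 beta))^(1/3).
n = 3                                      # atoms per Nb2N formula unit
N_A, k_B = 6.02214076e23, 1.380649e-23     # SI units
beta_SI = beta_fit * 1e-3                  # mJ/mol K^4 -> J/mol K^4
Theta_D = (12 * np.pi**4 * n * N_A * k_B / (5 * beta_SI)) ** (1 / 3)
print(f"gamma = {gamma_fit:.2f} mJ/mol K^2, Theta_D = {Theta_D:.0f} K")
# -> Theta_D = 298 K, which reproduces the value listed for beta2-Nb2N in Table 4.
```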
Among the niobium nitrides, \(\beta_{3}\)-Nb\({}_{2}\)N possesses the largest \(\Theta_{D}\), which is expected because of its smallest value of \(\beta\). Only the experimental \(\Theta_{D}\) value of \(\beta_{3}\)-Nb\({}_{2}\)N has been reported, and it agrees well with the calculated \(\Theta_{D}\) value (Table 4).

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline\hline
Structure & \(\lambda\) & \(\omega_{log}\) (K) & \(N(E_{F})\) (states/eV/Nb) & \(\gamma\) (mJ/mol-K\({}^{2}\)) & \(\beta\) (mJ/mol-K\({}^{4}\)) & \(\Theta_{D}\) (K) & \(T_{c}\) (K) \\
\hline
\(\beta_{1}\)-Nb\({}_{2}\)N & 0.36 & 285 & 0.585 & 2.75 & 0.460 & 233 & 0.57 \\
Expt. & & & & & & & 8.6–12.1\({}^{\mathrm{a}}\) \\
\(\beta_{2}\)-Nb\({}_{2}\)N & 0.57 & 306 & 0.705 & 3.32 & 0.220 & 298 & 6.12 \\
Expt. & & & & & & & 8.6–12.1\({}^{\mathrm{a}}\) \\
\(\beta_{3}\)-Nb\({}_{2}\)N & 0.47 & 398 & 0.610 & 2.87 & 0.163 & 330 & 3.88 \\
Expt. & & & & & & 320\({}^{\mathrm{b}}\) & \(<\) 1\({}^{\mathrm{c}}\), 4.74\({}^{\mathrm{b}}\) \\
\(\beta_{4}\)-Nb\({}_{2}\)N & 0.46 & 289 & 0.755 & 3.56 & 0.270 & 278 & 2.64 \\
\(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) & 0.68 & 262 & 0.698 & 6.57 & 0.880 & 249 & 8.48 \\
Expt. & & & & & & & 7.8–12.2\({}^{\mathrm{d}}\) \\
\(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) & 0.92 & 249 & 0.873 & 8.22 & 0.824 & 277 & 15.28 \\
Expt. & & & & & & & 8–16\({}^{\mathrm{e}}\) \\
\(\delta\)-NbN & & & & & & & \\
\hline\hline
\end{tabular}
\end{table}
Table 4: Calculated electron-phonon coupling strength (\(\lambda\)), logarithmic average phonon frequency (\(\omega_{log}\)), DOS at the Fermi level (\(N(E_{F})\)), Sommerfeld coefficient (\(\gamma\)), phonon specific heat coefficient (\(\beta\)), Debye temperature (\(\Theta_{D}\)) and superconducting transition temperature (\(T_{c}\)) of the studied niobium nitrides, compared with the available experimental values (Expt.).

### Electron-phonon coupling and superconductivity

We display in Figs. 5 and 6 the calculated \(\alpha^{2}F(\omega)\) of the studied niobium nitrides. Equation (2) indicates that \(\alpha^{2}F(\omega)\) is essentially the phonon DOS spectrum modulated by the electron-phonon interaction matrix element \(\gamma_{qj}\) divided by the phonon frequency \(\omega_{qj}\). As a result, the \(\alpha^{2}F(\omega)\) spectrum of each structure roughly follows the corresponding phonon DOS spectrum (see Figs. 5 and 6). Therefore, the contribution from the acoustic and low-lying optical phonon bands to \(\alpha^{2}F(\omega)\) may become dominant, as is evidenced by the existence of large peaks in the \(\alpha^{2}F(\omega)\) spectrum. Interestingly, among the \(\beta\)-Nb\({}_{2}\)N structures, the magnitude of \(\alpha^{2}F(\omega)\) is highest for \(\beta_{3}\)-Nb\({}_{2}\)N [Fig. 5(i)] and lowest for \(\beta_{1}\)-Nb\({}_{2}\)N [Fig. 5(c)]. For the tetragonal structures, the \(\alpha^{2}F(\omega)\) of \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) shows larger peaks than that of \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) [Figs. 6(f) and 6(c)]. Overall, the magnitude of \(\alpha^{2}F(\omega)\) is highest for \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) and lowest for \(\beta_{1}\)-Nb\({}_{2}\)N.
Note that the strength of the electron-phonon coupling (\(\lambda\)) is given by an integral of the Eliashberg function \(\alpha^{2}F(\omega)\) divided by the phonon frequency \(\omega\) over the entire phonon frequency range (Eq. 1). This results in the lowest value of \(\lambda\) (0.36) for \(\beta_{1}\)-Nb\({}_{2}\)N and the highest \(\lambda\) (0.92) for \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\). In Table 4, we list the calculated \(\lambda\) values for all the niobium nitrides. Clearly, the \(\lambda\) value (0.92) of \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) is much larger than that of \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) (0.68), \(\beta_{3}\)-Nb\({}_{2}\)N (0.47), \(\beta_{2}\)-Nb\({}_{2}\)N (0.57) and \(\beta_{1}\)-Nb\({}_{2}\)N (0.36). As mentioned already in Section I, the experimental studies [4; 5; 28; 29; 30] on the superconducting properties of \(\beta\)-Nb\({}_{2}\)N have so far reported conflicting results. Gavaler _et al._ [4] reported the formation of hexagonal \(\beta\)-Nb\({}_{2}\)N in a thin film from XRD data (with some additional peaks that were not indexed), and also a \(T_{c}\) of 8.6 K in the thin films. In addition, they reported another film with mixed phases of hexagonal \(\beta\)-Nb\({}_{2}\)N and cubic NbN with a \(T_{c}\) of 12.1 K [1]. Skokan _et al._ [5] reported that thin films with mixed phases of cubic NbN and hexagonal \(\beta\)-Nb\({}_{2}\)N exhibit a two-step resistance drop at 9 K and at 2 K. Gajar _et al._ [28] reported the transformation of Nb into hexagonal \(\beta\)-Nb\({}_{2}\)N which becomes superconducting only below 1 K. However, Kalal _et al._ [30] recently reported that hexagonal \(\beta\)-Nb\({}_{2}\)N (P6\({}_{3}\)/mmc) films have a rather strong electron-phonon interaction (\(\lambda\) = 0.54) with a \(T_{c}\) of 4.74 K. As mentioned before, four crystalline structures (see Fig. 1) have been reported for the \(\beta\)-phase Nb\({}_{2}\)N [33; 35; 41; 42]. This is quite unlike the other niobium nitrides with different Nb/N ratios; for example, different structures of NbN were labelled as different phases (one structure, one phase) (see Table 1). Since the physical properties of a solid depend significantly on the crystalline structure, the contradictory superconductivity reported for \(\beta\)-Nb\({}_{2}\)N could well be attributed to the fact that \(\beta\)-Nb\({}_{2}\)N has several different structures. This has motivated us to carry out the present _ab initio_ theoretical study of the superconducting properties of \(\beta\)-Nb\({}_{2}\)N in all four reported structures. By using the Allen-Dynes formula (Eq. 3) and the calculated \(\lambda\) as well as the other phonon and electron parameters, we estimate the \(T_{c}\) values for all the considered nitrides, as listed in Table 4. First of all, we notice that the calculated \(T_{c}\) values for \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) (8.48 K) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) (15.3 K) agree rather well with the corresponding experimental values [17; 29; 36; 37] (see Table 4). Second, the different structures of \(\beta\)-Nb\({}_{2}\)N indeed have rather different \(T_{c}\) values, ranging from \(\sim\)0.6 K to 6.1 K (Table 4). The calculated \(T_{c}\) of \(\beta_{2}\)-Nb\({}_{2}\)N (P\(\bar{3}\)m1) (6.1 K) is larger than that of \(\beta_{3}\)-Nb\({}_{2}\)N (P6\({}_{3}\)/mmc) (3.9 K), \(\beta_{4}\)-Nb\({}_{2}\)N (Pnm) (2.6 K) and \(\beta_{1}\)-Nb\({}_{2}\)N (P\(\bar{3}\)m1) (0.6 K).
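For readers who want to reproduce the \(T_{c}\) estimates, the sketch below evaluates the Allen-Dynes form of the McMillan equation (the paper's Eq. 3) from the \(\lambda\) and \(\omega_{log}\) values in Table 4. The Coulomb pseudopotential \(\mu^{*}=0.1\) is our assumption (a common choice), not a value quoted from the paper.

```python
# Allen-Dynes estimate: Tc = (w_log/1.2) * exp(-1.04(1+lam) / (lam - mu*(1+0.62 lam)))
import math

def allen_dynes_tc(lam, w_log_K, mu_star=0.1):   # mu_star = 0.1 is an assumption
    return (w_log_K / 1.2) * math.exp(-1.04 * (1 + lam)
                                      / (lam - mu_star * (1 + 0.62 * lam)))

# (lambda, omega_log in K) taken from Table 4:
params = {"beta1-Nb2N": (0.36, 285), "beta2-Nb2N": (0.57, 306),
          "beta3-Nb2N": (0.47, 398), "beta4-Nb2N": (0.46, 289),
          "gamma-Nb4N3": (0.68, 262), "beta'-Nb4N5": (0.92, 249)}

for phase, (lam, w_log) in params.items():
    print(f"{phase:12s}  Tc ~ {allen_dynes_tc(lam, w_log):5.2f} K")
# With mu* = 0.1 this reproduces the Table 4 values to within about 10%,
# e.g. ~15.2 K for beta'-Nb4N5 versus the quoted 15.28 K.
```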
These results clearly demonstrate that further experiments measuring the superconductivity and the crystalline structure simultaneously on the same sample are needed to clarify the currently confusing experimental results for \(\beta\)-Nb\({}_{2}\)N. It is useful to find connections between the superconductivity and the other physical properties of the nitrides. This can be done with the help of the McMillan-Hopfield formula [65], \[\lambda=\frac{N(E_{F})}{\langle\omega^{2}\rangle}\sum_{i}\frac{\langle I^{2}\rangle_{i}}{M_{i}},\] where \(\langle I^{2}\rangle_{i}\) is the square of the electron-phonon coupling matrix element averaged over the Fermi surface and \(M_{i}\) is the atomic mass of the \(i\)-th atom. Also, \(\langle\omega^{2}\rangle\approx 0.5\,\Theta_{D}^{2}\). Clearly, this indicates that \(\lambda\), and hence \(T_{c}\), depends on \(N(E_{F})\) and is relatively large for electronic bands with a high DOS near the Fermi energy. The calculated DOS spectra shown in Fig. 3 indicate that the Nb \(d\) states dominate the DOS near \(E_{F}\) in all the structures and therefore make the major contributions to the electron-phonon coupling and superconductivity. The calculated \(N(E_{F})\) per Nb atom for the considered nitrides is listed in Table 4. As can be seen from Table 4, roughly speaking, \(\lambda\) and \(T_{c}\) are larger for larger \(N(E_{F})\) and smaller \(\Theta_{D}\). For example, \(\delta\)-NbN has the largest \(\lambda\), \(T_{c}\) and \(N(E_{F})\) but the smallest \(\Theta_{D}\) (Table 4). ## IV Conclusion By performing systematic _ab initio_ calculations based on DFT and DFPT, we have investigated the superconductivity, electronic and phononic band structures, electron-phonon coupling and elastic constants of all four reported structures of \(\beta\)-Nb\({}_{2}\)N as well as Nb-rich \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and N-rich \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\). First, all four structures of \(\beta\)-Nb\({}_{2}\)N are found to be superconductors, with \(T_{c}\) ranging from 0.6 K to 6.1 K depending on the structure (Table 4). This finding clarifies the long-standing confusion that, although Nb\({}_{2}\)N was labelled as a single \(\beta\) phase, contradictory \(T_{c}\) values for \(\beta\)-Nb\({}_{2}\)N have been reported in previous experiments. Interestingly, \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) are predicted to be superconductors with rather high \(T_{c}\) of 8.5 K and 15.3 K, respectively. Second, all the calculated elastic constants and phonon frequencies are positive, showing that all the considered niobium nitride structures are mechanically and dynamically stable. This suggests that, although only \(\beta_{1}\)-Nb\({}_{2}\)N is found to be the ground state, the other three structures of \(\beta\)-Nb\({}_{2}\)N could be grown, e.g., in \(\beta\)-Nb\({}_{2}\)N thin films. Furthermore, the calculated elastic moduli show that all the niobium nitrides are hard materials, with bulk moduli and hardness comparable to or even larger than those of the well-known hard sapphire. Third, the calculated electronic band structures reveal that \(\beta_{3}\)-Nb\({}_{2}\)N, \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) are topological metals. Specifically, \(\beta_{3}\)-Nb\({}_{2}\)N and \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) possess type-I Dirac nodal points whereas \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) has type-II Dirac points.
Finally, the calculated electron-phonon coupling strengths, superconductivity and mechanical properties of the niobium nitrides are discussed in terms of their underlying electronic structures and Debye temperatures. For example, the fact that \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) has the largest \(\lambda\) and the highest \(T_{c}\) among the considered niobium nitrides can be attributed to its having the largest DOS at \(E_{F}\). All these findings indicate that \(\beta\)-Nb\({}_{2}\)N, \(\gamma\)-Nb\({}_{4}\)N\({}_{3}\) and \(\beta^{\prime}\)-Nb\({}_{4}\)N\({}_{5}\) are hard superconductors with nontrivial band topology and are promising materials for studying the fascinating phenomena arising from the interplay of hardness, superconductivity and nontrivial band topology. ###### Acknowledgements. The authors acknowledge the support from the National Science and Technology Council and the National Center for Theoretical Sciences (NCTS) in Taiwan. The authors are also grateful to the National Center for High-performance Computing (NCHC) in Taiwan for the computing time.
2307.06035
Systole functions and Weil-Petersson geometry
A basic feature of Teichm\"uller theory of Riemann surfaces is the interplay of two dimensional hyperbolic geometry, the behavior of geodesic-length functions and Weil-Petersson geometry. Let $\mathcal{T}_g$ $(g\geq 2)$ be the Teichm\"uller space of closed Riemann surfaces of genus $g$. Our goal in this paper is to study the gradients of geodesic-length functions along systolic curves. We show that their $L^p$ $(1\leq p \leq \infty)$-norms at every hyperbolic surface $X\in \mathcal{T}_g$ are uniformly comparable to $\ell_{sys}(X)^{\frac{1}{p}}$ where $\ell_{sys}(X)$ is the systole of $X$. As an application, we show that the minimal Weil-Petersson holomorphic sectional curvature at every hyperbolic surface $X\in \mathcal{T}_g$ is bounded above by a uniform negative constant independent of $g$, which negatively answers a question of M. Mirzakhani. Some other applications to the geometry of $\mathcal{T}_g$ will also be discussed.
Yunhui Wu
2023-07-12T09:32:24Z
http://arxiv.org/abs/2307.06035v1
# Systole functions and Weil-Petersson geometry ###### Abstract. A basic feature of Teichmuller theory of Riemann surfaces is the interplay of two dimensional hyperbolic geometry, the behavior of geodesic-length functions and Weil-Petersson geometry. Let \(\mathcal{T}_{g}\) (\(g\geqslant 2\)) be the Teichmuller space of closed Riemann surfaces of genus \(g\). Our goal in this paper is to study the gradients of geodesic-length functions along systolic curves. We show that their \(L^{p}\) (\(1\leqslant p\leqslant\infty\))-norms at every hyperbolic surface \(X\in\mathcal{T}_{g}\) are uniformly comparable to \(\ell_{sys}(X)^{\frac{1}{p}}\), where \(\ell_{sys}(X)\) is the systole of \(X\). As an application, we show that the minimal Weil-Petersson holomorphic sectional curvature at every hyperbolic surface \(X\in\mathcal{T}_{g}\) is bounded above by a uniform negative constant independent of \(g\), which negatively answers a question of M. Mirzakhani. Some other applications to the geometry of \(\mathcal{T}_{g}\) will also be discussed. Key words and phrases: Systole function, uniform bounds, moduli space of Riemann surfaces, Weil-Petersson geometry 2020 Mathematics Subject Classification: 32G15, 30F60 **Notations.** We use the following notation in this paper. 1. For any number \(r>0\), we always set \[r^{\frac{1}{\infty}}=1.\] 2. We write \[f_{1}(g)\prec f_{2}(g)\quad\text{or}\quad f_{2}(g)\succ f_{1}(g)\] if there exists a universal constant \(C>0\), independent of \(g\), such that \[f_{1}(g)\leqslant C\cdot f_{2}(g).\] And we write \[f_{1}(g)\asymp f_{2}(g)\] if \(f_{1}(g)\prec f_{2}(g)\) and \(f_{2}(g)\prec f_{1}(g)\). ## 1. Introduction Let \(S_{g}\) be a closed surface of genus \(g\) (\(g\geqslant 2\)), and \(\mathcal{T}_{g}\) be the Teichmuller space of \(S_{g}\). The moduli space \(\mathcal{M}_{g}\) of \(S_{g}\) is the quotient space \(\mathcal{T}_{g}/\mathrm{Mod}(S_{g})\), where \(\mathrm{Mod}(S_{g})\) is the mapping class group of \(S_{g}\). For any non-trivial loop \(\alpha\subset S_{g}\) and \(X\in\mathcal{M}_{g}\), there exists a unique closed geodesic \([\alpha]\subset X\) representing \(\alpha\). Denote by \(\ell_{\alpha}(X)\) the length of \([\alpha]\) in \(X\). This quantity \(\ell_{\alpha}(\cdot)\) gives a real analytic function on \(\mathcal{M}_{g}\). The Weil-Petersson gradient \(\nabla\ell_{\alpha}(X)\) of \(\ell_{\alpha}(\cdot)\) evaluated at \(X\) is a harmonic Beltrami differential on \(X\). Gardiner [1] provided a precise formula for \(\nabla\ell_{\alpha}(X)\), which one may see in formula (2.5). Recall that the _systole_ \(\ell_{sys}(X)\) of \(X\) is the length of the shortest closed geodesics on \(X\). We prove **Theorem 1.1** (Uniform \(L^{p}\) bounds).: _For any \(p\in[1,\infty]\) and \(X\in\mathcal{M}_{g}\), we have_ 1. _for any_ \(\alpha\subset X\) _with_ \(\ell_{\alpha}(X)=\ell_{sys}(X)\)_,_ \[||\nabla\ell_{\alpha}(X)||_{p}\asymp\ell_{sys}(X)^{\frac{1}{p}}.\] 2. _For any simple loop_ \(\beta\subset X\) _with_ \(\ell_{\beta}(X)\leqslant L_{0}\)_, where_ \(L_{0}>0\) _is any given constant,_ \[||\nabla\ell_{\beta}(X)||_{p}\asymp\ell_{\beta}(X)^{\frac{1}{p}}.\] _Here \(||\nabla\ell_{*}(X)||_{p}:=(\int_{X}|\nabla\ell_{*}(X)|^{p}\cdot\mathrm{dArea})^{\frac{1}{p}}\)._ **Remark**.: If \(p=2\), in light of (2.7), which is due to Riera, it suffices to show the uniform upper bounds in Theorem 1.1. For this case, 1. Part (1) of Theorem 1.1 was first obtained in [11, Proposition 2], whose proof relies heavily on formula (2.6) of Riera.
In this paper, we give a different and more elementary proof that does not use Riera's formula. 2. For Part (2) when \(p=2\), the curve \(\beta\subset X\) is not required to be a systolic curve of \(X\). In this case Theorem 1.1 was first obtained by Wolpert [11, Lemma 3.16] using Riera's formula; very recently, Bridgeman and Bromberg [1] applied Riera's formula to obtain an upper bound given by an explicit elementary function of \(\ell_{\alpha}\) (see [1, Theorem 1.6]); our proof is different and does not use Riera's formula. In the subsequent subsections, we give several applications of Theorem 1.1 to the geometry of \(\mathcal{T}_{g}\), especially to its Weil-Petersson geometry. ### Application to Weil-Petersson curvatures The Weil-Petersson curvature of \(\mathcal{M}_{g}\) has been studied over the past several decades. One may see Section 2 for related references. In this paper we apply Theorem 1.1 to obtain certain new uniform and optimal bounds on Weil-Petersson curvatures. The main one is as follows. **Theorem 1.2**.: _For any \(X\in\mathcal{M}_{g}\), let \(\alpha\subset X\) be a simple closed geodesic satisfying_ 1. _either_ \(\ell_{\alpha}(X)=\ell_{sys}(X)\) _or_ 2. \(\ell_{\alpha}(X)\leqslant L_{0}\)_, where_ \(L_{0}>0\) _is any fixed constant;_ _then the Weil-Petersson holomorphic sectional curvature satisfies_ \[\mathrm{HolK}(\nabla\ell_{\alpha})(X)\asymp\frac{-1}{\ell_{sys}(X)}.\] **Remark**.: 1. For the behavior as \(\ell_{sys}(X)\to 0\), [20, Corollary 16] or [20, Theorem 19] tells us that \(\mathrm{HolK}(\nabla\ell_{\alpha})(X)=\frac{-3}{\pi\ell_{sys}(X)}+O(\ell_{sys}(X))\), where the dominant term \(\frac{-3}{\pi\ell_{sys}(X)}\) is explicit; however, the error term \(O(\ell_{sys}(X))\) may depend on the topology \(g\). The bound in Theorem 1.2 is uniform. 2. Very recently, jointly with Martin Bridgeman, we showed in [1] that for any \(X\in\mathcal{M}_{g}\) with \(\ell_{sys}(X)\leqslant 2\operatorname{arcsinh}^{-1}(1)\) and any \(\mu\neq 0\in T_{X}\mathcal{M}_{g}\), the Weil-Petersson Ricci curvature \(\mathrm{Ric}(\mu)\) satisfies (1.1) \[\mathrm{Ric}(\mu)\geqslant-\frac{4}{\ell_{sys}(X)}.\] Theorem 1.2 tells us that the above uniform lower bound in terms of \(\ell_{sys}(X)\) is optimal. Actually we have **Corollary 1.3**.: _For any \(X\in\mathcal{M}_{g}\) with \(\ell_{sys}(X)\leqslant 2\operatorname{arcsinh}^{-1}(1)\), the minimal Weil-Petersson sectional curvature \(K\) at \(X\) satisfies_ \[\min_{\text{real plane span}\{\mu_{1},\,\mu_{2}\}\subset T_{X}\mathcal{M}_{g}}K(\mu_{1},\mu_{2})\asymp\min_{\mu\in T_{X}\mathcal{M}_{g}}\mathrm{Ric}(\mu)\asymp-\frac{1}{\ell_{sys}(X)}.\] Proof.: Since the Weil-Petersson sectional curvature is negative [20] or [13], by (1.1) we have \[-\frac{1}{\ell_{sys}(X)}\prec\min_{\mu\in T_{X}\mathcal{M}_{g}}\mathrm{Ric}(\mu)<\min_{\text{real plane span}\{\mu_{1},\,\mu_{2}\}\subset T_{X}\mathcal{M}_{g}}K(\mu_{1},\mu_{2}).\] Thus, it suffices to show that \(\min_{\text{real plane span}\{\mu_{1},\,\mu_{2}\}\subset T_{X}\mathcal{M}_{g}}K(\mu_{1},\mu_{2})\prec-\frac{1}{\ell_{sys}(X)}\). This follows directly from Theorem 1.2 because \[\min_{\text{real plane span}\{\mu_{1},\,\mu_{2}\}\subset T_{X}\mathcal{M}_{g}}K(\mu_{1},\mu_{2}) \leqslant \min_{\alpha\subset X;\ \ell_{\alpha}(X)=\ell_{sys}(X)}\mathrm{HolK}(\nabla\ell_{\alpha})(X)\asymp\frac{-1}{\ell_{sys}(X)},\] which completes the proof. Combining [20] with Theorem 1.2, we obtain the following uniform upper bounds.
**Theorem 1.4**.: _For any \(X\in\mathcal{M}_{g}\),_ \[\min_{\mu\in T_{X}\mathcal{M}_{g}}\mathrm{HolK}(\mu)\prec-1<0.\] **Remarks**.: 1. If \(\ell_{sys}(X)\) is big enough, Theorem 1.4 was first obtained by Wolf and the author in [20], partially answering a question due to Maryam Mirzakhani: _for any fixed constant \(\epsilon>0\), do the Weil-Petersson curvatures restricted to the \(\epsilon\)-thick part of \(\mathcal{M}_{g}\) go to \(0\) as the genus \(g\to\infty\)?_ 2. It was shown in [23, Theorem 1.4] that the following limit of probabilities holds: \[\lim_{g\to\infty}\mathrm{Prob}\{X\in\mathcal{M}_{g};\min_{\mu\in T_{X}\mathcal{M}_{g}}\mathrm{HolK}(\mu)\prec-1<0\}=1.\] Theorem 1.4 generalizes these two results and gives a complete negative answer to Mirzakhani's question. Given any constant \(\varepsilon>0\), Huang [23] showed that, restricted to the \(\varepsilon\)-thick part of \(\mathcal{M}_{g}\), the Weil-Petersson sectional curvature is uniformly bounded from below by a constant depending only on \(\varepsilon\). Later, Teo [19] generalized Huang's result to the Weil-Petersson Ricci curvature. The following result says, roughly, that the uniform lower bounds of Huang and Teo are optimal. More precisely, as an application of Theorem 1.4 we prove **Corollary 1.5**.: _Given any constant \(\varepsilon>0\), for any \(X\in\mathcal{M}_{g}\) with \(\ell_{sys}(X)\geqslant\varepsilon\), the minimal Weil-Petersson sectional curvature \(K\) at \(X\) satisfies_ \[\min_{\text{real plane span}\{\mu_{1},\,\mu_{2}\}\subset T_{X}\mathcal{M}_{g}}K(\mu_{1},\mu_{2})\asymp\min_{\mu\in T_{X}\mathcal{M}_{g}}\mathrm{Ric}(\mu)\asymp-1.\] Proof.: Since \(\ell_{sys}(X)\geqslant\varepsilon\), by [19, Proposition 3.3] of Teo we have \[-1\prec\min_{\mu\in T_{X}\mathcal{M}_{g}}\mathrm{Ric}(\mu).\] Since the Weil-Petersson sectional curvature is negative [23] or [18], we have \[\min_{\mu\in T_{X}\mathcal{M}_{g}}\mathrm{Ric}(\mu)<\min_{\text{real plane span}\{\mu_{1},\,\mu_{2}\}\subset T_{X}\mathcal{M}_{g}}K(\mu_{1},\mu_{2}).\] Thus, it suffices to show that \(\min_{\text{real plane span}\{\mu_{1},\,\mu_{2}\}\subset T_{X}\mathcal{M}_{g}}K(\mu_{1},\mu_{2})\prec-1\). This follows directly from Theorem 1.4 because \[\min_{\text{real plane span}\{\mu_{1},\,\mu_{2}\}\subset T_{X}\mathcal{M}_{g}}K(\mu_{1},\mu_{2})\leqslant\min_{\mu\in T_{X}\mathcal{M}_{g}}\mathrm{HolK}(\mu)\prec-1.\] This completes the proof. It was shown in [23, Corollary 23] that there exists a constant \(c_{g}<0\), depending only on \(g\), such that any subspace \(S\subset T_{X}\mathcal{M}_{g}\) with \(\dim_{\mathbb{R}}S>3g-3\) contains a section with Weil-Petersson sectional curvature at most \(c_{g}\). In light of Theorem 1.4, it would be interesting to know whether this result of Wolpert remains true if one replaces \(c_{g}\) by a uniform constant \(c<0\). More precisely, **Question**.: Does there exist a uniform constant \(c<0\), independent of \(g\), such that any subspace \(S\subset T_{X}\mathcal{M}_{g}\) with \(\dim_{\mathbb{R}}S>3g-3\) contains a section with Weil-Petersson sectional curvature at most \(c\)? ### Application to Weil-Petersson distance Let \(\operatorname{Teich}(S_{g})\) be the Teichmuller space \(\mathcal{T}_{g}\) endowed with the Weil-Petersson metric. In [21] we studied the behavior of the systole function along Weil-Petersson geodesics and proved the following uniform Lipschitz property.
**Theorem**.: [21, Theorem 1.3] For all \(X,Y\in\operatorname{Teich}(S_{g})\), \[|\sqrt{\ell_{sys}(X)}-\sqrt{\ell_{sys}(Y)}|\prec\operatorname{dist}_{wp}(X,Y),\] where \(\operatorname{dist}_{wp}\) is the Weil-Petersson distance. One essential step in the proof of the theorem above is to establish Theorem 1.1 for \(p=2\). We outline the proof of the theorem above as follows; one may see [21] for more details. Outline of proof.: By the real analyticity of the geodesic length functions, of the Teichmuller space \(\mathcal{T}_{g}\) and of the Weil-Petersson metric, it was shown in [21, Lemma 4.5] that the systole function \(\ell_{sys}(\cdot):\operatorname{Teich}(S_{g})\to\mathbb{R}^{>0}\) is piecewise real analytic, in the sense that for any Weil-Petersson geodesic segment \(c:[0,r]\to\operatorname{Teich}(S_{g})\) parametrized by arc length, where \(r>0\) is any constant, the function \(\ell_{sys}(c(t)):[0,r]\to\mathbb{R}^{>0}\) is piecewise real analytic. By [21] one may let \(c:[0,s]\to\operatorname{Teich}(S_{g})\) be the unique Weil-Petersson geodesic joining \(X\) and \(Y\), parametrized by arc length, where \(s=\operatorname{dist}_{wp}(X,Y)>0\). Since the function \(\ell_{sys}(c(t)):[0,s]\to\mathbb{R}^{>0}\) is piecewise real analytic, one may apply Theorem 1.1 in the case \(p=2\) to obtain that for all \(t\in(0,s)\) except finitely many values, \[||\nabla\ell_{sys}^{\frac{1}{2}}(c(t))||_{2}\asymp 1. \tag{1.2}\] Then, by the Cauchy-Schwarz inequality, \[|\sqrt{\ell_{sys}(X)}-\sqrt{\ell_{sys}(Y)}| = |\int_{0}^{s}\langle\nabla\ell_{sys}^{\frac{1}{2}}(c(t)),c^{\prime}(t)\rangle_{wp}dt| \leq \int_{0}^{s}||\nabla\ell_{sys}^{\frac{1}{2}}(c(t))||_{2}dt \asymp s=\operatorname{dist}_{wp}(X,Y),\] where we apply (1.2) in the last step. This completes the proof. **Remarks**.: 1. Bridgeman and Bromberg in [1] applied their explicit bounds on \(||\nabla\ell_{sys}(X)||_{wp}\) to show that the function \[\sqrt{\ell_{sys}(\cdot)}:\operatorname{Teich}(S_{g})\to\mathbb{R}^{>0}\] is \(\frac{1}{2}\)-Lipschitz. 2. In [21], motivated by the work [19] of Rupflin and Topping, we used a purely differential-geometric method to show that \(\sqrt{\ell_{sys}(\cdot)}\) is \(0.5492\)-Lipschitz, without using any estimates on \(||\nabla(\ell_{sys}(\cdot))||_{wp}\). In [21] we applied the uniform Lipschitz theorem above to determine the growth rate of the inradius of \(\mathcal{M}_{g}\) (or \(\operatorname{Teich}(S_{g})\)) for large genus (and also for a large number of punctures). Recall that the _inradius_ \(\operatorname{InRad}(\mathcal{M}_{g})\) of \(\mathcal{M}_{g}\) is defined as \[\operatorname{InRad}(\mathcal{M}_{g}):=\sup_{X\in\mathcal{M}_{g}}\operatorname{dist}_{wp}(X,\partial\mathcal{M}_{g}),\] where \(\partial\mathcal{M}_{g}\) is the Weil-Petersson boundary of \(\mathcal{M}_{g}\), which is also the boundary of the Deligne-Mumford compactification of \(\mathcal{M}_{g}\) [16]. It is known by Wolpert [21, Section 4] that for any \(X\in\mathcal{M}_{g}\), \(\operatorname{dist}_{wp}(X,\mathcal{M}_{g}^{\alpha})\leqslant\sqrt{2\pi\cdot\ell_{\alpha}(X)}\), where \(\mathcal{M}_{g}^{\alpha}\) is the stratum of \(\mathcal{M}_{g}\) whose pinching curve is \(\alpha\). If we choose \(\alpha\subset X\) with \(\ell_{\alpha}(X)=\ell_{sys}(X)\), then \(\operatorname{dist}_{wp}(X,\mathcal{M}_{g}^{\alpha})\leqslant\sqrt{2\pi\ell_{sys}(X)}\). It is well-known that \(\ell_{sys}(X)\prec\ln{(g)}\). So we have \(\operatorname{InRad}(\mathcal{M}_{g})\prec\sqrt{\ln{(g)}}\).
For the other direction, by Buser-Sarnak [1] one may choose a surface \(\mathcal{X}_{g}\in\mathcal{M}_{g}\) such that \(\ell_{sys}(\mathcal{X}_{g})\asymp\ln{(g)}\). The uniform Lipschitz theorem above then implies that \(\operatorname{dist}_{wp}(\mathcal{X}_{g},\partial\mathcal{M}_{g})\succ\sqrt{\ln{(g)}}\), which in particular gives \(\operatorname{InRad}(\mathcal{M}_{g})\succ\sqrt{\ln{(g)}}\). Thus, we have **Theorem**.: [21, Theorem 1.1] For all \(g\geqslant 2\), \[\operatorname{InRad}(\mathcal{M}_{g})\asymp\sqrt{\ln{(g)}}.\] **Remark**.: Bridgeman and Bromberg in [1] showed that \[\lim_{g\to\infty}\frac{\operatorname{InRad}(\mathcal{M}_{g})}{\sqrt{\max_{X\in\mathcal{M}_{g}}\ell_{sys}(X)}}=\sqrt{2\pi}.\] One may also see [21] for an alternative proof. **Remark**.: Cavendish-Parlier [13] showed that the diameter \(\operatorname{diam}(\mathcal{M}_{g})\) of \(\mathcal{M}_{g}\) satisfies \(\sqrt{g}\prec\operatorname{diam}(\mathcal{M}_{g})\prec\sqrt{g}\cdot\ln{g}\), where the upper bound refines Brock's quasi-isometry of \(\operatorname{Teich}(S_{g})\) to the pants graph [1]. As far as we know, it is still an open problem whether \(\sqrt{g}\cdot\ln{g}\) is the correct growth rate of \(\operatorname{diam}(\mathcal{M}_{g})\) as \(g\to\infty\). ### Application to the \(L^{p}\) (\(1<p\leqslant\infty\)) metric on \(\mathcal{M}_{g}\) The Weil-Petersson metric is the \(L^{2}\) metric on \(\mathcal{M}_{g}\) and the Teichmuller metric is the \(L^{1}\) metric on \(\mathcal{M}_{g}\). Similarly, for any \(p\in[1,\infty]\) the \(L^{p}\) metric on \(\mathcal{M}_{g}\) can be defined; one may see Section 7 for precise definitions. We denote by \((\mathcal{M}_{g},||\cdot||_{L^{p}})\) the moduli space \(\mathcal{M}_{g}\) endowed with the \(L^{p}\) metric, and let \(\operatorname{dist}_{p}(\cdot,\cdot)\) denote the \(L^{p}\)-distance on \(\mathcal{M}_{g}\). As another application of Theorem 1.1 we show the following. **Theorem 1.6**.: _Let \(X\in\mathcal{M}_{g}\) and let \(\alpha\subset X\) be a non-trivial simple loop with \(\ell_{\alpha}(X)\leqslant L_{0}\), where \(L_{0}>0\) is any given constant. Then for any \(p\in(1,\infty]\),_ \[\operatorname{dist}_{p}(X,\mathcal{M}_{g}^{\alpha})\prec(\ell_{\alpha}(X))^{1-\frac{1}{p}},\] _where \(\mathcal{M}_{g}^{\alpha}\) is the stratum of \(\mathcal{M}_{g}\) whose pinching curve is \(\alpha\). In particular, the space \((\mathcal{M}_{g},||\cdot||_{L^{p}})\) \((1<p\leqslant\infty)\) is incomplete._ **Remarks**.: 1. As mentioned above, for \(p=2\) Wolpert [11, Section 4] showed that for any \(X\in\mathcal{M}_{g}\), \[\operatorname{dist}_{wp}(X,\mathcal{M}_{g}^{\alpha})\leqslant\sqrt{2\pi\cdot\ell_{\alpha}(X)}.\] Here \(\ell_{\alpha}(X)\) is not required to be uniformly bounded. 2. Theorem 7.2 tells us that there exists a constant \(K(L_{0},p)>0\), depending on \(L_{0}\) and \(p\), such that \(\operatorname{dist}_{p}(X,\mathcal{M}_{g}^{\alpha})\leqslant K(L_{0},p)\cdot(\ell_{\alpha}(X))^{1-\frac{1}{p}}\) for \(X\in\mathcal{M}_{g}\) with \(\ell_{\alpha}(X)\leqslant L_{0}\). One may easily check in the proof that, as \(p\to 1\), the constant \(K(L_{0},p)\) goes to \(+\infty\). This also follows from the completeness of the Teichmuller metric, which is the same as the \(L^{1}\) metric. 3. There are interesting connections between the \(L^{p}\) metric on \(\mathcal{M}_{g}\) and renormalized volumes of quasi-Fuchsian manifolds.
One may see [http://math.harvard.edu/~ctm/sem/2014.html](http://math.harvard.edu/~ctm/sem/2014.html) for related discussions. We are grateful to Curt McMullen for bringing these notes to our attention. ### Plan of the paper Section 2 provides the necessary background and basic properties of two-dimensional hyperbolic geometry and the Weil-Petersson metric. We show that \(||\nabla\ell_{sys}(X)||_{\infty}\) is uniformly bounded from above in Sections 3 and 4. Theorem 1.1 is established in Section 5. In Section 6 we discuss several applications of Theorem 1.1, including the proofs of Theorems 1.2 and 1.4. In the last section, Section 7, we prove Theorem 1.6. ### Acknowledgements The author would like to express his appreciation to Scott Wolpert for many helpful and invaluable conversations on this project. He wants to thank Michael Wolf for a collaboration on a related project, from which some of the ideas in this paper emerged. He also would like to thank Jeffrey Brock, Martin Bridgeman, Shing-Tung Yau and Xuwen Zhu for their interest. This work is supported by the NSFC grant No. 12171263 and a grant from Tsinghua University. ## 2. Notations and Preliminaries In this section we set up the notation and provide the necessary background on two-dimensional hyperbolic geometry, Teichmuller theory and the Weil-Petersson metric. ### Hyperbolic upper half plane Let \(\mathbb{H}\) be the upper half plane endowed with the hyperbolic metric \(\rho(z)|dz|^{2}\), where \[\rho(z)=\frac{1}{(\operatorname{Im}(z))^{2}}.\] A geodesic line in \(\mathbb{H}\) is either a vertical line or an upper semi-circle centered at a point on the real axis. For \(z=(r,\theta)\in\mathbb{H}\) given in polar coordinates, where \(\theta\in(0,\pi)\), the hyperbolic distance between \(z\) and the imaginary axis \(\operatorname{i}\mathbb{R}^{+}\) is \[\operatorname{dist}_{\mathbb{H}}(z,\operatorname{i}\mathbb{R}^{+})=\ln|\csc\theta+|\cot\theta||. \tag{2.1}\] Thus, \[e^{-2\operatorname{dist}_{\mathbb{H}}(z,\operatorname{i}\mathbb{R}^{+})}\leqslant\sin^{2}\theta=\frac{\operatorname{Im}^{2}(z)}{|z|^{2}}\leqslant 4e^{-2\operatorname{dist}_{\mathbb{H}}(z,\operatorname{i}\mathbb{R}^{+})}. \tag{2.2}\] It is known that any eigenfunction with positive eigenvalue of the hyperbolic Laplacian of \(\mathbb{H}\) satisfies the mean value property [10, Corollary 1.3]. For \(z=(r,\theta)\in\mathbb{H}\) in polar coordinates, the function \[u(\theta)=1-\theta\cot\theta\] is a positive \(2\)-eigenfunction. Thus, \(u\) satisfies the mean value property. It is not hard to see that \(\min\{u(\theta),u(\pi-\theta)\}\) also satisfies the mean value property. Since \(\min\{u(\theta),u(\pi-\theta)\}\) is comparable to \(\sin^{2}\theta\), inequality (2.2) shows that the function \(e^{-2\operatorname{dist}_{\mathbb{H}}(z,\operatorname{i}\mathbb{R}^{+})}\) satisfies the mean value property in \(\mathbb{H}\). The following lemma is the simplest version of [12, Lemma 2.4].
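Formulas (2.1) and (2.2) are elementary but easy to get wrong by a constant factor; the short numerical sketch below (our own check, not part of the paper) confirms the two-sided bound \(e^{-2d}\leqslant\sin^{2}\theta\leqslant 4e^{-2d}\) with \(d=\operatorname{dist}_{\mathbb{H}}(z,\operatorname{i}\mathbb{R}^{+})\).

```python
# Numerical check of (2.1)-(2.2): with d = ln(csc(t) + |cot(t)|),
# one has exp(-2d) <= sin(t)^2 <= 4 exp(-2d) for all t in (0, pi).
import math

for t in [0.05, 0.4, math.pi / 2, 2.2, 3.0]:
    d = math.log(1 / math.sin(t) + abs(math.cos(t) / math.sin(t)))
    lower, upper = math.exp(-2 * d), 4 * math.exp(-2 * d)
    assert lower - 1e-12 <= math.sin(t) ** 2 <= upper + 1e-12
    print(f"theta={t:.2f}: e^(-2d)={lower:.4f} <= sin^2={math.sin(t)**2:.4f} <= 4e^(-2d)={upper:.4f}")
# The lower bound is attained at theta = pi/2, the upper one as theta -> 0 or pi.
```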
**Lemma 2.1**.: _For any \(r>0\) and \(p\in\mathbb{H}\), there exists a positive constant \(c(r)\), depending only on \(r\), such that_ \[e^{-2\operatorname{dist}_{\mathbb{H}}(p,\operatorname{i}\mathbb{R}^{+})}\leqslant c(r)\int_{B_{\mathbb{H}}(p;r)}e^{-2\operatorname{dist}_{\mathbb{H}}(z,\operatorname{i}\mathbb{R}^{+})}\operatorname{dArea},\] _where \(B_{\mathbb{H}}(p;r)=\{z\in\mathbb{H};\ \operatorname{dist}_{\mathbb{H}}(p,z)<r\}\) is the hyperbolic geodesic ball of radius \(r\) centered at \(p\) and \(\operatorname{dArea}\) is the hyperbolic area form._ ### Uniform collars Let \(X\) be a closed hyperbolic surface of genus \(g\) (\(g\geqslant 2\)), and let \(\alpha\subset X\) be an essential simple loop. There always exists a unique closed geodesic, still denoted by \(\alpha\), representing this loop. We denote by \(\ell_{\alpha}(X)\) the length of \(\alpha\) in \(X\). If the \(\delta\)-neighborhood \[\mathcal{C}_{\delta}(\alpha):=\{x\in X;\ \operatorname{dist}(x,\alpha)<\delta\}\] is isometric to the cylinder \((\rho,t)\in(-\delta,\delta)\times\mathbb{S}^{1}\), where \(\mathbb{S}^{1}=\mathbb{R}/\mathbb{Z}\), endowed with the standard hyperbolic metric \[ds^{2}=d\rho^{2}+\ell_{\alpha}(X)^{2}\cosh^{2}\rho\,dt^{2},\] the set \(\mathcal{C}_{\delta}(\alpha)\) is called a _\(\delta\)-collar_ of \(\alpha\) in \(X\). Let \(w(\alpha)\) be the supremum of all such \(\delta>0\). Then for all \(0<\delta<w(\alpha)\), the geodesic arcs of length \(\delta\) emanating perpendicularly from \(\alpha\) are pairwise disjoint. We call \(\mathcal{C}_{w(\alpha)}(\alpha)\) the _maximal collar_ of \(\alpha\), and \(w(\alpha)\) the _width_ of \(\alpha\). First we recall the following classical Collar Lemma; one may refer to [1, Theorem 4.1.1] or [1, Theorem 3.8.3] for more details. **Lemma 2.2** (Collar lemma-1).: _For any essential simple closed geodesic \(\alpha\subset X\), the width \(w(\alpha)\) of \(\alpha\) satisfies_ \[w(\alpha)\geqslant\frac{1}{2}\ln(\frac{\cosh\frac{\ell_{\alpha}(X)}{2}+1}{\cosh\frac{\ell_{\alpha}(X)}{2}-1}).\] As the length \(\ell_{\alpha}(X)\) of the central closed geodesic \(\alpha\) goes to \(0\), the width \(w(\alpha)\) tends to infinity. Set \(f(\ell_{\alpha}(X))=\frac{1}{2}\ln(\frac{\cosh\frac{\ell_{\alpha}(X)}{2}+1}{\cosh\frac{\ell_{\alpha}(X)}{2}-1})\). The quantity \(f(\ell_{\alpha}(X))\) tends to \(0\) as \(\ell_{\alpha}(X)\) goes to infinity. Another part of the classical Collar Lemma [10, Theorem 4.1.1] or [10, Theorem 3.8.3] says that \(\mathcal{C}_{f(\ell_{\alpha_{1}}(X))}(\alpha_{1})\cap\mathcal{C}_{f(\ell_{\alpha_{2}}(X))}(\alpha_{2})=\emptyset\) whenever \(\alpha_{1}\cap\alpha_{2}=\emptyset\). In this paper we will not use this disjointness property. The systole of \(X\) is the length of a shortest closed geodesic in \(X\). Curves that realize the systole are often referred to as _systolic curves_. We denote by \(\ell_{sys}(X)\) the systole of \(X\). It is known by Buser-Sarnak [10] that for each \(g\geqslant 2\) there exist hyperbolic surfaces \(X\) of genus \(g\) with \(\ell_{sys}(X)\asymp\ln\left(g\right)\), which in particular shows that \(\ell_{\alpha}(X)\) can be arbitrarily large for large enough \(g\). If \(\alpha\) is a systolic curve, the following lemma may be well-known to experts. Since we cannot find an explicit reference, we provide a proof here for completeness.
**Lemma 2.3** (Collar lemma-2).: _For any \(\alpha\subset X\) with \(\ell_{\alpha}(X)=\ell_{sys}(X)\), the width \(w(\alpha)\) of \(\alpha\) satisfies_ \[w(\alpha)\geqslant\frac{\ell_{\alpha}(X)}{4}.\] Proof.: Suppose for contradiction that \(w(\alpha)<\frac{\ell_{\alpha}(X)}{4}\). Then there exist two geodesic arcs \(c^{\prime}\) and \(c^{\prime\prime}\) of length \(w(\alpha)\) emanating perpendicularly from \(\alpha\) which share the same endpoint \(p_{1}\in X\). The two starting points of \(c^{\prime}\) and \(c^{\prime\prime}\) divide \(\alpha\) into two arcs; we choose \(\alpha_{1}\) to be the shorter one. In particular, we have \[\operatorname{Length}(\alpha_{1})\leqslant\frac{\ell_{\alpha}(X)}{2}.\] For the geodesic triangle \(\Delta_{1}:=c^{\prime}\cup c^{\prime\prime}\cup\alpha_{1}\), we have \[\operatorname{Length}(\Delta_{1})=2w(\alpha)+\operatorname{Length}(\alpha_{1})<2\cdot\frac{\ell_{\alpha}(X)}{4}+\frac{\ell_{\alpha}(X)}{2}=\ell_{\alpha}(X)=\ell_{sys}(X),\] which implies that \(\Delta_{1}\) bounds a disk, since any essential closed curve on \(X\) has length at least \(\ell_{sys}(X)\). On the other hand, the geodesic triangle \(\Delta_{1}=c^{\prime}\cup c^{\prime\prime}\cup\alpha_{1}\) has two interior right angles; in particular, its total interior angle is no less than \(\pi\), which contradicts the Gauss-Bonnet theorem for a hyperbolic triangle. The two lemmas above imply **Proposition 2.4** (Uniform Collar).: _For any \(\alpha\subset X\) with \(\ell_{\alpha}(X)=\ell_{sys}(X)\), the width \(w(\alpha)\) of \(\alpha\) satisfies_ \[w(\alpha)\geqslant\max\,\{\frac{1}{2}\ln\frac{\cosh\frac{\ell_{\alpha}(X)}{2}+1}{\cosh\frac{\ell_{\alpha}(X)}{2}-1},\frac{\ell_{\alpha}(X)}{4}\}. \tag{2.3}\] _Moreover, we have_ \[w(\alpha)>\frac{1}{2}.\] Proof.: It follows from Lemmas 2.2 and 2.3 that \[w(\alpha)\geqslant\max\,\{\frac{1}{2}\ln\frac{\cosh\frac{\ell_{\alpha}(X)}{2}+1}{\cosh\frac{\ell_{\alpha}(X)}{2}-1},\frac{\ell_{\alpha}(X)}{4}\}.\] The right-hand side is minimized, over \(\ell_{\alpha}(X)\), exactly when the two bounds agree, i.e., when \(\frac{1}{2}\ln\frac{\cosh\frac{\ell_{\alpha}(X)}{2}+1}{\cosh\frac{\ell_{\alpha}(X)}{2}-1}=\frac{\ell_{\alpha}(X)}{4}\). If we set \(t=e^{\frac{\ell_{\alpha}(X)}{4}}\), this is equivalent to solving the equation \(t^{3}-t^{2}-t-1=0\). A direct computation shows that the unique real solution is \(t\sim 1.8393\). So we have \[\max\,\{\frac{1}{2}\ln\frac{\cosh\frac{\ell_{\alpha}(X)}{2}+1}{\cosh\frac{\ell_{\alpha}(X)}{2}-1},\frac{\ell_{\alpha}(X)}{4}\}\geqslant\ln\,(1.8393)\sim 0.6093>\frac{1}{2}.\] The proof is complete. ### Teichmuller space Let \(S_{g}\) be a closed surface of genus \(g\) (\(g\geqslant 2\)). Let \(M_{-1}\) be the space of Riemannian metrics on \(S_{g}\) of constant curvature \(-1\), and let \(X=(S_{g},\sigma(z)|dz|^{2})\in M_{-1}\). The group \(\operatorname{Diff}_{+}(S_{g})\) of orientation-preserving diffeomorphisms of \(S_{g}\) acts by pull-back on \(M_{-1}\). In particular this also holds for the normal subgroup \(\operatorname{Diff}_{0}(S_{g})\) of \(\operatorname{Diff}_{+}(S_{g})\), the group of diffeomorphisms isotopic to the identity. The group \(\operatorname{Mod}(S_{g}):=\operatorname{Diff}_{+}(S_{g})/\operatorname{Diff}_{0}(S_{g})\) is called the mapping class group of \(S_{g}\).
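The numerical claim in the proof of Proposition 2.4 is easy to verify; the sketch below (our own check) finds the real root of \(t^{3}-t^{2}-t-1=0\) by bisection and evaluates the two competing lower bounds for \(w(\alpha)\) at the crossing point.

```python
# Verify Proposition 2.4: the two lower bounds for the collar width w(alpha),
#   f(l) = 0.5*ln((cosh(l/2)+1)/(cosh(l/2)-1))   and   l/4,
# cross where t = exp(l/4) solves t^3 - t^2 - t - 1 = 0, and their common
# value there exceeds 1/2.
import math

def f(l):  # collar-lemma bound of Lemma 2.2
    c = math.cosh(l / 2)
    return 0.5 * math.log((c + 1) / (c - 1))

# Bisection for the unique real root of t^3 - t^2 - t - 1 (it lies in (1, 2)):
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mid**3 - mid**2 - mid - 1 < 0 else (lo, mid)
t = (lo + hi) / 2
l_star = 4 * math.log(t)

print(f"t = {t:.4f}, l* = {l_star:.4f}")
print(f"f(l*) = {f(l_star):.4f}, l*/4 = {l_star/4:.4f}")  # both ~0.6093 > 1/2
```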
The Teichmuller space \(\mathcal{T}_{g}\) of \(S_{g}\) is defined as \[\mathcal{T}_{g}:=M_{-1}/\operatorname{Diff}_{0}(S_{g}).\] The moduli space \(\mathcal{M}_{g}\) of \(S_{g}\) is defined as \[\mathcal{M}_{g}:=\mathcal{T}_{g}/\mathrm{Mod}(S_{g}).\] The Teichmuller space \(\mathcal{T}_{g}\) is a real analytic manifold. Let \(\alpha\subset S_{g}\) be an essential simple closed curve. Then for any \(X\in\mathcal{T}_{g}\) there exists a unique closed geodesic, still denoted by \(\alpha\), in \(X\), which represents \(\alpha\) in the fundamental group of \(S_{g}\). The geodesic length function \(\ell_{\alpha}(\cdot)\) thus defines a function on \(\mathcal{T}_{g}\). It is well-known that this function \(\ell_{\alpha}(\cdot)\) is real-analytic on \(\mathcal{T}_{g}\). For more details on Teichmuller theory, one may refer to [10, 11, 12]. Recall that the systole \(\ell_{sys}(X)\) of \(X\in\mathcal{T}_{g}\) is the length of the shortest closed geodesics on \(X\). It defines a continuous function \(\ell_{sys}(\cdot):\mathcal{T}_{g}\to\mathbb{R}^{+}\), called the systole function on \(\mathcal{T}_{g}\). In general, the systole function is continuous but not smooth, due to corners where multiple essential simple closed geodesics may realize the systole simultaneously. This function is very useful in Teichmuller theory; one may refer to [1, 1, 13] for more details. ### Weil-Petersson metric Let \(X=(S_{g},\sigma(z)|dz|^{2})\in\mathcal{T}_{g}\). The tangent space \(T_{X}\mathcal{T}_{g}\) at \(X\) is identified with the space of _harmonic Beltrami differentials_ on \(X\), i.e. forms on \(X\) expressible as \(\mu=\frac{\overline{\psi}}{\sigma}\), where \(\psi\in H^{0}(X,K^{2})\) is a holomorphic quadratic differential on \(X\). Let \(z=x+\mathbf{i}y\) and let \(\mathrm{dArea}=\sigma(z)dxdy\) be the volume form. The _Weil-Petersson metric_ is the Hermitian metric on \(\mathcal{T}_{g}\) arising from the _Petersson scalar product_ \[\left\langle\varphi,\psi\right\rangle_{wp}=\int_{X}\frac{\varphi\cdot\overline{\psi}}{\sigma^{2}}\,\mathrm{dArea}\] via duality. We will be concerned primarily with its Riemannian part \(ds^{2}_{\mathrm{WP}}\). Throughout this paper we denote the Teichmuller space endowed with the Weil-Petersson metric by \(\mathrm{Teich}(S_{g})\). By definition it is easy to see that the mapping class group \(\mathrm{Mod}(S_{g})\) acts on \(\mathrm{Teich}(S_{g})\) by isometries. Thus, the Weil-Petersson metric descends to a metric, also called the Weil-Petersson metric, on the moduli space \(\mathcal{M}_{g}\). Throughout this paper we also denote by \(\mathcal{M}_{g}\) the moduli space endowed with the Weil-Petersson metric. ### Fenchel-Nielsen deformation For any essential simple closed curve \(\alpha\subset S_{g}\), the geodesic length function \(\ell_{\alpha}(\cdot)\) is real-analytic on \(\mathcal{T}_{g}\). Let \(X=(S_{g},\sigma(z)|dz|^{2})\in\mathcal{M}_{g}\) be a hyperbolic surface and let \(\Gamma\) be its associated Fuchsian group. Recall that we also denote by \(\alpha\) its unique closed geodesic representative in \(X\). One may denote by \(A:z\to e^{\ell_{\alpha}(X)}\cdot z\) the deck transformation on the upper half plane \(\mathbb{H}\) corresponding to the simple closed geodesic \(\alpha\subset X\).
**Definition 2.5**.: Associated to the geodesic \(\alpha\), we define the following holomorphic quadratic differential on \(X\): \[\Theta_{\alpha}(z):=\sum_{E\in\left\langle A\right\rangle\setminus\Gamma}\frac{E^{\prime}(z)^{2}}{E(z)^{2}}dz^{2}, \tag{2.4}\] where \(\left\langle A\right\rangle\) is the cyclic group generated by \(A\). Let \(\nabla\ell_{\alpha}(X)\in T_{X}\mathcal{M}_{g}\) be the Weil-Petersson gradient of the geodesic length function \(\ell_{\alpha}(\cdot)\) at \(X\). By [20] it is known that \(\nabla\ell_{\alpha}=-2\mathbf{i}\cdot t_{\alpha}\), where \(t_{\alpha}\) is the infinitesimal Fenchel-Nielsen right twist deformation along \(\alpha\). Moreover, \[\nabla\ell_{\alpha}(X)(z)=\frac{2}{\pi}\frac{\overline{\Theta}_{\alpha}(z)}{\rho(z)|dz|^{2}}=\frac{2}{\pi}\sum_{E\in\left\langle A\right\rangle\setminus\Gamma}\frac{\overline{E}^{\prime}(z)^{2}}{\overline{E}(z)^{2}\rho(z)}\frac{d\overline{z}}{dz}\in T_{X}\mathcal{M}_{g}, \tag{2.5}\] where \(\rho(z)|dz|^{2}=\frac{|dz|^{2}}{(\mathrm{Im}(z))^{2}}\) is the hyperbolic metric on the upper half plane. A special formula of Riera [19] says that \[\left\langle\,\nabla\ell_{\alpha},\nabla\ell_{\alpha}\right\rangle_{wp}(X)=\frac{2}{\pi}\left(\ell_{\alpha}(X)+\sum_{E\in\left\{\left\langle A\right\rangle\setminus\Gamma/\left\langle A\right\rangle-id\right\}}(u\ln\frac{u+1}{u-1}-2)\right), \tag{2.6}\] where \(u=\cosh\left(\operatorname{dist}_{\mathbb{H}}(\widetilde{\alpha},E\circ\widetilde{\alpha})\right)\) and the double coset of the identity element is omitted from the sum. In particular, one has \[\langle\,\nabla\ell_{\alpha},\nabla\ell_{\alpha}\rangle_{wp}(X)>\frac{2}{\pi}\cdot\ell_{\alpha}(X). \tag{2.7}\] **Definition 2.6**.: For any \(p\in[1,\infty]\), \(X\in\mathcal{M}_{g}\) and \(\mu\in T_{X}\mathcal{M}_{g}\), we define the _\(L^{p}\)-norm_ \(||\mu||_{p}\) of \(\mu\) to be \[||\mu||_{p}:=\left(\int_{X}|\mu|^{p}\cdot\mathrm{dArea}\right)^{\frac{1}{p}}. \tag{2.8}\] If \(p=2\), this is the standard Weil-Petersson norm. If \(p=\infty\), \(||\mu||_{\infty}=\max_{z\in X}|\mu(z)|\). One main goal of this paper is to study \(||\nabla\ell_{\alpha}(X)||_{p}\) when \(\alpha\subset X\) is a systolic curve. ### Weil-Petersson curvatures The curvature tensor of the Weil-Petersson metric on \(\mathcal{M}_{g}\) is given as follows. Let \(\mu_{i},\mu_{j}\) be two elements of the tangent space \(T_{X}\mathcal{M}_{g}\) at \(X\), so that the metric tensor may be written in local coordinates as \[g_{i\overline{j}}=\int_{X}\mu_{i}\cdot\overline{\mu_{j}}\,\mathrm{dArea}\,.\] For the inverse of \((g_{i\overline{j}})\) we use the convention \[g^{i\overline{j}}g_{k\overline{j}}=\delta_{ik}.\] Then the curvature tensor is given by \[R_{i\overline{j}k\overline{l}}=\frac{\partial^{2}}{\partial t^{k}\partial\overline{t}^{l}}g_{i\overline{j}}-g^{s\overline{t}}\frac{\partial}{\partial t^{k}}g_{i\overline{t}}\frac{\partial}{\partial t^{l}}g_{s\overline{j}}.\] The following curvature formula was established in [10, 11]; it has been applied to study various curvature properties of the Weil-Petersson metric. Wolpert [11] and Tromba [10] independently showed that \(\mathcal{M}_{g}\) has negative sectional curvature. In [12] Schumacher showed that \(\operatorname{Teich}(S_{g})\) has strongly negative curvature in the sense of Siu. Liu-Sun-Yau [10] showed that \(\mathcal{M}_{g}\) has dual Nakano negative curvature, which says that the complex curvature operator on the dual tangent bundle is positive in a suitable sense.
It was shown in [11] that \(\mathcal{M}_{g}\) has a non-positive definite Riemannian curvature operator. One may also see [14, 15, 16, 17, 18, 19, 20] for other aspects of the Weil-Petersson curvatures of \(\mathcal{M}_{g}\). Set \(D=-2(\Delta-2)^{-1}\), where \(\Delta\) is the Beltrami-Laplace operator on \(X=(S_{g},\sigma(z)|dz|^{2})\in\mathcal{M}_{g}\). The operator \(D\) is positive and self-adjoint. **Theorem 2.7** (Tromba, Wolpert).: _The curvature tensor satisfies_ \[R_{i\overline{j}k\overline{l}}=\int_{X}D(\mu_{i}\mu_{\overline{j}})\cdot(\mu_{k}\mu_{\overline{l}})\,\mathrm{dArea}+\int_{X}D(\mu_{i}\mu_{\overline{l}})\cdot(\mu_{k}\mu_{\overline{j}})\,\mathrm{dArea}\,.\] #### 2.6.1. Weil-Petersson holomorphic sectional curvatures Recall that a holomorphic sectional curvature is a sectional curvature along a holomorphic line. Let \(\mu\in T_{X}\mathcal{M}_{g}\). Then Theorem 2.7 tells us that the Weil-Petersson holomorphic sectional curvature \(\operatorname{HolK}(\mu)\) along the holomorphic line spanned by \(\mu\) is \[\operatorname{HolK}(\mu)=\frac{-2\cdot\int_{X}D(|\mu|^{2})\cdot(|\mu|^{2})\operatorname{dArea}}{||\mu||_{\mathrm{WP}}^{4}}.\] Assume that \(||\mu||_{\mathrm{WP}}=1\). By the Cauchy-Schwarz inequality and an estimate of Wolf in [20] we know that [20, Proposition 2.7] \[-2\int_{X}|\mu|^{4}\operatorname{dArea}\leqslant\operatorname{HolK}(\mu)\leqslant-\frac{2}{3}\int_{X}|\mu|^{4}\operatorname{dArea}. \tag{2.9}\] #### 2.6.2. Weil-Petersson sectional curvatures Let \(\mu_{i},\mu_{j}\in T_{X}\mathcal{M}_{g}\) be two orthogonal tangent vectors with \(||\mu_{i}||_{\mathrm{WP}}=||\mu_{j}||_{\mathrm{WP}}=1\). Then the sectional curvature \(K(\mu_{i},\mu_{j})\) of the plane spanned by the real vectors corresponding to \(\mu_{i}\) and \(\mu_{j}\) is [20] \[K(\mu_{i},\mu_{j})=\operatorname{Re}\int_{X}D(\mu_{i}\mu_{\overline{j}})\mu_{i}\mu_{\overline{j}}\operatorname{dArea}-\frac{1}{2}\int_{X}D(\mu_{i}\mu_{\overline{j}})\mu_{\overline{i}}\mu_{j}\operatorname{dArea}-\frac{1}{2}\int_{X}D(|\mu_{i}|^{2})|\mu_{j}|^{2}\operatorname{dArea}.\] Applying [20, Lemma 4.3] and the Cauchy-Schwarz inequality, one has \[\int_{X}D(|\mu_{i}||\mu_{j}|)|\mu_{i}||\mu_{j}|\operatorname{dArea} \leqslant \int_{X}D(|\mu_{i}|^{2})^{\frac{1}{2}}D(|\mu_{j}|^{2})^{\frac{1}{2}}|\mu_{i}||\mu_{j}|\operatorname{dArea} \leqslant (\int_{X}D(|\mu_{i}|^{2})|\mu_{j}|^{2}\operatorname{dArea})^{\frac{1}{2}}\times(\int_{X}D(|\mu_{j}|^{2})|\mu_{i}|^{2}\operatorname{dArea})^{\frac{1}{2}} = \int_{X}D(|\mu_{i}|^{2})|\mu_{j}|^{2}\operatorname{dArea},\] where in the last equality we use the fact that the operator \(D\) is self-adjoint. It is clear that \[|\int_{X}D(\mu_{i}\mu_{\overline{j}})\mu_{i}\mu_{\overline{j}}\operatorname{dArea}|\leqslant\int_{X}D(|\mu_{i}||\mu_{j}|)|\mu_{i}||\mu_{j}|\operatorname{dArea}\] and \[|\int_{X}D(\mu_{i}\mu_{\overline{j}})\mu_{\overline{i}}\mu_{j}\operatorname{dArea}|\leqslant\int_{X}D(|\mu_{i}||\mu_{j}|)|\mu_{i}||\mu_{j}|\operatorname{dArea}.\] Then one obtains the following bound: \[K(\mu_{i},\mu_{j})\geqslant-2\int_{X}D(|\mu_{i}|^{2})|\mu_{j}|^{2}\operatorname{dArea}. \tag{2.10}\] #### 2.6.3. Weil-Petersson Ricci curvatures Let \(\{\mu_{i}\}_{i=1}^{3g-3}\) be a holomorphic orthonormal basis of \(T_{X}\mathcal{M}_{g}\).
Then the Ricci curvature \(\operatorname{Ric}(\mu_{i})\) of \(\mathcal{M}_{g}\) at \(X\) in the direction \(\mu_{i}\) is given by
\[\operatorname{Ric}(\mu_{i})=-\sum_{j=1}^{3g-3}R_{i\overline{j}j\overline{i}}=-\sum_{j=1}^{3g-3}\left(\int_{X}D(\mu_{i}\mu_{\overline{j}})\cdot(\mu_{j}\mu_{\overline{i}})\operatorname{dArea}+\int_{X}D(|\mu_{i}|^{2})\cdot(|\mu_{j}|^{2})\operatorname{dArea}\right).\]
Since \(\int_{X}D(f)\cdot\overline{f}\operatorname{dArea}\geqslant 0\) for any function \(f\) on \(X\), by applying the argument in the proof of (2.10) one may have
\[-2\leqslant\frac{\operatorname{Ric}(\mu_{i})}{\sum_{j=1}^{3g-3}\int_{X}D(|\mu_{i}|^{2})\cdot(|\mu_{j}|^{2})\operatorname{dArea}}\leqslant-1. \tag{2.11}\]

## 3. An upper bound for \(\nabla\ell_{\alpha}(X)\)

In this section we provide an elementary upper bound \(H(z)\) for \(|\nabla\ell_{\alpha}(X)(z)|\) and detect the region where \(H(z)\) attains its maximal value. Let \(\alpha\subset X\in\mathcal{M}_{g}\) be an essential non-trivial loop. Up to conjugacy, one may assume that the closed geodesic representing \(\alpha\), still denoted by \(\alpha\), corresponds to the deck transformation \(A:z\to e^{\ell_{\alpha}(X)}\cdot z\) with axis \(\widetilde{\alpha}=\mathbf{i}\mathbb{R}^{+}\), the imaginary axis, and that the fundamental domain \(\mathcal{A}_{\alpha}\) with respect to this cyclic group \(\langle A\rangle\) is
\[\mathcal{A}_{\alpha}=\{z\in\mathbb{H};1\leqslant|z|\leqslant e^{\ell_{\alpha}(X)}\}. \tag{3.1}\]
Recall that the Weil-Petersson gradient of \(\ell_{\alpha}(\cdot)\) at \(X\) is [1, 2]
\[\nabla\ell_{\alpha}(X)(z)=\frac{2}{\pi}\sum_{E\in\langle A\rangle\backslash\Gamma}\frac{\overline{E}^{\prime}(z)^{2}}{\overline{E}(z)^{2}\rho(z)}\frac{d\overline{z}}{dz}\in T_{X}\mathcal{M}_{g}\]
where \(\rho(z)=\frac{1}{\operatorname{Im}(z)^{2}}\). Since \(\rho(\gamma(z))|\gamma^{\prime}(z)|^{2}=\rho(z)\) for any \(\gamma\in\operatorname{Aut}(\mathbb{H})\), the triangle inequality gives that for all \(z\in\mathcal{A}_{\alpha}\),
\[|\nabla\ell_{\alpha}(X)|(z)\leqslant\frac{2}{\pi}\sum_{E\in\langle A\rangle\backslash\Gamma}\frac{1}{|E(z)|^{2}}\times\frac{1}{\rho(E(z))}. \tag{3.2}\]
Set
\[H(z):=\sum_{E\in\langle A\rangle\backslash\Gamma}\frac{1}{|E(z)|^{2}}\times\frac{1}{\rho(E(z))}=\sum_{E\in\langle A\rangle\backslash\Gamma}\sin^{2}(\theta(E(z))) \tag{3.3}\]
where we write \(E(z)=(r(E(z)),\theta(E(z)))\) in the polar coordinates of \(\mathbb{H}\). Since \(H(\gamma\cdot z)=H(z)\) for any \(\gamma\in\Gamma\), it descends to a function on \(X\)
\[h:X\to\mathbb{R}^{>0}\]
defined as
\[h(\pi(z)):=H(z)\]
where \(\pi:\mathbb{H}\to X\) is the covering map and \(z\in\mathbb{H}\) is any lift of \(\pi(z)\in X\). By (3.2) we know that
\[|\nabla\ell_{\alpha}(X)|(\pi(z))\leqslant\frac{2}{\pi}\cdot H(z). \tag{3.4}\]
Now we estimate \(\max_{z\in\mathbb{H}}H(z)\), or equivalently \(\max_{z\in X}h(z)\). First we compute the Laplacian of \(H(z)\) to detect the rough region where the maximum of \(H\) is attained. We rewrite \(\frac{1}{|E(z)|^{2}}\times\frac{1}{\rho(E(z))}\) as
\[f(z)=\frac{1}{4}\times\frac{(E(z)-\overline{E}(z))(\overline{E}(z)-E(z))}{E(z)\times\overline{E}(z)}.\]
Note that \(E(z)\) is analytic, i.e., \(E_{\overline{z}}(z)=0\).
So we have
\[\frac{\partial}{\partial z}f(z)=\frac{1}{4}\times\left(\frac{2E_{z}(z)\cdot(\overline{E}(z)-E(z))}{E(z)\times\overline{E}(z)}-\frac{(E(z)-\overline{E}(z))\cdot(\overline{E}(z)-E(z))}{E^{2}(z)\times\overline{E}(z)}\cdot E_{z}(z)\right)=\frac{1}{4}\frac{(E(z)+\overline{E}(z))\cdot(\overline{E}(z)-E(z))}{E^{2}(z)\times\overline{E}(z)}\cdot E_{z}(z).\]
Taking one more derivative, we get
\[4\frac{\partial^{2}}{\partial\overline{z}\partial z}f(z)=\frac{\overline{E}(z)-E(z)}{E^{2}(z)\times\overline{E}(z)}\cdot E_{z}(z)\cdot\overline{E}_{\overline{z}}(z)+\frac{E(z)+\overline{E}(z)}{E^{2}(z)\times\overline{E}(z)}\cdot E_{z}(z)\cdot\overline{E}_{\overline{z}}(z)-\frac{(E(z)+\overline{E}(z))\cdot(\overline{E}(z)-E(z))}{E^{2}(z)\times\overline{E}^{2}(z)}\cdot E_{z}(z)\cdot\overline{E}_{\overline{z}}(z)=\frac{|E_{z}(z)|^{2}}{|E(z)|^{4}}\times(E^{2}(z)+\overline{E}^{2}(z)).\]
Let \(E(z)=\operatorname{Re}E(z)+\mathbf{i}\cdot\operatorname{Im}E(z)\). Then
\[\frac{\partial^{2}}{\partial\overline{z}\partial z}f(z)=\frac{|E_{z}(z)|^{2}}{2|E(z)|^{4}}\times\left((\operatorname{Re}E)^{2}-(\operatorname{Im}E)^{2}\right). \tag{3.7}\]
Recall that for \(z=(r,\theta)\in\mathbb{H}\) given in polar coordinates, where \(\theta\in(0,\pi)\), the hyperbolic distance between \(z\) and the imaginary axis \(\mathbf{i}\mathbb{R}^{+}\) is given in (2.1), saying
\[\operatorname{dist}_{\mathbb{H}}(z,\mathbf{i}\mathbb{R}^{+})=\ln|\csc\theta+|\cot\theta||.\]
Consider the set
\[\mathcal{P}_{\alpha}=\{(x,y)\in\mathbb{H};\ y\geqslant|x|\}\cap\{(r,\theta);\ 1\leqslant r\leqslant e^{\ell_{\alpha}(X)}\}.\]
Geometrically, \(\mathcal{P}_{\alpha}\) consists of the points of \(\mathcal{A}_{\alpha}\) whose distance to the imaginary axis satisfies
\[\operatorname{dist}_{\mathbb{H}}(z,\mathbf{i}\mathbb{R}^{+})\leqslant\ln(\sqrt{2}+1).\]
Equation (3.7) tells us that if \(E(z)\notin\mathcal{P}_{\alpha}\), then we have
\[\Delta f(z)\geqslant 0. \tag{3.8}\]
Recall that \(\alpha\) is the closed geodesic representing \(\alpha\) in \(X\), which is lifted into \(\mathbb{H}\) as \(\{\mathbf{i}\cdot t\}\) where \(1\leqslant t\leqslant e^{\ell_{\alpha}(X)}\). For any \(\pi(z)\in X\) with \(\operatorname{dist}(\pi(z),\alpha)>\ln(\sqrt{2}+1)\), we have that for any lift \(z\in\mathbb{H}\) of \(\pi(z)\in X\) and any \(E\in\Gamma\),
\[\operatorname{dist}_{\mathbb{H}}(E(z),\mathbf{i}\mathbb{R}^{+})>\ln(\sqrt{2}+1).\]
Therefore equation (3.7) implies that for any \(z\in\mathbb{H}\) with \(\operatorname{dist}(\pi(z),\alpha)>\ln(\sqrt{2}+1)\) we have
\[\Delta H(z)=\sum_{E\in\langle A\rangle\setminus\Gamma}\frac{|E_{z}(z)|^{2}}{4|E(z)|^{4}}\times(E^{2}(z)+\overline{E}^{2}(z))>0. \tag{3.9}\]
Recall that \(h(\pi(z))=H(z)\) for any \(z\in\mathbb{H}\). The following proposition tells us that the maximum of \(h\) can only occur near the central closed geodesic. More precisely,

**Proposition 3.1**.: _We have_
\[\max_{\pi(z)\in X}h(\pi(z))=\max_{\pi(z)\in X;\ \operatorname{dist}(\pi(z),\alpha)\leqslant\ln(\sqrt{2}+1)}h(\pi(z)).\]

Proof.: Suppose for contradiction that there exists a point \(\pi(w)\in X\) with \(\operatorname{dist}(\pi(w),\alpha)>\ln(\sqrt{2}+1)\) such that
\[h(\pi(w))=\max_{\pi(z)\in X}h(\pi(z)).\]
Let \(w\in\mathbb{H}\) be a lift of \(\pi(w)\in X\). Then we have
\[H(w)=\max_{z\in\mathbb{H}}H(z).\]
In particular, by the maximum principle we have
\[\Delta H(w)\leqslant 0. \tag{3.10}\]
On the other hand, since \(\operatorname{dist}(\pi(w),\alpha)>\ln(\sqrt{2}+1)\), for any \(E\in\langle A\rangle\setminus\Gamma\) we have
\[\operatorname{dist}_{\mathbb{H}}(E\circ w,\alpha)>\ln(\sqrt{2}+1).\]
Then it follows by (3.9) that
\[\Delta H(w)>0 \tag{3.11}\]
which is a contradiction. The proof is complete.

**Remark 3.2**.: The proposition above tells us that the maximum of \(H\) can only occur in the \(\ln(\sqrt{2}+1)\)-neighborhood of \(\cup_{E\in\langle A\rangle\backslash\Gamma}E\circ\mathbf{i}\mathbb{R}^{+}\), which is also the same as \(\cup_{E\in\langle A\rangle\backslash\Gamma}E\circ\mathcal{P}_{\alpha}\).

## 4. Uniform upper bounds for \(L^{\infty}\)-norms

In this section we will prove the uniform upper bounds in Theorem 1.1 for \(p=\infty\). Parts (1) and (2) of Theorem 1.1 in this case will be proved separately. We first show

**Proposition 4.1**.: _Let \(X\in\mathcal{M}_{g}\) and \(\alpha\subset X\) with \(\ell_{\alpha}(X)=\ell_{sys}(X)\). Then_
\[||\nabla\ell_{\alpha}(X)||_{\infty}\prec 1.\]

The proof of Proposition 4.1 is split into two parts. We first deal with the case that \(X\) is contained in a certain fixed thick part of the moduli space \(\mathcal{M}_{g}\).

**Lemma 4.2**.: _For any given constant \(\varepsilon_{0}>0\) and any \(\alpha\subset X\in\mathcal{M}_{g}\) with \(\ell_{\alpha}(X)=\ell_{sys}(X)\geqslant\varepsilon_{0}\), there exists a uniform constant \(C_{1}(\varepsilon_{0})>0\), only depending on \(\varepsilon_{0}\), such that_
\[||\nabla\ell_{\alpha}(X)||_{\infty}\leqslant C_{1}(\varepsilon_{0}).\]

Proof.: By (3.4) it suffices to show that
\[\max_{z\in\mathbb{H}}H(z)\leqslant C_{1}^{\prime}(\varepsilon_{0})\]
for some uniform constant \(C_{1}^{\prime}(\varepsilon_{0})>0\) only depending on \(\varepsilon_{0}\). Let \(z_{0}\in\mathcal{F}_{\alpha}\subset\mathcal{A}_{\alpha}\), where \(\mathcal{F}_{\alpha}\) is the fundamental domain of \(X\) which contains the lift \(\{\mathbf{i}\cdot t\}_{1\leqslant t\leqslant e^{\ell_{\alpha}(X)}}\) of the shortest closed geodesic \(\alpha\) in \(X\), such that \(H\) attains its maximum at \(z_{0}\). That is,
\[H(z_{0})=\max_{z\in\mathbb{H}}H(z).\]
Since \(\ell_{sys}(X)\geqslant\varepsilon_{0}\), it follows by the triangle inequality that for any \(\gamma_{1}\neq\gamma_{2}\in\Gamma\), the geodesic balls satisfy
\[\gamma_{1}\circ B(z_{0};\frac{\varepsilon_{0}}{8})\cap\gamma_{2}\circ B(z_{0};\frac{\varepsilon_{0}}{8})=\emptyset. \tag{4.1}\]
Recall that \(\frac{1}{|E(z)|^{2}}\times\frac{1}{\rho(E(z))}=\sin^{2}(\theta(E(z)))\), where we use polar coordinates \((r,\theta)\) for \(E(z)\).
For any \(E\notin\langle A\rangle\), the point \(E\circ z_{0}\) must lie outside of a lift \(\widetilde{\mathcal{C}}_{w(\alpha)}(\alpha)\) of the maximal collar \(\mathcal{C}_{w(\alpha)}(\alpha)\), where
\[\widetilde{\mathcal{C}}_{w(\alpha)}(\alpha)=\{z\in\mathcal{A}_{\alpha};\ \operatorname{dist}_{\mathbb{H}}(z,\mathbf{i}\cdot\mathbb{R}^{+})\leqslant w(\alpha)\}.\]
In particular, by Lemma 2.3 we have for all \(E\notin\langle A\rangle\),
\[\operatorname{dist}_{\mathbb{H}}(E\circ z_{0},\mathbf{i}\cdot\mathbb{R}^{+})\geqslant\frac{\ell_{sys}(X)}{4}.\]
Then it follows by the triangle inequality that for any \(E\notin\langle A\rangle\) and \(z\in E\circ B_{\mathbb{H}}(z_{0};\frac{\varepsilon_{0}}{8})\),
\[\operatorname{dist}_{\mathbb{H}}(z,\mathbf{i}\cdot\mathbb{R}^{+})\geqslant\operatorname{dist}_{\mathbb{H}}(E\circ z_{0},\mathbf{i}\cdot\mathbb{R}^{+})-\operatorname{dist}_{\mathbb{H}}(E\circ z_{0},z)\geqslant\frac{\ell_{sys}(X)}{4}-\frac{\varepsilon_{0}}{8}\geqslant\frac{\ell_{sys}(X)}{8}. \tag{4.2}\]
Then by formula (2.1) we have that for any \(E\notin\langle A\rangle\) and \(z\in E\circ B_{\mathbb{H}}(z_{0};\frac{\varepsilon_{0}}{8})\),
\[\ln\left(\frac{2}{\sin(\theta(z))}\right)\geqslant\frac{\ell_{sys}(X)}{8}.\]
Recall that \(e^{x}\geqslant x\) for all \(x\geqslant 0\). Thus, for all \(z\in E\circ B_{\mathbb{H}}(z_{0};\frac{\varepsilon_{0}}{8})\) where \(E\notin\langle A\rangle\) we have
\[\sin\left(\theta(z)\right)\leqslant\frac{16}{\ell_{sys}(X)}. \tag{4.3}\]
Now we apply the mean value inequality. Recall that (3.3) says that
\[H(z)=\sum_{E\in\langle A\rangle\setminus\Gamma}\sin^{2}(\theta(E(z))).\]
Then it follows by (2.2) and Lemma 2.1 that
\[H(z_{0})-\sin^{2}(\theta(z_{0}))\leqslant\sum_{\langle A\rangle\neq E\in\langle A\rangle\setminus\Gamma}4e^{-2\operatorname{dist}_{\mathbb{H}}(E\circ z_{0},\mathbf{i}\cdot\mathbb{R}^{+})}\leqslant 4c(\frac{\varepsilon_{0}}{8})\sum_{\langle A\rangle\neq E\in\langle A\rangle\setminus\Gamma}\int_{E\circ B_{\mathbb{H}}(z_{0};\frac{\varepsilon_{0}}{8})}e^{-2\operatorname{dist}_{\mathbb{H}}(z,\mathbf{i}\cdot\mathbb{R}^{+})}\operatorname{dArea}. \tag{4.4}\]
Then by the triangle inequality and (4.3) we know that for all \(E\notin\langle A\rangle\),
\[E\circ B_{\mathbb{H}}(z_{0};\frac{\varepsilon_{0}}{8})\subset\{(r,\theta)\in\mathbb{H};\;e^{-\frac{\varepsilon_{0}}{8}}\leqslant r\leqslant e^{\ell_{sys}(X)+\frac{\varepsilon_{0}}{8}}\text{ and }\sin(\theta)\leqslant\frac{16}{\ell_{sys}(X)}\}.\]
Set
\[\mathcal{S}:=\{(r,\theta)\in\mathbb{H};\;e^{-\frac{\varepsilon_{0}}{8}}\leqslant r\leqslant e^{\ell_{sys}(X)+\frac{\varepsilon_{0}}{8}}\text{ and }\sin(\theta)\leqslant\frac{16}{\ell_{sys}(X)}\}.\]
By (4.1) we know that the balls \(\{E\circ B_{\mathbb{H}}(z_{0};\frac{\varepsilon_{0}}{8})\}\) are pairwise disjoint.
Then it follows by (2.2) and (4.4) that
\[H(z_{0})\leqslant\sin^{2}(\theta(z_{0}))+4c(\frac{\varepsilon_{0}}{8})\int_{\mathcal{S}}\sin^{2}\theta\,\mathrm{dArea}\leqslant 1+8c(\frac{\varepsilon_{0}}{8})\int_{0}^{\arcsin(\min\{\frac{16}{\ell_{sys}(X)},1\})}\int_{e^{-\frac{\varepsilon_{0}}{8}}}^{e^{\ell_{sys}(X)+\frac{\varepsilon_{0}}{8}}}\frac{\sin^{2}\theta}{r^{2}\sin^{2}\theta}rdrd\theta=1+8c(\frac{\varepsilon_{0}}{8})\cdot(\ell_{sys}(X)+\frac{\varepsilon_{0}}{4})\cdot\arcsin(\min\{\frac{16}{\ell_{sys}(X)},1\})\leqslant 1+10c(\frac{\varepsilon_{0}}{8})\cdot\ell_{sys}(X)\cdot\arcsin(\min\{\frac{16}{\ell_{sys}(X)},1\}) \tag{4.5}\]
where we apply \(\mathrm{dArea}=\frac{|dz|^{2}}{y^{2}}=\frac{rdrd\theta}{r^{2}\sin^{2}\theta}\) in the second inequality and \(\ell_{sys}(X)\geqslant\varepsilon_{0}>0\) in the last inequality. As \(\ell_{sys}(X)\to\infty\), the right hand side above tends to \(1+160c(\frac{\varepsilon_{0}}{8})\), which is bounded. Recall that we always assume that \(\ell_{sys}(X)\geqslant\varepsilon_{0}>0\). Therefore there exists a uniform constant \(C_{1}^{\prime}(\varepsilon_{0})>0\) such that
\[H(z_{0})=\max_{z\in\mathbb{H}}H(z)\leqslant C_{1}^{\prime}(\varepsilon_{0}).\]
The proof is complete.

**Remark 4.3**.: The proof above does not cover Proposition 4.1 for the case that \(\ell_{sys}(X)\to 0\), because the constant \(c(\varepsilon_{0})\to\infty\) as \(\varepsilon_{0}\to 0\). We will prove this case in a different way in the next lemma.

Now we deal with the case that \(\ell_{sys}(X)\) is short. For this case we will actually prove a more general result, which does not require \(\alpha\) to be a systolic curve of \(X\). This is also a special case of the uniform upper bounds in Part (2) of Theorem 1.1 for \(p=\infty\). More precisely,

**Lemma 4.4**.: _For any given constant \(0<L_{0}<\frac{1}{1000}\) and any \(\alpha\subset X\in\mathcal{M}_{g}\) with \(\ell_{\alpha}(X)\leqslant L_{0}\), there exists a uniform constant \(C_{2}(L_{0})>0\), only depending on \(L_{0}\), such that_
\[||\nabla\ell_{\alpha}(X)||_{\infty}\leqslant C_{2}(L_{0}).\]

Proof.: We follow the argument in [20, Lemma 2.2] or [20, Proposition 6]. By (3.4) it suffices to show that
\[\max_{z\in\mathbb{H}}H(z)\leqslant C_{2}^{\prime}(L_{0})\]
for some uniform constant \(C_{2}^{\prime}(L_{0})>0\) only depending on \(L_{0}\). Let \(z_{0}\in\mathcal{F}_{\alpha}\subset\mathcal{A}_{\alpha}\), where \(\mathcal{A}_{\alpha}=\{z\in\mathbb{H};\ 1\leqslant|z|\leqslant e^{\ell_{\alpha}(X)}\}\) and \(\mathcal{F}_{\alpha}\subset\mathcal{A}_{\alpha}\) is the fundamental domain which contains the lift \(\{\mathbf{i}\cdot t\}_{1\leqslant t\leqslant e^{\ell_{\alpha}(X)}}\) of the closed geodesic \(\alpha\), such that
\[H(z_{0})=\max_{z\in\mathbb{H}}H(z).\]
By Proposition 3.1 we know that
\[\operatorname{dist}_{\mathbb{H}}(z_{0},\mathbf{i}\mathbb{R}^{+})\leqslant\ln{(1+\sqrt{2})}. \tag{4.6}\]
By Lemma 2.2 we know that the width \(w(\alpha)\) of the maximal collar of \(\alpha\) satisfies
\[w(\alpha)\geqslant\frac{1}{2}\ln\frac{\cosh\frac{\ell_{\alpha}(X)}{2}+1}{\cosh\frac{\ell_{\alpha}(X)}{2}-1}=\ln{\left(\frac{e^{\frac{\ell_{\alpha}(X)}{2}}+1}{e^{\frac{\ell_{\alpha}(X)}{2}}-1}\right)}\geqslant\ln{\left(\frac{2}{\ell_{\alpha}(X)}\right)}\]
where we apply \(\ell_{\alpha}(X)<\frac{1}{1000}\) in the last inequality. In particular,
\[w(\alpha)>\ln{(2000)}.\]
Let \(\operatorname{inj}(\pi(z_{0}))\) be the injectivity radius of \(\pi(z_{0})\in X\).
By (4.6) we have
\[\operatorname{dist}(\pi(z_{0}),\alpha)\leqslant\ln{(1+\sqrt{2})}.\]
Then it follows by the triangle inequality that the geodesic ball \(B(\pi(z_{0});1)\) in \(X\), centered at \(\pi(z_{0})\) of radius \(1\), is contained in the maximal collar of \(\alpha\). That is,
\[B(\pi(z_{0});1)\subset\mathcal{C}_{w(\alpha)}(\alpha) \tag{4.8}\]
which together with the Collar Lemma [10, Theorem 4.1.6] implies that
\[\operatorname{inj}(\pi(z_{0}))\geqslant\frac{\ell_{\alpha}(X)}{2}. \tag{4.9}\]
Now we apply the mean value inequality. Recall that (3.3) says that
\[H(z)=\sum_{E\in\langle A\rangle\backslash\Gamma}\frac{1}{|E(z)|^{2}}\times\frac{1}{\rho(E(z))}=\sum_{E\in\langle A\rangle\backslash\Gamma}\sin^{2}(\theta(E(z))).\]
Then it follows by (2.2) and Lemma 2.1 that
\[H(z_{0})\leqslant 4c(1)\sum_{E\in\langle A\rangle\backslash\Gamma}\int_{B_{\mathbb{H}}(z_{0};1)}\sin^{2}(\theta(E(z)))\,\mathrm{dArea}=4c(1)\int_{B_{\mathbb{H}}(z_{0};1)}H(z)\,\mathrm{dArea}\,. \tag{4.10}\]
Similarly to [10, Lemma 2.2] or [10, Proposition 6], the projection \(\pi:\bigcup_{\gamma\in\Gamma}\gamma\circ B_{\mathbb{H}}(z_{0};1)\) onto its image in \(X\) has multiplicity at most \(\frac{c^{\prime}}{\operatorname{inj}(\pi(z_{0}))}\), where \(c^{\prime}>0\) is a uniform constant. Indeed, for any \(\gamma\notin\langle A\rangle\) it follows by the triangle inequality that
\[\operatorname{dist}_{\mathbb{H}}(\gamma\circ z_{0},z_{0})\geqslant\operatorname{dist}_{\mathbb{H}}(\gamma\circ z_{0},\mathbf{i}\mathbb{R}^{+})-\operatorname{dist}_{\mathbb{H}}(z_{0},\mathbf{i}\mathbb{R}^{+})\geqslant w(\alpha)-\ln{(1+\sqrt{2})}>\ln(2000)-1>2.\]
This tells us that the multiplicity of the projection \(\pi:\bigcup_{\gamma\in\Gamma}\gamma\circ B_{\mathbb{H}}(z_{0};1)\) onto its image only comes from the cyclic group \(\langle A\rangle\), which in particular implies that the multiplicity of the projection is at most \(\frac{c^{\prime}}{\operatorname{inj}(\pi(z_{0}))}\) for some uniform constant \(c^{\prime}>0\). Recall that \(H(\gamma\circ z)=H(z)\). Thus, the last integral in (4.10) is bounded above by
\[\frac{c^{\prime}}{\operatorname{inj}(\pi(z_{0}))}\cdot\int_{\mathcal{A}_{\alpha}\bigcap(\bigcup_{E\in\langle A\rangle\backslash\Gamma}E\circ B_{\mathbb{H}}(z_{0};1))}\sin^{2}(\theta(z))\operatorname{dArea}.\]
Therefore, we have
\[H(z_{0})\leqslant\frac{4c^{\prime}\cdot c(1)}{\operatorname{inj}(\pi(z_{0}))}\cdot\int_{\mathcal{A}_{\alpha}\bigcap(\bigcup_{E\in\langle A\rangle\backslash\Gamma}E\circ B_{\mathbb{H}}(z_{0};1))}\sin^{2}(\theta(z))\operatorname{dArea}\leqslant\frac{4c^{\prime}\cdot c(1)}{\operatorname{inj}(\pi(z_{0}))}\cdot\int_{\mathcal{A}_{\alpha}}\sin^{2}(\theta(z))\operatorname{dArea}=\frac{4c^{\prime}\cdot c(1)}{\operatorname{inj}(\pi(z_{0}))}\cdot\int_{0}^{\pi}\int_{1}^{e^{\ell_{\alpha}(X)}}\frac{\sin^{2}\theta}{r^{2}\sin^{2}\theta}rdrd\theta=\frac{4\pi c^{\prime}\cdot c(1)}{\operatorname{inj}(\pi(z_{0}))}\cdot\ell_{\alpha}(X)\leqslant 8\pi c^{\prime}\cdot c(1)\]
where we apply (4.9) in the last inequality. Then the conclusion follows by choosing
\[C_{2}^{\prime}(L_{0})=8\pi c^{\prime}\cdot c(1).\]
The proof is complete.

**Remark 4.5**.: Let \(\mu=\nabla\ell_{\alpha}(X)\) in [10, Definition 10]. Then the quantity \(Comp(\nabla\ell_{\alpha}(X))\) is comparable to \(\frac{1}{||\nabla\ell_{\alpha}(X)||_{\infty}}\).
Thus, [10, Lemma 11] implies that if \(\alpha\) has length at most \(c_{0}\), then there exist two positive constants \(c^{\prime}\) and \(c^{\prime\prime}\) such that
\[c^{\prime}\leqslant||\nabla\ell_{\alpha}(X)||_{\infty}\leqslant c^{\prime\prime}.\]
The upper bound \(C_{2}(L_{0})\) in Lemma 4.4 is uniform, and we will use this upper bound together with (2.7) of Riera to show that \(||\nabla\ell_{\alpha}(X)||_{\infty}\geqslant C_{2}^{\prime}(L_{0})\) for some uniform constant \(C_{2}^{\prime}(L_{0})>0\). The methods for obtaining these two bounds are similar.

**Remark 4.6**.: The upper bound \(\frac{1}{1000}\) in the assumption of Lemma 4.4 is clearly not optimal. However, it is already good enough for proving the uniform upper bounds of Theorem 1.1 for \(p=\infty\).

Now we prove Proposition 4.1.

Proof of Proposition 4.1.: If \(\ell_{sys}(X)<\frac{1}{1000}\), it follows by Lemma 4.4 that for any \(\alpha\subset X\) with \(\ell_{\alpha}(X)=\ell_{sys}(X)\) we have
\[||\nabla\ell_{\alpha}(X)||_{\infty}\leqslant C_{2}(\frac{1}{1000})\]
where \(C_{2}(\cdot)\) is the constant in Lemma 4.4. If \(\ell_{sys}(X)\geqslant\frac{1}{1000}\), it follows by Lemma 4.2 that
\[||\nabla\ell_{\alpha}(X)||_{\infty}\leqslant C_{1}(\frac{1}{1000})\]
where \(C_{1}(\cdot)\) is the constant in Lemma 4.2. Thus, we have that for any \(X\in\mathcal{M}_{g}\) and \(\alpha\subset X\) with \(\ell_{\alpha}(X)=\ell_{sys}(X)\),
\[||\nabla\ell_{\alpha}(X)||_{\infty}\leqslant\max\{C_{1}(\frac{1}{1000}),C_{2}(\frac{1}{1000})\}.\]
Then the conclusion follows.

Next we prove the uniform upper bounds in Part (2) of Theorem 1.1 for \(p=\infty\). That is, we extend the constant \(\frac{1}{1000}\) in Lemma 4.4 to an arbitrary fixed positive constant. More precisely,

**Proposition 4.7**.: _For any given constant \(L_{0}>0\) and any simple loop \(\alpha\subset X\in\mathcal{M}_{g}\) with \(\ell_{\alpha}(X)\leqslant L_{0}\), there exists a uniform constant \(C_{3}(L_{0})>0\), only depending on \(L_{0}\), such that_
\[||\nabla\ell_{\alpha}(X)||_{\infty}\leqslant C_{3}(L_{0}).\]
_That is,_
\[||\nabla\ell_{\alpha}(X)||_{\infty}\prec 1.\]

Proof.: By Lemma 4.4 one may assume that
\[\frac{1}{1000}\leqslant\ell_{\alpha}(X)\leqslant L_{0}\]
where \(L_{0}\geqslant\frac{1}{1000}\). It follows by (3.4) and Proposition 3.1 that
\[||\nabla\ell_{\alpha}(X)||_{\infty}\leqslant\frac{2}{\pi}\cdot\max_{\pi(z)\in X;\ \operatorname{dist}(\pi(z),\alpha)\leqslant\ln(\sqrt{2}+1)}h(\pi(z)) \tag{4.12}\]
where \(h(\pi(z))=H(z)\) and \(H(z)=\sum_{E\in\langle A\rangle\setminus\Gamma}\sin^{2}(\theta(E(z)))\) as given in (3.3).

Claim: there exists a uniform constant \(s_{0}=s_{0}(L_{0})>0\), only depending on \(L_{0}\), such that
\[\inf_{p\in X;\ \operatorname{dist}(p,\alpha)\leqslant\ln\,(\sqrt{2}+1)}\operatorname{inj}(p)\geqslant s_{0}. \tag{4.13}\]
Proof of Claim. Let \(p\in X\) with \(\operatorname{dist}(p,\alpha)\leqslant\ln\,(\sqrt{2}+1)\) be such that
\[\operatorname{inj}(p)<\frac{1}{2000}; \tag{4.14}\]
otherwise we are done. Since \(\ell_{\alpha}(X)\leqslant L_{0}\), it follows by the triangle inequality that
\[\alpha\subset B(p;\ln\,(\sqrt{2}+1)+\frac{L_{0}}{2}). \tag{4.15}\]
Since \(\ell_{\alpha}(X)\geqslant\frac{1}{1000}\), by (4.14) and the Collar Lemma [2, Theorem 4.1.6] there exists a closed geodesic \(\beta\neq\alpha\) such that
\[p\in\mathcal{C}_{w(\beta)}(\beta)\]
where \(\mathcal{C}_{w(\beta)}(\beta)\) is the maximal collar of \(\beta\). Let \(\partial\mathcal{C}_{w(\beta)}(\beta)\) be the boundary of the maximal collar \(\mathcal{C}_{w(\beta)}(\beta)\).
Since \(\beta\) is the unique closed geodesic in \(\mathcal{C}_{w(\beta)}(\beta)\),
\[\mathrm{dist}(p,\partial\mathcal{C}_{w(\beta)}(\beta))\leqslant\ln\left(\sqrt{2}+1\right)+\frac{L_{0}}{2}; \tag{4.16}\]
otherwise the geodesic ball \(B(p;\ln\left(\sqrt{2}+1\right)+\frac{L_{0}}{2})\) would be contained in \(\mathcal{C}_{w(\beta)}(\beta)\), which together with (4.15) implies that
\[\alpha\subset\mathcal{C}_{w(\beta)}(\beta),\]
which is impossible because the collar \(\mathcal{C}_{w(\beta)}(\beta)\) cannot contain two different simple closed geodesics \(\alpha\) and \(\beta\). Recall that the Collar Lemma [2, Theorem 4.1.6] tells that
\[\sinh\left(\mathrm{inj}(p)\right)=\cosh\left(\frac{\ell_{\beta}(X)}{2}\right)\cosh(\mathrm{dist}(p,\partial\mathcal{C}_{w(\beta)}(\beta)))-\sinh(\mathrm{dist}(p,\partial\mathcal{C}_{w(\beta)}(\beta)))\]
which together with (4.16) implies that
\[\sinh\left(\mathrm{inj}(p)\right)\geqslant\cosh(\mathrm{dist}(p,\partial\mathcal{C}_{w(\beta)}(\beta)))-\sinh(\mathrm{dist}(p,\partial\mathcal{C}_{w(\beta)}(\beta)))=e^{-\mathrm{dist}(p,\partial\mathcal{C}_{w(\beta)}(\beta))}\geqslant(\sqrt{2}-1)\cdot e^{-\frac{L_{0}}{2}}.\]
So we have
\[\mathrm{inj}(p)\geqslant\sinh^{-1}((\sqrt{2}-1)\cdot e^{-\frac{L_{0}}{2}}).\]
Since \(p\in X\) with \(\mathrm{dist}(p,\alpha)\leqslant\ln\left(\sqrt{2}+1\right)\) is arbitrary, the claim follows by setting
\[s_{0}=s_{0}(L_{0})=\min\{\frac{1}{2000},\sinh^{-1}((\sqrt{2}-1)\cdot e^{-\frac{L_{0}}{2}})\}. \tag{4.18}\]
The rest of the proof follows by a standard _unfolding_ argument. More precisely, let \(\pi(z_{0})\in X\) be such that
\[h(\pi(z_{0}))=\sup_{p\in X}h(p).\]
By Proposition 3.1 and (4.13) we know that
\[\mathrm{inj}(\pi(z_{0}))\geqslant s_{0}. \tag{4.19}\]
Then it follows by the triangle inequality that for any \(\gamma_{1}\neq\gamma_{2}\in\Gamma\), the geodesic balls satisfy
\[\gamma_{1}\circ B_{\mathbb{H}}(z_{0};\frac{s_{0}}{4})\cap\gamma_{2}\circ B_{\mathbb{H}}(z_{0};\frac{s_{0}}{4})=\emptyset\]
where \(z_{0}\in\{(r,\theta)\in\mathbb{H};\ 1\leqslant r\leqslant e^{\ell_{\alpha}(X)}\}\) is a lift of \(\pi(z_{0})\). Then it follows by (2.2) and Lemma 2.1 that
\[h(\pi(z_{0}))=H(z_{0})=\sum_{E\in\langle A\rangle\setminus\Gamma}\sin^{2}(\theta(E(z_{0})))\leqslant 4c(\frac{s_{0}}{4})\cdot\sum_{E\in\langle A\rangle\setminus\Gamma}\int_{B_{\mathbb{H}}(E\circ z_{0};\frac{s_{0}}{4})}\sin^{2}(\theta(z))\operatorname{dArea}.\]
These balls \(\{E\circ B_{\mathbb{H}}(z_{0};\frac{s_{0}}{4})\}\) are pairwise disjoint, and for all \(E\notin\langle A\rangle\),
\[E\circ B_{\mathbb{H}}(z_{0};\frac{s_{0}}{4})\subset\{(r,\theta)\in\mathbb{H};\ e^{-\frac{s_{0}}{4}}\leqslant r\leqslant e^{\ell_{\alpha}(X)+\frac{s_{0}}{4}}\}.\]
Thus, we have
\[||\nabla\ell_{\alpha}(X)||_{\infty}\leqslant\frac{2}{\pi}\cdot h(\pi(z_{0}))\leqslant\frac{8}{\pi}\cdot c(\frac{s_{0}}{4})\cdot\int_{z\in\mathbb{H};\ e^{-\frac{s_{0}}{4}}\leqslant r\leqslant e^{\ell_{\alpha}(X)+\frac{s_{0}}{4}}}\sin^{2}(\theta(z))\operatorname{dArea}=\frac{8}{\pi}\cdot c(\frac{s_{0}}{4})\cdot\int_{0}^{\pi}\int_{e^{-\frac{s_{0}}{4}}}^{e^{\ell_{\alpha}(X)+\frac{s_{0}}{4}}}\frac{\sin^{2}\theta}{r^{2}\sin^{2}\theta}rdrd\theta=8\cdot c(\frac{s_{0}}{4})\cdot(\frac{s_{0}}{2}+\ell_{\alpha}(X))\leqslant 8\cdot c(\frac{s_{0}}{4})\cdot(\frac{s_{0}}{2}+L_{0}) \tag{4.20}\]
which completes the proof by setting \(C_{3}(L_{0})=8\cdot c(\frac{s_{0}}{4})\cdot(\frac{s_{0}}{2}+L_{0})\).
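As a quick numerical illustration of (4.18): using \(\sinh(x)\approx x\) for small \(x\), the minimum in (4.18) is attained by the first entry precisely when \((\sqrt{2}-1)e^{-L_{0}/2}\geqslant\sinh(\frac{1}{2000})\), that is, approximately when \(L_{0}\leqslant 2\ln\big(2000(\sqrt{2}-1)\big)\approx 13.4\). Hence
\[s_{0}(L_{0})=\frac{1}{2000}\ \text{ for }L_{0}\lesssim 13.4,\qquad s_{0}(L_{0})=\sinh^{-1}\big((\sqrt{2}-1)e^{-L_{0}/2}\big)\approx(\sqrt{2}-1)e^{-L_{0}/2}\ \text{ otherwise},\]
so the constant \(C_{3}(L_{0})\) in Proposition 4.7 degrades only through the factor \(c(\frac{s_{0}}{4})\) as \(L_{0}\) grows.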
## 5. Uniform bounds for \(L^{p}\) \((1\leqslant p\leqslant\infty)\)-norms

Recall that we always use the notation \(r^{\frac{1}{\infty}}=1\) for any \(r>0\). In this section we prove

**Theorem 5.1** (=Theorem 1.1).: _For any \(p\in[1,\infty]\) and \(X\in\mathcal{M}_{g}\) \((g\geqslant 2)\), we have_
1. _for any_ \(\alpha\subset X\) _with_ \(\ell_{\alpha}(X)=\ell_{sys}(X)\)_,_ \[||\nabla\ell_{\alpha}(X)||_{p}\asymp\ell_{sys}(X)^{\frac{1}{p}}.\]
2. _For any simple loop_ \(\beta\subset X\) _with_ \(\ell_{\beta}(X)\leqslant L_{0}\)_, where_ \(L_{0}>0\) _is any given constant,_ \[||\nabla\ell_{\beta}(X)||_{p}\asymp\ell_{\beta}(X)^{\frac{1}{p}}.\]

We will first show Theorem 5.1 for the cases \(p=1,2\) and \(\infty\). The general case \(p\in(1,2)\cup(2,\infty)\) then follows by a standard argument using Hölder's inequality for integrals.

Before proving Theorem 5.1, we first show that \(||\nabla\ell_{\alpha}(X)||_{1}\prec\ell_{\alpha}(X)\), where \(\alpha\subset X\) is not necessarily a systolic curve. More precisely,

**Lemma 5.2**.: _For any non-trivial loop \(\alpha\subset X\in\mathcal{M}_{g}\), we have_
\[||\nabla\ell_{\alpha}(X)||_{1}\leqslant 2\ell_{\alpha}(X).\]

Proof.: The proof is a standard _unfolding_ argument. Recall that
\[\nabla\ell_{\alpha}(X)(z)=\frac{2}{\pi}\frac{\overline{\Theta}_{\alpha}(z)}{\rho(z)|dz|^{2}}\]
where \(\Theta_{\alpha}(z)=\sum_{E\in\langle A\rangle\setminus\Gamma}\frac{E^{\prime}(z)^{2}}{E(z)^{2}}dz^{2}\). Let \(\mathbb{F}\subset\{z\in\mathbb{H};1\leqslant|z|\leqslant e^{\ell_{\alpha}(X)}\}\) be a fundamental domain of \(X\). Then
\[||\nabla\ell_{\alpha}(X)||_{1}=\int_{\mathbb{F}}|\nabla\ell_{\alpha}(X)(z)|\cdot\rho(z)|dz|^{2}\leqslant\frac{2}{\pi}\sum_{E\in\langle A\rangle\setminus\Gamma}\int_{\mathbb{F}}\frac{|E^{\prime}(z)|^{2}}{|E(z)|^{2}}|dz|^{2}\leqslant\frac{2}{\pi}\int_{z\in\mathbb{H};\ 1\leqslant|z|\leqslant e^{\ell_{\alpha}(X)}}\frac{1}{|z|^{2}}|dz|^{2}=\frac{2}{\pi}\int_{0}^{\pi}\int_{1}^{e^{\ell_{\alpha}(X)}}\frac{1}{r^{2}}\cdot rdrd\theta=2\ell_{\alpha}(X).\]
The proof is complete.
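We also record here, for the reader's convenience, the elementary interpolation inequality that will be used repeatedly in the proof below: for any \(\mu\in T_{X}\mathcal{M}_{g}\) and any \(p\in(1,\infty)\),
\[||\mu||_{p}^{p}=\int_{X}|\mu|^{p}\,\mathrm{dArea}\leqslant||\mu||_{\infty}^{p-1}\int_{X}|\mu|\,\mathrm{dArea}=||\mu||_{1}\cdot||\mu||_{\infty}^{p-1},\]
whose special case \(p=2\) reads \(||\mu||_{2}^{2}\leqslant||\mu||_{1}\cdot||\mu||_{\infty}\).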
Now we are ready to prove Theorem 5.1.

Proof of Theorem 5.1.: We first prove Part (1). The proof is split into the following four cases.

_Case-1: \(p=1\)._ First by Lemma 5.2 we clearly have
\[||\nabla\ell_{\alpha}(X)||_{1}\prec\ell_{\alpha}(X).\]
For the other direction, since \(||\nabla\ell_{\alpha}(X)||_{2}^{2}\leqslant||\nabla\ell_{\alpha}(X)||_{1}\cdot||\nabla\ell_{\alpha}(X)||_{\infty}\), by (2.7) of Riera and Proposition 4.1 we have
\[||\nabla\ell_{\alpha}(X)||_{1}\geqslant\frac{||\nabla\ell_{\alpha}(X)||_{2}^{2}}{||\nabla\ell_{\alpha}(X)||_{\infty}}\succ\ell_{\alpha}(X).\]
Thus, we have
\[||\nabla\ell_{\alpha}(X)||_{1}\asymp\ell_{sys}(X). \tag{5.1}\]

_Case-2: \(p=2\)._ First (2.7) of Riera says that
\[||\nabla\ell_{\alpha}(X)||_{2}\succ(\ell_{\alpha}(X))^{\frac{1}{2}}.\]
For the other direction, since \(||\nabla\ell_{\alpha}(X)||_{2}^{2}\leqslant||\nabla\ell_{\alpha}(X)||_{1}\cdot||\nabla\ell_{\alpha}(X)||_{\infty}\), by Proposition 4.1 and Lemma 5.2 we have
\[||\nabla\ell_{\alpha}(X)||_{2}\leqslant\sqrt{||\nabla\ell_{\alpha}(X)||_{1}\cdot||\nabla\ell_{\alpha}(X)||_{\infty}}\prec(\ell_{\alpha}(X))^{\frac{1}{2}}.\]
Thus, we have
\[||\nabla\ell_{\alpha}(X)||_{2}\asymp\ell_{sys}(X)^{\frac{1}{2}}. \tag{5.2}\]

_Case-3: \(p=\infty\)._ First Proposition 4.1 says that
\[||\nabla\ell_{\alpha}(X)||_{\infty}\prec 1.\]
For the other direction, since \(||\nabla\ell_{\alpha}(X)||_{2}^{2}\leqslant||\nabla\ell_{\alpha}(X)||_{1}\cdot||\nabla\ell_{\alpha}(X)||_{\infty}\), by Case-1 and Case-2 we have
\[||\nabla\ell_{\alpha}(X)||_{\infty}\geqslant\frac{||\nabla\ell_{\alpha}(X)||_{2}^{2}}{||\nabla\ell_{\alpha}(X)||_{1}}\asymp 1.\]
Thus, we have
\[||\nabla\ell_{\alpha}(X)||_{\infty}\asymp 1. \tag{5.3}\]

_Case-4: general \(p\in(1,2)\cup(2,\infty)\)._ First since
\[||\nabla\ell_{\alpha}(X)||_{p}^{p}\leqslant||\nabla\ell_{\alpha}(X)||_{1}\cdot||\nabla\ell_{\alpha}(X)||_{\infty}^{p-1},\]
by Proposition 4.1 and Case-1 we have
\[||\nabla\ell_{\alpha}(X)||_{p}\prec\ell_{\alpha}(X)^{\frac{1}{p}}.\]
For the other direction, let \(q\in(1,2)\cup(2,\infty)\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). Similarly to the above, we have
\[||\nabla\ell_{\alpha}(X)||_{q}\prec\ell_{\alpha}(X)^{\frac{1}{q}}. \tag{5.4}\]
Recall that Hölder's inequality says that
\[||\nabla\ell_{\alpha}(X)||_{p}\geqslant\frac{||\nabla\ell_{\alpha}(X)||_{2}^{2}}{||\nabla\ell_{\alpha}(X)||_{q}}.\]
By Case-2 and (5.4) we have
\[||\nabla\ell_{\alpha}(X)||_{p}\succ\ell_{\alpha}(X)^{1-\frac{1}{q}}=\ell_{\alpha}(X)^{\frac{1}{p}}.\]
Thus, we have
\[||\nabla\ell_{\alpha}(X)||_{p}\asymp\ell_{sys}(X)^{\frac{1}{p}}. \tag{5.5}\]
Then the conclusion follows by (5.1), (5.2), (5.3) and (5.5).

To prove Part (2), it follows by the same argument as the proof of Part (1), applying Proposition 4.7 instead of Proposition 4.1.

## 6. Applications to Weil-Petersson geometry

In this section we make several applications of Theorem 1.1 to the Weil-Petersson geometry of \(\mathcal{M}_{g}\).

### A uniform lower bound for Weil-Petersson distance

As discussed in the introduction, Theorem 5.1 can be applied to prove the following two results on the global Weil-Petersson geometry of \(\mathcal{M}_{g}\).

**Theorem 6.1**.: [10, Theorem 1.3] _For all \(X,Y\in\operatorname{Teich}(S_{g})\),_
\[|\sqrt{\ell_{sys}(X)}-\sqrt{\ell_{sys}(Y)}|\prec\operatorname{dist}_{wp}(X,Y)\]
_where \(\operatorname{dist}_{wp}\) is the Weil-Petersson distance._

**Theorem 6.2**.: [10, Theorem 1.1] _For all \(g\geqslant 2\),_
\[\operatorname{InRad}(\mathcal{M}_{g})\asymp\sqrt{\ln{(g)}}.\]

### Uniform bounds on Weil-Petersson curvatures

In this subsection we prove several new uniform bounds on Weil-Petersson curvatures. Recall that formula (2.9) says that for all \(\mu\in T_{X}\mathcal{M}_{g}\) with \(||\mu||_{2}=1\),
\[\operatorname{HolK}(\mu)\asymp-\int_{X}|\mu|^{4}\operatorname{dArea}. \tag{6.1}\]
Our first bound on Weil-Petersson curvature is as follows.

**Theorem 6.3** (=Theorem 1.2).: _For any \(X\in\mathcal{M}_{g}\), let \(\alpha\subset X\) be a simple closed geodesic satisfying_
1. _either_ \(\ell_{\alpha}(X)=\ell_{sys}(X)\) _or_
2. \(\ell_{\alpha}(X)\leqslant L_{0}\)_, where_ \(L_{0}>0\) _is any fixed constant,_
_then the Weil-Petersson holomorphic sectional curvature_
\[\operatorname{HolK}(\nabla\ell_{\alpha})(X)\asymp\frac{-1}{\ell_{sys}(X)}.\]

Proof of Theorem 6.3.: By (6.1) we have
\[\operatorname{HolK}(\nabla\ell_{\alpha})(X)\asymp-\frac{\int_{X}|\nabla\ell_{\alpha}(X)|^{4}\operatorname{dArea}}{(\int_{X}|\nabla\ell_{\alpha}(X)|^{2}\operatorname{dArea})^{2}}. \tag{6.2}\]
We apply Theorem 5.1 for the cases \(p=2,4\) to obtain
\[\int_{X}|\nabla\ell_{\alpha}(X)|^{4}\operatorname{dArea}\asymp\ell_{sys}(X) \tag{6.3}\]
and
\[(\int_{X}|\nabla\ell_{\alpha}(X)|^{2}\operatorname{dArea})^{2}\asymp\ell_{sys}^{2}(X). \tag{6.4}\]
Then the conclusion follows from the three equations above.

Buser-Sarnak in [1] showed that \(\max_{X\in\mathcal{M}_{g}}\ell_{sys}(X)\asymp\ln(g)\) for all \(g\geqslant 2\). A direct consequence of Theorem 6.3 is as follows.

**Corollary 6.4**.: _For all \(g\geqslant 2\),_
\[\sup_{X\in\mathcal{M}_{g}}\operatorname{HolK}(\nabla\ell_{\alpha})(X)\asymp\frac{-1}{\ln(g)}\]
_where \(\alpha\subset X\) satisfies \(\ell_{\alpha}(X)=\ell_{sys}(X)\)._

Another application of Theorem 5.1 is to show that the minimum Weil-Petersson holomorphic sectional curvature at any \(X\in\mathcal{M}_{g}\) is bounded from above by a uniform negative number. More precisely,

**Theorem 6.5** (=Theorem 1.4).: _For any \(X\in\mathcal{M}_{g}\),_
\[\min_{\mu\in T_{X}\mathcal{M}_{g}}\operatorname{HolK}(\mu)\prec-1<0.\]

Proof.: Let \(X\in\mathcal{M}_{g}\) be arbitrary. We split the proof into two cases.

Case-1: \(\ell_{sys}(X)=\ell_{\alpha}(X)<100\) for some \(\alpha\subset X\). It follows by Theorem 6.3 that
\[\min_{\mu\in T_{X}\mathcal{M}_{g}}\operatorname{HolK}(\mu)\leqslant\operatorname{HolK}(\nabla\ell_{\alpha})(X)\asymp\frac{-1}{\ell_{sys}(X)}\prec-1 \tag{6.5}\]
where we apply \(\ell_{sys}(X)<100\) in the last step.

Case-2: \(\ell_{sys}(X)\geqslant 100\). We use \(\mu_{0}(z)=\sum_{\gamma\in\Gamma}\frac{\overline{\gamma^{\prime}(z)^{2}}}{\rho(z)}\frac{d\overline{z}}{dz}\in T_{X}\mathcal{M}_{g}\) instead of \(\nabla\ell_{\alpha}(X)\). Since \(\ell_{sys}(X)\geqslant 100>2\ln{(3+2\sqrt{2})}\), it follows by [20, Theorem 6.1] or the proof of [20, Theorem 1.1] that
\[\operatorname{HolK}(\mu_{0})(X)\prec-1.\]
Thus, we have if \(\ell_{sys}(X)\geqslant 100\),
\[\min_{\mu\in T_{X}\mathcal{M}_{g}}\operatorname{HolK}(\mu)\leqslant\operatorname{HolK}(\mu_{0})(X)\prec-1. \tag{6.6}\]
Therefore the conclusion follows by the two cases above.

**Remark 6.6**.: Let \(\widetilde{Q}:\wedge^{2}T_{X}\mathcal{M}_{g}\to\wedge^{2}T_{X}\mathcal{M}_{g}\) be the real Riemannian curvature operator. This is an endomorphism of a \((3g-3)(6g-7)\)-dimensional vector space. It was shown in [20] that \(\widetilde{Q}\) is non-positive definite. Moreover, it was shown in [20, Theorem 1.2] that the \(\ell^{\infty}\)-norm \(||\widetilde{Q}||_{\ell^{\infty}}(X)\) of \(\widetilde{Q}\) at any \(X\in\mathcal{M}_{g}\) satisfies
\[||\widetilde{Q}||_{\ell^{\infty}}(X)\geqslant\frac{1}{2\pi}.\]
In particular, we have \(||\widetilde{Q}||_{\ell^{\infty}}(X)\succ 1\). The proof of this inequality depends heavily on the Weil-Petersson curvature operator formula developed in [20]. By definition and the negativity of the Weil-Petersson curvature, one knows that
\[||\widetilde{Q}||_{\ell^{\infty}}(X)\geqslant\max_{\mu\in T_{X}\mathcal{M}_{g}}|\operatorname{HolK}(\mu)|=-\min_{\mu\in T_{X}\mathcal{M}_{g}}\operatorname{HolK}(\mu).\]
Thus, by Theorem 6.5 we also get
\[||\widetilde{Q}||_{\ell^{\infty}}(X)\succ 1.\]
Although this uniform lower bound is not as explicit as \(\frac{1}{2\pi}\) in [13, Theorem 1.2], as a direct consequence of Theorem 6.5 we give a completely different proof of the uniform lower bound \(||\widetilde{Q}||_{\ell^{\infty}}(X)\succ 1\) without using the Weil-Petersson curvature operator formula.
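For the dimension count in Remark 6.6: since \(\dim_{\mathbb{R}}T_{X}\mathcal{M}_{g}=6g-6\), one has
\[\dim_{\mathbb{R}}\wedge^{2}T_{X}\mathcal{M}_{g}=\binom{6g-6}{2}=\frac{(6g-6)(6g-7)}{2}=(3g-3)(6g-7).\]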
## 7. Applications to the \(L^{p}\) metric on \(\mathcal{M}_{g}\)

In this section we study the \(L^{p}\) (\(1<p\leqslant\infty\)) metric on \(\mathcal{M}_{g}\) and its relation to Theorem 1.1. Let \((X,\sigma(z)|dz|^{2})\in\mathcal{M}_{g}\) be a hyperbolic surface and \(\phi\in Q(X)\) be a holomorphic quadratic differential on \(X\). For any \(p\in[1,\infty]\), we define
\[||\phi||_{L^{p}}:=||\frac{\overline{\phi}}{\sigma}||_{p}=\left(\int_{X}\left(\frac{|\phi(z)|}{\sigma(z)}\right)^{p}\cdot\mathrm{dArea}\right)^{\frac{1}{p}}. \tag{7.1}\]
The \(L^{p}\) metric on \(\mathcal{M}_{g}\) is defined by (7.1) via duality. More precisely,

**Definition 7.1**.: For any \(p\in[1,\infty]\), the _\(L^{p}\)-metric_ on \(\mathcal{M}_{g}\) is defined as follows: for any \(\mu\in T_{X}\mathcal{M}_{g}\), which is a harmonic Beltrami differential on \(X\), we define
\[||\mu||_{L^{p}}:=\sup_{\phi\in Q(X);\ ||\phi||_{L^{p}}=1}\mathrm{Re}\int_{X}\left(\frac{\phi}{\sigma}\cdot\mu\right)\cdot\mathrm{dArea}\,. \tag{7.2}\]
We say that \((\mathcal{M}_{g},||\cdot||_{L^{p}})\) is _the moduli space endowed with the \(L^{p}\)-metric_, and denote by \(\mathrm{dist}_{p}(\cdot,\cdot)\) the distance function on \((\mathcal{M}_{g},||\cdot||_{L^{p}})\).

By definition we know that \((\mathcal{M}_{g},||\cdot||_{L^{1}})\) is the Teichmüller metric on \(\mathcal{M}_{g}\), which is a complete Finsler metric, and \((\mathcal{M}_{g},||\cdot||_{L^{2}})\) is the Weil-Petersson metric on \(\mathcal{M}_{g}\), which is an incomplete Kähler metric.

Let \(q\in[1,\infty]\) be the conjugate number of \(p\), i.e., \(\frac{1}{q}+\frac{1}{p}=1\). For (7.2), first by the Hölder inequality we have
\[||\mu||_{L^{p}}\leqslant||\mu||_{q} \tag{7.3}\]
where \(||\mu||_{q}\) is defined in (2.8). On the other hand, by choosing \(\phi=\frac{\overline{\mu}\cdot\sigma}{||\mu||_{p}}\) in (7.2) we have
\[||\mu||_{L^{p}}\geqslant\frac{||\mu||_{2}^{2}}{||\mu||_{p}}=\frac{||\mu||_{\mathrm{WP}}^{2}}{||\mu||_{p}}. \tag{7.4}\]
In particular,
\[||\mu||_{L^{2}}=||\mu||_{2}=||\mu||_{\mathrm{WP}}.\]
Now we are ready to state our result in this section.

**Theorem 7.2** (=Theorem 1.6).: _Let \(X\in\mathcal{M}_{g}\) and \(\alpha\subset X\) be a non-trivial simple loop with \(\ell_{\alpha}(X)\leqslant L_{0}\), where \(L_{0}>0\) is any given constant. Then for any \(p\in(1,\infty]\),_
\[\mathrm{dist}_{p}(X,\mathcal{M}_{g}^{\alpha})\prec\left(\ell_{\alpha}(X)\right)^{1-\frac{1}{p}}\]
_where \(\mathcal{M}_{g}^{\alpha}\) is the stratum of \(\mathcal{M}_{g}\) whose pinching curve is \(\alpha\). In particular, the space \((\mathcal{M}_{g},||\cdot||_{L^{p}})\) \((1<p\leqslant\infty)\) is incomplete._

Proof.: Let \(\nabla\ell_{\alpha}(X)\) be the Weil-Petersson gradient of the geodesic length function \(\ell_{\alpha}(\cdot)\) at \(X\). Now we consider the integral curve of the vector field \(-\frac{\nabla\ell_{\alpha}}{||\nabla\ell_{\alpha}||_{L^{p}}}\). More precisely, let \(c:[0,s)\to\mathcal{M}_{g}\) be a curve, where \(s>0\) is the length of the maximal interval of definition, satisfying
1. \(c(0)=X\) and
2. \(c^{\prime}(t)=-\frac{\nabla\ell_{\alpha}(c(t))}{||\nabla\ell_{\alpha}(c(t))||_{L^{p}}}\).
Direct computations show that \(t\) is an arc-length parameter of \(c(\cdot)\) in \((\mathcal{M}_{g},||\cdot||_{L^{p}})\) and that \(\ell_{\alpha}(c(t_{1}))<\ell_{\alpha}(c(t_{2}))\) for any \(s>t_{1}>t_{2}\geqslant 0\). Thus, as \(t\to s\), \(c(t)\) goes to \(\mathcal{M}_{g}^{\alpha}\).
So we have
\[\lim_{t\to s}\ell_{\alpha}(c(t))=0.\]
Since \(\ell_{\alpha}(c(t))\) is decreasing, \(\ell_{\alpha}(c(t))\leqslant L_{0}\) for all \(t\in[0,s)\). It follows by Part (2) of Theorem 1.1 that for all \(t\in[0,s)\) and \(r\in[1,\infty]\),
\[||\nabla\ell_{\alpha}(c(t))||_{r}\asymp(\ell_{\alpha}(c(t)))^{\frac{1}{r}} \tag{7.5}\]
which together with (7.3) and (7.4) implies that
\[||\nabla\ell_{\alpha}(c(t))||_{L^{p}}\asymp(\ell_{\alpha}(c(t)))^{\frac{1}{q}} \tag{7.6}\]
where \(\frac{1}{q}+\frac{1}{p}=1\). Thus, we have for all \(s>t_{1}>t_{2}\geqslant 0\),
\[|(\ell_{\alpha}(c(t_{1})))^{\frac{1}{q}}-(\ell_{\alpha}(c(t_{2})))^{\frac{1}{q}}|=|\int_{t_{2}}^{t_{1}}\frac{(\ell_{\alpha}(c(t)))^{\frac{1}{q}-1}}{q}\left\langle\nabla\ell_{\alpha}(c(t)),c^{\prime}(t)\right\rangle_{wp}dt|=\frac{1}{q}\int_{t_{2}}^{t_{1}}\frac{||\nabla\ell_{\alpha}(c(t))||_{\mathrm{WP}}^{2}}{(\ell_{\alpha}(c(t)))^{\frac{1}{p}}\cdot||\nabla\ell_{\alpha}(c(t))||_{L^{p}}}dt\asymp\int_{t_{2}}^{t_{1}}\frac{\ell_{\alpha}(c(t))}{(\ell_{\alpha}(c(t)))^{\frac{1}{p}}\cdot(\ell_{\alpha}(c(t)))^{\frac{1}{q}}}dt=t_{1}-t_{2}\geqslant\mathrm{dist}_{p}(c(t_{1}),c(t_{2}))\]
where in the last inequality we use that \(t\) is an arc-length parameter for \(c(\cdot)\). We choose \(t_{2}=0\) and let \(t_{1}\to s\) to get
\[\mathrm{dist}_{p}(X,\mathcal{M}_{g}^{\alpha})\leqslant\liminf_{t\to s}\mathrm{dist}_{p}(c(t),c(0))\prec(\ell_{\alpha}(X))^{\frac{1}{q}} \tag{7.7}\]
which completes the proof because \(\frac{1}{q}=1-\frac{1}{p}\).
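For the reader's convenience, the two-sided estimate (7.6) used above follows by combining (7.3)–(7.5): by (7.3) and (7.5) (with \(r=q\)),
\[||\nabla\ell_{\alpha}(c(t))||_{L^{p}}\leqslant||\nabla\ell_{\alpha}(c(t))||_{q}\asymp(\ell_{\alpha}(c(t)))^{\frac{1}{q}},\]
while by (7.4) and (7.5) (with \(r=2\) and \(r=p\)),
\[||\nabla\ell_{\alpha}(c(t))||_{L^{p}}\geqslant\frac{||\nabla\ell_{\alpha}(c(t))||_{\mathrm{WP}}^{2}}{||\nabla\ell_{\alpha}(c(t))||_{p}}\asymp\frac{\ell_{\alpha}(c(t))}{(\ell_{\alpha}(c(t)))^{\frac{1}{p}}}=(\ell_{\alpha}(c(t)))^{\frac{1}{q}}.\]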
2302.12846
A Direct Detection View of the Neutrino NSI Landscape
In this article, we study the potential of direct detection experiments to explore the parameter space of general non-standard neutrino interactions (NSI) via solar neutrino scattering. Due to their sensitivity to neutrino-electron and neutrino-nucleus scattering, direct detection provides a complementary view of the NSI landscape to that of spallation sources and neutrino oscillation experiments. In particular, the large admixture of tau neutrinos in the solar flux makes direct detection experiments well-suited to probe the full flavour space of NSI. To study this, we develop a re-parametrisation of the NSI framework that explicitly includes a variable electron contribution and allows for a clear visualisation of the complementarity of the different experimental sources. Using this new parametrisation, we explore how previous bounds from spallation source and neutrino oscillation experiments are impacted. For the first time, we compute limits on NSI from the first results of the XENONnT and LUX-ZEPLIN experiments, and we obtain projections for future xenon-based experiments. These computations have been performed with our newly developed software package, SNuDD. Our results demonstrate the importance of using a more general NSI parametrisation and indicate that next generation direct detection experiments will become powerful probes of neutrino NSI.
Dorian W. P. Amaral, David Cerdeno, Andrew Cheek, Patrick Foldenauer
2023-02-24T19:00:01Z
http://arxiv.org/abs/2302.12846v2
# A direct detection view of the neutrino NSI landscape

###### Abstract

In this article, we study the potential of direct detection experiments to explore the parameter space of general non-standard neutrino interactions (NSI) via solar neutrino scattering. Due to their sensitivity to neutrino-electron and neutrino-nucleus scattering, direct detection provides a complementary view of the NSI landscape to that of spallation sources and neutrino oscillation experiments. In particular, the large admixture of tau neutrinos in the solar flux makes direct detection experiments well-suited to probe the full flavour space of NSI. To study this, we develop a re-parametrisation of the NSI framework that explicitly includes a variable electron contribution and allows for a clear visualisation of the complementarity of the different experimental sources. Using this new parametrisation, we explore how previous bounds from spallation source and neutrino oscillation experiments are impacted. For the first time, we compute limits on NSI from the first results of the XENONnT and LUX-ZEPLIN experiments, and we obtain projections for future xenon-based experiments. Our results demonstrate the importance of using a more general NSI parametrisation and indicate that next generation direct detection experiments will become powerful probes of neutrino NSI.

###### Contents

* I Introduction
* II Solar Neutrino Physics and Non-Standard Interactions
* II.1 NSI parametrisation
* II.2 Three-flavour neutrino oscillations in the presence of NSI
* II.2.1 Solar neutrino density matrix
* II.2.2 Generalised neutrino cross sections
* II.2.3 CE\(\nu\)NS cross section
* II.2.4 E\(\nu\)ES cross section
* III Extending current constraints to the full NSI parameter space
* III.1.1 The CENNS-10 LAr Experiment
* III.2.1 The Borexino Experiment
* IV Direct Detection Experiments
* IV.1 Expected Number of Events and Statistical Procedure
* IV.2.1 Sensitivities in the nucleon NSI plane
* IV.3 Sensitivities in the charged NSI plane
* IV.4 Final Remarks
* V Conclusions
* A Solar neutrino transition rate
* B \(\Delta\chi^{2}\) Plots for CENNS-10 LAr and Borexino

## I Introduction

Neutrinos are among the most mysterious particles of the Standard Model (SM). The discovery of their flavour oscillations remains one of the strongest pieces of evidence for new physics, since it requires neutrinos to be massive [1; 2]. In the SM, however, neutrinos are described by a purely left-handed spinorial field forming part of an \(SU(2)_{L}\) doublet [3], which, by the principles of gauge invariance, disallows a mass term for neutrinos at the renormalisable level. Hence, the neutrino sector has inspired a myriad of extensions beyond the SM (BSM) (see, for example, Ref. [4] for a review). A convenient parametrisation of new physics in neutrino interactions has been established in terms of the low-energy effective field theory (EFT) of _non-standard interactions_ (NSI) [5; 6; 7; 8; 9; 10; 11; 12]. This formalism contemplates modifications to neutrino interactions with SM particles while respecting the SM vector current structure. Over the last decades, a variety of experimental bounds have been derived for the NSI couplings [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54], most importantly from neutrino oscillation and spallation source experiments.
The latter have recently succeeded in observing the coherent elastic scattering of neutrinos with nuclei (CE\(\nu\)NS) [55; 56] with a rate consistent with the SM prediction [57; 58], providing stronger constraints on new physics contributions (see, for example, Refs. [59; 60; 61; 62; 63; 64; 65]).

Meanwhile, dark matter (DM) direct detection (DD) experiments have experienced remarkable progress. Current detectors have significantly increased their target size and sensitivity, to the point where they will be able to observe the scattering of solar neutrinos. This constitutes a new background for DM searches, which leads to the so-called neutrino floor (or fog) [66; 67; 68], but it also offers the unique opportunity to probe new physics with these instruments [69; 70; 71; 72; 73; 74; 75; 76; 77]. Neutrinos can be observed in DD experiments through elastic neutrino-electron scattering (E\(\nu\)ES) or their coherent elastic scattering with nuclei. Due to their larger target size, liquid noble gas detectors like LUX-ZEPLIN (LZ) [78], PandaX [79], and XENONnT [80] are better positioned than other DD techniques to carry out this kind of search. Indeed, the larger xenon-based DD experiments have thus far succeeded in placing upper bounds on the \({}^{8}\)B neutrino flux [81; 82]. The sensitivity of DD experiments to these processes has already led to studies in which the expected solar neutrino scattering rate has been used as a laboratory for gaining a deeper understanding of the nature of solar physics, neutrino oscillations, and BSM neutrino physics [72; 74; 77; 78; 83; 84; 85; 86; 87; 88; 89; 90; 91].

In this work, we set out to exploit the sensitivity of DD experiments to solar neutrino scattering with the aim of exploring their impact on the NSI landscape. In the context of dedicated neutrino experiments, NSI studies are numerous; however, the potential of DD experiments has not been fully investigated. Previous works have pointed out that non-zero NSI parameters could produce appreciable signals for both CE\(\nu\)NS and E\(\nu\)ES [70; 74] in DD, as well as potentially modify the neutrino fog [92; 93]. To this end, we will introduce a convenient parametrisation of NSI, extending the framework of Ref. [36] to include an explicit separation between NSI in the electron and proton directions. This is needed to interpret the results of DD experiments. Ignoring the electron contribution is a valid choice as long as one is mostly interested in matter effects for oscillation experiments1, but this is a non-general treatment. Striving for greater generality, we allow for the possibility that the 'charged' neutrino NSI is shared between both the proton and the electron. While the total charged contribution can be designed to leave neutrino oscillations unchanged, allowing for electron NSI can instead lead to changes in the E\(\nu\)ES cross section. This, in turn, can affect the bounds set by oscillation experiments, which could instead be dominated by NSI effects at the detection point [94].

Footnote 1: Non-standard matter effects enter the matter Hamiltonian via a contribution from the neutron and an overall charged contribution from both the proton and the electron.

Furthermore, as was recently pointed out in Ref. [95], when new physics introduces potential flavour-changing neutral currents (FCNC), the full flavour structure of the cross section must be retained when dealing with a neutrino flux composed of an admixture of flavour eigenstates.
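As a minimal numerical sketch of this point (with purely illustrative numbers, anticipating the generalised trace-rate formula of Eq. (1) below): for a neutrino arriving as a flavour superposition, replacing \(\mathrm{Tr}[\mathbf{\rho}\,\mathbf{\zeta}]\) by the flavour-diagonal sum \(\sum_{\alpha}\rho_{\alpha\alpha}\,\zeta_{\alpha\alpha}\) misses the interference terms generated by flavour-changing NSI:

```python
import numpy as np

# Toy two-flavour example (nu_e, nu_tau); all numbers are illustrative.
# rho: density matrix of the pure superposition (|nu_e> + |nu_tau>)/sqrt(2).
rho = 0.5 * np.array([[1.0, 1.0],
                      [1.0, 1.0]])

# zeta: generalised cross section with a flavour-conserving diagonal piece
# plus an off-diagonal (flavour-changing) NSI contribution.
zeta = np.diag([1.0, 0.4]) + 0.2 * np.array([[0.0, 1.0],
                                             [1.0, 0.0]])

naive = sum(rho[a, a] * zeta[a, a] for a in range(2))  # sum_a P_ea * sigma_aa
full = np.trace(rho @ zeta)                            # Tr[rho zeta]
print(naive, full)  # 0.7 vs 0.9: the flavour-diagonal sum misses the
                    # interference term 2 Re(rho_{e tau} zeta_{tau e}) = 0.2
```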
This is in contrast with the SM, where interactions are diagonal in the flavour basis. Thus, in the general NSI case, it is no longer appropriate to project the neutrino state that reaches Earth onto any one particular flavour state and convolve the result with flavour-pure cross sections, as neutrinos arrive in a superposition of flavour eigenstates.2 Instead, we must consider the full flavour-structure of both the cross section and the density matrix describing the evolution of the initial neutrino state. Footnote 2: We stress that the simplified treatment of calculating the number of neutrino scattering events in the presence of new physics, given by \(N_{\nu}\propto\sum_{\alpha}P_{\rm e\alpha}\,\mathrm{d}\sigma_{\nu_{\alpha}T}/ \mathrm{d}E_{R}\), where \(P_{\rm e\alpha}\) is the transition probability to a neutrino of flavour \(\alpha\), is only appropriate in two cases. Firstly, if the flux of neutrinos incident on a target is only composed of one flavour. Secondly, if the new physics contribution is flavour-conserving. The rate of neutrino events in a generic neutrino scattering experiment is then described by the expression [95], \[\frac{\mathrm{d}R}{\mathrm{d}E_{R}}=N_{T}\int_{E_{\nu}^{\rm min}}\frac{ \mathrm{d}\phi_{\nu}}{\mathrm{d}E_{\nu}}\,\mathrm{Tr}\left[\mathbf{\rho}\,\frac{ \mathrm{d}\mathbf{\zeta}}{\mathrm{d}E_{R}}\right]\,\mathrm{d}E_{\nu}\,, \tag{1}\] where \(N_{T}\) is the number of targets, \(\phi_{\nu}\) is the neutrino flux at the source3, \(\mathbf{\rho}\) is the neutrino density matrix at the experiment and \(\mathbf{\zeta}\) is a generalised scattering cross section in the neutrino-flavour space, encoding correlations between scattering amplitudes of neutrinos with different flavours. Here, \(E_{\nu}\) is the energy of the incident neutrinos and \(E_{\nu}^{\rm min}\) is the minimum \(E_{\nu}\) required to produce a target recoil energy of \(E_{R}\). Footnote 3: Of particular relevance to experiments sensitive to solar neutrinos is the fact that electron neutrino production in the Sun proceeds through a series of charged-current interactions. Since we are only considering neutral-current NSI, the electron neutrino flux produced in the Sun is unchanged. Using this generalised framework, in this paper we study how DD experiments will constitute a valuable complementary probe of the NSI landscape. To do this, we first explore how previous limits derived from oscillation and spallation source experiments map onto the full NSI parameter space. Then, we derive new limits from the recent data from LZ and XENONnT onto the NSI parameters and make projections for their full exposure runs, as well as for the future DARWIN experiment [96]. We demonstrate that xenon-based DD experiments like XENON [81; 97; 98], DARWIN [96], PandaX [79; 100; 99] and LZ [101; 102; 103] will be sensitive to generic NSI couplings in a competitive and complementary way to oscillation, beam, and spallation source experiments. We do this by comparing our results and projections to those derived in Refs. [31; 36; 39; 74; 95]. Given that DD will be sensitive to both CE\(\nu\)NS and E\(\nu\)ES, we explore their limits and projections in our extended parametrisation, emphasising the complementarity of both signals. Indeed, due to the high flux of solar neutrinos and the excellent background reduction of DD experiments, their sensitivity to electron NSI is remarkable and is competitive with that of conventional neutrino oscillation experiments. This article is organised as follows. 
In Section II, we introduce the framework of non-standard neutrino interactions, explicitly incorporating interactions with electrons as well as the impact of such NSI on solar neutrino physics. We then derive the relevant formalism for computing the density matrix and the generalised cross section, both required to compute the expected solar neutrino scattering rates. In Section III, we shed light on the current landscape of NSI constraints derived from oscillation and spallation source experiments, as well as their sensitivity to interactions with electrons. We present and discuss the results of our sensitivity study of DD experiments to NSI as the main results of this work in Section IV. Finally, we draw our conclusions in Section V.

## II Solar neutrino physics and non-standard interactions

In this section, we introduce the framework of neutrino NSI and study their impact on solar neutrino physics, both in propagation effects and in scattering with nuclei and electrons. In doing so, we derive the relevant expressions for the solar neutrino density matrix, \(\mathbf{\rho}\), and the generalised cross section, \(\mathbf{\zeta}\), for CE\(\nu\)NS and E\(\nu\)ES entering the rate in Eq. (1). For an explanation of the origin of this rate equation, we refer to Appendix A.

### NSI parametrisation

In order to understand how potential new neutrino interactions enter the scattering rate in Eq. (1), we need to specify a BSM model. Since we want to remain as general as possible about the origin of such new physics, we will work in terms of a low-energy effective theory. Making use of the framework of neutrino NSI [12, 15, 5, 104], we can parametrise new physics effects in the neutrino sector by contact terms of the form4

Footnote 4: Note that this parametrisation is not \(SU(2)_{L}\) invariant and is mainly motivated by the structure of the SM weak current. In order to systematically capture all gauge invariant dimension-six operators modifying neutrino interactions, it is more suitable to consider a complete basis of EFT operators and map them onto the enlarged basis of general neutrino interactions [105, 106, 107, 108].

\[\mathcal{L}_{\text{NSI}}=-2\sqrt{2}\,G_{F}\sum_{\begin{subarray}{c}f=e,u,d\\ \alpha,\beta=e,\mu,\tau\end{subarray}}\varepsilon^{fP}_{\alpha\beta}\,\left[\bar{\nu}_{\alpha}\gamma_{\rho}P_{L}\nu_{\beta}\right]\,\left[\bar{f}\gamma^{\rho}Pf\right]\,, \tag{2}\]
where \(G_{F}\) denotes the Fermi constant and \(P\in\{P_{L},P_{R}\}\). The NSI parameters \(\varepsilon^{fP}_{\alpha\beta}\), which are in general flavour-violating, quantify the strength of the interaction between the neutrinos \(\nu_{\alpha}\) and \(\nu_{\beta}\) and the pair of fermions \(f\) relative to the SM weak interaction, characterised by \(G_{F}\). In this work, we will not consider any new source of CP-violation and hence assume the parameters \(\varepsilon^{fP}_{\alpha\beta}\) to be real. Furthermore, we have assumed that the charged fermions \(f\) are identical, resembling the SM neutral current (NC) interaction. However, charged current (CC) NSI could also exist, where the neutrinos couple to two different charged fermions, \(f\) and \(f^{\prime}\). Since these are, in general, subject to much harsher constraints and DD experiments do not probe CC interactions in NRs, we do not consider them here but rather direct the reader to, for example, Refs. [109; 110; 104; 101; 25; 102; 103; 105; 106; 107; 108; 100; 111; 112].
To describe neutrinos interacting with ordinary matter (made up of electrons, protons and neutrons), only interactions with the first generation of SM fermions need to be considered. If we assume that the neutrino flavour structure of the NSI is independent of the charged fermion \(f\) that the neutrinos couple to, we can factorise the NSI coupling as [36]
\[\varepsilon^{fP}_{\alpha\beta}=\varepsilon^{\eta,\varphi}_{\alpha\beta}\,\xi^{fP}\,, \tag{3}\]
where \(\xi^{fP}\) describes the relative strength of the interaction with the fermions \(f\in\{e,u,d\}\) and \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\) denotes the overall strength of the NSI. We further define the vector and axial-vector NSI couplings
\[\begin{split}\varepsilon^{f}_{\alpha\beta}&=\varepsilon^{fL}_{\alpha\beta}+\varepsilon^{fR}_{\alpha\beta}=\varepsilon^{\eta,\varphi}_{\alpha\beta}\,\xi^{f}\,,\\ \tilde{\varepsilon}^{f}_{\alpha\beta}&=\varepsilon^{fL}_{\alpha\beta}-\varepsilon^{fR}_{\alpha\beta}=\tilde{\varepsilon}^{\eta,\varphi}_{\alpha\beta}\,\tilde{\xi}^{f}\,,\end{split} \tag{4}\]
with \(\xi^{f}=\xi^{fL}+\xi^{fR}\) and \(\tilde{\xi}^{f}=\xi^{fL}-\xi^{fR}\). As matter effects are only sensitive to the vector part of the interaction, we focus only on vector NSI in this work, setting \(\tilde{\varepsilon}_{\alpha\beta}\) to zero. Since we are ultimately testing neutrino interactions with matter, it is convenient to parametrise the NSI with quarks in terms of proton and neutron NSI,
\[\begin{split}&\varepsilon^{p}_{\alpha\beta}=2\,\varepsilon^{u}_{\alpha\beta}+\varepsilon^{d}_{\alpha\beta}\,,\\ &\varepsilon^{n}_{\alpha\beta}=\varepsilon^{u}_{\alpha\beta}+2\,\varepsilon^{d}_{\alpha\beta}\,.\end{split} \tag{5}\]
Extending the parametrisation of Ref. [36] by re-introducing the electron direction via a second angle \(\varphi\), the relative strengths of the electron, proton, and neutron NSI are written as5
\[\begin{split}&\xi^{e}=\sqrt{5}\,\cos\eta\,\sin\varphi\,,\\ &\xi^{p}=\sqrt{5}\,\cos\eta\,\cos\varphi\,,\\ &\xi^{n}=\sqrt{5}\,\sin\eta\,.\end{split} \tag{6}\]

Footnote 5: The normalisation factor of \(\sqrt{5}\) was originally introduced in Ref. [36] to have unit vectors \(\xi^{u}\) and \(\xi^{d}\) if the NSI are entirely in the up- and down-quark direction, respectively. We adhere to this normalisation for comparability of our results with the literature on NSI global fits.

In Fig. 1, we illustrate our parametrisation of the three base NSI directions \(\hat{\varepsilon}^{p}_{\alpha\beta},\hat{\varepsilon}^{n}_{\alpha\beta}\) and \(\hat{\varepsilon}^{e}_{\alpha\beta}\). We define the angle \(\eta\) as the angle of the general NSI coupling \(\mathbf{\varepsilon_{\alpha\beta}}=(\varepsilon^{p}_{\alpha\beta},\varepsilon^{n}_{\alpha\beta},\varepsilon^{e}_{\alpha\beta})\) with the plane of charged NSI (\(\hat{\varepsilon}^{p}_{\alpha\beta},\hat{\varepsilon}^{e}_{\alpha\beta}\)). The second angle \(\varphi\) is defined as the angle between the general NSI element \(\mathbf{\varepsilon_{\alpha\beta}}\) and the plane of hadronic NSI (\(\hat{\varepsilon}^{p}_{\alpha\beta},\hat{\varepsilon}^{n}_{\alpha\beta}\)). In order to match our notation with the literature on global NSI fits [36; 39], we allow for both _positive_ and _negative_ values of \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\). Thus, the azimuthal angle \(\eta\) only runs in the interval \([-\pi/2,\pi/2]\) to span the full two-dimensional plane of hadronic NSI (\(\varepsilon^{p}_{\alpha\beta},\varepsilon^{n}_{\alpha\beta}\)).
The second, polar angle \(\varphi\) (taken from the hadronic NSI plane) also runs in the interval \([-\pi/2,\pi/2]\) to cover the full sphere. For example, \(\eta=0\) and \(\varphi=0\) corresponds to NSI only in the proton direction \(\hat{\varepsilon}^{p}_{\alpha\beta}\), \(\eta=0\) and \(\varphi=\pi/2\) to NSI only in the electron direction \(\hat{\varepsilon}^{e}_{\alpha\beta}\), and \(\eta=\pi/2\) to NSI only in the neutron direction \(\hat{\varepsilon}^{n}_{\alpha\beta}\). Figure 1: Extended NSI parametrisation. A given NSI is defined by the radial component, \(\sqrt{5}\,\varepsilon^{\eta,\varphi}_{\alpha\beta}\) (which can be either positive or negative), the angle \(\eta\) between the NSI vector and the charged (\(\hat{\varepsilon}^{p}_{\alpha\beta},\hat{\varepsilon}^{e}_{\alpha\beta}\))-plane, and the new angle \(\varphi\), which rotates the charged component of the NSI between the proton and the electron directions. The domains of these angles are \(\eta\), \(\varphi\in[-\pi/2,\pi/2]\), as visualised by the blue and red semicircles, respectively. ### Three-flavour neutrino oscillations in the presence of NSI With this extended framework, we can describe the evolution of neutrino and antineutrino states during propagation in the Hamiltonian formalism by \[H^{\nu} =H_{\rm vac}+H_{\rm mat}\,, \tag{7}\] \[H^{\bar{\nu}} =(H_{\rm vac}-H_{\rm mat})^{*}\,,\] where the standard vacuum Hamiltonian is given by \[H_{\rm vac}=U_{\rm PMNS}\,\,\frac{1}{2E_{\nu}}\begin{pmatrix}0&0&0\\ 0&\Delta m_{21}^{2}&0\\ 0&0&\Delta m_{31}^{2}\end{pmatrix}U_{\rm PMNS}^{\dagger}\,, \tag{8}\] with \(\Delta m_{ij}^{2}\equiv m_{i}^{2}-m_{j}^{2}\) and \(U_{\rm PMNS}\) being the PMNS matrix, defined as \[U_{\rm PMNS}=\underbrace{\begin{pmatrix}1&0&0\\ 0&c_{23}&s_{23}\\ 0&-s_{23}&c_{23}\end{pmatrix}}_{\equiv\,R_{23}}\underbrace{\begin{pmatrix}c_{ 13}&0&s_{13}\\ 0&1&0\\ -s_{13}&0&c_{13}\end{pmatrix}}_{\equiv\,R_{13}}\underbrace{\begin{pmatrix}c_{ 12}&s_{12}\,e^{i\,\delta_{\rm CP}}&0\\ -s_{12}\,e^{-i\,\delta_{\rm CP}}&c_{12}&0\\ 0&0&1\end{pmatrix}}_{\equiv\,U_{12}}\,. \tag{9}\] Here, \(\delta_{\rm CP}\) is the CP-phase, and \(c_{ij}\) and \(s_{ij}\) refer to \(\cos\theta_{ij}\) and \(\sin\theta_{ij}\), respectively. The matter Hamiltonian, consisting of both the SM charged current and the NSI neutral current contributions, is given by \[H_{\rm mat}=\sqrt{2}G_{F}\,N_{e}(x)\,\begin{pmatrix}1+\mathcal{E}_{ee}(x)& \mathcal{E}_{e\mu}(x)&\mathcal{E}_{e\tau}(x)\\ \mathcal{E}_{e\mu}^{*}(x)&\mathcal{E}_{\mu\mu}(x)&\mathcal{E}_{\mu\tau}(x)\\ \mathcal{E}_{e\tau}^{*}(x)&\mathcal{E}_{\mu\tau}^{*}(x)&\mathcal{E}_{\tau\tau} (x)\end{pmatrix}, \tag{10}\] with \[\mathcal{E}_{\alpha\beta}=\sum_{f}\frac{N_{f}(x)}{N_{e}(x)}\, \varepsilon_{\alpha\beta}^{f}\,, \tag{11}\] where \(N_{f}(x)\) is the spatial fermion density in matter. With the definition of the nuclear NSI couplings in Eq. (5) and the fact that in neutral matter \(N_{p}(x)=N_{e}(x)\), we can express the dimensionless NSI matter Hamiltonian elements as \[\mathcal{E}_{\alpha\beta}=\varepsilon_{\alpha\beta}^{e}+\varepsilon_{\alpha \beta}^{p}+Y_{n}(x)\,\varepsilon_{\alpha\beta}^{n}=\left[\xi^{e}+\xi^{p}+Y_{ n}(x)\,\xi^{n}\right]\,\varepsilon_{\alpha\beta}^{\eta,\varphi}\,, \tag{12}\] where \(Y_{n}(x)=N_{n}(x)/N_{e}(x)\) denotes the fractional neutron density. In studying solar neutrino propagation effects, we take \(Y_{n}(x)\) from Ref. [113].
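To make the geometry of this parametrisation concrete, the coupling strengths of Eq. (6) and the matter elements of Eq. (12) can be evaluated in a few lines. The following Python sketch is illustrative only (the function names are ours); it also verifies the up-quark benchmark \(\eta=\tan^{-1}(1/2)\), \(\varphi=0\) used later in our CENNS-10 LAr analysis:

```python
import numpy as np

def xi_factors(eta, phi):
    """Relative NSI strengths (xi^e, xi^p, xi^n) of Eq. (6)."""
    return (np.sqrt(5) * np.cos(eta) * np.sin(phi),   # xi^e
            np.sqrt(5) * np.cos(eta) * np.cos(phi),   # xi^p
            np.sqrt(5) * np.sin(eta))                 # xi^n

def nsi_matter_elements(eps, eta, phi, Y_n):
    """Dimensionless matter elements of Eq. (12):
    E_ab = [xi^e + xi^p + Y_n(x) xi^n] * eps^{eta,phi}_ab."""
    xi_e, xi_p, xi_n = xi_factors(eta, phi)
    return (xi_e + xi_p + Y_n * xi_n) * np.asarray(eps)

# Benchmark: eta = arctan(1/2), phi = 0 is a pure up-quark NSI,
# i.e. (xi^e, xi^p, xi^n) = (0, 2, 1).
print(np.round(xi_factors(np.arctan(0.5), 0.0), 12))
```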
In the context of solar neutrino physics, it is convenient to switch from the conventional neutrino flavour basis to a new basis \(\hat{\mathbf{\nu}}=O^{\dagger}\mathbf{\nu}\), which we will refer to as the _solar neutrino flavour basis_, via the rotation \(O=R_{23}\,R_{13}\). In this basis, the full Hamiltonian reads, \[H^{\nu}=\frac{1}{2E_{\nu}}\begin{pmatrix}c_{13}^{2}\,A_{\rm cc}+s_{12}^{2}\, \Delta m_{21}^{2}&s_{12}\,c_{12}\,e^{i\delta_{\rm CP}}\,\Delta m_{21}^{2}&s_{ 13}\,c_{13}\,A_{\rm cc}\\ s_{12}\,c_{12}\,e^{-i\delta_{\rm CP}}\,\Delta m_{21}^{2}&c_{12}^{2}\,\Delta m_{21 }^{2}&0\\ s_{13}c_{13}\,A_{\rm cc}&0&s_{13}^{2}\,A_{\rm cc}+\Delta m_{31}^{2}\end{pmatrix}+ \sqrt{2}G_{F}\,N_{e}(x)\,\,O^{\dagger}\mathbf{\mathcal{E}}\,O\,, \tag{13}\] where we have defined the matter potential \(A_{\rm cc}=2\,E_{\nu}\,V_{\rm cc}=2\,E_{\nu}\sqrt{2}G_{F}N_{e}(x)\). From the structure of the Hamiltonian above, we see that if \(\Delta m_{31}^{2}\gg\Delta m_{21}^{2}\), \(A_{\rm cc}\), \(2E_{\nu}\,G_{F}\sum_{f}N_{f}(x)\varepsilon_{\alpha\beta}^{f}\), the Hamiltonian is dominated by the third eigenvalue, \(\Delta m_{31}^{2}\). In this case, it is effectively block-diagonal, turning our \(3\nu\) problem into a \(2\nu\) one. In this rotated basis, the third mass eigenstate decouples from the rest of the system and evolves adiabatically. Throughout its journey from the Sun to the Earth, this third eigenstate can be well-approximated by its vacuum mass eigenstate. Within this approximation, the Hamiltonian in Eq. (13) is transformed to an effective \(2\times 2\) picture, where we only have to track the evolution of the two lighter matter mass eigenstates. The first condition, \(\Delta m_{31}^{2}\gg\Delta m_{21}^{2}\), is satisfied by current best-fits to oscillation parameters [114]. The second is satisfied for solar neutrinos across the full range of solar neutrino energies, \(E_{\nu}\lesssim 20\,{\rm MeV}\). The third condition can be interpreted as one on the value of \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\), \[\varepsilon_{\alpha\beta}^{\eta,\varphi}\ll\frac{\sqrt{2}\,\Delta m_{31}^{2}} {A_{\rm cc}(x)\left[\xi^{e}+\xi^{p}+Y_{n}(x)\,\xi^{n}\right]}\,. \tag{14}\] Taking the maximum of all these quantities, which occurs at the solar core, and using \(E_{\nu}\sim 20\,{\rm MeV}\), we find that \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\lesssim 3\). We treat this as an upper bound on the value of \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\), and we do not interpret our results above this value throughout our analyses. Ultimately, for \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\sim 3\) at these higher neutrino energies, which are relevant for NRs in DD experiments, a full numerical simulation should be performed to more accurately model neutrino oscillations. For the purposes of our sensitivity study, however, our approach is sufficient. Following the conventions of Ref. 
[36] and setting \(\delta_{\rm CP}=0\) in this work, we can write the effective Hamiltonian as \(H^{\rm eff}\equiv H^{\rm eff}_{\rm vac}+H^{\rm eff}_{\rm mat}\), where \[H^{\rm eff}_{\rm vac}\equiv\frac{\Delta m_{21}^{2}}{4E_{\nu}} \begin{pmatrix}-\cos 2\theta_{12}&\sin 2\theta_{12}\\ \sin 2\theta_{12}&\cos 2\theta_{12}\end{pmatrix}\,, \tag{15}\] and \[H^{\rm eff}_{\rm mat}\equiv\sqrt{2}G_{F}N_{e}(x)\left[\begin{pmatrix}c_{13}^{2 }&0\\ 0&0\end{pmatrix}+\left[\xi^{e}+\xi^{p}+Y_{n}(x)\,\xi^{n}\right]\begin{pmatrix}- \varepsilon_{D}^{\eta,\varphi}&\varepsilon_{N}^{\eta,\varphi}\\ \varepsilon_{N}^{\eta,\varphi}&\varepsilon_{D}^{\eta,\varphi}\end{pmatrix} \right]\,. \tag{16}\] The coefficients \(\varepsilon_{N}^{\eta,\varphi}\) and \(\varepsilon_{D}^{\eta,\varphi}\) are related to our parametrisation by \[\begin{split}\varepsilon_{D}^{\eta,\varphi}\equiv& \,c_{13}\,s_{13}\left(s_{23}\,\varepsilon_{e\mu}^{\eta,\varphi}+c_{23}\, \varepsilon_{e\tau}^{\eta,\varphi}\right)-\left(1+s_{13}^{2}\right)c_{23}\,s_{ 23}\,\varepsilon_{\mu\tau}^{\eta,\varphi}\\ &-\,\frac{c_{13}^{2}}{2}\left(\varepsilon_{ee}^{\eta,\varphi}- \varepsilon_{\mu\mu}^{\eta,\varphi}\right)+\frac{s_{23}^{2}-s_{13}^{2}\,c_{23 }^{2}}{2}\left(\varepsilon_{\tau\tau}^{\eta,\varphi}-\varepsilon_{\mu\mu}^{ \eta,\varphi}\right)\,,\end{split} \tag{17}\] and \[\varepsilon_{N}^{\eta,\varphi}\equiv c_{13}\left(c_{23}\,\varepsilon_{e\mu}^{ \eta,\varphi}-s_{23}\,\varepsilon_{e\tau}^{\eta,\varphi}\right)+s_{13}\left[s _{23}^{2}\,\varepsilon_{\mu\tau}^{\eta,\varphi}-c_{23}^{2}\,\varepsilon_{\mu \tau}^{\eta,\varphi}+c_{23}\,s_{23}\left(\varepsilon_{\tau\tau}^{\eta,\varphi} -\varepsilon_{\mu\mu}^{\eta,\varphi}\right)\right]\,. \tag{18}\] Diagonalising \(H^{\rm eff}\) then allows us to find the matrix \(U_{12}^{m}\) such that \(U_{12}^{m\dagger}H^{\rm eff}U_{12}^{m}={\rm diag}(E_{1}^{m},E_{2}^{m})\). Typically, \(U_{12}^{m}\) is parametrised as \[U_{12}^{m}=\begin{pmatrix}\cos\theta_{12}^{m}&\sin\theta_{12}^{m}\\ -\sin\theta_{12}^{m}&\cos\theta_{12}^{m}\end{pmatrix}\,, \tag{19}\] for some matter mixing angle \(\theta_{12}^{m}\). We find that the eigenvalues of the effective matter Hamiltonian \(H^{\rm eff}\) are given by \[E_{1,\,2}^{m}=\frac{c_{13}^{2}\,A_{\rm cc}}{4E_{\nu}}\mp\frac{\Delta m_{21}^{2}}{4E_{\nu}}\,\sqrt{ p^{2}+q^{2}}\,, \tag{20}\] where we have defined the two quantities \[\begin{split} p&\equiv\,\sin 2\theta_{12}+2\,\varepsilon_{N}^{ \eta,\varphi}\left[\xi^{e}+\xi^{p}+Y_{n}(x)\,\xi^{n}\right]\,\frac{A_{\rm cc}}{ \Delta m_{21}^{2}}\,,\\ q&\equiv\,\cos 2\theta_{12}+\left(2\,\varepsilon_{D}^{ \eta,\varphi}\left[\xi^{e}+\xi^{p}+Y_{n}(x)\,\xi^{n}\right]-c_{13}^{2}\right) \,\frac{A_{\rm cc}}{\Delta m_{21}^{2}}\,.\end{split} \tag{21}\] Thus, the energy difference between the two energy eigenvalues in matter, responsible for the coherent mixing of the two matter mass eigenstates, is given by \[\Delta E_{21}^{m}\equiv E_{2}^{m}-E_{1}^{m}=\frac{\Delta m_{21}^{2}}{2E_{\nu}} \sqrt{p^{2}+q^{2}}\,. \tag{22}\] Moreover, we find that the matter mixing angle, \(\theta_{12}^{m}\), obeys the relations \[\begin{split}\sin 2\theta_{12}^{m}&=\frac{p}{ \sqrt{p^{2}+q^{2}}}\,,\\ \cos 2\theta_{12}^{m}&=\frac{q}{\sqrt{p^{2}+q^{2}}} \,,\\ \tan 2\theta_{12}^{m}&=\frac{p}{q}\,.\end{split} \tag{23}\] With these expressions, we are in a position to describe the neutrino evolution in the full \(3\times 3\) picture. Using the notation of Ref.
[95] and the fact that solar neutrinos are relativistic (such that \(t\simeq x\)), we can write the solution of the evolution equation in the solar neutrino flavour basis as \[\begin{pmatrix}\hat{\nu}_{e}\\ \hat{\nu}_{\mu}\\ \hat{\nu}_{\tau}\end{pmatrix}_{x=L}=\underbrace{\begin{pmatrix}\text{Evol}[H^{\rm eff }]&0\\ 0&\exp[-i\,\frac{\Delta m_{31}^{2}}{2\,E_{\nu}}L]\end{pmatrix}}_{\equiv\,\hat{S}} \begin{pmatrix}\hat{\nu}_{e}\\ \hat{\nu}_{\mu}\\ \hat{\nu}_{\tau}\end{pmatrix}_{x=0}\,. \tag{24}\] To obtain the evolution operator \(\text{Evol}[H^{\rm eff}]\), we can split up the distance of propagation within the Sun into \(N\) equidistant slabs of thickness \(\Delta x\) with approximately homogeneous matter density. We can then obtain it by formally taking the limit \[\text{Evol}[H^{\rm eff}]=\lim_{\Delta x\to 0}\,\,\prod_{n=0}^{N}U_{\rm PMNS}^{m} (x_{n})\,\,\exp\left[-i\left(U_{12}^{m\dagger}H^{\rm eff}U_{12}^{m}-i\,U_{12}^ {m\dagger}\,\dot{U}_{12}^{m}\right)\Delta x\right]\,\,U_{\rm PMNS}^{m}(x_{n})^ {\dagger}\,, \tag{25}\] where \(\dot{U}_{12}^{m}=(\mathrm{d}/\mathrm{d}x)\,U_{12}^{m}\) and \(x_{n}=x_{n-1}+\Delta x\). From this, we can write the corresponding relation in the conventional vacuum-flavour basis as \[\begin{pmatrix}\nu_{e}\\ \nu_{\mu}\\ \nu_{\tau}\end{pmatrix}_{x=L}=\underbrace{O\,\hat{S}\,O^{\dagger}}_{S}\begin{pmatrix} \nu_{e}\\ \nu_{\mu}\\ \nu_{\tau}\end{pmatrix}_{x=0}\,, \tag{26}\] with the rotation matrix \(O=R_{23}R_{13}\). In this notation, the full \(S\)-matrix is thus given by \[S=\underbrace{O\,U_{12}}_{U_{\rm PMNS}}\begin{pmatrix}\exp\left[-i\,\int_{0}^{ L}D(x)\,\mathrm{d}x\right]&0\\ 0&\exp[-i\,\Phi_{33}]\end{pmatrix}\,\underbrace{U_{12}^{m}(x_{0})^{\dagger}O^{ \dagger}}_{U_{\rm PMNS}^{m}(x_{0})^{\dagger}}\,, \tag{27}\] with \(\Phi_{33}=\Delta m_{31}^{2}L/(2E_{\nu})\), where we evolve the neutrinos from their production point within the Sun, \(x_{0}\), to their detection point at an experiment over the distance \(L\). Finally, the \(2\times 2\) time-evolution matrix is given by \[D(x)=\begin{pmatrix}E_{1}^{m}&-i\,\dot{\theta}_{12}^{m}\\ i\,\dot{\theta}_{12}^{m}&E_{2}^{m}\end{pmatrix}\,. \tag{28}\] To simplify our analysis, we make the assumption that the two light matter mass eigenstates of \(H^{\rm eff}\), \(|\nu_{1m}\rangle\) and \(|\nu_{2m}\rangle\), propagate adiabatically within the Sun. As such, the two eigenstates do not mix with one another as they travel to the surface of the Sun, remaining eigenstates of \(H^{\rm eff}\) throughout their evolution. This assumption is appropriate because the matter density within the Sun, described by \(N_{f}(x)\), varies slowly enough to allow the matter eigenstates to adapt to the medium as they propagate through it. The adiabatic approximation is valid if the adiabaticity parameter, \(\gamma\), satisfies \[\gamma\equiv\frac{|\Delta E_{21}^{m}|}{2|\dot{\theta}_{12}^{m}|}\gg 1\,, \tag{29}\] where \(\Delta E_{21}^{m}\) is given by Eq. (22). In the adiabatic approximation, the matrix \(D(x)\) is thus approximately diagonal, and after a common rephasing of the neutrino matter mass eigenstates, the upper \(2\times 2\) block in Eq. (27) can be expressed as \[\exp\left[-i\,\int_{0}^{L}D(x)\,\mathrm{d}x\right]\approx\begin{pmatrix}e^{i \,\phi}&0\\ 0&e^{-i\,\phi}\end{pmatrix}\,, \tag{30}\] with \(\phi=\int_{0}^{L}\Delta E_{21}^{m}(x)\,\mathrm{d}x\).
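Numerically, the adiabatic ingredients follow directly from Eqs. (21)-(23). As a rough Python sketch (the function name and argument grouping are ours; all dimensionful inputs are assumed to be supplied in consistent natural units):

```python
import numpy as np

def matter_mixing(A_cc, xi_tot, eps_N, eps_D, dm21_sq,
                  theta12, theta13, E_nu):
    """p and q of Eq. (21), the splitting of Eq. (22) and the
    matter angle of Eq. (23); xi_tot = xi^e + xi^p + Y_n(x) xi^n."""
    p = np.sin(2 * theta12) + 2 * eps_N * xi_tot * A_cc / dm21_sq
    q = (np.cos(2 * theta12)
         + (2 * eps_D * xi_tot - np.cos(theta13)**2) * A_cc / dm21_sq)
    r = np.hypot(p, q)
    dE21_m = dm21_sq / (2 * E_nu) * r    # Eq. (22)
    theta12_m = 0.5 * np.arctan2(p, q)   # Eq. (23)
    return dE21_m, theta12_m

# The adiabaticity check of Eq. (29) additionally needs the spatial
# derivative of theta12_m, e.g. via finite differences along the
# solar density profile.
```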
Since the neutrinos exiting the Sun will free-stream to the Earth, there is no further evolution effect to be taken into account for the Sun-Earth propagation. However, in principle, there is a further propagation effect when neutrinos pass through the Earth at night, which should be taken into account for a complete treatment. For high-energy \({}^{8}\)B neutrinos, for which \(E_{\nu}\sim 10\,\mathrm{MeV}\), this effect typically changes oscillation probabilities only at the percent level [115; 116; 117]. In particular, Super-Kamiokande has measured the day-night asymmetry to be about \(-3.3\%\) in \({}^{8}\)B neutrinos [118], while Borexino has found no asymmetry in \({}^{7}\)Be neutrinos [119]. Therefore, in this work we neglect Earth matter effects for simplicity. ### Solar neutrino density matrix From the expression of the \(S\)-matrix in Eq. (27), we can derive the expression for the full three-flavour density matrix for solar neutrinos reaching the Earth. With the projector onto the electron-neutrino flavour state, \(\pi^{(e)}=\mathrm{diag}(1,0,0)\), the density matrix reads, \[\rho^{(e)}=S\,\pi^{(e)}\,S^{\dagger}=\begin{pmatrix}|S_{11}|^{2}&S_{11}\,S_{2 1}^{*}&S_{11}\,S_{31}^{*}\\ S_{11}^{*}\,S_{21}&|S_{21}|^{2}&S_{21}\,S_{31}^{*}\\ S_{11}^{*}\,S_{31}&S_{21}^{*}\,S_{31}&|S_{31}|^{2}\end{pmatrix}\,. \tag{31}\] Since the density matrix is Hermitian, \(\rho_{\alpha\beta}=\rho_{\beta\alpha}^{*}\), the solar neutrino density matrix \(\rho^{(e)}\) is completely characterised by the three independent \(S\)-matrix components, \[S_{11} =e^{-i\,\Phi_{33}}\,s_{13}^{2}+c_{13}^{2}\,\left(e^{i\,\phi}\,c_{ 12}\,c_{m}+e^{-i\,\phi}\,s_{12}\,s_{m}\right)\,, \tag{32}\] \[S_{21} =c_{13}\left[s_{13}\,s_{23}\,(e^{i\,\Phi_{33}}-e^{i\,\phi}\,c_{ 12}\,c_{m}-e^{-i\,\phi}\,s_{12}\,s_{m})+e^{-i\,\phi}\,c_{23}\,(c_{12}\,s_{m}-e ^{2i\,\phi}\,s_{12}\,c_{m})\right]\,, \tag{33}\] \[S_{31} =c_{13}\left[s_{13}\,c_{23}\,(e^{i\,\Phi_{33}}-e^{i\,\phi}\,c_{ 12}\,c_{m}-e^{-i\,\phi}\,s_{12}\,s_{m})-e^{-i\,\phi}\,s_{23}\,(c_{12}\,s_{m}-e ^{2i\,\phi}\,s_{12}\,c_{m})\right]\,, \tag{34}\] where \(c_{m}\) and \(s_{m}\) refer to \(\cos\theta_{12}^{m}\) and \(\sin\theta_{12}^{m}\), respectively. Given that we do not know precisely where neutrinos are produced in the solar core, we must average over the neutrino production positions. This effectively removes terms dependent on \(\phi\) and \(\Phi_{33}\) from the density matrix.
The six independent density matrix elements then read, \[\rho_{ee} = s_{13}^{4}+c_{13}^{4}\,P_{\rm ee}^{2\nu}\,, \tag{35}\] \[\rho_{\mu\mu} = c_{13}^{2}\left[c_{23}^{2}\,\left(1-P_{\rm ee}^{2\nu}\right)+s_{ 13}^{2}\,s_{23}^{2}\,\left(1+P_{\rm ee}^{2\nu}\right)+\Delta\right], \tag{36}\] \[\rho_{\tau\tau} = c_{13}^{2}\left[s_{23}^{2}\,\left(1-P_{\rm ee}^{2\nu}\right)+s_{ 13}^{2}\,c_{23}^{2}\,\left(1+P_{\rm ee}^{2\nu}\right)-\Delta\right], \tag{37}\] \[\rho_{e\mu} = c_{13}\,s_{13}^{3}\,s_{23}-\frac{1}{2}\,c_{13}^{3}\left[2\,s_{13 }\,s_{23}\,P_{\rm ee}^{2\nu}+c_{23}\,\sin\left(2\theta_{12}\right)\,\cos\left( 2\theta_{12}^{m}\right)\right], \tag{38}\] \[\rho_{e\tau} = c_{13}\,s_{13}^{3}\,c_{23}-\frac{1}{2}\,c_{13}^{3}\left[2\,s_{13 }\,c_{23}\,P_{\rm ee}^{2\nu}-s_{23}\,\sin\left(2\theta_{12}\right)\,\cos\left( 2\theta_{12}^{m}\right)\right], \tag{39}\] \[\rho_{\mu\tau} = \frac{1}{2}\,c_{13}^{2}\left[\sin\left(2\theta_{23}\right)\left( \,\left(1+s_{13}^{2}\right)\,P_{\rm ee}^{2\nu}-c_{13}^{2}\right)+2\,\cot\left( 2\theta_{23}\right)\,\Delta\right]\,, \tag{40}\] where we have defined \[P_{\rm ee}^{2\nu} = \frac{1}{2}\left(1+\cos\left(2\theta_{12}\right)\,\cos\left(2 \theta_{12}^{m}\right)\right), \tag{41}\] \[\Delta = \frac{1}{2}\,\,\sin\left(\theta_{13}\right)\,\sin\left(2\theta_{1 2}\right)\,\sin\left(2\theta_{23}\right)\,\cos\left(2\theta_{12}^{m}\right)\,. \tag{42}\] Since neutrinos are produced within a finite volume of the Sun and the matter mixing angle, \(\theta_{12}^{m}(x)\), depends on position, there is some ambiguity in what to take for the value of \(\cos(2\theta_{12}^{m})\). We have taken its spatial average over the radius of the Sun as a representative value, given by \[\langle\cos 2\theta_{12}^{m}\rangle_{p}\equiv\int_{0}^{1}\cos 2\theta_{12}^{m}(x )\,f_{p}(x)\,{\rm d}x\,, \tag{43}\] where \(x\) is the fractional solar radius, \(p\) denotes a particular solar neutrino population, and \(f_{p}(x)\) is the spatial distribution function describing where in the Sun that population is produced. These populations are labelled according to the reaction that generated them, with \(p\in\{pp,\,^{8}{\rm B},\,\ldots\}\). The distributions \(f_{p}(x)\) are SSM-dependent; we have used the B16-GS98 predictions calculated by Ref. [120]. We have taken the values for each oscillation parameter from the latest NuFIT results [114]. ### Generalised neutrino cross sections Following our discussion of neutrino propagation in the presence of NSI and the relevant formalism needed to derive the neutrino density matrix, \(\mathbf{\rho}\), we move on to find expressions for the generalised scattering cross sections, \({\rm d}\mathbf{\zeta}/{\rm d}E_{R}\), for both neutrino-nucleus and neutrino-electron scattering. Considering the process of elastic scattering of a neutrino \(\nu\) off a target \(T\) with mass \(m_{T}\) via the matrix element \(\mathcal{M}\), the general expression for the cross section reads \[\frac{{\rm d}\sigma_{\nu T}}{{\rm d}t}=\frac{1}{16\,\pi}\,\frac{\mathcal{M}^{* }\mathcal{M}}{(s-m_{T}^{2})^{2}}\,.
\tag{44}\] From this, we can define the generalised cross section correlating the matrix elements of the flavour-specific scattering processes \(\nu_{\alpha}\,T\to f\,T\) and \(\nu_{\beta}\,T\to f\,T\) as \[\left(\frac{{\rm d}\zeta}{{\rm d}E_{R}}\right)_{\alpha\beta}=\left(\frac{{\rm d }\zeta}{{\rm d}t}\right)_{\alpha\beta}\frac{{\rm d}t}{{\rm d}E_{R}}=\frac{ \mathcal{M}^{*}(\nu_{\alpha}\to f)\,\mathcal{M}(\nu_{\beta}\to f)}{32\pi\,m_{T} \,E_{\nu}^{2}}\,, \tag{45}\] where we have made use of the relations \(t=-2\,m_{T}\,E_{R}\) and \(s=m_{T}^{2}+2\,m_{T}\,E_{\nu}\) for relativistic neutrino scattering. Note that the diagonal elements of the generalised cross section are the conventional scattering cross sections of a neutrino of flavour \(\alpha\) off the target material \(T\), \[\left(\frac{\mathrm{d}\zeta}{\mathrm{d}E_{R}}\right)_{\alpha\alpha}=\frac{ \mathrm{d}\sigma_{\nu_{\alpha}T}}{\mathrm{d}E_{R}}\,. \tag{46}\] With the general expression of Eq. (45), we can now derive the corresponding expressions for the generalised CE\(\nu\)NS and E\(\nu\)ES cross sections. #### ii.4.1 CE\(\nu\)NS cross section Following Ref. [121], we can derive the expression for the generalised coherent elastic neutrino-nucleus scattering cross section using the NSI formalism introduced in Section II.1. The cross section reads \[\left(\frac{\mathrm{d}\zeta_{\nu N}}{\mathrm{d}E_{R}}\right)_{\alpha\beta} =\frac{G_{F}^{2}\,M_{N}}{\pi}\left(1-\frac{M_{N}\,E_{R}}{2E_{\nu}^ {2}}\right)\ \sum_{\gamma}\,\langle gs\|\hat{G}^{\mathrm{SM}}\,\delta_{\alpha\gamma}+ \hat{G}^{\mathrm{NSI}}_{\alpha\gamma}\|gs\rangle\langle gs\|\hat{G}^{\mathrm{ SM}}\,\delta_{\gamma\beta}+\hat{G}^{\mathrm{NSI}\dagger}_{\gamma\beta}\| gs\rangle\,,\] \[=\frac{G_{F}^{2}\,M_{N}}{\pi}\left(1-\frac{M_{N}\,E_{R}}{2E_{\nu} ^{2}}\right)\ \left[\frac{1}{4}\,Q_{\nu N}^{2}\,\delta_{\alpha\beta}-Q_{\nu N }\,G^{\mathrm{NSI}}_{\alpha\beta}+\sum_{\gamma}G^{\mathrm{NSI}}_{\alpha\gamma }G^{\mathrm{NSI}}_{\gamma\beta}\right]\,F^{2}(E_{R})\,, \tag{47}\] where \(F(E_{R})\) is the Helm form factor [122; 123], and \(Q_{\nu N}=N-(1-4\,\sin^{2}\theta_{W})\,Z\) is the SM coherence factor. Furthermore, we have used the Hermiticity of the NSI nucleus coupling, defined by \[G^{\mathrm{NSI}}_{\alpha\beta} \equiv\left(2\,\varepsilon^{u}_{\alpha\beta}+\varepsilon^{d}_{ \alpha\beta}\right)Z+\left(\varepsilon^{u}_{\alpha\beta}+2\,\varepsilon^{d}_{ \alpha\beta}\right)N\,,\] \[=\left(\xi^{p}\,Z+\xi^{n}\,N\right)\,\varepsilon^{\eta,\varphi}_{ \alpha\beta}\,. \tag{48}\] As observed in Ref. [75], the BSM contribution can destructively interfere with the SM one. Thus, there are regions in the NSI parameter space where, despite having a non-zero NSI \(\varepsilon_{\alpha\beta}\), the cross section is the same as for the SM. In these _blind spots_, the presence of new physics cannot be distinguished from the SM. Since the CE\(\nu\)NS rate is determined by the trace of the density matrix times the generalised cross section, the cancellation conditions are non-trivial. We discuss them in greater detail in Section IV.2. #### ii.4.2 E\(\nu\)ES cross section Similarly, following Ref. [95] and by use of Eq.
(45), we can derive the expression for the generalised neutrino-electron scattering cross section in the presence of NSI, \[\left(\frac{\mathrm{d}\zeta_{\nu e}}{\mathrm{d}E_{R}}\right)_{\alpha\beta} =\,\frac{2\,G_{F}^{2}\,m_{e}}{\pi}\ \sum_{\gamma}\,\left\{G^{L}_{\alpha\gamma}G^{L}_{\gamma\beta}+G^{R}_{ \alpha\gamma}G^{R}_{\gamma\beta}\left(1-\frac{E_{R}}{E_{\nu}}\right)^{2}- \left(G^{L}_{\alpha\gamma}G^{R}_{\gamma\beta}+G^{R}_{\alpha\gamma}G^{L}_{ \gamma\beta}\right)\frac{m_{e}\,E_{R}}{2E_{\nu}^{2}}\right\}\,, \tag{49}\] where we have defined the generalised neutrino-electron couplings as \[G^{P}_{\alpha\beta}=g^{e}_{P\alpha}\delta_{\alpha\beta}+\varepsilon^{eP}_{ \alpha\beta}\,. \tag{50}\] The SM electroweak neutrino-electron couplings are given by \[g^{e}_{P\alpha}=\begin{cases}1+g^{e}_{L}\,,&\text{if $\alpha=e$ and $P=L$}\,,\\ g^{e}_{P}\,,&\text{otherwise}\,,\end{cases} \tag{51}\] with \(g^{f}_{P}=T_{f}^{3}-\sin^{2}\theta_{w}\,Q_{f}^{\mathrm{EM}}\). In order to express the generalised neutrino-electron couplings in terms of their vector and axial-vector components with the parametrisation of Section II.1, we introduce \[G^{V}_{\alpha\beta} =G^{L}_{\alpha\beta}+G^{R}_{\alpha\beta}\,, G^{A}_{\alpha\beta} =G^{L}_{\alpha\beta}-G^{R}_{\alpha\beta}\,, \tag{52}\] \[G^{L}_{\alpha\beta} =\frac{1}{2}(G^{V}_{\alpha\beta}+G^{A}_{\alpha\beta})\,, G^{R}_{\alpha\beta} =\frac{1}{2}(G^{V}_{\alpha\beta}-G^{A}_{\alpha\beta})\,. \tag{53}\] Effectively, this means that in Eq. (49) we can make the replacements \[G^{L}_{\alpha\beta} = (\delta_{e\alpha}+g^{e}_{L})\,\delta_{\alpha\beta}+\frac{1}{2}\left( \varepsilon^{\eta,\varphi}_{\alpha\beta}\,\xi^{e}+\tilde{\varepsilon}^{\eta, \varphi}_{\alpha\beta}\,\tilde{\xi}^{e}\right)\,, \tag{54}\] \[G^{R}_{\alpha\beta} = g^{e}_{R}\,\delta_{\alpha\beta}+\frac{1}{2}\left(\varepsilon^{ \eta,\varphi}_{\alpha\beta}\,\xi^{e}-\tilde{\varepsilon}^{\eta,\varphi}_{ \alpha\beta}\,\tilde{\xi}^{e}\right)\,, \tag{55}\] where \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\) denotes the vector component of the general NSI as before and \(\tilde{\varepsilon}^{\eta,\varphi}_{\alpha\beta}\) denotes the axial-vector component (which does not contribute to matter effects and CE\(\nu\)NS). Note that if the NSI is only due to a vector interaction, we have \(\varepsilon^{L}=\varepsilon^{R}\), such that the axial-vector component vanishes, \(\tilde{\varepsilon}^{\eta,\varphi}_{\alpha\beta}=0\). As stated before, we only focus on the vector interaction for electron scattering. We do this because the results from oscillation and coherent experiments will have no impact on \(\tilde{\varepsilon}^{\eta,\varphi}_{\alpha\beta}\). Furthermore, to accurately predict the signal from the axial-vector interaction, one would have to use a different ionisation form factor to that of the vector interaction. ## III Extending current constraints to the full NSI parameter space With our extended formalism in place, we are ready to explore how previous NSI results map onto the extended parameter space. Earlier constraints on NSI parameters derived from spallation source [32; 39; 43; 56; 124; 125] and neutrino oscillation [36; 39; 95; 126] experiments have assumed that the NSI contribution in the charged plane is entirely in either the proton (\(\varphi=0\)) or the electron (\(\varphi=\pm\pi/2\)) directions.
In the \(\varphi=0\) case, the CE\(\nu\)NS cross section is maximally modified with no change to the E\(\nu\)ES cross section, leading to the strongest constraints on \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\) from spallation source experiments and limits from oscillation experiments that arise only from non-standard propagation effects. In the \(\varphi=\pm\pi/2\) case, constraints from oscillation experiments arise from both propagation effects and a maximal change to the E\(\nu\)ES cross section. However, the evolution of these bounds with a variable charged NSI contribution has not yet been studied, and our parametrisation provides a convenient way to visualise this. Since we have no reason to believe that charged NSI would lie preferentially in any direction, a general treatment must be sought. In this section, we recompute the bounds from spallation source and oscillation experiments, allowing for \(\varphi\) to vary along its entire allowed range. In particular, we consider the CENNS-10 LAr [124] and Borexino [127] experiments as our spallation source and oscillation experiment candidates, respectively. To demonstrate the non-trivial evolution of previously computed constraints with variable \(\varphi\), we take inspiration from earlier analyses, showing how the same approaches can lead to very different results. ### The CENNS-10 LAr Experiment The CENNS-10 LAr experiment [124] has measured the coherent elastic scattering of neutrinos with nuclei using a liquid argon scintillator target. The neutrino flux has three components: a prompt flux of muon neutrinos generated by the decay of pions, and two delayed fluxes of muon antineutrinos and electron neutrinos, produced in the three-body decay of anti-muons. Their normalised spectra are given by \[\begin{split} f_{\nu_{\mu}}(E_{\nu})&=\delta\left(E_{\nu}-\frac{m_{\pi}^{2}-m_{\mu}^{2}}{2m_{\pi}} \right)\,,\\ f_{\bar{\nu}_{\mu}}(E_{\nu})&=\frac{64}{m_{\mu}}\left[\left(\frac{E_{\nu}}{m_{\mu}}\right)^{2} \left(\frac{3}{4}-\frac{E_{\nu}}{m_{\mu}}\right)\right]\,,\\ f_{\nu_{e}}(E_{\nu})&=\frac{192}{m_{\mu}}\left[\left(\frac{E_{\nu}}{m_{\mu}}\right)^{2} \left(\frac{1}{2}-\frac{E_{\nu}}{m_{\mu}}\right)\right]\,,\end{split} \tag{56}\] where, from kinematics, \(E_{\nu}\in[0,m_{\mu}/2]\). The expected neutrino flux is then given by scaling these spectra to account for the total beam luminosity and the distance of the liquid argon target from the source. This scaling is given by \(\eta\equiv r\,N_{\rm POT}/(4\pi L^{2})\) (not to be confused with the NSI angle \(\eta\)), where \(r\) is the number of neutrinos produced per proton collision, \(N_{\rm POT}\) is the number of protons on target, and \(L\) is the length of the experimental baseline. This gives us the total expected neutrino flux, \(\phi_{\alpha}(E_{\nu})\equiv\eta\,f_{\alpha}(E_{\nu})\), where \(\alpha\in\{\nu_{\mu},\bar{\nu}_{\mu},\nu_{e}\}\). For the CENNS-10 LAr experiment, \(r=0.08\), \(N_{\rm POT}=1.37\times 10^{23}\,{\rm yr}^{-1}\), and \(L=27.5\,{\rm m}\) [124]. From these fluxes, we can retrieve the expected CE\(\nu\)NS rate spectrum. Since the neutrino beam does not undergo significant decoherence over the experimental baseline, it can be treated as being composed of independent \(\nu_{\mu}\), \(\bar{\nu}_{\mu}\), and \(\nu_{e}\) parts. This means that the rate is given by the integral of the neutrino flux and the appropriately flavoured cross section, as it is usually written.
In our notation, this reads \[\frac{{\rm d}N_{\alpha}}{{\rm d}E_{R}}=\frac{M_{\rm det}}{m_{N}}\,\epsilon(E_ {R})\int_{E_{\nu}^{\rm min}}^{m_{\mu}/2}\phi_{\alpha}(E_{\nu})\left(\frac{{\rm d }\zeta}{{\rm d}E_{R}}\right)_{\alpha\alpha}\,{\rm d}E_{\nu}\,, \tag{57}\] where \(M_{\rm det}=24\,{\rm kg}\) is the mass of the detector, \(m_{N}\) is the mass of an \({}^{40}\)Ar nucleus (for which we assume 100% isotopic abundance), and \(\epsilon(E_{R})\) is the energy-dependent efficiency function, which we have taken from Analysis A of Ref. [124]. Since this function is given in units of electron-equivalent energy (\({\rm keV}_{\rm ee}\)), we convert our spectrum into \(E_{ee}\) energies before folding in the efficiency function using the energy-dependent quenching factor [124] \[Q_{F}(E_{R})=0.246+(7.8\times 10^{-4}\,{\rm keV}_{\rm nr}^{-1})\,E_{R}\,. \tag{58}\] Finally, the integral over neutrino energy runs from the minimum neutrino energy required to cause a recoil of energy \(E_{R}\), \(E_{\nu}^{\rm min}\approx\sqrt{m_{N}E_{R}/2}\). To compute the allowed regions for \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\), we perform a similar analysis to that of Ref. [43], with the key difference that we allow for the charged NSI contribution to lie anywhere within the charged plane. Using a \(\chi^{2}\) statistic, we compare the number of events measured by CENNS-10 LAr to the theoretical expectation given a particular choice for \(\varphi\) and \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\). We fix \(\eta=\tan^{-1}(1/2)\) in order to match the analysis of Ref. [43], equivalent to having neutrino NSI with the up-quark only when \(\varphi=0\). Our \(\chi^{2}\) statistic is given by \[\chi^{2}(\varepsilon_{\alpha\beta}^{\eta,\varphi},\varphi)=\min_{a}\left[\left( \frac{N_{\rm exp}-(1+a)\,N_{\rm CE\nu NS}(\varepsilon_{\alpha\beta}^{\eta, \varphi},\varphi)}{\sqrt{N_{\rm exp}\,+N_{\rm bkg}}}\right)^{2}+\left(\frac{a }{\sigma_{a}}\right)^{2}\right]\,, \tag{59}\] where \(N_{\rm exp}=159\) is the number of measured events and \(N_{\rm bkg}=563\) is the number of background events (primarily from the beam-related neutron rate) [124]. The nuisance parameter \(a\) acts as a pull parameter on the theoretical rate, allowing it to vary around its central value. This accounts for the systematic uncertainties in its calculation, and we take it to be \(\sigma_{a}=8.5\%\) [124]. The quadratic penalty term in Eq. (59) penalises deviations of size much greater than this. To compute the 90% CL allowed regions, we vary one NSI parameter at a time for a given angle \(\varphi\) and find those values of \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\) for which \(\Delta\chi^{2}(\varepsilon_{\alpha\beta}^{\eta,\varphi})\equiv\chi^{2}( \varepsilon_{\alpha\beta}^{\eta,\varphi},\varphi)-\chi_{\rm min}^{2}(\varphi) \leq 2.71\), where \(\chi_{\rm min}^{2}(\varphi)\) is the minimum \(\chi^{2}\) optimised over \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\). We repeat this analysis over the full range of \(\varphi\), drawing the 90% CL allowed regions in Fig. 2. We also show our \(\Delta\chi^{2}\) plot for the extremal cases of \(\varphi=0\) and \(\varphi=\pi/2\) in Fig. 6 of Appendix B. We see that, for \(\varepsilon_{ee}^{\eta,\varphi}\) and \(\varepsilon_{\mu\mu}^{\eta,\varphi}\), these regions allow for two solutions: one that is consistent with the SM (i.e. \(\varepsilon_{\alpha\beta}=0\)) and one that is not.
This first region is slightly displaced from \(\varepsilon_{\alpha\beta}^{\eta,\varphi}=0\) as CENNS-10 LAr observed a slight excess of events over the SM expectation. The second region is due to a cancellation between the interference and NSI-only terms in the cross section, which can be seen by inspecting Eq. (47) and is discussed in greater detail in the context of DD experiments in Section IV.2. While this second minimum occurs for all \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\), the effect is most pronounced for \(\varepsilon_{ee}^{\eta,\varphi}\) and \(\varepsilon_{\mu\mu}^{\eta,\varphi}\), as can be seen from Fig. 6. Importantly, no bounds can be placed on \(\varepsilon_{\tau\tau}^{\eta,\varphi}\) since the CENNS-10 LAr neutrino beam has a negligible \(\nu_{\tau}\) component. Typically, the intervals that would be quoted correspond to the allowed values at \(\varphi=0\). This reflects the assumption that the charged NSI lies purely in the proton direction. However, we see in Fig. 2 that these bounds generally worsen for increasing values of \(|\varphi|\). While this trend is partially driven by our parametrisation (whereby the strength of \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\) required for a constant contribution should scale as \(1/\cos\varphi\) in any one of the proton, neutron, or electron directions), the bounds do not vary via this same scaling. This is particularly evident from the limits drawn for the second minima in the cases of \(\varepsilon_{ee}^{\eta,\varphi}\) and \(\varepsilon_{\mu\mu}^{\eta,\varphi}\), both of which deteriorate more rapidly than the first minima bounds. Moreover, the constraints on the NSI contribution from the neutron, which is inherently independent of \(\varphi\) in our formalism, would worsen for increasing \(|\varphi|\) (at fixed \(\eta\)), reflecting the requirement for a stronger NSI with the neutron to account for the diminishing contribution from the proton. Figure 2: The 90% CL allowed regions for each NSI parameter over \(\varphi\) from the CENNS-10 LAr results [124]. The bounds usually quoted correspond to the NSI parameter values at \(\varphi=0\). We have fixed \(\eta=\arctan(1/2)\), corresponding to a pure up-quark NSI when \(\varphi=0\). ### The Borexino Experiment The Borexino experiment, located at the Laboratori Nazionali del Gran Sasso, observes solar neutrinos through their elastic scattering with electrons in its multi-ton scintillator target [127]. The differential scattering rate per target electron is given most generally by Eq. (1). In the case of Borexino, we consider the flux of solar neutrinos, which has contributions from different populations of electron neutrinos depending on where in the \(pp\) chain or CNO cycle they are produced. We take the spectrum for each population, \(\mathrm{d}\phi_{\nu_{e}}^{p}/\mathrm{d}E_{\nu}\), from the predictions of the B16-GS98 SSM [120], where \(p\in\{pp,\,^{8}{\rm B},\,\ldots\}\). In the case of electron recoils, the minimum neutrino energy necessary to cause a recoil of energy \(E_{R}\) is given by \[E_{\nu}^{\mathrm{min}}=\frac{1}{2}\left(E_{R}+\sqrt{E_{R}^{2}+2m_{e}E_{R}} \right)\,.
\tag{60}\] Ignoring experimental effects, such as energy resolution and efficiency functions, the scattering rate due to a particular neutrino population, \(p\), is given by \[R_{\mathrm{Borexino}}^{p}=\int_{0}^{E_{R}^{p,\mathrm{max}}}\frac{\mathrm{d}R^ {p}}{\mathrm{d}E_{R}}\ \mathrm{d}E_{R}\,, \tag{61}\] where \(E_{R}^{p,\mathrm{max}}\) is the maximum possible recoil energy for the population \(p\) and \(\mathrm{d}R^{p}/\mathrm{d}E_{R}\) is the differential rate calculated from Eq. (1). We take the number of target electrons in the scintillator to be \(3.307\times 10^{31}/(100\,\mathrm{ton})\) [128]. We wish to explore the evolution of previous oscillation bounds in the full plane of charged NSI, i.e. with variable angle \(\varphi\) at a fixed angle \(\eta\). To this end, we perform a similar analysis to that of Ref. [126]. Namely, we consider how Borexino's Phase-II measurements of the \(pp\), \({}^{7}\)Be, and \(pep\) solar neutrino rates [128] can be used to constrain neutrino NSI with our more general formalism. In conducting our analysis, we determine the bounds on the off-diagonal matrix elements \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\) (\(\alpha\neq\beta\)), which were not computed in Ref. [126]. This is only possible through the correct treatment of the differential rate in Eq. (1) using the density matrix formalism. We note that a direct comparison between our results and those of Ref. [126] is particularly difficult due to our different treatments of the NSI Lagrangian. The results from Borexino's Phase-II run, along with the results of our calculations for the theoretical rate for each respective neutrino population, are shown in Table 1. As was done in Ref. [126], we assume that the fractional uncertainties in the theoretical rates for each solar neutrino population are the same in our calculation as those reported by Borexino. Our results are in good agreement with the measured rate and the rate predicted by the collaboration [128]. To perform our statistical analysis, we construct a \(\chi^{2}\) function similar to that of Section III.1: \[\chi^{2}(\varepsilon_{\alpha\beta}^{\eta,\varphi},\varphi)\equiv\min_{ \boldsymbol{a}}\left[\sum_{p}\left(\frac{R_{\mathrm{Borexino}}^{p}-(1+a^{p}) \,R_{\mathrm{Theo}}^{p}(\varepsilon_{\alpha\beta}^{\eta,\varphi},\varphi)}{ \sigma_{\mathrm{stat}}^{p}}\right)^{2}+\left(\frac{a^{p}}{\sigma_{a}^{p}} \right)^{2}\right]\,, \tag{62}\] where the sum is taken over each considered solar neutrino population, \(p\in\{pp,\,^{7}{\rm Be},\,pep\}\). The rates \(R_{\mathrm{Borexino}}^{p}\) are the measured rates from the Phase-II run, with statistical uncertainties \(\sigma_{\mathrm{stat}}^{p}\), while \(R_{\mathrm{Theo}}^{p}\) are our calculated rates given a choice of \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\) and \(\varphi\). We have also introduced a pull parameter \(a^{p}\) for each rate. To compute our \(\chi^{2}\), we profile over the nuisance parameters \(\boldsymbol{a}\equiv(a^{pp},\,a^{\,{}^{7}{\rm Be}},\,a^{pep})^{\mathrm{T}}\), whose standard deviations are given in the last column of Table 1. The 90% CL regions are computed via the same prescription as in Section III.1. Figure 3: The 90% CL allowed regions for each NSI parameter along the angle \(\varphi\) from the Phase-II run of the Borexino experiment. The bounds usually quoted correspond to the NSI parameter values at \(\varphi=0\). We have fixed \(\eta=0\), corresponding to a pure electron NSI when \(\varphi=\pm\pi/2\).
The dark grey region shows where we conservatively assumed the adiabatic limit to break down (\(\gamma<100\)). We show the 90% CL allowed regions in Fig. 3 and the corresponding \(\Delta\chi^{2}\) values for the extremal cases of \(\varphi=0\) and \(\varphi=\pi/2\) in Fig. 7 in Appendix B. The shapes of these regions can be understood by expanding the trace of Eq. (1) using the E\(\nu\)ES cross section of Eq. (49) when only one \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\) is turned on at a time. The resulting formula for the rate then contains three types of terms: a propagation-only term, which contains NSI effects only at the level of neutrino propagation; a term linear in \(\xi^{e}\varepsilon^{\eta,\varphi}_{\alpha\beta}\), which can be understood as an interference term between the SM and NSI; and a positive-definite term quadratic in \(\xi^{e}\varepsilon^{\eta,\varphi}_{\alpha\beta}\), which encodes the pure NSI effect in the cross section. These terms can be explicitly seen in Eqs. (71) and (72) in the context of our DD analysis. At \(\varphi=0\), the NSI effect is purely due to a change in the matter potential experienced by neutrinos on their way out of the Sun, altering neutrino propagation as per the description of Section II.2. This leads to constraints that are only due to propagation effects, with the neutrino-electron cross section unchanged. Around \(\varphi=0\) and for \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\lesssim 1\), NSI effects remain dominated by propagation-only effects, but non-standard cross section terms linear in \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\xi^{e}\) begin to contribute. While the impact on the expected rate due to propagation-only effects in this regime is approximately symmetric under the exchange \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\to-\varepsilon^{\eta,\varphi}_{\alpha \beta}\), the effect due to the term linear in \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\xi^{e}\) is approximately symmetric under the combined exchange \(\{\varphi,\,\varepsilon^{\eta,\varphi}_{\alpha\beta}\}\to\{-\varphi,\,- \varepsilon^{\eta,\varphi}_{\alpha\beta}\}\). This means that, depending on the sign of \(\varphi\), cross section and propagation-only effects will either positively or negatively interfere with one another. For larger values of \(|\varphi|\), NSI effects are predominantly due to changes in the scattering cross section, which are dominated by the term quadratic in \(\xi^{e}\varepsilon^{\eta,\varphi}_{\alpha\beta}\) in the E\(\nu\)ES cross section for large values of \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\). These effects are perhaps best evidenced by the lower-right panel of Fig. 3. For \(\varphi=0\), propagation effects alone lead to rates that are reconcilable with the data, allowing us to constrain \(\varepsilon^{\eta,\varphi}_{\mu\tau}\) without alterations to the E\(\nu\)ES cross section. For small negative values of \(\varphi\), both cross section and propagation effects suppress the expected neutrino rate, leading to a large overall predicted deficit and a more constrained allowed region. Beyond this, terms quadratic in \(\xi^{e}\varepsilon^{\eta,\varphi}_{\mu\tau}\) begin to dominate, reducing this deficit to momentarily retrieve the SM expectation, but ultimately leading to a large predicted excess. This results in the valley at negative \(\varphi\) values. On the other hand, for small positive values of \(\varphi\), these two effects destructively interfere with one another, resulting in a larger allowed region. 
For negative values of \(\varepsilon^{\eta,\varphi}_{\mu\tau}\), the oscillation-only and quadratic terms reinforce one another, such that this enlarged region quickly shrinks for large \(\varepsilon^{\eta,\varphi}_{\mu\tau}\) and \(\varphi\). For positive \(\varepsilon^{\eta,\varphi}_{\mu\tau}\), these two terms instead cancel one another out. Additionally, Fig. 3 contains a grey region for \(\varepsilon^{\eta,\varphi}_{\mu\mu}\) where the adiabatic approximation used to model matter effects in the Sun may be inappropriate [129, 130]. Within this region, the adiabaticity parameter, defined in Eq. (29), takes values \(\gamma<100\), where we have calculated \(\gamma\) at \(E_{\nu}=1\,\text{MeV}\), approximately corresponding to the highest energy reached by \({}^{7}\)Be neutrinos. We conservatively interpret these values to be in violation of the adiabaticity condition, \(\gamma\gg 1\), such that a full numerical calculation of the density matrix elements would be required for an accurate analysis. This would be beyond the scope of our work, and since the allowed NSI regions in Fig. 3 are almost entirely within the adiabatic regime, we do not believe a numerical treatment is necessary. We have checked that we fulfil the adiabatic criterion for all other \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\). \begin{table} \begin{tabular}{c c c c} \hline \hline **Population** & **Phase-II Rate** \(\left[\left(100\,\text{ton day}\right)^{-1}\right]\) & **Theoretical Rate** \(\left[\left(100\,\text{ton day}\right)^{-1}\right]\) & **Fractional Uncertainty** \\ \hline \(pp\) & \(134\pm 10\) & \(133\) & \(1.1\%\) \\ \({}^{7}\)Be & \(48.3\pm 1.1\) & \(48.5\) & \(5.8\%\) \\ \(pep\) & \(2.43\pm 0.36\) & \(2.78\) & \(1.5\%\) \\ \hline \hline \end{tabular} \end{table} Table 1: Solar neutrino rates relevant for our Borexino analysis. Shown are the measured rates from the Phase-II run of Borexino [128], our calculated theoretical rates, and the assumed fractional uncertainties in our calculation. Our analysis also shows that constraints are strongest for off-diagonal NSI. This is because the trace in Eq. (1) leads to two terms (which are equal in our case as \(\delta_{\text{CP}}\) is set to 0) contributing to the total NSI rate in the off-diagonal case, as opposed to a single contribution arising from diagonal NSI. Thus, the allowed regions for off-diagonal NSI are generally tighter than those for diagonal NSI. The exception to this is in the bounds for \(\varepsilon_{ee}^{\eta,\varphi}\), which are highly constrained due to the enhanced E\(\nu\)ES cross section arising from the additional CC contribution via the \(W\)-boson exchange. This enhanced cross section not only leads to much tighter bounds for \(\varepsilon_{ee}^{\eta,\varphi}\) but also allows for a finely tuned second minimum at non-zero \(\varepsilon_{ee}^{\eta,\varphi}\). With a non-zero \(\varepsilon_{ee}^{\eta,\varphi}\), the differential rate spectrum is modified such that the total rate, given by integrating over all recoil energies, coincidentally retrieves the SM expectation. The spectrum itself, however, is significantly modified, and incorporating spectral information into our analysis would ultimately prohibit this second solution. Our results should not be taken as a dedicated Borexino analysis. Though we have attempted to capture the variation in the calculated solar neutrino rates by introducing pull parameters, as was done in Ref. [126], this only accounts for the theoretical uncertainty in these rates.
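For concreteness, the profiling over the pull parameters in Eq. (62) amounts to an independent one-dimensional minimisation per population. The following is a minimal Python sketch using the numbers of Table 1, with the mapping from \((\varepsilon^{\eta,\varphi}_{\alpha\beta},\varphi)\) to the theoretical rates left abstract; the dictionary layout and function names are ours:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# (measured rate, statistical error, fractional theory uncertainty)
# per (100 ton day), taken from Table 1
DATA = {"pp": (134.0, 10.0, 0.011),
        "7Be": (48.3, 1.1, 0.058),
        "pep": (2.43, 0.36, 0.015)}

def chi2(rates_theo):
    """Eq. (62): profile the pull a^p for each population p."""
    total = 0.0
    for pop, (r_obs, sig_stat, sig_a) in DATA.items():
        r_th = rates_theo[pop]  # depends on eps^{eta,phi} and phi
        per_pop = lambda a: (((r_obs - (1 + a) * r_th) / sig_stat)**2
                             + (a / sig_a)**2)
        total += minimize_scalar(per_pop).fun
    return total

# SM check: with the theoretical rates of Table 1 the fit is good.
print(chi2({"pp": 133.0, "7Be": 48.5, "pep": 2.78}))
```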
Ultimately, a more sophisticated analysis would require a spectral fit of Borexino's data and allow for multiple NSI parameters to vary at a time. Such a fit should then allow for the various background components inherent in the data to float and permit correlations between all fit parameters. Such an analysis was recently done in the context of neutrino NSI by Ref. [95] assuming charged contributions only from the electron (\(\varphi=\pi/2\)). Our results should instead be taken as a demonstration of how bounds on NSI can vary dramatically depending on what one takes as the underlying charged NSI contribution. ## IV Direct detection experiments Finally, we turn to the main motivation of this paper: determining the potential of DD experiments within the NSI landscape using the extended framework introduced in Section II. DD experiments will provide a unique probe of neutrino interactions because they have access to both CE\(\nu\)NS and E\(\nu\)ES. This is due to the properties of the solar neutrino flux. To produce a recoil in the energy range detectable in DD experiments (\(\sim 10\) eV\(-100\) keV), E\(\nu\)ES and CE\(\nu\)NS must probe different solar neutrino populations. In particular, the main contribution to NRs comes from \({}^{8}\)B neutrinos, whereas for ERs it is \(pp\) and \({}^{7}\)Be that contribute the most. Thus, although the scattering cross section for CE\(\nu\)NS is significantly larger, the much larger \(pp\) neutrino flux compensates for the smaller E\(\nu\)ES cross section. As we saw in Section III.2, experiments such as Borexino have optimal sensitivity when NSI occur purely with the electron, but they rapidly lose their constraining power as \(\varphi\to 0\). On the other hand, CE\(\nu\)NS experiments such as CENNS-10 LAr, which we explored in Section III.1, have excellent sensitivity when the charged NSI contribution is wholly in the proton direction, but they lose this sensitivity as this contribution turns to the electron. Moreover, having no \(\nu_{\tau}\) component, they are completely insensitive to \(\varepsilon_{\tau\tau}^{\eta,\varphi}\). Not only can DD experiments probe \(\varepsilon_{\tau\tau}^{\eta,\varphi}\), but their ability to measure and discriminate between NRs and ERs means that they retain their constraining power across the full range of \(\varphi\). When applied to specific BSM models, this can be crucial to identify the underlying nature of the new physics scenarios (see e.g., Ref. [75]). In this work, we focus on the xenon-based DD experiments LZ [101; 102; 103], XENON [81; 97; 98], and DARWIN [96]. More specifically, we derive exclusions from the data reported by the recent LZ WIMP search [131] and the XENONnT electron-recoil excess search [80]. We also determine the expected sensitivities of LZ, XENONnT and DARWIN, based on their projections for their final experimental configurations. Similar results can be obtained with PandaX [79; 99; 100]. ### Expected Number of Events and Statistical Procedure To calculate our sensitivities, we consider the differential rate of Eq. (1) and incorporate detector effects, such as efficiency and energy resolution. The expected recoil rate from neutrino scattering is then given by \[\frac{\mathrm{d}R}{\mathrm{d}E_{R}}=\int_{0}^{\infty}\frac{\mathrm{d}R}{ \mathrm{d}E_{R}^{\prime}}\,\epsilon(E_{R}^{\prime})\,\frac{1}{\sigma(E_{R}^{ \prime})\sqrt{2\pi}}\,e^{-\frac{(E_{R}-E_{R}^{\prime})^{2}}{2\sigma^{2}(E_{R} ^{\prime})}}\,\mathrm{d}E_{R}^{\prime}\,.
\tag{63}\] Here, \({\rm d}R/{\rm d}E^{\prime}_{R}\) is given by Eq. (1). When computing this rate, we use the solar neutrino fluxes predicted by the B16-GS98 model [120], as we did for our Borexino analysis in Section III.2. The integral over the expected energy, \(E^{\prime}_{R}\), is the convolution that describes the effect that the detector resolution, \(\sigma\), has on the observed signal; we assume this to be equivalent to a Gaussian smearing. This resolution is typically reported in terms of the measured, electron-equivalent energy, so we first convert the CE\(\nu\)NS differential rate and NR efficiency functions into electron-equivalent energies when considering NRs. We do this by applying an energy-dependent quenching factor, which relates the two energy scales via \(E_{\rm ee}=Q(E_{\rm nr})\,E_{\rm nr}\). We take this to be the Lindhard factor [132] with \(k=0.1735\), reflecting the \(k\)-value found in the fit performed by the LUX collaboration [133]. Finally, \(\epsilon\) is the energy-dependent efficiency function. The differential rate depends on the number of targets per unit mass of the detector, \(n_{T}=N_{T}/m_{\rm det}\). For NRs, we take this to be the number density of atoms in the detector, \(n_{T}=1/m_{N}\), where \(m_{N}\) is the nuclear mass of the relevant xenon isotope. For ERs, this corresponds to the number of ionisable electrons given a recoil of energy \(E_{R}\), scaled to agree with the _ab initio_ calculations from the relativistic random-phase approximation for xenon [134]. This takes into account the many-body dynamics involved in such collisions, and these calculations show a consistent suppression of the rate at low recoil energies [134; 135]. To compute the number of expected events within the \(i^{\rm th}\) bin, we integrate the differential rate in the energy window defined by the edges of the bin, \([E^{i}_{1},\,E^{i}_{2}]\), and sum the contributions from each nuclear isotope \(A\) multiplied by its corresponding relative isotopic abundance, \(X_{A}\): \[N^{i}_{\nu}=\varepsilon\sum_{A}X_{A}\int_{E^{i}_{1}}^{E^{i}_{2}}\frac{{\rm d }R_{A}}{{\rm d}E_{R}}\,{\rm d}E_{R}\,. \tag{64}\] Here, \(\varepsilon\) is the experimental exposure and \({\rm d}R_{A}/{\rm d}E_{R}\) is the differential rate in Eq. (63) due to isotope \(A\). We determine our sensitivities using a series of log-likelihood-ratio tests in which we vary only one NSI parameter at a time, fixing all others to zero6. We construct our likelihoods from a Poisson part and a Gaussian part, which we use to capture the effect of uncertainties on nuisance parameters. For this latter part, we consider Gaussian distributed pull parameters serving to scale the number of expected neutrino events, as we did for CENNS-10 LAr and Borexino, and the number of expected background events. We label these parameters \(a\) and \(b\), with standard deviations \(\sigma_{a}\) and \(\sigma_{b}\), respectively. Given some number of observed events in bin \(i\), \(N^{i}_{\rm obs}\), we define the likelihood function Footnote 6: A global analysis in which all parameters are allowed to vary is beyond the scope of this article, where our aim is to motivate next-generation and far-future DD experiments to be included in future such analyses.
\[\mathcal{L}(\varepsilon^{\eta,\varphi}_{\alpha\beta},\,\eta,\, \varphi,\,a,\,b) \equiv\prod_{i}^{N_{\rm bins}}{\rm Po}\left[N^{i}_{\rm obs}\,|\,(1+ a)N^{i}_{\nu}(\varepsilon^{\eta,\varphi}_{\alpha\beta},\,\eta,\,\varphi)+(1+b)N^{i}_{ \rm bkg}\right] \tag{65}\] \[\qquad\qquad\times{\rm Gauss}\,(a|\,0,\,\sigma_{a}\,)\,\,{\rm Gauss }\,(b|\,0,\sigma_{b}\,)\,\] where \(N^{i}_{\rm bkg}\) is the number of expected background events in the \(i^{\rm th}\) bin. The product is over \(N_{\rm bins}\) bins. The number of observed events for each of our analyses depends on whether we compute exclusions based on data or derive projected limits. If the former, we take the number of observed neutrino events reported in each bin. If the latter, we assume an Asimov data set [136], such that \(N^{i}_{\rm obs}\) is set to the number of expected SM neutrino events (\(\varepsilon^{\eta,\varphi}_{\alpha\beta}=0\)) in the \(i^{\rm th}\) bin. Finally, to derive our limits, we use Eq. (65) to define the test statistic \[q_{\varepsilon}\equiv-2\ln\left[\frac{\mathcal{L}(\varepsilon^{\eta,\varphi}_{ \alpha\beta};\,\eta=\eta_{0},\,\varphi=\varphi_{0},\,\hat{a},\,\hat{b})}{ \mathcal{L}(\hat{\varepsilon}^{\eta,\varphi}_{\alpha\beta};\,\eta=\eta_{0},\, \varphi=\varphi_{0},\,\hat{\hat{a}},\,\hat{\hat{b}})}\right]\,, \tag{66}\] where hatted variables indicate quantities that maximise the likelihood given the parameter \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\) and double-hatted variables indicate those quantities that maximise the overall, unconstrained likelihood. By fixing the angles to take some values \(\eta=\eta_{0}\) and \(\varphi=\varphi_{0}\), we constrain only the parameter of interest, \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\), in each of our analyses. In practice, we perform our analysis by only considering the most dominant nuisance parameter (either \(a\) or \(b\)), which depends on the experiment and is discussed in detail below. The 90% CL limits are then calculated by finding the value for \(q_{\varepsilon}\), \(q_{\varepsilon}^{\text{lim}}\), for which \[\int_{q_{\varepsilon}^{\text{lim}}}^{\infty}f(q_{\varepsilon})\,\mathrm{d}q_{ \varepsilon}=0.10\,, \tag{67}\] where \(f(q_{\varepsilon})\) is the distribution of the test statistic. In the limit of high statistics, and provided that the true parameter value does not lie on the boundary of our parameter space, Wilks' theorem tells us that this distribution asymptotically follows a \(\chi^{2}\)-distribution with number of degrees of freedom \(k=1\)7. This leads to \(q_{\varepsilon}^{\text{lim}}=2.71\), and our limits then follow from finding that \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\) which yields this value for the test statistic. Footnote 7: We have checked that this holds true in all of our analyses. To implement the analysis in the sections that follow, we make use of SNuDD[137]8 (Solar NeUtrinos for Direct Detection), a novel code-base that we have developed. SNuDD is a Python package that calculates the generalised cross section of Section II.4 and the density matrix elements of Section II.3, combining them to compute the trace of Eq. (1) and arrive at a prediction for the expected solar neutrino rate at a DD experiment while folding in detector efficiency and resolution effects. We will release it in a separate publication. We hope that SNuDD facilitates future DD analyses in the NSI landscape. Footnote 8: [https://github.com/dwpamaral/SNuDD.git](https://github.com/dwpamaral/SNuDD.git)
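Schematically, the limit-setting procedure of Eqs. (65)-(67) can be condensed as follows. This is a minimal single-pull Python sketch and not the SNuDD API: the callable N_nu, which maps \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\) to the expected neutrino counts per bin at fixed \(\eta_{0}\) and \(\varphi_{0}\), is an assumed input, and a continuous extension of the Poisson term keeps Asimov data sets well-defined:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def log_pois(k, mu):
    # Continuous extension of the Poisson log-pmf, so that
    # non-integer (Asimov) "observed" counts are well-defined.
    return k * np.log(mu) - mu - gammaln(k + 1.0)

def neg2logL(eps, a, N_obs, N_nu, N_bkg, sigma_a):
    """-2 ln L of Eq. (65), with the background pull b set to zero."""
    mu = (1.0 + a) * N_nu(eps) + N_bkg
    return -2.0 * (np.sum(log_pois(N_obs, mu)) - 0.5 * (a / sigma_a)**2)

def q_eps(eps, eps_hat, N_obs, N_nu, N_bkg, sigma_a):
    """Test statistic of Eq. (66); eps_hat is the global best fit
    (in practice found by minimising prof over eps)."""
    prof = lambda e: minimize_scalar(
        lambda a: neg2logL(e, a, N_obs, N_nu, N_bkg, sigma_a),
        bounds=(-0.5, 0.5), method="bounded").fun
    return prof(eps) - prof(eps_hat)

# 90% CL limit: scan eps until q_eps crosses 2.71, the Wilks
# threshold for a chi-squared distribution with k = 1.
```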
### Sensitivities in the nucleon NSI plane From the many existing DD NR constraints, we consider only the recent, leading LZ result [131] to derive a bound in the NSI landscape. We take the efficiency function given in Ref. [131] and we model the energy resolution according to Ref. [138]. For our signal region, we use the 90% quantile of the nuclear recoil band in the S1\(c\) (scintillation) and S2\(c\) (ionisation) event reconstruction space as shown in Fig. 4 of Ref. [131]. For our analysis, we focus only on the low-energy region (\([5,\,15]\,\mathrm{keV}\)), which is sensitive to solar neutrinos, and we integrate over it to constitute one bin. We take the number of expected background and observed events to be 1 and 0, respectively (see Ref. [139]). We have validated our procedure by reproducing the WIMP-nucleon cross-section limit reported by the collaboration (Fig. 5 of Ref. [131]), finding good agreement at both low and high DM masses. To assess the future prospects of detecting an NR signal, we plot the projected sensitivities for LZ, XENONnT, and DARWIN with the full exposures of 15.34, 20, and 200 ton yr, respectively. We have taken a background-free scenario in the NR search, which is consistent with experimental aims (for our analysis, the neutrino signal is not considered a background). With no backgrounds, we then conservatively perform a one-bin analysis, using the total number of expected solar neutrino NR events as our observation. Additionally, since the expected background is so low, the nuisance parameter that will have the largest impact on our sensitivities will be associated with the solar neutrino flux, and hence we set \(b=0\) in this case. For our remaining pull parameter in Eq. (65), we assume an uncertainty of \(\sigma_{a}=12\%\), reflecting the 12% uncertainty in the theoretical value of the total \({}^{8}\)B flux in the B16-GS98 SSM [120]. We take the resolution function for LZ at full exposure to be the same as that of their first result [131, 138], whereas for XENONnT and DARWIN we use the resolution function given in Ref. [81]. The NR efficiency functions, as presented by the collaborations, reach 50% at \(3.8\,\mathrm{keV}_{\mathrm{nr}}\) for LZ [140] and \(5.7\,\mathrm{keV}_{\mathrm{nr}}\) for XENONnT and DARWIN [97]. However, to explore how DD experiments could feasibly probe NSI in the future, we take the liberty of further lowering these thresholds. This is to take advantage of the higher \({}^{8}\)B rate at these energies. In particular, we augment the efficiency functions such that, for each future experiment, the efficiency instead reaches 50% at \(3\,\mathrm{keV}_{\mathrm{nr}}\), which we consider to be a feasible future goal. For instance, the xenon-based LUX experiment was able to reach thresholds as low as \(1.1\,\mathrm{keV}_{\mathrm{nr}}\) while retaining NR/ER discrimination [141]. The LUX collaboration has also developed techniques allowing for single-photon sensitivity, giving access to much lower recoil energies at the cost of a lower overall detection efficiency. Finally, the XENON1T collaboration has recently performed a dedicated \({}^{8}\)B search by lowering their threshold to \(1.6\,\mathrm{keV_{nr}}\), achieved by relaxing the necessity for a three-fold S1 coincidence in the PMTs to a two-fold one [142].
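The gain from such threshold improvements can be estimated by inverting the kinematic relation \(E_{\nu}^{\rm min}\approx\sqrt{m_{N}E_{R}/2}\) quoted in Section III.1. A back-of-the-envelope Python check for a xenon target (the average mass number is our rough input):

```python
import numpy as np

m_N = 131.3 * 931.494e3   # average xenon nuclear mass in keV
for E_thr in (5.7, 3.8, 3.0, 1.6):  # thresholds in keV_nr
    E_nu_min = np.sqrt(m_N * E_thr / 2.0) / 1e3  # in MeV
    print(f"E_thr = {E_thr:3.1f} keV_nr -> E_nu_min ~ {E_nu_min:4.1f} MeV")
```

In this simple two-body estimate, a 50% efficiency point at \(5.7\,\mathrm{keV_{nr}}\) corresponds to neutrino energies near the \({}^{8}\)B endpoint, while \(3\,\mathrm{keV_{nr}}\) brings the minimum accessible energy down to roughly \(13.5\,\mathrm{MeV}\), where the flux is substantially larger.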
Furthermore, taking the systematic \({}^{8}\)B uncertainty to be \(12\%\), we find that lowering the threshold further provides little-to-no benefit in terms of NSI sensitivity. For each future experiment, we take \(E_{R}^{\mathrm{max}}=30\,\mathrm{keV}\). We show our results in Fig. 4. The shaded areas represent the \(90\%\) CL limits set by the different experimental configurations. From less constraining (smaller areas) to more constraining (larger areas), we show the limits derived from the first LZ results (turquoise with solid boundary) and the expected sensitivities of the full exposure of LZ (baby blue, dashed), XENONnT (dark blue, dashed) and the proposed DARWIN (purple, dashed). For comparison, we also show with red bars the NSI limits derived from the global study of Ref. [39], which included the results from COHERENT and a variety of oscillation experiments, when NSI take place purely with the proton (\(\eta=0\)), up-quark (\(\eta=\tan^{-1}(1/2)\)) and down-quark (\(\eta=\tan^{-1}(2)\)). We extract the limits we have computed using the LZ WIMP search data and tabulate them in Table 2, contrasting them with the results from the global fits of [39]. We see that, currently, DD experiments are not sensitive to globally allowed NSI values, but our projections indicate that they will be in the near future. Like in Fig. 3 of our Borexino analysis in Section III.2, Fig. 4 contains grey regions, indicating those points in the parameter space where the adiabatic approximation may be invalid. Within these regions, \(\gamma<100\), where in this case we have calculated \(\gamma\) at \(E_{\nu}=16\,\mathrm{MeV}\), approximately corresponding to the highest energy reached for \({}^{8}\)B neutrinos. As current global fits show that the allowed values of the NSI parameters are firmly within the adiabatic regime, we believe that our analytical approach is valid for the regions of interest. However, it is important to keep in mind that our sensitivities may be inaccurate within the grey bands. Our limits exhibit many interesting non-trivial features. Specifically, we see that there are regions in each NSI parameter space where every DD experiment loses sensitivity. The two most remarkable of these regions are, firstly, the strong cancellation in the angle \(\eta\) occurring at \(\eta\approx-35^{\circ}\) and, secondly, the band of insensitivity in \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\) across the entire range of \(\eta\) values (made manifest by the gaps in the projected sensitivity areas). These blind spots present a challenge for DD experiments, and they should be understood if DD experiments are to maximise their constraining power in the NSI landscape. We first consider the cancellation in \(\eta\), which occurs at the same point regardless of the nature of the NSI. From Eq. (47) and Eq. (48), we see that the non-standard contribution to the CE\(\nu\)NS cross section vanishes when \(\xi^{p}Z+\xi^{n}N=0\), recovering the SM cross section regardless of the value of \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\). For a given nuclear isotope, this occurs when9 Footnote 9: This cancellation was used in direct dark matter detection to argue that dark matter particles might escape detection in some specific targets (see e.g., Ref. [143]). \[\eta=\tan^{-1}\left(-\frac{Z}{N}\cos\varphi\right)\,. \tag{68}\] This condition depends on the choice of target material. 
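To make this dependence concrete, a few lines of code (our own illustration, with approximate abundance-weighted neutron numbers) evaluate the blind-spot angle of Eq. (68) at \(\varphi=0\) for some representative targets:

```python
import numpy as np

def blind_spot_eta_deg(Z, N, phi=0.0):
    """Blind-spot angle of Eq. (68): eta = arctan(-(Z/N) cos(phi)), in degrees."""
    return np.degrees(np.arctan(-(Z / N) * np.cos(phi)))

# abundance-weighted neutron numbers (approximate)
targets = {"Xe": (54, 77.3), "Ar": (18, 22.0), "Si": (14, 14.1)}
for name, (Z, N) in targets.items():
    print(f"{name}: eta_blind ~ {blind_spot_eta_deg(Z, N):5.1f} deg")
# -> about -35 deg for xenon, -39 deg for argon, and approaching -45 deg
#    for light nuclei with Z/N ~ 1 (He, F, Na, Si)
```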
For composite or non-mononuclidic targets, the cancellation is not exact, as the contributions from the different isotopes must be added up in Eq. (64). Yet, for xenon the ratio \(Z/N\) is very similar in all its natural isotopes, and the observed rate is greatly reduced for \(\eta\approx-35^{\circ}\) when \(\varphi=0\). Since stable nuclei tend to have similar \(Z/N\) fractions, the position of the blind spot does not vary greatly for different target choices. Interestingly, argon (a target employed in current detectors [144] and planned tonne-scale ones [145]) leads to a considerable shift, with \(\eta\approx-39^{\circ}\). This could lead to a noticeable effect if a full spectral analysis of the observed signal is performed, thus strengthening the notion of complementarity among DD targets. This would be even more noticeable in light nuclei, such as He, F, Na, or Si, for which \(Z/N\sim 1\) and the cancellation takes place for \(\eta\to-45^{\circ}\) (although a large detector would still be needed). The second blind spot occurs at intermediate values of \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\), stretching over the full range of \(\eta\) values. These insensitivity bands arise due to interference effects, where the NSI contribution is cancelled and thus the SM CE\(\nu\)NS differential rate is restored. The exact location of these bands differs for flavour-conserving and flavour-violating NSI. In the case of flavour-conserving NSI, we can derive a simple analytical formula for the values of the NSI parameters leading to a non-trivial realisation of the SM differential rate.

Figure 4: The 90% CL limits set by multi-ton LXe DD experiments in the NSI parameter space using NRs. Shown are the limits from the first results of LZ [131] (turquoise), the full LZ exposure (baby blue), XENONnT (dark blue), and DARWIN (purple) in the typically assumed case that \(\varphi=0\). The bounds from the global analysis of Ref. [39] are shown for comparison (red bars). The grey regions indicate where the adiabaticity parameter is such that \(\gamma<100\), where we consider the adiabatic approximation to begin to falter [130]. 
\begin{table} \begin{tabular}{l c c} \hline \hline & LZ 2022 (**this work**) & Global Fits [39] \\ \hline \(\varepsilon_{ee}^{u}\) & \([-0.545,\,1.222]\) & \([-0.031,\,0.476]\) \\ \(\varepsilon_{\mu\mu}^{u}\) & \([-0.971,\,1.397]\) & \([-0.029,\,0.068]\oplus[0.309,\,0.415]\) \\ \(\varepsilon_{\tau\tau}^{u}\) & \([-0.645,\,1.598]\) & \([-0.029,\,0.068]\oplus[0.309,\,0.414]\) \\ \(\varepsilon_{e\mu}^{u}\) & \([-0.630,\,0.679]\) & \([-0.048,\,0.020]\) \\ \(\varepsilon_{e\tau}^{u}\) & \([-0.721,\,0.558]\) & \([-0.077,\,0.095]\) \\ \(\varepsilon_{\mu\tau}^{u}\) & \([-1.120,\,0.518]\) & \([-0.006,\,0.007]\) \\ \hline \(\varepsilon_{ee}^{d}\) & \([-0.540,\,1.084]\) & \([-0.034,\,0.426]\) \\ \(\varepsilon_{\mu\mu}^{d}\) & \([-0.863,\,1.233]\) & \([-0.027,\,0.063]\oplus[0.275,\,0.371]\) \\ \(\varepsilon_{\tau\tau}^{d}\) & \([-0.576,\,1.241]\) & \([-0.027,\,0.067]\oplus[0.274,\,0.372]\) \\ \(\varepsilon_{e\mu}^{d}\) & \([-0.542,\,0.635]\) & \([-0.050,\,0.020]\) \\ \(\varepsilon_{e\tau}^{d}\) & \([-0.655,\,0.455]\) & \([-0.076,\,0.097]\) \\ \(\varepsilon_{\mu\tau}^{d}\) & \([-0.982,\,0.461]\) & \([-0.006,\,0.007]\) \\ \hline \(\varepsilon_{ee}^{p}\) & \([-1.805,4.195]\) & \([-0.086,\,0.884]\oplus[1.083,\,1.605]\) \\ \(\varepsilon_{\mu\mu}^{p}\) & \([-3.330,4.791]\) & \([-0.097,\,0.220]\oplus[1.063,\,1.410]\) \\ \(\varepsilon_{\tau\tau}^{p}\) & \([-2.209,5.710]\) & \([-0.098,\,0.221]\oplus[1.063,\,1.408]\) \\ \(\varepsilon_{e\mu}^{p}\) & \([-2.209,2.249]\) & \([-0.124,\,0.058]\) \\ \(\varepsilon_{e\tau}^{p}\) & \([-2.434,2.006]\) & \([-0.239,\,0.244]\) \\ \(\varepsilon_{\mu\tau}^{p}\) & \([-3.849,1.772]\) & \([-0.013,\,0.021]\) \\ \hline \(\varepsilon_{ee}^{n}\) & \([-1.714,2.915]\) & — \\ \(\varepsilon_{\mu\mu}^{n}\) & \([-2.331,3.282]\) & — \\ \(\varepsilon_{\tau\tau}^{n}\) & \([-1.564,2.705]\) & — \\ \(\varepsilon_{e\mu}^{n}\) & \([-1.426,1.846]\) & — \\ \(\varepsilon_{e\tau}^{n}\) & \([-1.829,1.147]\) & — \\ \(\varepsilon_{\mu\tau}^{n}\) & \([-2.275,1.250]\) & — \\ \hline \hline \end{tabular} \end{table} Table 2: 90% CL allowed intervals for NSI in the up quark, down quark, proton, and neutron directions. Shown are the results from our analysis of the LZ 2022 data [131] and those of the global fit study of Ref. [39]. Note that the latter do not quote NSI in the neutron direction.

This relation, which defines the centres of each of these insensitivity bands where the NSI contribution exactly cancels, has previously been pointed out in Ref. [74] and in our framework is given by \[\varepsilon^{\eta,\varphi}_{\alpha\alpha}=\frac{Q_{\nu N}}{\xi^{p}Z+\xi^{n}N}\,. \tag{69}\] The dependence on \(\eta\), encoded in \(\xi^{p}\) and \(\xi^{n}\), gives us the band over different values of \(\varepsilon^{\eta,\varphi}_{\alpha\alpha}\) as a function of \(\eta\). Note that, as with the first blind spot, the locations of these bands depend on the choice of the target material due to the dependence on \(Z\) and \(N\). For \(\eta=0\), for instance, Eq. (69) gives \(\varepsilon^{\eta,\varphi}_{\alpha\alpha}\approx 0.6\) for xenon, whereas it yields the lower \(\varepsilon^{\eta,\varphi}_{\alpha\alpha}\approx 0.5\) for argon. Since the non-trivial cancellation occurs at different values for different targets, this could be important in determining whether global minima are driven by data or are just artefacts of the blind spots of particular nuclei; see for instance Ref. [48]. 
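To visualise how the band centre of Eq. (69) moves with \(\eta\), the short sketch below evaluates it using tree-level SM couplings and the simple assignment \(\xi^{p}=\cos\eta\), \(\xi^{n}=\sin\eta\) at \(\varphi=0\); the paper's exact normalisation of \(\xi^{p,n}\) and \(Q_{\nu N}\) may differ, so the numbers are indicative only.

```python
import numpy as np

s2w = 0.2386                      # weak mixing angle (low-energy value)
gVp, gVn = 0.5 - 2 * s2w, -0.5    # tree-level proton/neutron vector couplings

def band_centre(eta, Z, N):
    """Centre of the diagonal-NSI insensitivity band, Eq. (69), assuming
    xi_p = cos(eta), xi_n = sin(eta) at phi = 0 (our simplified convention)."""
    Q_nuN = Z * gVp + N * gVn     # SM weak nuclear charge
    return Q_nuN / (np.cos(eta) * Z + np.sin(eta) * N)

for name, Z, N in [("Xe", 54, 77.3), ("Ar", 18, 22.0)]:
    print(f"{name}: eps_band(eta=0) ~ {band_centre(0.0, Z, N):.2f}")
# with these conventions the band centre comes out ~ -0.7 (Xe) and ~ -0.6 (Ar);
# up to sign and normalisation conventions this tracks the ~0.6 and ~0.5 quoted
# above, and the band diverges as eta approaches the blind-spot angle where
# the denominator xi_p Z + xi_n N vanishes
```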
Considering different materials thus gives us one possible avenue to mitigate this particular loss of sensitivity, though we note that the blind spots of xenon and argon move closer together as \(\eta\to\pi/2\). In the case of flavour-changing NSI, the cancellation condition becomes more complicated and is only retrieved in the (correct) basis-independent formulation of the scattering rate in terms of the trace \(\mathrm{Tr}\,[\mathbf{\rho}\ \mathrm{d}\zeta/\mathrm{d}E_{R}]\) in Eq. (1). Due to the flavour-coherence effects, we still expect regions where the SM-NSI interference term cancels the NSI-only term; however, these regions now also depend on the density matrix elements. To investigate this behaviour, we consider, as a simplification, the values of \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\) for which the differential rate spectrum returns to its expected SM value for a given recoil energy \(E_{R}\). This prescription removes the need to integrate over \(E_{R}\) to find the number of events. From Eqs. (1) and (47), we find that in general, for \(\alpha\neq\beta\), the condition for restoring the SM rate reads \[\int_{E^{\mathrm{min}}_{\nu}}\frac{\mathrm{d}\phi_{\nu_{e}}}{\mathrm{d}E_{\nu}}\left(1-\frac{m_{N}E_{R}}{2E_{\nu}^{2}}\right)\left[(\xi^{p}Z+\xi^{n}N)(\rho_{\alpha\alpha}+\rho_{\beta\beta})\,\varepsilon^{\eta,\varphi}_{\alpha\beta}-2\,Q_{\nu N}\,\rho_{\alpha\beta}\right]\,\mathrm{d}E_{\nu}=0\,. \tag{70}\] The difference in the forms of the relations in Eqs. (69) and (70) is why the positions of these bands are different for flavour-conserving and flavour-violating NSI. In particular, we note that, unlike in the case of the former, for the latter we have a flavour dependence through the appearance of the density matrix elements. This is why, for instance, we see a sign flip of the bands in the case of \(\varepsilon^{\eta,\varphi}_{e\tau}\) and \(\varepsilon^{\eta,\varphi}_{\mu\tau}\) with respect to \(\varepsilon^{\eta,\varphi}_{e\mu}\), as the relevant off-diagonal density matrix elements \(\rho_{e\tau}\) and \(\rho_{\mu\tau}\) are negative in contrast to \(\rho_{e\mu}\). As we can see from Fig. 4, the insensitivity bands for the off-diagonal NSI elements \(\varepsilon^{\eta,\varphi}_{e\mu}\) and \(\varepsilon^{\eta,\varphi}_{e\tau}\) exhibit some interesting features at \(\eta\approx-5/16\,\pi\), where they develop a _kink_. The origin of these kinks in the off-diagonal NSI insensitivity bands can be traced back to the appearance of the off-diagonal density matrix elements, \(\rho_{\alpha\beta}\), in the CE\(\nu\)NS cancellation condition. These kinks arise because the last term in Eq. (70) proportional to \(\rho_{\alpha\beta}\) undergoes a qualitative change of behaviour at \(\eta\approx-5/16\,\pi\). We will discuss the behaviour of the kinks using the insensitivity band for \(\varepsilon^{\eta,\varphi}_{e\mu}\) in the top right plot of Fig. 4 as an example. At very negative angles \(\eta\approx-\pi/2\), the off-diagonal interference term in Eq. (70) proportional to \(\rho_{\alpha\beta}\) has an extremum for positive \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\). In this regime, the behaviour of the cancellation line (which occurs at negative \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\)) is entirely dominated by the NSI-only term proportional to the diagonal density matrix elements, \(\rho_{\alpha\alpha}\). 
However, at larger angles, \(\eta\approx-5/16\,\pi\), the extremum in the off-diagonal term shifts from positive values of \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\) to negative values, and it hence begins to dominate the behaviour of the cancellation bands in Eq. (70). This change of behaviour in the cancellation integral leads to the appearance of the kinks in the insensitivity band. For \(\varepsilon^{\eta,\varphi}_{e\tau}\), the same effect leads to the appearance of a kink, but with opposite signs of \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\). In principle, the same reasoning holds for \(\varepsilon^{\eta,\varphi}_{\mu\tau}\); however, in this case, the off-diagonal term exhibits an almost negligible extremum, such that there is no visible kink. Finally, the fact that the behaviour of the off-diagonal density matrix element \(\rho_{\alpha\beta}\) is responsible for the appearance of these kinks also explains why they are absent for the diagonal NSI elements, \(\varepsilon^{\eta,\varphi}_{\alpha\alpha}\), since there is no contribution from \(\rho_{\alpha\beta}\). ### Sensitivities in the charged NSI plane While the E\(\nu\)ES cross section will only be modified when \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\neq 0\) and \(\varphi\neq 0\), even in the case of pure nuclear NSI couplings (\(\varphi=0\)), propagation effects within the solar medium can still alter the expected ER rate in DD experiments. Thus, since the charged plane contains proton NSI modifications, both ER and NR signals must be considered. There is only one direction in which a non-zero NSI will not affect the NR signal, and that is precisely the electron-only direction, \(\varphi=\pm\pi/2\) and \(\eta=0\). For this reason, in Fig. 5 we include both the NR and ER analyses to show the projected sensitivities of DD when NSI lie in the \((\varepsilon_{\alpha\beta}^{p},\,\varepsilon_{\alpha\beta}^{e})\)-plane. Currently, the world-leading DD constraint on ERs comes from XENONnT [80], which has thus far reached an exposure of \(1.16\,\mathrm{ton\,yr}\). We have replicated this analysis by taking the efficiency function, expected backgrounds, and observed number of events from Ref. [80], where, for the background model \(B_{0}\), we subtract their expected solar neutrino background (Fig. 4 of Ref. [80]). In the signal region of \([0-140]\,\mathrm{keV}\), the number of SM counts predicted by SNuDD is 274, which is lower than the quoted \(300\pm 30\) [80]. This could be due to the fact that the neutrino signal in Ref. [80] uses a simplified modelling of the neutrino spectrum. Specifically, SNuDD uses the relativistic random phase approximation (RRPA) studied in Ref. [134] along with a series of step functions to model the effect of electron binding energies in xenon. This reduces the overall rate and introduces discontinuous jumps in the spectra when more electrons can be ionised above certain energies. One such discontinuity is around \(\sim 30\,\mathrm{keV}\), which does not appear to be present in Fig. 4 of Ref. [80]. When we remove both the RRPA and the step-function approximation in SNuDD to determine the expected number of solar neutrino events with the same setup as Ref. [80], we predict 299 solar neutrino events. Unlike in the NR case, the sizeable backgrounds for ERs mean that spectral information should be used to harness greater sensitivity. Following XENONnT, our analysis uses \(2\,\mathrm{keV}\)-width bins from \([0-30]\,\mathrm{keV}\). 
We have refrained from using the entire signal region because the backgrounds are at their lowest at low energies. Additionally, the backgrounds below \(30\,\mathrm{keV}\) are dominated by one source, \({}^{214}\mathrm{Pb}\), which has an associated uncertainty that we treat as a nuisance parameter. If we consider higher recoil energies, backgrounds such as \({}^{124}\mathrm{Xe}\), \({}^{83m}\mathrm{Kr}\) and \({}^{136}\mathrm{Xe}\) become important, all of which have different associated uncertainties. A dedicated experimental analysis would include all backgrounds and their uncertainties to perform a multivariate fit to the observed events. We believe such an in-depth study should be done in concert with the collaboration. The uncertainty we take for the \({}^{214}\mathrm{Pb}\)-dominated background is \(\sigma_{b}=12.5\%\) [80]. We consider this to be our dominant nuisance parameter and find that, if we instead perform the fit assuming the \(pp\) neutrino flux is the dominant nuisance parameter (\(\sigma_{a}=1\%\) [95]), our limits for XENONnT see a substantial improvement. As mentioned above, the potential for future ER analyses relies primarily on the anticipated background reduction. For the full XENONnT run, we take the backgrounds from Ref. [98], for LZ we use Ref. [103], and for DARWIN we use the predictions given in Ref. [84]. Unlike with the NR signal, the ER neutrino spectrum does not fall off sharply at \(E_{R}\sim\mathrm{keV}\), so we do not extend the ER efficiency functions to lower energies as we did for our NR projections. We use the efficiency functions given in Ref. [140] for LZ, and in Ref. [81] for XENONnT and DARWIN, where they reach \(50\%\) at \(1.46\,\mathrm{keV}_{\mathrm{ee}}\) and \(1.51\,\mathrm{keV}_{\mathrm{ee}}\), respectively. For these projections, we also perform spectral analyses, binning with \(2\,\mathrm{keV}\)-width bins in the energy range \([0-60]\,\mathrm{keV}\) for XENONnT and DARWIN, but in the range \([0-30]\,\mathrm{keV}\) for LZ. LZ's maximum is limited by their reported efficiency function [140]. Additionally, we have assumed that these experiments will have a greater understanding of their backgrounds and therefore consider the dominant nuisance parameter to be the \(pp\) neutrino flux, \(\sigma_{a}=1\%\). We believe this is an achievable goal for future DD experiments and see our projected sensitivities as an additional motivation for improved understanding and reduction of backgrounds. We also considered a far-future xenon detector with an exposure of \(10^{3}\) ton yr as in Ref. [74] and found that the \(pp\) flux uncertainty drives the projected sensitivity to the extent that, even with five times the exposure of DARWIN and no backgrounds, only marginal improvements are made to the sensitivities. We show our results in Fig. 5, where we have filled contours for ER (pink/red colours, dotted lines for projections) and NR (blue/purple colours, dashed lines for projections) analyses. In order to place these sensitivities in the wider experimental context, we take the recent results of Ref. [95], which used the spectral data from Phase-II of the Borexino experiment [128] to constrain \(\varepsilon_{\alpha\beta}^{e}\). As Ref. [95] does not mention the potential impact of either proton or neutron NSI on neutrino oscillations, we assume that they have only considered NSI with the electron, with no contribution from either the proton or the neutron. 
Figure 5: Same as Fig. 4 but now fixing the NSI to lie in the \((\varepsilon_{\alpha\beta}^{p},\,\varepsilon_{\alpha\beta}^{e})\)-plane (\(\eta=0\)) and using both NRs and ERs. The colour scheme for the NR results is the same as in Fig. 4. For the ER analyses, we show the limits derived from the first set of data from XENONnT [80] (dark orange), as well as projections for XENONnT (amber), LZ (light orange), and DARWIN (red). The bounds from the global analysis of Ref. [39] (red bars) and the Borexino analysis of Ref. [95] (green bars) are shown for comparison. The grey regions show where the adiabatic limit breaks down (\(\gamma<100\)) for energies relevant to NRs (light grey) and ERs (dark grey).

\begin{table} \begin{tabular}{c c c c c} \hline \hline & \(\mathbf{L}\) & \(\mathbf{R}\) & \(\mathbf{V}\) & Ref. \\ \hline & \([-0.021,0.052]\) & \([-0.18,0.51]\) & – & SK \& KamLAND [21] \\ \(\varepsilon_{ee}^{e}\) & \([-0.046,0.053]\) & \([-0.21,0.16]\) & – & Borexino Phase-I [146] \\ & – & – & \([-0.56,0.24]\) & Borexino \& COHERENT [74] \\ & \([-1.37,-1.29]\oplus[0.03,0.06]\) & \([-0.23,0.07]\) & \([-0.09,0.14]\) & Borexino Phase-II [95] \\ & – & – & \([-2.65,0.78]\) & XENONnT 2022 (this work) \\ \hline & – & – & \(|\varepsilon_{\tau\tau}^{e}-\varepsilon_{\mu\mu}^{e}|<0.097\) & SK atm. [147] \\ & \([-0.03,0.03]\) & \([-0.03,0.03]\) & – & react. + acc. [15, 148] \\ \(\varepsilon_{\mu\mu}^{e}\) & – & – & \([-0.58,0.72]\) & Borexino \& COHERENT [74] \\ & \([-0.20,0.13]\oplus\) & \([-0.36,0.37]\) & \([-0.51,0.35]\) & Borexino Phase-II [95] \\ & – & – & \([-2.19,2.34]\) & XENONnT 2022 (this work) \\ \hline & \([-0.12,0.060]\) & \([-0.99,0.23]\) & – & SK \& KamLAND [21] \\ & \([-0.23,0.87]\) & \([-0.98,0.73]\) & – & Borexino Phase-I [146] \\ & – & – & \(|\varepsilon_{\tau\tau}^{e}-\varepsilon_{\mu\mu}^{e}|<0.097\) & SK atm. [147] \\ \(\varepsilon_{\tau\tau}^{e}\) & – & – & \([-0.60,0.72]\) & Borexino \& COHERENT [74] \\ & \([-0.26,0.26]\oplus[0.45,0.86]\) & \([-0.58,0.47]\) & \([-0.66,0.52]\) & Borexino Phase-II [95] \\ & – & – & \([-2.09,2.20]\) & XENONnT 2022 (this work) \\ \hline & \([-0.13,0.13]\) & \([-0.13,0.13]\) & – & react. + acc. [148] \\ \(\varepsilon_{e\mu}^{e}\) & – & – & \([-0.58,0.60]\) & Borexino \& COHERENT [74] \\ & \([-0.17,0.29]\) & \([-0.21,0.41]\) & \([-0.34,0.61]\) & Borexino Phase-II [95] \\ & – & – & \([-1.03,1.41]\) & XENONnT 2022 (this work) \\ \hline & \([-0.33,0.33]\) & \([-0.28,-0.05]\oplus[0.05,0.28]\) & – & react. + acc. [148] \\ & – & \([-0.19,0.19]\) & – & TEXONO [149] \\ \(\varepsilon_{e\tau}^{e}\) & – & – & \([-0.60,0.62]\) & Borexino \& COHERENT [74] \\ & \([-0.26,0.23]\) & \([-0.35,0.31]\) & \([-0.48,0.47]\) & Borexino Phase-II [95] \\ & – & – & \([-1.26,1.11]\) & XENONnT 2022 (this work) \\ \hline & – & – & \([-0.035,0.018]\) & SK atm. [147] \\ & – & – & \([-0.20,0.07]\) & MINOS [150] \\ \(\varepsilon_{\mu\tau}^{e}\) & – & – & \([-0.018,0.016]\) & IceCube [151] \\ & – & – & \([-0.67,0.62]\) & Borexino \& COHERENT [74] \\ & \([-0.62,-0.52]\oplus[-0.09,0.14]\) & \([-0.26,0.23]\) & \([-0.25,0.36]\) & Borexino Phase-II [95] \\ & – & – & \([-1.57,1.50]\) & XENONnT 2022 (this work) \\ \hline \hline \end{tabular} \end{table} Table 3: Limits on electron NSI, most extracted from Ref. [31]. The limits derived from our analysis of the recent XENONnT ER results are highlighted in orange. The limits derived by Ref. [95] using the Borexino Phase-II data, which we compare to in Fig. 5, are highlighted in green. 
As a result, we set \(\eta=0\), and we place their bounds at \(\varphi=\pm\pi/2\), corresponding to electron-only NSI. We note that, while previous studies have also constrained electron NSI, most of them place individual bounds on the left- and right-handed components of the interaction [146, 147, 149, 15, 21]. For comparison with the previous literature, we tabulate many of these results alongside the corresponding allowed intervals derived in this work from the XENONnT ER data in Table 3. In this context, it is worth noting that Ref. [74] previously considered the potential impact of including E\(\nu\)ES data from DUNE and a theoretical high-exposure DD experiment on global fit results of electron NSI. We investigate this impact in more detail by computing the solar neutrino scattering rate via the coherent treatment of oscillation effects in the density matrix formalism in Eq. (1), considering both CE\(\nu\)NS and E\(\nu\)ES, and studying the non-trivial behaviour of DD sensitivities in the full plane of charged NSI \((\varepsilon^{e}_{\alpha\beta},\,\varepsilon^{p}_{\alpha\beta})\) by means of our parametrisation in Fig. 5. Finally, comparing our XENONnT limits and future projections to the limits derived using Borexino data in Refs. [74] and [95], from Table 3 we see that while current DD data sets are not able to yield competitive bounds, next-generation and far-future DD experiments will be able to improve on current limits. As in Figs. 3 and 4, we show the points of parameter space where the adiabatic limit may no longer be valid for neutrino propagation in the Sun. Since \(\gamma\) is energy-dependent, this region is different for the values of \(E_{\nu}\) probed by NR and ER analyses. In light grey, we show the regions relevant for NRs (\(E_{\nu}=16\,\mathrm{MeV}\) as in Fig. 4) and in dark grey we show the regions for ERs (\(E_{\nu}=1\,\mathrm{MeV}\)), which roughly corresponds to the highest energy of \({}^{7}\)Be neutrinos. This is not the primary neutrino source for the E\(\nu\)ES signal (which is \(pp\)), but it does contribute at higher energies. Taking this value is a conservative choice, since higher energies correspond to a greater violation of adiabaticity. This is reflected in the fact that the dark grey regions, if present, are contained within the light grey regions in Fig. 5. Fig. 5 demonstrates that next-generation and far-future DD experiments will form powerful probes of electron NSI, with almost all of our projections cutting into portions of the bounds placed with the Borexino experiment. DARWIN can give us considerably more sensitivity to all NSI parameters, showcasing its excellent potential in searching for new physics in the neutrino sector. However, such potential demands substantial efforts in background modelling and reduction, something which is already well underway in the respective collaborations. As in the CE\(\nu\)NS case, the limits for the E\(\nu\)ES case exhibit blind spots where the predicted rate is indistinguishable from the SM expectation. Once again, this weakens the limits at certain values of \(\varphi\) and gives rise to a series of bands where DD experiments appear to lose sensitivity. Here, the complementarity with the NR analysis can be seen explicitly, since at precisely \(\varphi=0\), the NSI effect on the NR signal is maximal. 
However, there are two notable physical differences between the E\(\nu\)ES case and the CE\(\nu\)NS case, arising from both the different CE\(\nu\)NS and E\(\nu\)ES cross sections and the way in which non-standard matter effects enter. Firstly, we have no strong cancellation in the ER limits. While one might expect a complete loss of sensitivity when \(\varphi=0\), where the E\(\nu\)ES cross section is unchanged by the presence of NSI, neutrino oscillations are still impacted by the NSI contribution to the matter Hamiltonian from the nucleons (in this case only the proton since \(\eta=0\)). Thus, for high enough values of \(\varepsilon^{\eta,\varphi}_{\alpha\beta}\), the effect of NSI on the neutrino flavour fractions is large enough to give us an observable deviation from the SM expectation. This is analogous to what we saw in our Borexino analysis of Section III.2. Consequently, while we do lose sensitivity in ERs as \(\varphi\) approaches zero, our limits ultimately reach a finite value.10 Footnote 10: Note that this is only possible as the cross section for electron neutrinos contains the extra CC contribution, making it different from that of the muon and tau neutrinos. Changes in the electron neutrino fraction then lead to measurable changes in the total number of CC interactions in the detector; the NC interactions from all flavours, on the other hand, remain equal. Secondly, we have fewer bands of insensitivity in ER over \(\varphi\) than we did for the NR case over \(\eta\). The location of these bands can be calculated through identical arguments to the CE\(\nu\)NS case, whereby those values of the NSI parameters where the NSI-augmented rate is equal to the expected SM rate, \(\mathrm{d}R/\mathrm{d}E_{R}-\mathrm{d}R/\mathrm{d}E_{R}|_{\mathrm{SM}}=0\), are found. The derivation of these cancellation conditions again crucially hinges on the coherent treatment of the neutrino propagation via the density matrix formalism in Eq. (1). Critically, the condition for cancellation in the off-diagonal NSI elements \(\varepsilon_{\alpha\beta}\) is completely missed in the simplified treatment of the rate as the sum over the oscillation probabilities times scattering cross section, \(\sum_{\alpha}P_{e\alpha}\,\mathrm{d}\sigma_{\nu_{\alpha T}}/\mathrm{d}E_{R}\). In the case that only one diagonal NSI element, \(\varepsilon_{\alpha\alpha}^{\eta,\varphi}\), is active, the cancellation equation for the differential rate for E\(\nu\)ES reads, \[\int_{E_{\nu}^{\mathrm{min}}}\frac{\mathrm{d}\phi_{\nu_{e}}}{ \mathrm{d}E_{\nu}}\,\,\rho_{\alpha\alpha}\,\left\{\,\left(1-\frac{E_{R}}{E_{ \nu}}\left(1+\frac{m_{e}-E_{R}}{2E_{\nu}}\right)\right)\left[4\,s_{W}^{2}+\xi^ {e}\,\varepsilon_{\alpha\alpha}^{\eta,\varphi}\right]\,\xi^{e}\,\varepsilon_{ \alpha\alpha}^{\eta,\varphi}\right.\\ \left.+\left(1-\frac{m_{e}\,E_{R}}{2\,E_{\nu}^{2}}\right)\left[4\, s_{W}^{2}\,\frac{\rho_{ee}-\rho_{ee}^{\mathrm{SM}}}{\rho_{\alpha\alpha}}+\left(2\, \delta_{\alpha e}-1\right)\xi^{e}\,\varepsilon_{\alpha\alpha}^{\eta,\varphi} \right]\,\right\}\mathrm{d}E_{\nu}=0\,. 
\tag{71}\] On the other hand, for off-diagonal NSI, \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\) with \(\alpha\neq\beta\), the cancellation condition can be expressed as \[\int_{E_{\nu}^{\mathrm{min}}}\frac{\mathrm{d}\phi_{\nu_{e}}}{ \mathrm{d}E_{\nu}}\,\left\{\,\left(1-\frac{E_{R}}{E_{\nu}}\left(1+\frac{m_{e}- E_{R}}{2E_{\nu}}\right)\right)\left[\left(\xi^{e}\,\varepsilon_{\alpha\beta}^{ \eta,\varphi}\right)^{2}\,\left(\rho_{\alpha\alpha}+\rho_{\beta\beta}\right)+ 8\,s_{W}^{2}\,\xi^{e}\,\varepsilon_{\alpha\beta}^{\eta,\varphi}\,\rho_{\alpha \beta}\right]\right.\\ \left.+\left(1-\frac{m_{e}\,E_{R}}{2\,E_{\nu}^{2}}\right)\,\left[ 4\,s_{W}^{2}\,\left(\rho_{ee}-\rho_{ee}^{\mathrm{SM}}\right)-\delta_{\alpha \mu}\delta_{\beta\tau}\,\,2\,\xi^{e}\,\varepsilon_{\alpha\beta}^{\eta, \varphi}\,\rho_{\alpha\beta}\right]\,\right\}\mathrm{d}E_{\nu}=0\,, \tag{72}\] where the last term in the second line is only present for \(\alpha\beta=\mu\tau\). In the above expressions, \(\rho^{\mathrm{SM}}\) refers to the density matrix obtained in the SM case (i.e. \(\varepsilon_{\alpha\beta}=0\)) and \(\rho\) to the one obtained with non-zero NSI elements. As can be seen in Fig. 5, for E\(\nu\)ES we obtain insensitivity bands similar to those in the CE\(\nu\)NS case for the projected limits; however, we only see this for \(\varepsilon_{ee}^{\eta,\varphi}\) and \(\varepsilon_{\mu\mu}^{\eta,\varphi}\). In most cases, DD sensitivities are not good enough to reach these cancellation regions where the SM scattering rate is recovered. Finally, it is worth noting that, for very small but non-zero \(\varphi\), the line of exact cancellation has a very sharp zero-transition from very large (positive) to very small (negative) values of \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\) (or vice versa), making it seem like there exists an asymptote at \(\varphi=0\). This effect is similar to that exhibited by Fig. 3 in our analysis of Borexino data. In this region of parameter space, the NSI-only term is negligible since it is quadratic in \(\xi^{e}\), and thus in \(\varphi\). The reason for this change in sign is a rapid flattening of the SM-NSI interference terms in Eqs. (71) and (72), which are linearly proportional to \(\xi^{e}\), and thus to \(\varphi\). This flattening leads to a rapid change in the value of \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\) where the interference term cancels off the residual SM-like term proportional to \(\rho_{ee}-\rho_{ee}^{\mathrm{SM}}\) and hence restores the SM neutrino rate. The bounds from the global analysis of Ref. [39] are shown in Fig. 5 as they were in Fig. 4, but now only the proton direction (\(\eta=0,\varphi=0\)) is visible. By plotting the NR analyses, we can see how the sensitivities of Fig. 4 extend into the \(\varphi\) direction. We observe the same regions of nonzero \(\varepsilon_{\alpha\beta}^{\eta,\varphi}\) where our xenon-based DD experiments lose sensitivity. For the diagonal elements, this region simply follows from Eq. (69) and, for the off-diagonal elements, the more complicated behaviour is expressed in Eq. (70). Furthermore, the off-diagonal elements again exhibit some non-trivial behaviour in the form of 'kinks' (see for example \(\varepsilon_{\mu\tau}^{\eta,\varphi}\approx 1.0\) and \(\varphi\approx-5/16\,\pi\)). The appearance of these kinks is analogous to those observed in the NR sensitivities in the nucleon plane, as described at the end of Section IV.2. We re-iterate that the limits presented in Figs. 
4 and 5 have been calculated by switching on only one NSI parameter at a time. Due to potential interference effects between different NSI parameters, a global analysis that allows all NSI parameters to vary, before marginalising to compute the limits on any one parameter, would generally lead to weaker bounds [74, 152]. However, the point of our study is to illustrate the potential of DD experiments in this direction. Our study makes a strong case for their inclusion in future global analyses. ### Final Remarks Having access to both the NR and ER signals in one experiment makes DD incredibly powerful as a probe for NSI. As far as we are aware, this is the only experimental technology that is able to perform such simultaneous analyses. For example, if a signal inconsistent with the SM is detected in the future, both channels would be pivotal for exploring the possible values of \(\eta\) and \(\varphi\), or, equivalently, the relative strength of NSI with electrons, protons and neutrons. This will come in tandem with other more traditional searches for new physics in the neutrino sector. However, given the number of parameters one is trying to constrain or fit, the addition of DD will provide important input complementary to that of oscillation and spallation source experiments. Above, we have treated the NR and ER signals in DD experiments as separable. Indeed, in the name of background discrimination for DM searches, DD experiments are capable of this for large parts of the signal region. Taking into account experimental inputs, as described in Eq. (64), we are able to model NR and ER spectra accurately without resorting to a full Monte Carlo simulation of the detector responses in terms of S1 (scintillation) and S2 (ionisation) signals. Since detector responses from the point of interaction, be it NR or ER, have been well studied and calibrated within experimental collaborations, we are confident that introducing nonzero NSI will not alter the expectation that future experiments will be able to resolve S1 and S2 signals. Interestingly, many DD collaborations also perform S2-only analyses, which have the benefit of lowering the experimental threshold \(E_{\rm th}\), increasing the sensitivity to lighter DM masses. However, this comes at the cost of losing NR/ER discrimination. In our NR projections for future experiments, we took the liberty of lowering \(E_{\rm th}\) in a modest way, assuming that NR/ER discrimination was still possible, and indeed S2-only analyses boast much lower thresholds. As this choice implies, reducing \(E_{\rm th}\) is beneficial for the NR signal, but not necessarily for the ER signal. This is because low-energy neutrinos are unable to impart sufficient energy to excite the bound electrons. It is likely then that an S2-only analysis will only improve the prospects of the NR signals, but one would then have to account for larger background rates. Furthermore, in our analysis of DD experiments, we have not included liquid argon-based experiments. This direction is not without its potential, but we leave the incorporation of such experiments for future work. As can be seen in Ref. [73], the prospects for argon detectors are not as promising as those for xenon. However, Ref. [73] only considered the implications for specific BSM scenarios. The limiting factor for argon detectors seemed to be the experimental threshold and increased ER backgrounds, both of which tend to be much higher than their xenon counterparts. 
Recent progress from the DarkSide collaboration indicates that argon detectors may be able to provide competitive bounds in the future [153, 154, 155]. To our knowledge, this work is the first to derive dedicated limits on NSI from both CE\(\nu\)NS and E\(\nu\)ES in DD experiments from the recent first results of XENONnT [80] and LZ [131]. Our analysis of future multi-ton LXe detectors in this section has exposed the huge potential that DD experiments have in fully exploring the parameter space of NSI, especially due to their increased sensitivity to E\(\nu\)ES. While there have been some initial studies considering the E\(\nu\)ES signals for non-zero NSI [70, 74], we take a comprehensive approach by considering the solar CE\(\nu\)NS and E\(\nu\)ES signals, modelling the experimental setups as closely as possible to those of the experimental collaborations, and treating solar neutrino propagation in the coherent density matrix approach. Combined with our convenient parametrisation of the NSI parameter space, this allows us to derive an accurate overview of DD sensitivities and blind spots in the entire NSI parameter space. Moreover, as we pointed out in the previous section, since the blind spots for CE\(\nu\)NS and E\(\nu\)ES do not coincide in the charged NSI plane, DD experiments using a combination of both signatures can effectively avoid these and remain sensitive over most of this region. When presenting our extended parametrisation in Section II, we introduced the axial-vector NSI coupling \(\tilde{\varepsilon}^{f}_{\alpha\beta}\), only to set it to zero because it does not contribute to matter effects. Similarly, the effect of \(\tilde{\varepsilon}^{f}_{\alpha\beta}\) on NRs will be minimal because of the coherent nucleon-number enhancement that the vector current receives over the axial-vector one. This enhancement is no longer present when one considers ERs, and the axial-vector couplings would constitute an additional set of parameters that one could probe. The sensitivity of DD experiments to such axial-vector NSI is an interesting direction to be studied in future work. Finally, we comment on the particle physics interpretation of the most promising projections we report in this work, namely the potential for a DARWIN-like experiment to probe NSI at the level of \(\varepsilon^{\eta,\,\varphi}_{ee}\sim 10^{-2}\). Reinterpreting this value in a more canonical EFT approach implies \(\Lambda_{\rm NP}/\sqrt{C}\sim 1.8\,{\rm TeV}\), where \(C\) is the Wilson coefficient of the four-fermion operator. We see here that for \(C>1\) these experiments have the capability to probe new physics above the TeV scale. In this context, it would presumably make the most sense to embed NSI analyses within the more general SMEFT framework. It would be interesting to study whether this can already be done in a consistent way. SMEFT observables are typically measured at collider scales, while NSI studies probe much lower energies, so the effective approach is appropriate for a greater range of \(\Lambda_{\rm NP}\). ## V Conclusions We have demonstrated that direct detection experiments will soon become powerful probes of neutrino non-standard interactions, testing the parameter space in a complementary way to spallation source and oscillation experiments. This owes to their simultaneous sensitivity to nuclear and electron recoils and their unique capability to test tau neutrinos from the solar neutrino flux. To do so, we have developed an extension of an earlier NSI parametrisation, allowing for non-standard interactions with nucleons and electrons simultaneously. 
Our parametrisation captures the rich phenomenology that arises when one allows for NSI to impact both neutrino propagation and neutrino scattering. We have shown that previous NSI limits from spallation source experiments, such as CENNS-10 LAr, and oscillation experiments, such as Borexino, map non-trivially to this extended parameter space, demonstrating the importance of allowing for a variable NSI contribution from the proton and the electron. We have derived current direct detection constraints and projected the sensitivities of future direct detection experiments on the NSI landscape using the expected solar neutrino rate. We have thoroughly studied the resulting bounds in the different NSI directions by taking into account both CE\(\nu\)NS and E\(\nu\)ES signals, accurately modelling the experimental setups and consistently treating the coherent neutrino propagation via the density matrix. Furthermore, we have identified the potential blind spots where sensitivity is lost due to cancellations in the expected rate. While current leading constraints from LZ and XENONnT are not competitive in this landscape yet, we have shown that those from future experimental runs and the projected DARWIN detector will cut into new regions of the NSI parameter space. We believe that the conclusion is clear: upcoming multi-ton, LXe-based DD experiments are poised to make a considerable impact in the neutrino NSI landscape. We therefore recommend that they be included in future global NSI studies, incorporating a more complete treatment of the systematics. ## Acknowledgements We want to thank Michele Maltoni for many insightful discussions on neutrino oscillations and non-standard interactions. We also thank Felix Kahlhoefer for the invaluable help with the LZ implementation, as well as Christopher Tunnell and Aaron Higuera for valuable discussions regarding XENONnT. Finally, we are also grateful to Pilar Coloma, Enrique Fernandez Martinez, Danny Marfatia, Ivan Martinez-Soler, Pablo Martinez-Mirave and Yuber F. Perez-Gonzalez for helpful discussions during the preparation of this manuscript. DA is supported by the National Science Foundation under award 2209444. DGC acknowledges support from the Spanish Ministerio de Universidades under grant SI2/PBG/2020-00005. AC is supported by the grant "AstroCeNT: Particle Astrophysics Science and Technology Centre" carried out within the International Research Agendas programme of the Foundation for Polish Science financed by the European Union under the European Regional Development Fund. PF would like to express special thanks to the Mainz Institute for Theoretical Physics (MITP) of the Cluster of Excellence PRISMA+ (Project ID 39083149), for its hospitality and support. The work of PF was partially supported by the UKRI Future Leaders Fellowship DARKMAP. This work is partially supported by the Spanish Agencia Estatal de Investigacion through the grants PID2021-125331NB-I00 and CEX2020-001007-S, funded by MCIN/AEI/10.13039/501100011033. ## Appendix A Solar neutrino transition rate In writing Eq. (1), we automatically retain the full phase correlation of the different solar neutrino flavour states reaching the detector. 
The way to understand how this formula comes about is to consider the amplitude for the combined propagation \(\nu_{\alpha}\to\nu_{\gamma}\) of solar neutrinos from the point of production to the detector, and the scattering \(\nu_{\gamma}\,T\to\nu_{\beta}\,T\) of the propagated neutrino \(\nu_{\gamma}\) with the target material \(T\) into any flavour state \(\nu_{\beta}\), \[{\cal A}_{\alpha\to\beta}=\langle\nu_{\beta}|S|\nu_{\alpha}\rangle\,, \tag{10}\] where we have factored out the nuclear part of the elastic scattering process, and \(S\) is the \(S\)-matrix for the full process. From this we can then derive the full transition probability to an arbitrary final state \(|f\rangle=\sum_{\beta}|\nu_{\beta}\rangle\) as follows, \[|{\cal A}_{\alpha\to f}|^{2}=\Big{|}\sum_{\beta}{\cal A}_{\alpha\to\beta}\Big{|}^{2} \tag{11}\] \[=\Big{|}\sum_{\beta}\langle\nu_{\beta}|S_{\rm int}\left(\sum_{\gamma}|\nu_{\gamma}\rangle\langle\nu_{\gamma}|\right)S_{\rm prop}|\nu_{\alpha}\rangle\Big{|}^{2} \tag{12}\] \[=\sum_{\beta,\gamma,\delta,\lambda}\langle\nu_{\beta}|S_{\rm int}|\nu_{\gamma}\rangle\langle\nu_{\gamma}|S_{\rm prop}\left(\sum_{\rho}|\nu_{\rho}\rangle\langle\nu_{\rho}|\right)|\nu_{\alpha}\rangle\langle\nu_{\alpha}|\left(\sum_{\sigma}|\nu_{\sigma}\rangle\langle\nu_{\sigma}|\right)S_{\rm prop}^{\dagger}|\nu_{\delta}\rangle\langle\nu_{\delta}|S_{\rm int}^{\dagger}|\nu_{\lambda}\rangle \tag{13}\] \[=\sum_{\gamma,\delta,\rho,\sigma}\underbrace{\left(S_{\rm prop}\right)_{\gamma\rho}\,\pi^{(\alpha)}_{\rho\sigma}\,(S_{\rm prop})^{*}_{\delta\sigma}}_{\equiv\,\rho^{(\alpha)}_{\gamma\delta}}\,\underbrace{\sum_{\lambda,\beta}(S_{\rm int})^{*}_{\lambda\delta}\,(S_{\rm int})_{\beta\gamma}}_{\mathcal{M}^{*}(\nu_{\delta}\to f)\,\mathcal{M}(\nu_{\gamma}\to f)}\,. \tag{14}\] Here, \(\pi^{(\alpha)}\) is the projector onto the neutrino-flavour state \(|\nu_{\alpha}\rangle\). In the second line we have separated the \(S\)-matrix into \(S_{\rm prop}\), describing the propagation of the initial neutrino \(\nu_{\alpha}\) from the source to the detector, and \(S_{\rm int}\), describing the interaction with the detector material. Thus, decorating the expression in Eq. (14) with the relevant phase-space factors for the generalised cross section (cf. Eq. (45)), we finally find that \[|{\cal A}_{\alpha\to f}|^{2}\propto{\rm Tr}\left[\mathbf{\rho}^{(\alpha)}\,\frac{\mathrm{d}\mathbf{\zeta}}{\mathrm{d}E_{R}}\right]\,. \tag{15}\] ## Appendix B \(\Delta\chi^{2}\) Plots for CENNS-10 LAr and Borexino Figure 7: The variation in the \(\Delta\chi^{2}\) statistic in our Borexino analysis under two assumptions for \(\varphi\): \(\varphi=0\) (black) and \(\varphi=\pi/2\) (red). We have fixed \(\eta=0\), corresponding to a pure proton NSI when \(\varphi=0\) and a pure electron NSI when \(\varphi=\pi/2\). The dashed line shows where \(\Delta\chi^{2}=2.71\), where we draw our 90% CL limit.
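To make the contraction in Eq. (15) concrete, a minimal numerical sketch (ours; the \(3\times 3\) matrices below are random placeholders, not outputs of an actual solar propagation code) is:

```python
import numpy as np

def propagated_rho(S_prop, alpha=0):
    """Eq. (14): rho^(alpha) = S_prop pi^(alpha) S_prop^dagger,
    with pi^(alpha) the projector onto the initial flavour nu_alpha."""
    pi = np.zeros((3, 3), dtype=complex)
    pi[alpha, alpha] = 1.0          # alpha = 0 corresponds to nu_e
    return S_prop @ pi @ S_prop.conj().T

def diff_rate(rho, dzeta_dER):
    """Eq. (15)/(1): dR/dE_R proportional to Tr[rho dzeta/dE_R]."""
    return np.real(np.trace(rho @ dzeta_dER))

# placeholder unitary propagator (random, for illustration only)
rng = np.random.default_rng(1)
S_prop, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
# placeholder generalised cross-section matrix with a CC-enhanced nu_e entry
dzeta = np.diag([2.0, 1.0, 1.0])
print(diff_rate(propagated_rho(S_prop), dzeta))
```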
2302.03793
Self-Supervised Unseen Object Instance Segmentation via Long-Term Robot Interaction
We introduce a novel robotic system for improving unseen object instance segmentation in the real world by leveraging long-term robot interaction with objects. Previous approaches either grasp or push an object and then obtain the segmentation mask of the grasped or pushed object after one action. Instead, our system defers the decision on segmenting objects after a sequence of robot pushing actions. By applying multi-object tracking and video object segmentation on the images collected via robot pushing, our system can generate segmentation masks of all the objects in these images in a self-supervised way. These include images where objects are very close to each other, and segmentation errors usually occur on these images for existing object segmentation networks. We demonstrate the usefulness of our system by fine-tuning segmentation networks trained on synthetic data with real-world data collected by our system. We show that, after fine-tuning, the segmentation accuracy of the networks is significantly improved both in the same domain and across different domains. In addition, we verify that the fine-tuned networks improve top-down robotic grasping of unseen objects in the real world.
Yangxiao Lu, Ninad Khargonkar, Zesheng Xu, Charles Averill, Kamalesh Palanisamy, Kaiyu Hang, Yunhui Guo, Nicholas Ruozzi, Yu Xiang
2023-02-07T23:11:29Z
http://arxiv.org/abs/2302.03793v1
# Self-Supervised Unseen Object Instance Segmentation via Long-Term Robot Interaction ###### Abstract We introduce a novel robotic system for improving unseen object instance segmentation in the real world by leveraging long-term robot interaction with objects. Previous approaches either grasp or push an object and then obtain the segmentation mask of the grasped or pushed object after one action. Instead, our system defers the decision on segmenting objects until after a sequence of robot pushing actions. By applying multi-object tracking and video object segmentation on the images collected via robot pushing, our system can generate segmentation masks of all the objects in these images in a self-supervised way. These include images where objects are very close to each other, and segmentation errors usually occur on these images for existing object segmentation networks. We demonstrate the usefulness of our system by fine-tuning segmentation networks trained on synthetic data with real-world data collected by our system. We show that, after fine-tuning, the segmentation accuracy of the networks is significantly improved both in the same domain and across different domains. In addition, we verify that the fine-tuned networks improve top-down robotic grasping of unseen objects in the real world 1. Footnote 1: Video, dataset and code are available at [https://irvlutd.github.io/SelfSupervisedSegmentation](https://irvlutd.github.io/SelfSupervisedSegmentation) ## I Introduction Object perception is a critical task in robot manipulation. Model-based methods leverage 3D models of objects and solve the 6D object pose estimation problem to localize objects in 3D [12, 37, 33, 35]. Using the estimated object poses and the 3D models of objects, a planning scene can be set up for manipulation trajectory planning. However, requiring a 3D model for every object that needs to be manipulated is not feasible in the real world. Recent model-free approaches for object perception focus on segmenting unseen objects from images [38, 40, 14]. A segmented point cloud of an object can be used in grasp planning for robot manipulation [21, 29]. In this way, an object can be grasped from partial observations without using its 3D model. Recent model-based and model-free methods for object perception train neural networks to recognize objects. Since it is difficult to obtain large-scale real-world datasets in robot manipulation settings, synthetic data is widely used for training [32, 39, 5]. Although models trained with synthetic data can be directly used in the real world by leveraging domain randomization [31] or domain transfer [6, 44] techniques, these models still have errors in the real world due to the sim-to-real gap. The question we would like to address in this paper is how a robot can automatically obtain training data in the real world to improve its object segmentation model pre-trained with synthetic data. We focus on improving Unseen Object Instance Segmentation (UOIS) to facilitate robot manipulation. Interactive perception [7] emphasizes that robots can apply actions to the environment and utilize the visual-motor relationship to improve perception. In the context of object recognition, two widely used interaction types are robot grasping and pushing. Previous works have explored leveraging robot grasping or pushing to obtain object segmentation data in a self-supervised way [24, 15, 41]. 
All these methods can only obtain the segmentation mask of the grasped or pushed object by comparing the scene before and after grasping [24] or utilizing optical flow to segment the moved objects in robot pushing [15, 41]. The drawbacks of segmenting objects from one action are that, first, the method cannot segment unmoved objects in the scene; second, if two objects are moved together, the method will segment them as one object. Although [41] proposes to train a classifier to decide whether a single object is pushed or not, since the classifier is trained in simulation, it still suffers from the sim-to-real gap. To overcome the limitations of existing work on self-supervised object segmentation via robot interaction, we propose a new system that leverages long-term robot interaction to segment unseen objects in a self-supervised way. Our key idea is to defer the decision on object segmentation until a robot has interacted with all the objects in a scene for a period of time. Intuitively, if a robot has pushed objects in a scene a number of times, i.e., around 20 pushes for 5 objects in our experiments, these objects are very likely to be separated from each other. Once the objects are separated, existing approaches for unseen object segmentation such as [38, 19] can successfully segment them. In this way, our system can segment all the objects in the scene, not only the object pushed in one action. More importantly, the system enables the robot to propagate a correctly segmented mask of each object to all the images collected during robot pushing, including images where objects are very close to each other. This is achieved by combining multi-object tracking to extract object tracklets, i.e., segments of objects in video frames, and video object segmentation where an initial mask of an object can be propagated to all other frames. The system utilizes the object tracklets to select a good initial mask for propagation. Consequently, our system enables a robot to collect a sequence of images of objects in a scene and obtain segmentation masks of all the objects in these images.

Fig. 1: Our system leverages robot pushing to collect real-world images and generate segmentation masks of objects in the collected images in a self-supervised way. The collected images can be used to fine-tune segmentation networks trained with synthetic data and improve their performance.

We demonstrate the usefulness of our system by using the collected real-world images to fine-tune existing, pre-trained object segmentation models [19]. We show that after fine-tuning, the object segmentation accuracy of the model can be significantly improved. The improvement is achieved in the same domain as the fine-tuning data as well as on the benchmark datasets for evaluating unseen object instance segmentation [25, 28]. Fig. 1 illustrates the fine-tuning process. In addition, we show that using the fine-tuned segmentation model can improve top-down grasping performance in a table clearing task where a robot is asked to put all the objects on a table into a bin. In summary, the contributions of our work are as follows. * We introduce a novel robotic system that leverages long-term robot interaction to segment unseen objects in a self-supervised way. * Our system illustrates that combining multi-object tracking and video object segmentation with robot pushing can help robots to singulate objects from each other in cluttered scenes. 
* We demonstrate that using our system to collect real-world images for fine-tuning can improve object segmentation accuracy and robot grasping performance. ## II Related Work ### _Unseen Object Instance Segmentation_ Different from category-based object instance segmentation methods [17, 8, 9] that focus on segmenting object instances among a set of pre-defined object categories, unseen object instance segmentation emphasizes segmenting arbitrary objects that are present in input images. The testing objects can be novel such that a segmentation model has not seen them during training. Earlier works on UOIS utilize low-level image cues such as edges, contours, and surface normals to group pixels into objects [25, 34, 11]. These bottom-up approaches tend to over-segment objects since there is no object-level supervision to learn the concept of objects. Recent approaches to UOIS leverage large-scale synthetic data and deep neural networks to segment unseen objects [39, 38, 40, 14]. These methods significantly improve object segmentation accuracy, which enables robotic grasping of unseen objects [21, 29]. However, since these models are trained with synthetic data, they still suffer from the sim-to-real gap. The primary error is under-segmentation in the real world. When objects are very close to each other, the models trained with synthetic data cannot separate them. Recently, Zhang et al. [44] propose to apply test-time domain adaptation to improve the segmentation performance, where a set of images without ground truth labels in the test domain is used to adapt the segmentation network. Our system is complementary to domain adaptation techniques since it is able to obtain training images with ground truth labels automatically. Therefore, we can use supervised learning to fine-tune segmentation networks. More importantly, we show that, after fine-tuning in one domain, the performance of the segmentation networks can be improved in other domains, which avoids adaptation in every testing domain. ### _Self-Supervised Robot Perception_ Self-supervised learning is an attractive learning paradigm where training data and training signals can be obtained automatically without human labor. Since a robot can naturally interact with its environment to collect data [7], self-supervised learning for robot perception has received more attention recently. One type of approach utilizes multi-view consistency of images captured from different viewpoints to obtain ground truth annotations for learning. Multi-view consistency based self-supervised learning has been applied to object segmentation [42], object detection [20], 6D object pose estimation [13] and dense pixel-wise correspondences [26, 16] in robot manipulation settings. Another type of approach leverages robot actions such as grasping and pushing to interact with objects and then computes scene differences [24] or optical flow [15, 41] before and after applying an action to obtain ground truth labels of objects for learning. Our system falls into this category, as we also employ robot pushing with optical flow to help segment objects in a self-supervised way. The main novelty of our system compared to previous methods on self-supervised object segmentation [15, 41] is that we leverage long-term robot pushing to segment all the objects in a collected video sequence, while previous methods can only segment the grasped or pushed object in an image. 
## III Self-Supervised Unseen Object Instance Segmentation

### _System Overview_

The motivation to build our system is to fix segmentation errors in existing UOIS methods [38, 19]. These methods are trained with synthetic RGB-D images generated using 3D models of objects. Due to the sim-to-real gap and the arrangements of objects in the simulator, these methods often cannot separate objects that are very close to each other. One example is shown in the first initial segmentation image in Fig. 2, where five objects are packed together and the MSMFormer [19] only outputs one mask for all five objects. In grasping applications, a robot cannot grasp these objects due to the incorrect segmentation result. Our idea to fix these errors is to obtain ground truth masks of these packed objects in a self-supervised way by leveraging robot interaction with objects. Then, we can use these images with the corresponding ground truth masks to fine-tune the segmentation networks [38, 19]. With enough data for fine-tuning, the networks should be able to segment closely packed objects. The main challenge in this scenario is obtaining the ground truth masks when objects are close to each other. Previous methods that leverage robot interaction to obtain object masks [15, 41] can only obtain one mask of the pushed or grasped object in an image. They cannot generate masks of all the objects in the scene because they only use one robot action and try to figure out which object has been moved. Instead, in our system, we allow the robot to continuously push objects in a random fashion, and we capture an image before and after each pushing action, resulting in around 20 pushes for each scene in our experiments. Finally, we use these images to perform multi-object tracking and video object segmentation. In this way, our system can generate masks of all the objects in the image sequence, including the first image, where all the objects are close to each other. Fig. 2 illustrates an overview of our system. The collected images with their generated masks can be used to fine-tune existing methods for unseen object instance segmentation [38, 19] in order to improve their performance in the real world. We introduce each component of the system in the following sections.

### _Data Collection via Robot Pushing_

Since our goal is to collect hard-to-segment images to fine-tune the segmentation networks, we intentionally put objects together for each scene at the beginning of the data collection process. After setting up a scene on a tabletop, a robot starts pushing these objects. A Fetch mobile manipulator is employed in our system, and an RGB-D image is captured with the RGB-D camera on the Fetch robot before and after each push action. Different from methods that carefully learn a pushing or grasping policy for singulation [41], we design a simple pushing strategy using object instance segmentation from the MSMFormer [19] as input. This is because our system does not require all the objects to be singulated at the end of the interaction. As long as an object has been separated from other objects for a period of time during pushing, the system is able to generate correct segmentation masks for it thanks to the multi-object tracking and video object segmentation techniques utilized in the system. In cases where one push action cannot separate two objects because both objects move together, multiple push actions may separate them.
Therefore, our system benefits from long-term robot interactions with a sequence of pushes. Specifically, suppose at time \(t\), the system captures an RGB-D image \(I_{t}\). We obtain a set of \(n_{t}\) object segmentation masks \(\{o_{t}^{i}\}_{i=1}^{n_{t}}\) on \(I_{t}\) by running the MSMFormer network on it. These masks are illustrated as the initial segmentation in Fig. 2. Based on the object segmentation, the robot randomly selects an object to push.

Fig. 2: System overview. Our system leverages robot pushing to interact with objects. The pushing actions are guided by the initial segmentation masks of the objects generated from a segmentation network trained with synthetic data. Images before and after each pushing action are captured. By using the sequence of images with the initial segmentation masks, our system combines optical flow-based multi-object tracking and video object segmentation to compute the final segmentation masks, which fix errors in the initial segmentation masks. Red arrows indicate the segmentation errors. The collected images and the final segmentation masks can be used to fine-tune the segmentation network to improve its performance.

First, a 3D bounding box is computed for each segmented object by bounding the 3D point cloud of the object. Using the depth image and the camera intrinsic parameters, we can back-project the depth image into a 3D point cloud of the scene in the camera frame. Since we also know the camera pose in the robot frame, we can convert the point cloud into the robot frame. Using the segmentation mask of each object, we can extract the points of the object and compute a 3D bounding box for it in the robot frame. Second, according to the center of the 3D bounding box, the robot decides to either push the object to the left or to the right. We select the pushing direction to always push the object towards the center of the robot, which prevents objects from being pushed outside the reach of the robot. Third, a motion trajectory is planned to the left side (pushing right) or right side (pushing left) of the object. We used the MoveIt motion planning framework to plan the trajectories. Then the planned trajectory is executed to move the robot arm to the pushing location. Finally, the pushing action is achieved by adding an offset to the shoulder joint of the Fetch arm depending on the pushing direction. Note that our pushing strategy cannot achieve perfect singulation results compared to learned policies or designed strategies for singulation. However, singulation is not our main goal. We also want to collect diverse datasets for learning. Our pushing strategy is effective at separating objects and perturbing objects in the scene in order to generate diverse images. In addition, although the initial segmentation has errors, it can still be used to guide the pushing process. A sequence of pushing actions and the generated images are shown in Fig. 2.

### _Optical Flow-based Multi-Object Tracking_

After the data collection via robot pushing, we obtain a sequence of images \(I_{1},I_{2},\ldots,I_{N}\) with the corresponding initial segmented objects \(\{o_{1}^{i}\}_{i=1}^{n_{1}},\{o_{2}^{i}\}_{i=1}^{n_{2}},\ldots,\{o_{N}^{i}\}_{i=1}^{n_{N}}\), where \(N\approx 20\) in our experiments. Since there are errors in these initial masks, our next task is to fix these errors and obtain correct segmentation masks for all the objects in the image sequence.
Our idea is to leverage the observation that if a mask incorrectly includes more than one object, after a robot push, the mask will be broken down into multiple objects. On the other hand, if a mask correctly segments one object, after pushing, the mask will remain the same. However, one pushing action may not be able to singulate an object successfully. Therefore, we leverage a sequence of robot pushing actions in our system. In this case, if a mask remains the same after several pushing actions, it is highly likely to be a correct segmentation. In order to compare the initial segmentation masks across image frames, we need to associate masks across frames. This problem is studied in the literature as tracking by detection [43, 4, 36, 27]. The most important component in a tracking-by-detection method is a similarity measurement between two object detections across video frames, which can be learned from data [27] or defined using image features [36]. In our system, since we do not have much data to learn the similarity measurement in robotic manipulation settings, we design one based on optical flow between image frames. Let \(o_{t_{1}}^{i}\) be a mask on image \(I_{t_{1}}\) and \(o_{t_{2}}^{j}\) be a mask on image \(I_{t_{2}}\). We would like to compute a similarity score between the two masks as \(s(o_{t_{1}}^{i},o_{t_{2}}^{j})\). We only consider adjacent images in data association. Therefore, we can assume \(t_{2}=t_{1}+1\). We leverage optical flow between the two images to define the similarity score. Let \(o_{t_{2}}^{i}=o_{t_{1}}^{i}+f_{t_{1},t_{2}}^{i}\) be the propagated mask of object \(o_{t_{1}}^{i}\) to frame \(I_{t_{2}}\) using forward flow \(f_{t_{1},t_{2}}^{i}\). Similarly, we can propagate the mask of object \(o_{t_{2}}^{j}\) to frame \(I_{t_{1}}\) using backward flow: \(o_{t_{1}}^{j}=o_{t_{2}}^{j}+f_{t_{2},t_{1}}^{j}\). The similarity score between the two masks is defined as \[s(o_{t_{1}}^{i},o_{t_{2}}^{j})=\min\big(\text{IoU}(o_{t_{2}}^{i},o_{t_{2}}^{j}),\,\text{IoU}(o_{t_{1}}^{i},o_{t_{1}}^{j})\big), \tag{1}\] where the \(\text{IoU}(\cdot,\cdot)\) function computes the intersection over union between two binary masks. Intuitively, one mask is propagated to the other image using optical flow and compared to the other mask. Fig. 3 illustrates two examples of the computed matching scores. In case (a), at time \(t_{1}\), the initial segmentation cannot separate the corn and the salt bottle. The propagated mask to time \(t_{2}\) cannot match the mask of the corn at time \(t_{2}\) well. Therefore, the matching score is low. In case (b), the masks of the tomato match well using both the forward flow and the backward flow. The matching score is high. When the optical flow estimation is accurate, the similarity score in Eq. (1) serves as a good measurement for data association between objects. In our system, we use the RAFT [30] network to compute optical flow.

Fig. 3: Illustration of the matching scores between objects based on forward and backward optical flow.

With the above similarity score, we can leverage existing multi-object tracking methods such as network flow-based approaches [43, 27] or Markov decision process-based approaches [36] to generate trajectories of objects across image frames. Instead, we found that a simple greedy search algorithm works well in the tabletop robot pushing settings since there are no long-term occlusions between objects or new objects coming in and out in these settings.
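To make Eq. (1) concrete, below is a minimal Python sketch of the matching score between two binary masks; it is an illustration rather than our actual implementation, and the nearest-neighbor mask warping and the (dx, dy) channel ordering of the flow field are assumptions made for the example.

```python
import numpy as np

def warp_mask(mask: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Propagate a binary (H, W) mask with a dense (H, W, 2) flow field,
    e.g. one estimated by RAFT between two adjacent frames."""
    h, w = mask.shape
    warped = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    # Move every foreground pixel by its flow vector (nearest neighbor).
    xt = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
    warped[yt, xt] = True
    return warped

def iou(a: np.ndarray, b: np.ndarray) -> float:
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union > 0 else 0.0

def matching_score(mask_t1, mask_t2, fwd_flow, bwd_flow) -> float:
    """Eq. (1): the minimum of the forward- and backward-propagated IoUs."""
    iou_fwd = iou(warp_mask(mask_t1, fwd_flow), mask_t2)  # t1 -> t2
    iou_bwd = iou(mask_t1, warp_mask(mask_t2, bwd_flow))  # t2 -> t1
    return min(iou_fwd, iou_bwd)
```

Taking the minimum of the two IoUs makes the score conservative: a pair of masks is only considered a match when the propagation agrees in both directions.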
The greedy data association algorithm starts from one mask in the last image frame \(I_{N}\). Then it associates the mask to a previous mask which has the highest matching score, provided their matching score is larger than a pre-defined threshold, and repeats this process until the highest matching score is smaller than the threshold. In this way, it generates a tracklet for one object. After that, it selects a remaining mask and repeats the process to generate the next tracklet. We start the data association from the last frame in a backward way because objects are likely to be separated at the end of the robot pushing, which helps object tracking.

### _Mask Propagation via Long-Term Object Segmentation_

The output from the multi-object tracking algorithm is a set of tracklets \(\{\mathcal{T}_{i}\}_{i=1}^{M}\), where tracklet \(\mathcal{T}_{i}=(o_{t_{1}}^{i},o_{t_{2}}^{i},\ldots,o_{t_{m}}^{i})\) consists of a sequence of object masks from the initial segmentation. The lengths of these tracklets can be different. The majority of masks in each tracklet correctly segment one object, since wrong initial segmentation masks have low matching scores, as illustrated in Fig. 3. If we can utilize the extracted tracklets and propagate the correct masks to all the image frames for all the objects, we can obtain correct segmentation masks for the data collected via robot pushing. To achieve this goal, we utilize a state-of-the-art video object segmentation method named XMem [10]. Given an initial mask of an object, XMem can segment the object in the following video frames. It maintains a memory buffer that stores the features of the target object, which enables it to segment the target in long video sequences and handle occlusions. In traditional video segmentation scenarios, the initial mask of a target is given manually on the first video frame. In our case, we need to generate the initial mask automatically. It is critical to select a correct initial mask for an object. Otherwise, a wrong mask will be propagated to other frames. We utilize the observation that if the mask of a pushed object still has high matching scores (Eq. (1)) with the previous mask and the next mask in a tracklet, the mask is likely to contain a single object. Therefore, we select the pushed mask with the highest matching score as the initial mask to initialize XMem. The segmentation proceeds in two directions: one towards the first frame and the other towards the last frame of the collected image sequence. Fig. 4 shows two examples of the object segmentation with XMem. After all the tracklets are processed, the segmentation masks are combined to generate the final segmentation of the images (see Fig. 2). In this way, our system can obtain segmentation masks of objects even when they are very close to each other.

## IV Applications

### _Transfer Learning for Object Segmentation_

Our system can be used to collect images with the corresponding object segmentation masks in a self-supervised way. Then we can use these images to fine-tune the object segmentation networks to improve their performance. Since the collected data include correct segmentation masks when objects are very close to each other, the fine-tuned model is able to fix segmentation errors and correctly separate objects in cluttered scenes. For the fine-tuning, we start with a segmentation model trained with synthetic data. We used MSMFormer [19] in our experiments, which is also used to generate the initial segmentation masks for robot pushing.
We initialize the network with the pre-trained weights on the synthetic data, and then train the network for a number of epochs on the collected real-world data with a smaller learning rate. We conducted an ablation study on different fine-tuning strategies. Specifically, the backbone of the network can be fixed or be trainable during fine-tuning. The fine-tuning data can be a mixture of synthetic images and real-world images or real-world images only. The effects of these strategies are presented in Section V.

### _Top-Down Robot Grasping_

Unseen object instance segmentation can facilitate robot grasping of unknown objects, as demonstrated in previous works [21, 22]. These methods use the segmented point clouds of objects to plan grasps. Improvement in object segmentation can benefit the grasp planning stage and subsequently improve the grasping performance. In this work, we show that using our collected data for fine-tuning can improve object segmentation and consequently top-down grasping.

Fig. 4: Illustration of the XMem [10] video object segmentation on our collected data. The initial mask is used to initialize the segmentation process.

With accurate object segmentation, top-down grasp planning can be achieved in an analytic way. A top-down grasp for a two-finger gripper is defined as the 3D location \(p=(x,y,z)\), orientation \(\theta\) of the gripper in the \(x,y\) plane and the width \(w\) between the two fingers, where axis-\(z\) is the gravity direction. The grasping position \(p\) is defined as the object center, where the object center is computed as the mean of the segmented point cloud of the object. The grasping orientation \(\theta\) is computed to align the gripper with the second largest principal component of the object point cloud in the \(x,y\) plane. In this way, the robot can grasp the narrower side of a long object. Finally, the width between the two fingers is determined by the width of the object along the second largest principal component of the object point cloud in the \(x,y\) plane. It can be shown that if the center of mass of the object is the same as the object center, a grasp computed in this way can achieve force closure. The described grasp planning algorithm relies on accurate segmentation of all the objects in a scene. We can use it to verify the benefit of our system in collecting data to improve object segmentation for robot grasping.

## V Experiments

### _Datasets and Evaluation Metrics_

**Data Collected by the Robot.** We used a set of play food for kids as the objects for robot interaction. For reproducibility, these objects can be purchased from [1]. A Fetch mobile manipulator is used for data collection. Five different objects are used in each scene, and the robot performs around 20 pushing actions for each scene to collect images before and after each pushing action. In total, we collected images from 20 scenes. Images from 15 scenes are used for fine-tuning and the remaining images are used for testing the fine-tuned model in the same domain. Specifically, 321 images are used for fine-tuning, while 107 images are available for testing. Each image contains an average of 6 objects, but no more than 8 objects. **Evaluation Datasets.** We evaluate the performance of our fine-tuned models on the pushing test dataset from our system, the Object Clutter Indoor Dataset (OCID) [28] and the Object Segmentation Database (OSD) [25].
The dataset from robot interaction is in the same domain as our collected data for fine-tuning, whereas OCID and OSD are in different domains. The OCID dataset contains 2,390 RGB-D images, with at most 20 objects and on average 7.5 objects per image. The OSD dataset is composed of 111 RGB-D images, with up to 15 objects and an average of 3.3 objects per image. **Evaluation Metrics.** We analyze the object segmentation performance through precision, recall, and F-measure [39, 38]. To obtain the values for these three metrics, we initially calculate the values between all pairs of predictions and ground truth objects. Subsequently, we employ the Hungarian algorithm with pairwise F-measure to match predictions with the ground truth objects. Consequently, the precision, recall, and F-measure are determined by \[P=\frac{\sum_{i}|c_{i}\cap g(c_{i})|}{\sum_{i}|c_{i}|},\quad R=\frac{\sum_{i}|c_{i}\cap g(c_{i})|}{\sum_{j}|g_{j}|},\quad F=\frac{2PR}{P+R},\] where \(c_{i}\) indicates the segmentation for the predicted object \(i\), \(g(c_{i})\) is the segmentation for the corresponding ground truth object of \(c_{i}\), and \(g_{j}\) denotes the segmentation for the ground truth object \(j\). Overlap \(\mathrm{P/R/F}\) are the above three metrics when the intersection over union between two segmentation masks is used to determine the number of true positives. Boundary \(\mathrm{P/R/F}\) are also used to measure the sharpness of the predicted boundary against the ground truth boundary, where the intersection pixels of the two boundaries determine the number of true positives. Additionally, Overlap F-measure \(\geq 75\%\) is the percentage of objects segmented with a certain accuracy [23].

### _Ablation Studies on the Fine-tuning Strategies_

We first investigate how to fine-tune the pre-trained segmentation networks with our collected real-world data. Regarding the training data for fine-tuning, we have two types of data: the 321 real-world images obtained via robot pushing and the synthetic images from the Tabletop Object Dataset [39]. The synthetic dataset consists of 280,000 RGB-D images, which are used for training most unseen object instance segmentation networks [39, 38, 19]. In this work, we use the MSMFormer model [19] trained on the Tabletop Object Dataset for fine-tuning, since it achieves very competitive performance and is end-to-end trainable. MSMFormer consists of two stages in segmenting objects, where the first stage segments the whole input image while the second stage performs zoom-in refinement for each segment from the first stage. We have two choices on using these data for fine-tuning: i) using the real-world images only, ii) using both real-world images and synthetic images. On the other hand, we have two choices on how to fine-tune the backbone network in MSMFormer: i) fixing the backbone during fine-tuning, ii) fine-tuning the backbone. We conduct ablation studies on the four combinations and present the results on the OCID and the OSD datasets in Table I. We fine-tune the models for 6 epochs as the training loss converges quickly, where each epoch loops over the 321 real-world images once. We employ the AdamW optimizer [18] with the learning rate 1e-5. We set the batch size as 4. When using the mixture dataset for fine-tuning, for the first-stage model of MSMFormer, we randomly select 2 samples from the synthetic dataset and 2 samples from the real-world pushing dataset for each batch.
For the second-stage model (zoom-in model), each batch has 3 random samples from the synthetic dataset and 1 pushing sample, since the parameters of the second stage are more sensitive to the pushing data. Table I shows that the performance of MSMFormer fine-tuned using only the small number of real-world pushing images is worse on the OCID dataset. This is due to overfitting to these real data. Using both the synthetic data and the real-world data for fine-tuning improves performance on both datasets. Using the mixture dataset is motivated by continual learning approaches such as [2, 3], which maintain a buffer of previously seen data. In our case, we can consider the synthetic dataset to be a data buffer. Table I also reveals that using learnable backbones achieves better performance than fixed backbones due to more flexibility in learning. According to these results, our fine-tuning strategy is to train the pretrained MSMFormer with mixture data and learnable backbones. We use this fine-tuning strategy in the following experiments.

### _Ablation Studies on the Number of Fine-tuning Images_

Our collected pushing training set has 15 scenes in total. We investigate the correlation between the number of images and the performance of the fine-tuned model. We partition the training set according to scenes and gradually add more scenes to the fine-tuning dataset. Table II shows the performance of the MSMFormer models fine-tuned with datasets of different sizes. We can see that the performance on the OCID and OSD datasets continually improves as the number of scenes increases. After 12 scenes, the model performance begins to saturate. According to this experiment, a small number of real-world images for fine-tuning is sufficient, which avoids collecting a large number of images in the real world for fine-tuning. We use all the 15 scenes with 321 images for fine-tuning in the following experiments.

### _Object Instance Segmentation in the Same Domain_

Table III presents the evaluation results on the 107 real-world test images of the models before and after fine-tuning. Since the pushing test dataset has the same settings as the fine-tuning dataset, we view the pushing test dataset as in the same domain. It is clear that the fine-tuned models significantly improve the segmentation accuracy in the same domain. When a robot enters a new domain, it can utilize our system to collect a few images to improve object segmentation in this new domain. We experiment with fine-tuning both the RGB version and the RGB-D version of MSMFormer. In addition, we investigate the effect of fine-tuning on each stage of the segmentation network. "Zoom-in" in Table III indicates the second-stage network. From the table, we can see that fine-tuning consistently improves the performance over the original models. The best performance is achieved by fine-tuning both stages of MSMFormer. Generally, RGB-D models tend to surpass RGB models due to the additional depth input. However, we can observe that the fine-tuned two-stage RGB model (RGB with zoom-in) achieves the same Overlap F-measure and a higher Boundary F-measure compared to the fine-tuned two-stage RGB-D model. This result indicates that it is possible to segment unseen objects with RGB images only, as long as we can obtain RGB training images with ground truth labels. Our system provides a solution by utilizing robot interaction for data collection.
It is worth noting that using RGB images only is valuable since certain objects such as transparent objects or metal objects cannot be captured well by depth images.

### _Object Instance Segmentation across Domains_

To evaluate the performance of the fine-tuned models across domains, we test them on the OCID and OSD datasets and compare the achieved results with the state-of-the-art methods in Table IV. From the table, we can see that the fine-tuned models improve over the state-of-the-art methods on the OCID dataset for both RGB and RGB-D input. On the OSD dataset, UOAIS-Net [5] achieves better performance for RGB-D input by utilizing photo-realistic synthetic images for training. In most cases, the fine-tuning strategy consistently improves the models pre-trained with synthetic images. However, the RGB-D fine-tuned zoom-in refinement is not as effective as the original zoom-in refinement on the OCID dataset. The primary reason for this is that the environment and objects in our pushing dataset are simpler and more restricted than those presented in the OCID dataset. The combination of the fine-tuned first-stage model and the original zoom-in part is more effective on the OCID dataset. We visualize the differences between using the original models and the fine-tuned models on different datasets in Fig. 5. The fine-tuned models are able to separate adjacent objects to mitigate the under-segmentation problem in the same domain as the fine-tuning images as well as in the different domains of the OCID and OSD datasets.

Fig. 5: Illustration of the effect of fine-tuning the MSMFormer. The fine-tuning of the model allows it to distinguish objects that are stacked or adjacent to each other, where the original model cannot separate these objects.

### _Top-Down Grasping with Object Instance Segmentation_

We show the usefulness of the proposed system for grasping unknown objects in a table-top setting where the objects are placed in a cluttered environment. A Fetch mobile manipulator is used for the experiments, with its parallel jaw gripper for grasping and built-in RGB-D camera for perception. We compute the top-down grasp after segmenting all the objects in the scene via the procedure described in Section IV-B. We formulate the experiment as a pick-and-place task where the goal is to clear the table and place all the objects in a nearby bin. One example is shown in Fig. 6. The experiment is conducted with two sets of unknown objects (i.e., not seen during fine-tuning or training) with each set containing five objects. For each object set, we consider the pick-and-place task with four different initial configurations of the object placement on the table, ranging from highly cluttered to well separated as shown in Fig. 7. The pick-and-place grasping trials are conducted with the baseline\({}^{2}\) and fine-tuned\({}^{3}\) segmentation models with RGB-D input for each configuration to bring out the relative improvement of fine-tuning on data collected using the proposed method.

Footnote 2: Baseline: MSMFormer_R34 + Zoom-in in Table V-A

Footnote 3: Fine-tuned: MSMFormer_R34* + Zoom-in* in Table V-A

Given a configuration for object arrangement on the table, there are five pick-and-place trials, one associated with each of the five objects. A trial is counted as a success if a grasp of an object guided by its segmentation boundary allows for a successful pick-and-place operation; otherwise it is counted as a failure. A hard failure occurs for a scene if the segmentation masks are incorrect at the beginning, when all five objects are still in the scene.
Such an error is not favorable due to the possibility of collision with and damage to the gripper; hence, grasping is stopped in this case, and none of the objects count towards the success rate metric. It potentially occurs if the segmentation model is not able to establish clear boundaries between nearby objects, which induces errors in the grasping pipeline, specifically in positioning the gripper for picking up the object. For example, cases 1-A and 1-B with the baseline model in Table V are hard failures due to segmentation errors at the very start. Consequently, no feasible grasping motion is found for any object in the scene and hence they have no score for the respective trials. Therefore, accurate object segmentation is critical for grasping in cluttered scenes. We obtain data for the 40 individual trials (10 objects in total, across 4 table-top configurations each) for each of the baseline and fine-tuned models and report their number of successful actions. As seen in Table V, we see a clear improvement in the grasp success rate when using the fine-tuned model, especially in scenes with high clutter. This highlights the need for precise segmentation masks of objects in cluttered scenes, as any errors in this stage likely affect downstream applications like grasping. Additional details and qualitative results will be provided in the supplementary material.

Fig. 6: Setup for top-down grasping with segmentation. Robot images on each column show the three stages: approach, pickup and placing in the bin.

Fig. 7: Examples of scene configurations with varying amounts of clutter.

## VI Conclusion and Future Work

We introduced a robotic system for self-supervised unseen object instance segmentation. Our system leverages robot pushing to interact with objects and collect images before and after each pushing action. In order to generate segmentation masks of objects in the collected images, the system allows the robot to push objects until a sequence of images is collected; then an optical flow-based multi-object tracking algorithm and a video object segmentation method are combined to segment object instances in the collected images automatically. Using a sequence of images from robot pushing enables the system to segment all the objects in the sequence, including objects that are very close to each other. To the best of our knowledge, this is the first system that leverages long-term robot interaction for object segmentation. We verify the usefulness of the system by using the collected images to fine-tune object segmentation networks. Our experiments show that the fine-tuned networks achieve better segmentation accuracy both in the same domain and in different domains. We also demonstrate that improving object segmentation with fine-tuning benefits top-down robot grasping in a pick-and-place task, where accurate object segmentation can be used to plan grasps in cluttered scenes. For future work, we plan to extend the system beyond tabletop scenarios, such as segmenting objects inside bins or cabinets. Robot interaction in these environments requires motion planning to account for the constraints from the environments. Robot pushing may not be sufficient in these environments. We plan to investigate different interaction actions such as grasping and scooping for data collection.

## Acknowledgments

This work was supported in part by the DARPA Perceptually-enabled Task Guidance (PTG) Program under contract number HR00112220005. Kaiyu Hang is supported by NSF CMMI-2133110.
2304.10307
MATOQ: a Monte Carlo Simulation of Electron Transport in Environmental-friendly Gas Mixtures for Resistive Plate Chambers
The increasing interest in environmentally friendly gas mixtures for gaseous particle detectors, especially tetrafluoropropene-based gas mixtures for Resistive Plate Chambers (RPCs), has prompted the need for simulating electron transport coefficients and reaction rates in these mixtures in recent years. MATOQ is a Monte Carlo simulation program that calculates electron transport parameters, specifically designed for studying and optimizing environmental-friendly gas mixtures for RPCs. Unlike other existing codes, MATOQ allows for the simulation of electron avalanches by including the effect of space charge electric field, which can significantly impact the avalanche evolution in gaseous detectors such as RPCs. After the validation of the MATOQ simulation in the temporal and spatial growth configurations, we present the electron transport coefficients and the reaction rates in tetrafluoropropene-based gas mixtures, which may represent a valid alternative to the standard gas mixtures currently used for RPCs.
Antonio Bianchi
2023-04-20T13:41:04Z
http://arxiv.org/abs/2304.10307v1
MATOQ: a Monte Carlo Simulation of Electron Transport in Environmental-friendly Gas Mixtures for Resistive Plate Chambers

###### Abstract

The increasing interest in environmentally friendly gas mixtures for gaseous particle detectors, especially tetrafluoropropene-based gas mixtures for Resistive Plate Chambers (RPCs), has prompted the need for simulating electron transport coefficients and reaction rates in these mixtures in recent years. MATOQ is a Monte Carlo simulation program that calculates electron transport parameters, specifically designed for studying and optimizing environmental-friendly gas mixtures for RPCs. Unlike other existing codes, MATOQ allows for the simulation of electron avalanches by including the effect of the space charge electric field, which can significantly impact the avalanche evolution in gaseous detectors such as RPCs. After the validation of the MATOQ simulation in the temporal and spatial growth configurations, we present the electron transport coefficients and the reaction rates in tetrafluoropropene-based gas mixtures, which may represent a valid alternative to the standard gas mixtures currently used for RPCs.

## I Introduction

Resistive Plate Chambers (RPCs) are gaseous particle detectors used in high-energy physics experiments [1; 2; 3; 4] and medical imaging applications [5; 6]. They consist of two parallel plates made of high-resistivity materials with a gap between them filled with a gas mixture. Tetrafluoroethane (C\({}_{2}\)H\({}_{2}\)F\({}_{4}\)) is generally the main component of the gas mixtures for RPCs. This gas is typically mixed with quench gases such as isobutane (_i_-C\({}_{4}\)H\({}_{10}\)) and sulfur hexafluoride (SF\({}_{6}\)) in various proportions to optimize the performance of RPCs for specific applications. In view of supporting the transition to a green economy and fighting climate change, recent regulations of the European Union have prohibited the use of C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) for many applications, since it is a greenhouse gas. Indeed, the global warming potential of C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) is about 1430 [7]. This means that the impact of this gas on the greenhouse effect is estimated to be 1430 times higher than that of an equivalent mass of carbon dioxide (CO\({}_{2}\)) in the atmosphere. Although there are no European regulations restricting the use of C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) for scientific applications, some research teams [8; 9; 10; 11] have explored the possibility of replacing C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) with more environmentally friendly gases. Many experimental studies are currently focused on measuring the performance of RPCs by replacing the current C\({}_{2}\)H\({}_{2}\)F\({}_{4}\)-based gas mixture with environmental-friendly alternatives [11; 12; 13; 14]. Recently, some encouraging results have been obtained by replacing C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) with tetrafluoropropene. Tetrafluoropropene (C\({}_{3}\)H\({}_{2}\)F\({}_{4}\)) has a chemical composition similar to C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), but it seems much more electronegative than C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) [15; 16]. On its own, this would result in operating voltages for RPCs that are too high to be compatible with the already existing power supply systems. Instead of replacing C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) with only C\({}_{3}\)H\({}_{2}\)F\({}_{4}\), binary mixtures of C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) and CO\({}_{2}\) may be a feasible alternative [11; 12; 13; 14].
However, purely experimental studies require a large number of trials to identify gas mixtures that yield satisfactory performance of RPCs. The simulation of electron transport coefficients and reaction rates under the influence of the electric field can assist in the selection of the most promising eco-friendly gas mixtures for RPCs. In recent years, some Monte Carlo programs to simulate electron transport in gases under the influence of a static and uniform electric field have been developed for specific applications. One of the most widely used codes is MAGBOLTZ, which was developed by S. Biagi in the 1990s and has been regularly updated [17]. This open-source code, written in FORTRAN, is still used in the field of gaseous particle detectors. However, one of its most significant limitations is that the input electron collision cross sections for all gases are deeply embedded in the MAGBOLTZ code. This makes it challenging to implement new sets of electron collision cross sections, like those for C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) [18], and modifying the code for specific purposes can be complicated. Some attempts have recently been made to implement more user-friendly Monte Carlo simulations similar to MAGBOLTZ. In particular, the METHES program [19] overcomes the limitation of MAGBOLTZ in simulating electron transport in gases not yet included in the internal database. Indeed, different sets of electron collision cross sections can be easily adopted as input in METHES. However, the execution of METHES requires a commercial license of MATLAB [19]. In addition, it is important to note that MAGBOLTZ and METHES do not account for the effect of the space charge electric field, which can significantly impact the avalanche evolution, especially in RPCs with narrow gas gaps. This paper describes the MATOQ program, a Monte Carlo simulation focused on the calculation of electron transport coefficients and reaction rates in any gas mixture of interest under the influence of the electric field. This is obtained by simulating the temporal and spatial growth of electron avalanches along gas gaps. In addition, MATOQ allows simulating the electron avalanche growth under the influence of a static applied electric field together with the space charge electric field, which changes depending on the avalanche evolution along a given gas gap. This aspect sets the MATOQ program apart from all other available Monte Carlo simulations. MATOQ is implemented in the programming language C++, which facilitates its usage and customization in various research fields where C++ is commonly used, such as in the simulation of gaseous particle detectors. MATOQ is compatible with the file format of electron collision cross sections adopted by the open-access Plasma Data Exchange Project (LXCat) [20] in order to allow the user to easily control all input parameters. Moreover, MATOQ is interfaced with OpenMP [21] to enable multi-thread execution, and with the ROOT program [22] for the graphical representation of simulation results. The paper is organized as follows. In section II we describe how the space charge electric field may play a significant role in the electron avalanche growth. The methodology to simulate the electron transport in gases is presented in section III, while the simulation of collisions between electrons and neutral gas molecules is examined in section IV.
Sections V and VI describe the temporal and spatial growth configurations of MATOQ, whereas the simulation of the avalanche growth under the influence of the applied electric field together with the space charge electric field is detailed in section VII. In section VIII, we compare the electron transport coefficients and reaction rates in pure C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), pure C\({}_{3}\)H\({}_{2}\)F\({}_{4}\), and in binary mixtures of C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) and CO\({}_{2}\). Furthermore, we describe how the avalanche size in a narrow-gap RPC is affected when C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) is substituted by C\({}_{3}\)H\({}_{2}\)F\({}_{4}\). Finally, conclusions are drawn in section IX.

## II Space charge electric field

Free charged particles gain energy in gases under the influence of an electric field. Since the ion mobility is generally three orders of magnitude lower than that of electrons [23], the velocity of ions in gases is generally negligible in comparison to that acquired by electrons. As a result, the number of electrons grows exponentially, giving rise to an electron avalanche. Indeed, under the influence of the electric field, electrons can gain enough energy to ionize a certain number of gas molecules along their drift towards the anode. On the contrary, ions generally drift towards the cathode without playing a fundamental role in the charge multiplication, due to their low energy. In this work, the motion of ions is not simulated since they move much slower than electrons. Nevertheless, the motion of ions can be easily implemented in MATOQ by including the ion mobility of each species in the gas mixture of interest. During the avalanche growth in gases, electrons and ions are partially overlapped in space while they move towards the opposite electrodes. This generates an electric field, generally called the space charge electric field, that is superimposed on the applied electric field. As a consequence, the applied electric field turns out to be reduced in the middle of the electron avalanche because it is lowered by the space charge electric field between electrons and ions. On the contrary, the applied electric field is strengthened in the upstream and downstream avalanche as it is reinforced by the space charge electric field. This causes a non-uniform electric field during the avalanche evolution in the gas gap, which depends on the free charges and their positions in time. One of the common devices where space charge effects can have a significant impact is gaseous particle detectors, in particular narrow-gap RPCs [23; 24]. MATOQ allows for calculating or excluding space charge effects in simulations depending on the desired outcome. In sections V and VI, the space charge electric field is not assessed in order to validate the MATOQ calculations with the results of MAGBOLTZ, in which the space charge effects cannot be simulated. On the contrary, in section VII, the space charge electric field is considered for simulating the average avalanche size as a function of the applied electric field in an RPC with a gas gap of 0.1 mm. This is done to enable a comparison between the MATOQ results and those obtained by Lippmann et al.'s model, which can be used to calculate the electron avalanche size in RPCs [25].

## III Methods

The MATOQ simulation program tracks the motion of electrons for the entire duration of the simulation, while ions are considered motionless during electron avalanche development due to their much lower mobility compared to electrons.
The position \(\vec{r}\) and the velocity \(\vec{v}\) of a free electron in a gas mixture under the influence of the applied electric field \(\vec{E}\) is determined according to the following equations: \[\vec{r}\rightarrow\vec{r}+\vec{v}\Delta t+\frac{1}{2}\frac{e\vec{E}}{m_{e}}\Delta t^{2}\ \text{and}\ \vec{v}\rightarrow\vec{v}+\frac{e\vec{E}}{m_{e}}\Delta t \tag{1}\] where \(\Delta t\) is the time step of the simulation, while \(m_{e}\) and \(e\) are the electron mass and charge, respectively. The kinetic energy \(\varepsilon\) of an electron with velocity \(\vec{v}\) is given by: \[\varepsilon=\frac{1}{2}m_{e}|\vec{v}|^{2} \tag{2}\] The choice of an appropriate time step \(\Delta t\) to perform the MATOQ simulation is determined by the null-collision technique [26; 27]. According to this technique, the probability \(P(\Delta t)\) of time steps higher than \(\Delta t\) is: \[P(\Delta t)=e^{-\int_{0}^{\Delta t}\nu(|\vec{v}(t)|)\,dt} \tag{3}\] where \(t\) is the time and \(\nu\) is the collision frequency, which depends on the electron velocity \(\vec{v}\). Indeed, the collision frequency \(\nu\) can be expressed as: \[\nu(|\vec{v}|)=N\sigma(|\vec{v}|)|\vec{v}| \tag{4}\] where \(N\) denotes the number of gas molecules per unit volume, which is assumed constant in space and time in all MATOQ simulations, and \(\sigma\) is the cross section of each individual process that can take place in the gas mixture. For a gas mixture consisting of \(M\) components with respective concentrations \(c_{m}\), the total cross section \(\sigma_{tot}\) is given by: \[\sigma_{tot}(|\vec{v}|)=\sum_{m=1}^{M}\sum_{i=1}^{I}c_{m}\sigma_{m,i}(|\vec{v}|) \tag{5}\] where \(m\) is the index of each gas component, while the index \(i\), ranging from 1 to \(I\), corresponds to each individual electron collision process that can occur in the gas component \(m\). Using the null-collision technique, a constant trial collision frequency (\(\nu^{\prime}\)) is introduced and assumed higher than the total collision frequency \(\nu_{tot}\) in the whole energy range of interest. As a consequence, the expression of \(\nu^{\prime}\) is: \[\nu^{\prime}>\max(\nu_{tot}(|\vec{v}|))=\max(N\sigma_{tot}(|\vec{v}|)|\vec{v}|) \tag{6}\] The total cross section \(\sigma_{tot}\) is evaluated in MATOQ between 0 eV and 100 eV, whereas the trial collision frequency \(\nu^{\prime}\) is assumed three times higher than \(\nu_{tot}\). As a result, the introduction of a constant trial collision frequency \(\nu^{\prime}\) that is independent of the electron velocity \(\vec{v}\) makes it possible to recast the probability \(P(\Delta t)\) as follows: \[P(\Delta t)=e^{-\nu^{\prime}\Delta t} \tag{7}\] Therefore, the selection of the time step can be determined by the generation of a random number. Using the inverse transformation method for the distribution \(P(\Delta t)\), the time step \(\Delta t\) is calculated as follows: \[\Delta t=-\frac{1}{\nu^{\prime}}\ln(s) \tag{8}\] where \(s\) is a random number generated from a uniform distribution in the range (0, 1). For each electron, identified by the index \(k\), the possibility that a real collision may occur is checked after every time step \(\Delta t\). The number \(L\) of all possible processes at the corresponding velocity \(\vec{v}_{k}\) is determined to define a vector with \(L\)+1 items for each electron [19].
Subsequently, the single item \(C_{l}\) of the vector is initialized as: \[C_{l}=C_{l-1}+\frac{N\cdot c_{m}\cdot\sigma_{m,l}(|\vec{v}_{k}|)\cdot|\vec{v}_{k}|}{\nu^{\prime}} \tag{9}\] where \(C_{l}\) is the occurrence probability of the \(l\)-th electron collision process. According to equation 9, the content of each item is cumulatively summed to obtain an integral function that monotonically increases. Since a null-collision can occur, the total number of items in the vector is \(L\)+1 and each collision frequency \(N\cdot c_{m}\cdot\sigma_{m,l}(|\vec{v}_{k}|)\cdot|\vec{v}_{k}|\) is normalized by \(\nu^{\prime}\), which is higher than the total collision frequency \(\nu_{tot}\), in accordance with equation 6. By means of this normalization, the last item of the vector represents the occurrence probability of a null-collision, in which no real collision occurs.

## IV Types of electron collisions

For each type of electron collision, MATOQ needs the corresponding cross section as a function of the incident electron energy as input. Since there are different sets of electron collision cross sections present in the literature for the most common gases, MATOQ is compatible with the data format used in the LXCat database, which includes a vast collection of sets for various gases [20]. This is the same approach adopted in METHES, whereas MAGBOLTZ utilizes a database included in the code and updated and reviewed periodically by its developer. In MATOQ, simulations of electron-neutral gas molecule collisions are carried out, while interactions between electrons and between electrons and ions are not considered. The possible electron collisions that can be simulated include: (a) elastic processes, where the interaction involves an electron and a neutral gas molecule in both the initial and final states; (b) excitations, where an incident electron transfers part of its energy to a neutral gas molecule, promoting it to an excited state. However, secondary effects such as photon emission are not considered in MATOQ; (c) ionization and attachment events, where an incident electron ionizes a neutral gas molecule or is captured by it, respectively. In the LXCat database, cross sections for every type of electron collision are presented as tables of values that vary as a function of the incident electron energy. In MATOQ, each cross section value is linearly interpolated with the subsequent value in the table to obtain a continuous function that spans the entire energy range of interest. In order to select the type of electron collision to simulate in MATOQ, a random number \(r\) is generated from a uniform distribution ranging from 0 to 1 for each time step \(\Delta t\). If \(r\) is smaller than the content of \(C_{1}\) in equation 9, the first collision process is simulated. If \(r\) is larger than the (\(l\)-1)-th item and smaller than the \(l\)-th item, the \(l\)-th process is simulated. If \(r\) is higher than the content of \(C_{L}\), a null-collision is selected, which means that the electron continues its free motion without any interactions with the gas medium [19]. In the case of elastic collisions, isotropic scattering is assumed in the simulation, meaning that the polar angle \(\theta\) and the azimuthal angle \(\phi\) are determined as follows [19]: \[\theta=\arccos(1-2r_{1})\ \text{and}\ \phi=2\pi r_{2} \tag{10}\] where \(r_{1}\) and \(r_{2}\) are random numbers generated from a uniform distribution ranging from 0 to 1.
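As an illustration of these sampling steps, the following minimal Python sketch implements the free-flight time of equation 8, the process selection from the cumulative vector of equation 9, and the isotropic scattering angles of equation 10. MATOQ itself is written in C++; the names and data layout here are illustrative assumptions, not the MATOQ code.

```python
import numpy as np

rng = np.random.default_rng()

def sample_time_step(nu_trial: float) -> float:
    """Free-flight time of equation 8: dt = -ln(s)/nu' with s ~ U(0, 1)."""
    # 1 - uniform() lies in (0, 1], which avoids log(0).
    return -np.log(1.0 - rng.uniform()) / nu_trial

def select_process(cumulative: np.ndarray) -> int:
    """Select a collision from the cumulative vector C_l of equation 9.

    `cumulative` is assumed monotonically increasing with L+1 entries,
    the last one (equal to 1 by construction) corresponding to the
    null-collision. The first index l with r < C_l is returned.
    """
    return int(np.searchsorted(cumulative, rng.uniform(), side="right"))

def isotropic_direction() -> np.ndarray:
    """Unit scattering direction from the isotropic angles of equation 10."""
    cos_theta = 1.0 - 2.0 * rng.uniform()
    phi = 2.0 * np.pi * rng.uniform()
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    return np.array([sin_theta * np.cos(phi),
                     sin_theta * np.sin(phi),
                     cos_theta])
```

Drawing \(\cos\theta\) uniformly in \([-1, 1]\), rather than \(\theta\) itself, is what makes the scattering direction uniform on the unit sphere.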
In addition to the simulation of the resulting electron trajectory, it is necessary to evaluate the energy loss of the incident electron for each elastic collision. Since gas molecules are assumed to be at rest in the laboratory system for simplicity, the energy loss \(\Delta\epsilon\) of the incident electron after the elastic scattering with the neutral gas molecule of mass \(M\) is given by [28; 19]: \[\Delta\epsilon=\frac{1}{2}\epsilon\frac{m_{e}}{M}(1-\cos(\theta)) \tag{11}\] where \(\epsilon\) is the incident electron energy, \(m_{e}\) is the electron mass and \(\cos(\theta)\) is given by equation 10. On the contrary, the electron energy loss \((\Delta\epsilon)^{*}\) after an excitation or ionization process is given by: \[(\Delta\epsilon)^{*}=\epsilon^{*} \tag{12}\] where \(\epsilon^{*}\) is the energy threshold of the specific event to simulate. In other terms, the incident electron energy is assumed to be reduced by the minimum energy required to excite or ionize the gas molecule, depending on the type of process simulated. The scattering of the incident electron in excitation and ionization processes is considered isotropic as in the case of elastic processes. The values of \(\epsilon^{*}\) used in MATOQ are obtained from the LXCat database along with the electron collision cross sections. More details on simulating ionization and attachment processes are given in sections V and VI, depending on the chosen temporal or spatial growth configuration. In the case of a null-collision, the electron does not interact with the gas medium and continues its motion.

## V Temporal growth configuration

In the temporal growth configuration of MATOQ, trajectories and collisions of an ensemble of electrons are simulated in an infinite gas volume. The number of electrons remains constant for the entire duration of the simulation. This configuration enables the determination of various electron properties such as their mean energy, drift velocity, and ionization and attachment coefficients. MATOQ implements the same technique used in METHES [19] to maintain a constant number of electrons at every time step \(\Delta t\). In the case of an ionization event, an additional electron is added to the electron ensemble, while a different electron is randomly removed. The new electron is simulated from the initial position where the ionization occurred. The remaining electron energy, calculated as the difference between the energy of the incident electron and the ionization energy of the involved gas molecule (according to equation 12), is equally shared between the two electrons resulting from the ionization collision. On the contrary, in the case of attachment processes, the attached electron is removed from the electron ensemble, and an additional electron is added at the same position where the attachment event occurred. The direction and energy of the new electron are assumed to be the same as those of an electron randomly selected from the ensemble. The reliable calculation of electron transport coefficients and reaction rates is only possible after the electron ensemble has reached a steady state regime, where the mean electron energy remains constant, except for statistical fluctuations. All electrons in the MATOQ simulation start with an initial energy of 0.1 eV. Figure 1 shows the time evolution of the mean energy of \(10^{5}\) electrons in pure argon (Ar) at a reduced electric field of 150 Td\({}^{1}\).
After approximately \(10^{-11}\) s, the mean electron energy reaches a constant value of around 7 eV, with small statistical variations (\(<\sim\)0.5 eV).

Footnote 1: 1 Td = \(10^{-21}\) V\(\cdot\)m\({}^{2}\)

The instantaneous mean electron energy \(<\!\varepsilon(t)\!>\) at the time \(t\) is given by: \[<\!\varepsilon(t)\!>=\frac{1}{2}m_{e}\frac{1}{K}\sum_{k=1}^{K}|\vec{v}_{k}(t)|^{2} \tag{13}\] where \(k\) indicates the \(k\)-th electron with velocity \(\vec{v}_{k}(t)\), whereas \(K\) is the total number of electrons. Similarly to the calculation of the instantaneous mean electron energy, the instantaneous mean electron velocity \(\vec{v}(t)\) at the time \(t\) is given by: \[\vec{v}(t)=\frac{1}{K}\sum_{k=1}^{K}\vec{v}_{k}(t) \tag{14}\] Since the number of electrons does not change in the temporal growth configuration of MATOQ, the number of ionizations and attachments increases linearly in time [19]. Therefore, the ionization (\(\nu_{ion}(t)\)) and electron attachment (\(\nu_{att}(t)\)) coefficients at the time \(t\) are given by: \[\nu_{ion}(t)=\frac{N_{ion}(t)-N_{ion}(t_{0})}{K|\vec{v}(t)|(t-t_{0})}\ \text{and}\ \nu_{att}(t)=\frac{N_{att}(t)-N_{att}(t_{0})}{K|\vec{v}(t)|(t-t_{0})} \tag{15}\] where \(N_{ion}(t_{0})\) and \(N_{att}(t_{0})\) are the number of ionizations and electron attachments at the time instant \(t_{0}\), respectively, whereas \(N_{ion}(t)\) and \(N_{att}(t)\) correspond to the number of ionization and attachment events at time instant \(t\) with \(t>t_{0}\). To accurately evaluate all electron transport coefficients and reaction rates, the electron ensemble must reach a steady state. Therefore, \(<\!\varepsilon(t)\!>\), \(\vec{v}(t)\), \(\nu_{ion}(t)\) and \(\nu_{att}(t)\) are only calculated after this condition has been met.

Figure 1: Mean energy of \(10^{5}\) electrons as a function of the time in pure Ar at 150 Td.

The mean electron energy \(<\!\varepsilon\!>\) and the mean electron velocity \(\vec{v}\) are determined in MATOQ by averaging all respective values of \(<\!\varepsilon(t)\!>\) and \(\vec{v}(t)\), sampled at each time step \(\Delta t\) after reaching the steady state. In this work, the drift velocity \(v_{drift}\) is defined as the component of the velocity \(\vec{v}\) along the direction of the applied electric field \(\vec{E}\). The ionization and attachment rates, \(\nu_{ion}\) and \(\nu_{att}\), are calculated by counting the respective processes every 1000 time steps after reaching the steady state. This ensures that enough ionization and attachment events occur between samplings. The values of \(<\!\varepsilon\!>\), \(\vec{v}\), \(\nu_{ion}\) and \(\nu_{att}\) are subject to statistical fluctuations, and their uncertainties are estimated by computing the standard deviation of the corresponding values obtained at each sampling. The temporal growth configuration of the MATOQ simulation ends after a specified number of real collisions, which is selected by the user. Increasing the number of real collisions improves the accuracy of results in Monte Carlo simulations, but it also increases the computation time [17; 19]. In MATOQ, tens of millions of real collisions generally result in a reasonable computation time with satisfactory accuracy. To validate the MATOQ simulation, we compare the calculated values of \(<\!\varepsilon\!>\), \(v_{drift}\), \(\nu_{ion}\), and \(\nu_{att}\) as a function of the reduced electric field \(E/N\) with MAGBOLTZ results in both pure Ar and nitrogen (N\({}_{2}\)) gases.
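Before turning to the validation, a minimal Python sketch of the ensemble averages of equations 13–15 is given below; the array layout and names are illustrative assumptions made for the example, not the MATOQ code (which is written in C++).

```python
import numpy as np

M_E = 9.10938e-31  # electron mass [kg]

def ensemble_stats(velocities: np.ndarray):
    """Equations 13 and 14: instantaneous mean energy [J] and mean velocity.

    `velocities` is a (K, 3) array with one row per simulated electron.
    """
    mean_energy = 0.5 * M_E * np.mean(np.sum(velocities**2, axis=1))
    mean_velocity = velocities.mean(axis=0)
    return mean_energy, mean_velocity

def reaction_coefficient(n_t, n_t0, K, v_mean, t, t0):
    """Equation 15, for either the ionization or the attachment coefficient:
    events counted between t0 and t, per electron and per unit drift length."""
    return (n_t - n_t0) / (K * np.linalg.norm(v_mean) * (t - t0))
```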
Additionally, we examine the accuracy of MATOQ by comparing the results with MAGBOLTZ in a binary mixture of 50% Ar and 50% N\({}_{2}\) as well as in pure CO\({}_{2}\), where electron attachments can occur, unlike in Ar and N\({}_{2}\). The electric field is assumed uniform in the gas medium in all these cases. No space charge effects are considered here. Figure 2 shows values of \(<\!\varepsilon\!>\), \(v_{drift}\), \(\nu_{ion}\) and \(\nu_{att}\) as a function of \(E/N\) in pure Ar, N\({}_{2}\), CO\({}_{2}\) and in the gas mixture composed of 50% Ar and 50% N\({}_{2}\). For each value of \(E/N\), MATOQ simulation results are obtained with 10\({}^{5}\) electrons and the maximum number of real collisions is set equal to 4\(\cdot\)10\({}^{8}\) as in MAGBOLTZ. The steady state is assumed to be reached in MATOQ after 1\(\cdot\)10\({}^{8}\) real collisions. Sets of electron collision cross sections used for simulations are specified in the appendix. The comparison between MATOQ and MAGBOLTZ calculations in all four gas mixtures shows a good agreement, as displayed in figure 2.

Figure 2: Values of average electron energy \(<\!\varepsilon\!>\) (a), drift velocity \(v_{drift}\) (b), ionization coefficient \(\nu_{ion}\) (c) and attachment coefficient \(\nu_{att}\) (d) as a function of the reduced electric field \(E/N\) in pure Ar, N\({}_{2}\), CO\({}_{2}\) and in the gas mixture of 50% Ar and 50% N\({}_{2}\). Some statistical error bars are hidden by markers.

The relative error\({}^{2}\) in the values of \(<\!\varepsilon\!>\), \(v_{drift}\), \(\nu_{ion}\) and \(\nu_{att}\) is below 1% for all gas mixtures tested, except for pure Ar at \(E/N\) values above 200 Td, where the relative error increases up to about 5%. This discrepancy is likely due to the assumption in MATOQ that the energy resulting from ionization collisions is equally shared between the two electrons in the final state, which may lead to less accurate results compared to MAGBOLTZ. This effect is more pronounced in Ar at higher \(E/N\) values, where ionization events are more frequent compared to the other gases.

Footnote 2: The relative error \(r\) of the generic parameter \(A\) is defined in percentage as follows: \[r_{A(E/N)}=\left(\frac{A_{MQ}(E/N)-A_{MZ}(E/N)}{A_{MZ}(E/N)}\right)\cdot 100 \tag{16}\] where \(A_{MQ}(E/N)\) and \(A_{MZ}(E/N)\) are the values of the parameter \(A(E/N)\) calculated at the value \(E/N\) with MATOQ and MAGBOLTZ, respectively.

## VI Spatial growth configuration

The spatial growth configuration simulates an electron avalanche in an infinite gas volume, with a certain number of initial electrons and a given \(E/N\) value. The number of electrons is not fixed during the simulation. An additional electron is added to the avalanche upon ionization, while the trajectory of an electron is not simulated anymore when it becomes captured by a gas molecule. This configuration allows us to determine the ionization (\(\alpha\)) and attachment (\(\eta\)) Townsend coefficients. In the presence of ionization and attachment processes, the number of electrons \(n(x)\) at the distance \(x\) is given by: \[n(x)=n_{0}e^{(\alpha-\eta)x}=n_{0}e^{\alpha_{eff}x} \tag{17}\] where \(n_{0}\) is the initial number of electrons while \(\alpha\) and \(\eta\) are the ionization and attachment Townsend coefficients, respectively. The difference between \(\alpha\) and \(\eta\) is usually named the effective ionization Townsend coefficient \(\alpha_{eff}\).
The spatial growth configuration of MATOQ simulates the electron transport and energy transfer after collisions in the same way as the temporal growth configuration, with the exception of the ionization and attachment processes. Upon ionization, an additional electron with a random initial direction is added to the simulation, and the two electrons resulting from the collision share the remaining energy equally. The trajectory of the new electron starts from the same position as that of the incident electron. If an electron becomes attached to a gas molecule, its motion is not simulated any further. It should be noted that the spatial growth configuration may not be effective in the presence of high attachment coefficients, as all electrons in the avalanche could become attached before the simulation concludes.

The evaluation of \(\alpha\) and \(\eta\) for a given \(E/N\) value in MATOQ is performed by counting the number of electrons that cross a series of virtual planes, placed at equal distances apart and perpendicular to the electric field direction. The positions of the virtual planes are determined based on the positions of the slowest electron at the beginning of the steady state and at the end of the simulation. To obtain reliable values of \(\alpha\) and \(\eta\), tens of virtual planes are usually sufficient. During the simulation, the initial and final position of each electron is recorded, and the number of electrons crossing each virtual plane is counted. Interpolation of the number of electrons as a function of the virtual plane position with an exponential function allows for the calculation of the effective ionization Townsend coefficient, \(\alpha_{eff}\), using equation 17. To obtain \(\alpha\), the interpolation is repeated without accounting for attachment processes. Finally, \(\eta\) is calculated as the difference between \(\alpha\) and \(\alpha_{eff}\). The uncertainties of \(\alpha\) and \(\alpha_{eff}\) are assumed to be equal to the uncertainties of the corresponding best-fit functions on the simulation data, and error propagation is used to determine the uncertainty of \(\eta\).

The instantaneous mean electron energy \(<\!\varepsilon(t)\!>\) and the instantaneous velocity \(\vec{v}(t)\) of electrons in the avalanche are evaluated by implementing equations 13 and 14, where \(K\) here is the number of electrons simulated at time \(t\). Similarly to the temporal growth configuration, the mean electron energy \(<\!\varepsilon\!>\) and the electron velocity \(\vec{v}\) are calculated in MATOQ by averaging all respective values sampled at each time step \(\Delta t\) after reaching the steady state.

The validation of the spatial growth configuration of MATOQ is done by comparing the calculated values of \(\alpha\) and \(\eta\) as a function of \(E/N\) with those obtained from MAGBOLTZ calculations in pure Ar, N\({}_{2}\), CO\({}_{2}\), and in the gas mixture of 50% Ar and 50% N\({}_{2}\). In the spatial growth configuration of MATOQ, the simulation ends after either reaching 1\(\cdot\)10\({}^{8}\) real collisions or simulating a maximum number of 10\({}^{6}\) electrons, whichever comes first. The steady state is assumed to be reached after 2.5\(\cdot\)10\({}^{7}\) real collisions, and 10 virtual planes are used to evaluate the values of \(\alpha\) and \(\eta\). Space charge effects are not taken into account in the MATOQ results to ensure consistency with the MAGBOLTZ calculations. The simulation is initiated with 300 electrons having an initial energy of 0.1 eV.
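The exponential interpolation over the virtual-plane counts described above is equivalent to a straight-line fit in log space. A minimal sketch, assuming the plane positions and electron counts have already been collected (the function and argument names are illustrative, not MATOQ internals):

```python
import numpy as np

def townsend_coefficients(x, n_with_att, n_no_att):
    """Fit eq. (17) to electron counts at the virtual-plane positions x [m].

    n_with_att : counts including attachment -> slope gives alpha_eff
    n_no_att   : counts ignoring attachment  -> slope gives alpha
    """
    alpha_eff = np.polyfit(x, np.log(n_with_att), 1)[0]  # ln n = ln n0 + slope*x
    alpha = np.polyfit(x, np.log(n_no_att), 1)[0]
    eta = alpha - alpha_eff
    return alpha, eta, alpha_eff
```

A least-squares fit of the exponential itself (e.g. with scipy.optimize.curve_fit) would match the text more literally and would also return the fit uncertainties used for the error estimates.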
Figure 3 shows that the values of \(\alpha\) and \(\eta\) obtained by MATOQ and MAGBOLTZ are in good agreement, with a relative error of less than \(\sim\)2% for \(\alpha\) and a maximum relative error of \(\sim\)25% for \(\eta\) from 10 Td to 300 Td.

## VII Spatial growth under the influence of the space charge electric field

The MATOQ program allows the simulation of the electron avalanche growth under the influence of an applied electric field together with the space charge electric field. As highlighted in section II, electrons and ions partially overlap in space during the avalanche development in a gas volume. This causes the formation of the space charge electric field, which is superimposed on the applied electric field. The externally applied electric field remains static and uniform in the gas medium, while the space charge electric field changes in space and time during the avalanche evolution. As a consequence, the total electric field is strengthened upstream and downstream of the electron avalanche, whereas it is reduced in the middle of the avalanche. The presence of the space charge electric field reduces the electron multiplication in the gas volume compared to the case where it is absent [23].

To compute the space charge electric field in MATOQ, the electron avalanche growth is simulated on a defined three-dimensional grid within a gas gap between two opposite electrodes. This approach contrasts with the simulation of electron avalanches in an infinite gas volume, as described in section VI. The width of the gas gap, the applied electric field strength, the initial electron positions, and the composition and volume fractions of the gas mixture can be selected as desired. The spatial mesh used to calculate the total electric field can also be chosen. For simplicity, ions are assumed to be motionless, as their mobility is typically three orders of magnitude lower than that of electrons.

To compute the space charge effects during the growth of the avalanche, the gas gap is partitioned into cubic elements with position vectors \(\vec{r}_{i}\) in a Cartesian coordinate system. Each grid point in the gas gap corresponds to a vector \(\vec{r}_{i}\). The volume of each grid element is \(\Delta x\cdot\Delta y\cdot\Delta z\). The applied electric field \(\vec{E}\) is assumed to be parallel to the \(z\)-axis. Electron avalanches are initiated from a given number of initial electrons, placed anywhere in the gas gap. The simulation continues until all electrons reach the anode. The space charge electric field is computed for each grid point during the simulation and is recalculated after a given number of time steps \(\Delta t\), arbitrarily selected by the user of MATOQ. This enables the dynamic evaluation of the space charge electric field for the entire duration of the electron avalanche.

The calculation of the electric field in the gas gap at a given time \(t\) is carried out in four steps. Firstly, the total electric charge of each cubic element is calculated by counting how many electrons and ions are inside the corresponding cubic element of the grid.
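This first step is a plain three-dimensional histogram of the particle positions. A minimal NumPy sketch (again an illustration in Python rather than MATOQ's C++ code; the helper and its arguments are our own names):

```python
import numpy as np

E_CHARGE = 1.602e-19  # elementary charge [C]

def cell_charges(electron_pos, ion_pos, edges):
    """Step 1: net charge per cubic grid element.

    electron_pos, ion_pos : (N, 3) arrays of particle positions [m]
    edges                 : (x_edges, y_edges, z_edges) defining the mesh
    """
    n_e, _ = np.histogramdd(electron_pos, bins=edges)
    n_i, _ = np.histogramdd(ion_pos, bins=edges)
    return E_CHARGE * (n_i - n_e)  # positive ions minus electrons
```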
Secondly, the electric potential \(V(\vec{r}_{i})\) at each grid point \(\vec{r}_{i}\) is calculated at the time \(t\) as: \[V(\vec{r}_{i})=\frac{1}{4\pi\varepsilon}\sum_{a=1}^{A}\frac{q(\vec{r_{a}})}{|\vec{r}_{i}-\vec{r_{a}}|} \tag{18}\] where \(\varepsilon\) is the permittivity, \(a\) indicates the \(a\)-th element in the spatial mesh, \(A\) is the total number of cubic elements into which the gas gap is divided and, finally, \(q(\vec{r_{a}})\) is the total electric charge in the cubic element identified by the position vector \(\vec{r_{a}}\). The expression of the electric potential presents a singularity if \(a=i\). In order to overcome this discontinuity, the difference \(|\vec{r}_{i}-\vec{r_{a}}|\) is assumed equal to \(\sqrt{(\Delta x/10)^{2}+(\Delta y/10)^{2}+(\Delta z/10)^{2}}\) in the case \(\vec{r}_{i}=\vec{r}_{a}\). This means that the electric charge inside the \(a\)-th element is considered to be slightly displaced from the center of the cubic element where the electric potential is evaluated. In other words, the components of vector \(\vec{r_{a}}\) along the \(x\)-, \(y\)- and \(z\)-axes are respectively increased by a tenth of \(\Delta x\), \(\Delta y\) and \(\Delta z\), which are the sizes of each cubic element. This is an arbitrary assumption that can easily be modified by the user of MATOQ; however, it gave satisfactory results in the calculation of avalanche sizes in a narrow-gap RPC, as will be demonstrated in the following.

Figure 3: Values of ionization Townsend coefficient \(\alpha\) (a) and attachment Townsend coefficient \(\eta\) (b) as a function of the reduced electric field \(E/N\) in pure Ar, N\({}_{2}\), CO\({}_{2}\) and in the gas mixture of 50% Ar and 50% N\({}_{2}\). Some statistical error bars are hidden by markers.

Thirdly, the space charge electric field \(\vec{E}_{sp-ch}(\vec{r}_{i})\) is calculated at each grid point starting from the electric potential \(V(\vec{r}_{i})\). Finally, the total electric field at time \(t\) is computed at each grid point as the sum of the applied electric field \(\vec{E}\) and the space charge electric field \(\vec{E}_{sp-ch}(\vec{r}_{i})\).

Unlike the previous cases in sections V and VI, the MATOQ simulation in the spatial growth configuration under the influence of the space charge electric field cannot be validated by a comparison of results obtained by different simulation codes, like MAGBOLTZ or METHES. To validate the MATOQ simulations, the avalanche sizes at the anode as a function of the electric field are compared with values obtained by Lippmann et al.'s model [25] in a narrow-gap RPC. The size of the avalanche at the anode in this type of gaseous particle detector depends on several parameters: the number and positions of the primary electrons, released by the incoming radiation that ionizes the gases in the detector, the gas mixture and its density, and the applied electric field all play a crucial role in the charge multiplication in RPCs. Figure 4 shows the temporal evolution of the avalanche, simulated with MATOQ, for two different values of the applied electric field. All avalanches originate from one single electron starting from the origin at \(t\) = 0 s.
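Before examining the simulated avalanches, it is worth noting that equation 18 above is a direct pairwise Coulomb sum. A brute-force sketch with the same tenth-of-a-cell displacement for the self-term, written with NumPy broadcasting (an \(O(A^{2})\) illustration; the names are ours):

```python
import numpy as np

EPS = 8.854e-12  # permittivity [F/m]; the vacuum value is assumed here

def potential(centres, q, dx, dy, dz):
    """Eq. (18): electric potential at every grid point from the cell charges.

    centres : (A, 3) cell-centre coordinates [m]
    q       : (A,) net charge per cell [C]
    """
    r = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    # self-term singularity: displace the charge by a tenth of the cell size
    r[r == 0.0] = np.sqrt((dx / 10) ** 2 + (dy / 10) ** 2 + (dz / 10) ** 2)
    return (q[None, :] / r).sum(axis=1) / (4.0 * np.pi * EPS)
```

The space charge field then follows from the numerical gradient of \(V\) on the reshaped grid (e.g. np.gradient), and the total field is this plus the applied field, matching the third and fourth steps above.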
The charge multiplication is simulated in a gas mixture of 85% C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), 5% \(i\)-C\({}_{4}\)H\({}_{10}\) and 10% SF\({}_{6}\) at 296.15 K and 970 mbar. The number of electrons in avalanches at 13 kV/mm increases in time faster than that at 9 kV/mm. For avalanches simulated at 13 kV/mm, the number of electrons in the gas gap reaches \(\sim\)10\({}^{6}\) electrons after \(\sim\)0.25 ns and then progressively decreases until all electrons reach the anode, whereas avalanches at 9 kV/mm reach the maximum number of \(\sim\)10\({}^{3}\) electrons at \(\sim\)0.5 ns. This is caused by the fact that \(\alpha_{eff}\) and \(v_{drift}\) of the gas mixture at 13 kV/mm are higher than those at 9 kV/mm.

Figure 4: Number of electrons in six avalanches as a function of time in a gas gap of 0.1 mm at 9 kV/mm and 13 kV/mm. The MATOQ simulation results are obtained in the gas mixture of 85% C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), 5% \(i\)-C\({}_{4}\)H\({}_{10}\) and 10% SF\({}_{6}\) at 296.15 K and 970 mbar.

Electrons and ions partially overlap in space during the avalanche growth in the gas gap. Figures 5a and 5b show the number of electrons and ions along the gas gap at 0.15 ns and 0.26 ns, respectively. In this case, the numbers of electrons and ions as a function of the distance are evaluated in an avalanche originating from one single electron, which starts from the origin at \(t\) = 0 s. The MATOQ simulation is carried out in the gas mixture of 85% C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), 5% \(i\)-C\({}_{4}\)H\({}_{10}\) and 10% SF\({}_{6}\) at 296.15 K and 970 mbar with an applied electric field of 14 kV/mm. The overlap of positive and negative charges generates the space charge electric field along the gas gap. Values of the space charge electric field at 0.15 ns and 0.26 ns are shown in figures 5c and 5d, respectively. The MATOQ simulation results in figure 5 are consistent with the findings of Lippmann et al.'s model [25]. In particular, there are regions in the gas gap where the space charge electric field is decreased, while in other regions it is increased. This effect becomes more evident during the evolution of the avalanche in both space and time.

A comparison between avalanche sizes at the anode simulated with MATOQ and those calculated with Lippmann et al.'s model is presented in figure 6. The comparison is carried out in a 0.1-mm single-gap RPC, using a gas mixture of 85% C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), 5% \(i\)-C\({}_{4}\)H\({}_{10}\) and 10% SF\({}_{6}\) at 296.15 K and 970 mbar, for electric fields ranging from 6 kV/mm to 15 kV/mm. In addition, the average size of avalanches originating from a single electron at the cathode is calculated in the absence of space charge effects and shown in figure 6. The appearance of space charge effects during the avalanche development generally leads to a decrease in gas gain and reduced avalanche sizes [23]. The agreement between the average sizes obtained from MATOQ simulations, with and without considering the space charge effects, is good for low electric field values. On the contrary, for electric field values higher than 12 kV/mm, the difference between the average avalanche size calculated with and without space charge effects becomes significant. Specifically, at 15 kV/mm, the average avalanche size without considering space charge effects is two orders of magnitude higher than that obtained by accounting for the space charge effects.
Figure 6 also shows the average avalanche sizes computed using Lippmann et al.'s model. The MATOQ simulation results with space charge effects exhibit good agreement with the calculations performed by Lippmann et al., except for electric field values below 11 kV/mm, where there is a difference of approximately a factor of 2. There are several possible reasons for the discrepancies between the results of Lippmann et al.'s model and the MATOQ simulation. Firstly, Lippmann et al.'s model generates avalanches using a pattern of initial electrons with an accurate estimation of their positions and energies, while all avalanches in MATOQ are generated by a single electron at the cathode with an initial energy of 5 eV. Secondly, there could be differences in the electron collision cross sections used in the model and in the simulation. Lippmann et al. used MAGBOLTZ 2.2 to evaluate electron transport coefficients and reaction rates, while MATOQ uses electron collision cross sections from different databases. In fact, the electron collision cross sections of SF\({}_{6}\) and \(i\)-C\({}_{4}\)H\({}_{10}\) for the MATOQ simulation are the same as those implemented in MAGBOLTZ 10.6, whereas the cross sections for C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) are provided by Sasic et al. [29].

During testing of the MATOQ code, a limitation was found in the simulation of more than \(\sim\)5\(\cdot\)10\({}^{7}\) electrons. Several simulations of avalanches with this large number of electrons turned out to be incomplete, probably because of a memory allocation limitation: the amount of simulation data to be temporarily recorded might have saturated the available memory of the platform where the code was running. Nevertheless, this does not represent an important limitation if the gas gain is not too high, as in the case shown in figure 6. More details of the system where the MATOQ program is executed are provided in the appendix.

Figure 5: Number of electrons and ions as a function of the distance in the same avalanche at 14 kV/mm, shown at 0.15 ns (a) and 0.26 ns (b). Values of the space charge electric field are presented at 0.15 ns (c) and 0.26 ns (d) for the same avalanche. The MATOQ simulation is carried out in a 0.1-mm single-gap RPC using a gas mixture of 85% C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), 5% \(i\)-C\({}_{4}\)H\({}_{10}\) and 10% SF\({}_{6}\) at 296.15 K and 970 mbar.

## VIII Electron transport parameters in C\({}_{3}\)H\({}_{2}\)F\({}_{4}\)-based gas mixtures

The reduction of fluorinated greenhouse gas emissions in the European Union countries has been made mandatory by new regulations [30] introduced since January 2015. The primary objective of the regulation is to gradually phase out hydrofluorocarbons (such as C\({}_{2}\)H\({}_{2}\)F\({}_{4}\)), currently available on the market, to limit their overall production. Even though research applications are exempt from the current regulations, the phasing out of hydrofluorocarbons could gradually increase their price due to their limited future availability. A number of R&D studies [8; 9; 10; 11] are ongoing to investigate the potential replacement of C\({}_{2}\)H\({}_{2}\)F\({}_{4}\)-based gas mixtures for RPCs with other, more environmentally friendly gases. One alternative to C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) is C\({}_{3}\)H\({}_{2}\)F\({}_{4}\), which appears to be a viable solution for RPCs.
However, directly replacing C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) with C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) is not feasible due to the resulting high operating voltages of RPCs. A potential solution to address this issue is to replace C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) with a binary mixture of C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) and CO\({}_{2}\) in varying proportions. By using the MATOQ code, we can compare how the electron transport parameters vary in C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) and C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) as well as in gas mixtures of C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) and CO\({}_{2}\) in different percentages.

Figures 7a and 7b show \(<\!\varepsilon\!>\) and \(v_{drift}\) in C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) and C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) as a function of the reduced electric field \(E/N\), respectively. In pure C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), the values of \(<\!\varepsilon\!>\) and \(v_{drift}\) are higher than those in pure C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) between 10 Td and 300 Td. At a reduced electric field of 150 Td, the average electron energy \(<\!\varepsilon\!>\) is \(\sim\)5 eV in pure C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), while it is \(\sim\)3 eV in pure C\({}_{3}\)H\({}_{2}\)F\({}_{4}\). These values are \(\sim\)7 eV and \(\sim\)4.5 eV at 300 Td. Similarly, the drift velocity \(v_{drift}\) in C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) at 150 Td is about three times lower than that in C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), with the reduction being about four times at 300 Td. When C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) is mixed with 50% or 60% of CO\({}_{2}\), the values of \(<\!\varepsilon\!>\) and \(v_{drift}\) increase slightly as a function of \(E/N\), compared to those in pure C\({}_{3}\)H\({}_{2}\)F\({}_{4}\), as shown in figures 7a and 7b. The increase of \(<\!\varepsilon\!>\) is \(\sim\)20% at both 150 Td and 300 Td. Similarly, \(v_{drift}\) in C\({}_{3}\)H\({}_{2}\)F\({}_{4}\)/CO\({}_{2}\) mixtures is increased by approximately 20% at 150 Td compared to that in pure C\({}_{3}\)H\({}_{2}\)F\({}_{4}\), whereas the increase is \(\sim\)25% at 300 Td.

Figure 8a shows the positive values of the effective ionization Townsend coefficient \(\alpha_{eff}\) as a function of \(E/N\) in pure C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), pure C\({}_{3}\)H\({}_{2}\)F\({}_{4}\), and C\({}_{3}\)H\({}_{2}\)F\({}_{4}\)-based gas mixtures with 50% or 60% CO\({}_{2}\). These values are reported to identify the reduced electric field values at which electron avalanches can occur in RPCs. For pure C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), the effective ionization Townsend coefficient becomes higher than 0, indicating that ionization events occur more frequently than attachment events, at a reduced electric field of \(\sim\)50 Td. In contrast, electron avalanche growth can only occur above \(\sim\)290 Td in pure C\({}_{3}\)H\({}_{2}\)F\({}_{4}\). When C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) is diluted with CO\({}_{2}\), the growth of electron avalanches occurs at lower values of the electric field, as reported by experimental studies [11; 12; 13; 14; 18]. This observation is also supported by the MATOQ simulations. Indeed, in a gas mixture with equal proportions of C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) and CO\({}_{2}\), \(\alpha_{eff}\) begins to exceed 0 at \(\sim\)190 Td.

Figure 6: Avalanche sizes at the anode as a function of the electric field in a 0.1-mm single-gap RPC using a gas mixture of 85% C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), 5% \(i\)-C\({}_{4}\)H\({}_{10}\) and 10% SF\({}_{6}\) at 296.15 K and 970 mbar. Values of avalanche size obtained by the MATOQ simulation with and without considering the space charge effects are compared with the results of Lippmann et al.'s model, which takes those effects into account. Concerning the MATOQ results obtained by the simulation of space charge effects, the distribution of the avalanche sizes is represented by plotting the maximum and minimum value, the lower (25%) and higher (75%) quartile as well as the median and the mean value of the distribution. Data of the model are provided by Lippmann et al. in their paper [25].
If the CO\({}_{2}\) percentage is increased from 50% to 60%, ionization events are more frequent than attachment events starting from 170 Td. Figure 8b shows the ionization Townsend coefficient \(\alpha\) and the attachment Townsend coefficient \(\eta\) as a function of \(E/N\) in C\({}_{3}\)H\({}_{2}\)F\({}_{4}\)-based gas mixtures with the addition of 50% or 60% of CO\({}_{2}\). As shown in figure 8b, the values of \(\alpha\) increase progressively with \(E/N\), whereas the values of \(\eta\) tend to reach a plateau after the initial growth.

Figure 7: Values of average electron energy \(<\!\varepsilon\!>\) (a) and drift velocity \(v_{drift}\) (b) as a function of the reduced electric field \(E/N\) in pure C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), C\({}_{3}\)H\({}_{2}\)F\({}_{4}\), and in the C\({}_{3}\)H\({}_{2}\)F\({}_{4}\)-based gas mixtures with 50% or 60% CO\({}_{2}\). Some statistical error bars are hidden by markers.

Figure 8: (a) Effective ionization Townsend coefficients \(\alpha_{eff}\) as a function of \(E/N\) in pure C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), C\({}_{3}\)H\({}_{2}\)F\({}_{4}\), and in the C\({}_{3}\)H\({}_{2}\)F\({}_{4}\)-based gas mixtures with 50% and 60% CO\({}_{2}\). (b) Ionization Townsend coefficient \(\alpha\) and attachment Townsend coefficient \(\eta\) as a function of \(E/N\) in C\({}_{3}\)H\({}_{2}\)F\({}_{4}\)-based gas mixtures with CO\({}_{2}\). Some statistical error bars are hidden by markers.

The replacement of C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) with C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) also requires dedicated studies in narrow-gap RPCs, where the effect of the space charge electric field may play a crucial role. Figure 9 presents the average avalanche size at the anode in a 0.1-mm single-gap RPC. The simulation is carried out for a range of applied electric fields from 6 kV/mm to 15 kV/mm, including the space charge effects in the gas gap and using the gas mixture consisting of 85% C\({}_{2}\)H\({}_{2}\)F\({}_{4}\), 5% \(i\)-C\({}_{4}\)H\({}_{10}\) and 10% SF\({}_{6}\) at 296.15 K and 970 mbar. Additionally, the simulations are performed by replacing C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) with an equal amount of C\({}_{3}\)H\({}_{2}\)F\({}_{4}\). In the C\({}_{3}\)H\({}_{2}\)F\({}_{4}\)-based gas mixture, the average avalanche size follows an exponential trend until approximately 10 kV/mm. At low electric field values, the average avalanche size is reduced by a factor of 20 to 30 when C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) is replaced with C\({}_{3}\)H\({}_{2}\)F\({}_{4}\). In comparison, at high electric field values the average avalanche size in the C\({}_{3}\)H\({}_{2}\)F\({}_{4}\)-based gas mixture is approximately 5\(-\)8 times smaller than that in the C\({}_{2}\)H\({}_{2}\)F\({}_{4}\)-based gas mixture.

## IX Conclusions

The MATOQ program has been developed to study environmentally friendly gas mixtures for RPCs.
This program enables the simulation of electron transport coefficients and reaction rates in gases under the influence of an electric field. The mean energy and drift velocity of electrons as well as the ionization and attachment coefficients are evaluated both in the temporal and in the spatial growth configurations. Unlike already existing programs, the MATOQ code also allows the evaluation of the space charge effects between electrons and ions during the development of the electron avalanche. MATOQ is written in C++, which makes it a multi-platform software, and supports multi-thread execution, which speeds up the computation. The data format of electron collision cross sections adopted in the LXCat database, which is widely used and regularly updated, is compatible with MATOQ.

The temporal and spatial growth configurations of electron avalanches under the influence of uniform electric fields have been validated by comparing the MATOQ results with those obtained by MAGBOLTZ. The electron transport coefficients and reaction rates, namely \(<\!\varepsilon\!>\), \(v_{drift}\), \(\nu_{ion}\), \(\nu_{att}\), \(\alpha\) and \(\eta\), show good agreement with the MAGBOLTZ calculations in pure Ar, N\({}_{2}\), CO\({}_{2}\) and in the binary mixture of 50% Ar and 50% N\({}_{2}\). Moreover, the simulation of electron avalanches influenced by both uniform and space charge electric fields is validated by comparing the avalanche sizes obtained from MATOQ to those calculated using Lippmann et al.'s model.

According to several experimental R&D studies, one potential alternative to C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) in RPCs is C\({}_{3}\)H\({}_{2}\)F\({}_{4}\), which is considered to be more environmentally friendly. Using MATOQ, we calculated the changes in electron transport coefficients and reaction rates in pure C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) and C\({}_{3}\)H\({}_{2}\)F\({}_{4}\), as well as in gas mixtures of C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) and CO\({}_{2}\) in various proportions. The dilution of C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) with CO\({}_{2}\) is considered a viable solution for operating RPCs within the voltage range currently used. This solution may also be applicable to narrow-gap RPCs. Indeed, the simulations conducted using MATOQ suggest that the average avalanche size is reduced by approximately one order of magnitude when C\({}_{2}\)H\({}_{2}\)F\({}_{4}\) is simply replaced with C\({}_{3}\)H\({}_{2}\)F\({}_{4}\) in a 0.1-mm single-gap RPC.

## Appendix

All sets of electron collision cross sections used in this work as input for MATOQ are summarized in table 1. Comparisons between MATOQ and MAGBOLTZ calculations in figures 2 and 3 are carried out by using version 8.97 of MAGBOLTZ in the case of Ar and N\({}_{2}\), whereas version 11.6 of MAGBOLTZ is used for the simulations in pure CO\({}_{2}\), according to table 1. The MATOQ program is interfaced with the software ROOT, freely provided by the European Organization for Nuclear Research (CERN), to plot the results during the simulation [22]. All simulations of this work have been performed in the virtual machines of the Linux Public Login User Service (LXPLUS7) provided by CERN. These machines are organized in a cluster of PCs running CERN CentOS Linux in 64-bit mode. More details can be found in the LXPLUS7 documentation.

Figure 9: Average avalanche size at the anode in an RPC with a gas gap of 0.1 mm, for electric fields ranging from 6 kV/mm to 15 kV/mm, at 296.15 K and 970 mbar. The MATOQ simulation is conducted using a C\({}_{2}\)H\({}_{2}\)F\({}_{4}\)-based gas mixture (black) and a C\({}_{3}\)H\({}_{2}\)F\({}_{4}\)-based gas mixture (red), both with the addition of 5% \(i\)-C\({}_{4}\)H\({}_{10}\) and 10% SF\({}_{6}\). The simulations consider the space charge effects in the gas gap. Some statistical error bars are hidden by markers.
## Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2305.18472
Deep Predictive Coding with Bi-directional Propagation for Classification and Reconstruction
This paper presents a new learning algorithm, termed Deep Bi-directional Predictive Coding (DBPC) that allows developing networks to simultaneously perform classification and reconstruction tasks using the same weights. Predictive Coding (PC) has emerged as a prominent theory underlying information processing in the brain. The general concept for learning in PC is that each layer learns to predict the activities of neurons in the previous layer which enables local computation of error and in-parallel learning across layers. In this paper, we extend existing PC approaches by developing a network which supports both feedforward and feedback propagation of information. Each layer in the networks trained using DBPC learns to predict the activities of neurons in the previous and next layer which allows the network to simultaneously perform classification and reconstruction tasks using feedforward and feedback propagation, respectively. DBPC also relies on locally available information for learning, thus enabling in-parallel learning across all layers in the network. The proposed approach has been developed for training both, fully connected networks and convolutional neural networks. The performance of DBPC has been evaluated on both, classification and reconstruction tasks using the MNIST and FashionMNIST datasets. The classification and the reconstruction performance of networks trained using DBPC is similar to other approaches used for comparison but DBPC uses a significantly smaller network. Further, the significant benefit of DBPC is its ability to achieve this performance using locally available information and in-parallel learning mechanisms which results in an efficient training protocol. These results clearly indicate that DBPC is a much more efficient approach for developing networks that can simultaneously perform both classification and reconstruction.
Senhui Qiu, Saugat Bhattacharyya, Damien Coyle, Shirin Dora
2023-05-29T10:17:13Z
http://arxiv.org/abs/2305.18472v1
# Deep Predictive Coding with Bi-directional Propagation for Classification and Reconstruction

###### Abstract

This paper presents a new learning algorithm, termed Deep Bi-directional Predictive Coding (DBPC) that allows developing networks to simultaneously perform classification and reconstruction tasks using the same weights. Predictive Coding (PC) has emerged as a prominent theory underlying information processing in the brain. The general concept for learning in PC is that each layer learns to predict the activities of neurons in the previous layer which enables local computation of error and in-parallel learning across layers. In this paper, we extend existing PC approaches by developing a network which supports both feedforward and feedback propagation of information. Each layer in the networks trained using DBPC learns to predict the activities of neurons in the previous and next layer which allows the network to simultaneously perform classification and reconstruction tasks using feedforward and feedback propagation, respectively. DBPC also relies on locally available information for learning, thus enabling in-parallel learning across all layers in the network. The proposed approach has been developed for training both, fully connected networks and convolutional neural networks. The performance of DBPC has been evaluated on both, classification and reconstruction tasks using the MNIST and FashionMNIST datasets. The classification and the reconstruction performance of networks trained using DBPC is similar to other approaches used for comparison but DBPC uses a significantly smaller network. Further, the significant benefit of DBPC is its ability to achieve this performance using locally available information and in-parallel learning mechanisms which results in an efficient training protocol. These results clearly indicate that DBPC is a much more efficient approach for developing networks that can simultaneously perform both classification and reconstruction.

Predictive coding, classification, reconstruction, convolutional neural network, local learning.

## I Introduction

Deep neural networks (DNN) such as AlexNet [1], GoogLeNet [2], VGG [3], and ResNet [4] have performed well on computer vision tasks. These performance benchmarks have been achieved using deeper and wider networks with a large number of parameters, which also leads to high computational requirements [5, 6]. The widespread use of edge devices (like mobile phones and drones) has created a necessity for the development of computationally efficient techniques [7, 8], as the limited computing available on these devices impedes the deployment of computationally intensive DNNs. Further, most existing DNNs are trained using error-backpropagation (EBP) [9, 10], which relies on sequential layer-wise transmission of information from the last to the first layer in the network during training. This is termed the weight transport problem [11] and severely affects the efficiency of hardware realizations of EBP [12]. Different from EBP, most forms of plasticity observed in the brain rely on locally available information at a synapse, which circumvents the weight transport problem. Local learning techniques also create opportunities for parallelizing learning across deep networks with many layers [13, 14]. This has motivated researchers to utilize biological phenomena for developing alternative learning techniques.

_Predictive coding_ (PC) [15] has been proposed as a theoretical model of information processing in the brain.
PC utilizes locally available information for learning [13, 14, 16], which enables parallelization of parameter updates across all layers in the network [13]. The seminal work of Rao and Ballard [15] developed a neural network based implementation of PC that reproduced various phenomena observed in the visual cortex of the brain. The underlying principle of PC is to build generative models by estimating representations that are capable of reconstructing a given input. Each layer in the network generates predictions about the representations associated with the previous layers. PC utilizes the gradient of the errors in these predictions to update both the representations associated with a given layer and the weights in the network. Both representations and weights are updated in parallel across all layers of the network.

It has also been shown that the representations inferred using PC are suitable for classification [17, 18]. This has led to the development of PC based approaches that involve training a single DNN to perform both discriminative tasks like classification and generative tasks like reconstructing an input [18]. Such techniques are particularly beneficial for edge devices as a single network could perform multiple tasks simultaneously. However, most existing algorithms involving PC have utilized locally available information to update the weights for either image classification [19, 20] or reconstruction [21], but not both at the same time.

In this paper, we develop a new method called Deep Bi-directional Predictive Coding (DBPC) which can be used to build networks that can simultaneously perform classification and reconstruct a given input. The networks trained using DBPC are referred to as Deep Bi-directional Predictive Coding Networks (DBPCNs). The synapses in a DBPCN allow both feedforward and feedback propagation of information using the same weights. This is in contrast to existing studies on PC, which only allow feedback propagation to transmit predictions, and the errors in these predictions are used to update representations and weights [15]. In a DBPCN, each layer simultaneously predicts the activities of neurons in both the previous layer (using feedback propagation) and the next layer (using feedforward propagation). The errors in these predictions are used to estimate the representations associated with each layer and the weights in the network. Once trained, feedforward propagation from the input to the output layer is used for classification. Feedback propagation is used to reconstruct a given input based on the representations associated with any given layer in a DBPCN.

DBPC has been implemented in this paper for networks with both fully connected (DBPC-FCN) and convolutional layers (DBPC-CNN). The performance of these networks has been evaluated using the MNIST and FashionMNIST datasets for both classification and reconstruction. The classification accuracy and the images reconstructed using DBPC-FCN and DBPC-CNN are similar to those of the existing best-performing algorithms. For both types of problems, networks trained using DBPC require fewer parameters and utilize local learning rules which support parallel learning across all layers in the network.

The rest of the paper is organized as follows. Section II summarizes other approaches in literature that simultaneously perform classification and reconstruction. The architecture of DBPCN and its learning algorithm are presented in Section III. Experimental results using DBPCN are presented in Section IV.
Finally, Section V summarizes the conclusions from this study and identifies directions for the future.

## II Related work

Over the last few decades, PC has emerged as an important theory of information processing in the brain [22, 23]. Due to the lack of a supervisory signal in the brain, most computational studies involving PC in neuroscience develop generative models using unsupervised forms of learning [24, 25]. These studies clamp the activity associated with the input layer while the neural activity in other layers is updated to estimate suitable representations. The layer representations estimated using these methods can be used to reconstruct the original input. In [21], PC is further developed and used to train convolutional neural networks for image denoising on Color-MNIST and CIFAR-10. Several recent studies have developed supervised forms of PC [18, 19, 20]. The key idea in these studies is to clamp the activities associated with both the input and output layers to samples and the corresponding labels, respectively, during training. For testing, only the activity associated with the input layer is clamped to a given sample and the estimated output layer representations are utilized to predict class labels. The networks developed in the above-mentioned studies are only suitable for classification. In [20], PC is used to develop a network that simultaneously performs classification and reconstruction. However, this approach utilizes a separate set of parameters in each layer for classification and reconstruction, which increases its computational requirements. In [26], PC is used to develop networks that can simultaneously perform classification and reconstruction using the same set of parameters. However, PC is only used to estimate the representations associated with each layer. The weights in the network are updated using EBP, which relies on non-local information to update the weights and is unsuitable for parallel learning across all layers. This paper proposes a new method for developing networks that can simultaneously perform classification and reconstruction using the same connections. The representations and weights in the proposed method are learned using locally available information to support parallelization of learning across the network.

## III Deep Bi-directional Predictive Coding (DBPC)

DBPC can be used for networks with fully connected layers and convolutional neural networks. Here we describe 1) the computations for bi-directional propagation of information in DBPC using a Fully Connected Network (DBPC-FCN); 2) the learning algorithm for estimating representations and updating the weights in DBPC-FCN; and 3) a network architecture for using DBPC to train convolutional neural networks (DBPC-CNN).

### _Network Architecture_

Fig. 1 shows the architecture of the Deep Bi-directional Predictive Coding Fully Connected Network (DBPC-FCN) with \(L\) layers. \(\mathbf{y}_{l}\) is a vector of shape (\(n_{l}\times 1\)) which represents the activity of neurons in the \(l^{th}\) layer of the network. \(n_{l}\) denotes the number of neurons in the \(l^{th}\) layer. DBPC-FCN employs bi-directional connections (black lines with arrows at both ends) between consecutive layers of the network, which enables information to propagate in both the feedforward and feedback directions.
Based on feedforward propagation from the \((l-1)^{th}\) to the \(l^{th}\) layer, the activity of neurons in the \(l^{th}\) layer is given by \[\hat{\mathbf{y}}_{l}^{ff}=f(\mathbf{W}_{l-1}\mathbf{y}_{l-1}) \tag{1}\] where \(f\) denotes the activation function and \(\mathbf{W}_{l-1}\) is a (\(n_{l}\times n_{l-1}\)) matrix which denotes the weights of the connections between the \((l-1)^{th}\) and \(l^{th}\) layers of the network. The Rectified Linear Unit (ReLU) is used as the activation function for all networks in this paper.

Fig. 1: Network architecture of DBPC-FCN with \(L\) layers.

Similarly, when feedback propagation is used, the activity of neurons in the \(l^{th}\) layer is determined using the \((l+1)^{th}\) layer, given by \[\hat{\mathbf{y}}_{l}^{fb}=f(\mathbf{W}_{l}^{T}\mathbf{y}_{l+1}) \tag{2}\] where \(\mathbf{W}_{l}^{T}\) denotes the transpose of the weights \(\mathbf{W}_{l}\). Equations (1) and (2) represent the _predictions_ about the activity of neurons in the \(l^{th}\) layer based on feedforward propagation from the \((l-1)^{th}\) layer and feedback propagation from the \((l+1)^{th}\) layer, respectively (see Section III.\(B\) for further explanation). It should be noted that both feedforward and feedback propagation employ the same set of weights. While training the network, an input sample is processed using both feedforward and feedback propagation. During inference, feedforward propagation is utilized for classification tasks and feedback propagation is used to infer representations that allow reconstructing a given input. For classification, an input sample is presented through the first layer and the predicted class is determined by propagating information from the first layer to the \(L^{th}\) layer. Given an input sample \((\mathbf{x}_{k})\) and the associated class label \((\mathbf{c}_{k})\), the goal of the learning algorithm is to estimate output layer representations that enable correct classification.

### _Learning Algorithm_

The goal of DBPC is to estimate representations in all layers that can simultaneously be used for classification and reconstruction. The learning algorithm relies only on locally available information to simultaneously infer representations and update the weights in the network. For the \(l^{th}\) layer in the network, the locally available information includes the activities of neurons in the previous \((l-1)^{th}\) and next \((l+1)^{th}\) layers, and the weights (\(\mathbf{W}_{l-1}\) and \(\mathbf{W}_{l}\)) of the connections between these layers. The fundamental concept underlying the learning algorithm is that each layer in the network aims to predict the activities of neurons in the previous layer (feedback propagation) and the next layer (feedforward propagation). The errors in these predictions form the basis of inferring suitable representations (representation learning) and updating the weights (model learning). Below, the two steps of the learning algorithm, namely representation learning and model learning, are described in detail.
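Before detailing the two steps, the paired predictions of equations (1) and (2) can be made concrete with a minimal PyTorch sketch; note that one weight matrix per pair of adjacent layers serves both directions (the layer sizes and function names below are illustrative, not the authors' code):

```python
import torch
import torch.nn.functional as F

# one shared weight matrix per pair of adjacent layers, e.g. 784 -> 400 -> 10
W = [torch.randn(400, 784) * 0.01, torch.randn(10, 400) * 0.01]

def predict_ff(y_prev, W_prev):
    """Eq. (1): predict layer l's activity from layer l-1."""
    return F.relu(y_prev @ W_prev.T)

def predict_fb(y_next, W_l):
    """Eq. (2): predict layer l's activity from layer l+1, reusing W_l transposed."""
    return F.relu(y_next @ W_l)
```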
#### III-B1 Representation learning

Using feedforward propagation, the \(l^{th}\) layer in the network receives a prediction of its own neuronal activity from the \((l-1)^{th}\) layer (Equation (1)) and generates a prediction about the activities of neurons in the \((l+1)^{th}\) layer. Based on feedforward propagation, the error \(\left(\mathbf{e}_{l-1}^{ff}\right)\) in the prediction about the activity of neurons in the \(l^{th}\) layer is given by \[\mathbf{e}_{l-1}^{ff}=(\mathbf{y}_{l}-\hat{\mathbf{y}}_{l}^{ff})^{2} \tag{3}\] Similarly, using feedback propagation, the \(l^{th}\) layer in the network receives a prediction (Equation (2)) of its own activities from the \((l+1)^{th}\) layer and generates a prediction about the activities of neurons in the \((l-1)^{th}\) layer. Based on feedback propagation, the error \(\left(\mathbf{e}_{l}^{fb}\right)\) in the prediction about the activity of neurons in the \(l^{th}\) layer is given by \[\mathbf{e}_{l}^{fb}=(\mathbf{y}_{l}-\hat{\mathbf{y}}_{l}^{fb})^{2} \tag{4}\] Figure 2 shows a visualization of the computation of all locally computed errors that involve the representations \((\mathbf{y}_{l})\) associated with the \(l^{th}\) layer in the network. \(\mathbf{y}_{l}\) is updated by performing gradient descent on all the locally computed errors, given by \[\mathbf{E}_{y_{l}}=\lambda_{f}(\mathbf{e}_{l-1}^{ff}+\mathbf{e}_{l}^{ff})+\lambda_{b}(\mathbf{e}_{l-1}^{fb}+\mathbf{e}_{l}^{fb}) \tag{5}\] where \(\lambda_{f}\) and \(\lambda_{b}\) denote the feedforward and feedback factors, respectively. \(\lambda_{f}\) controls the impact of errors in feedforward predictions on the updated representations. Similarly, \(\lambda_{b}\) determines the influence of errors in feedback predictions on the updated representations. Minimizing the errors in feedback predictions improves the reconstructions generated by the network and reducing the errors in feedforward predictions improves the classification accuracy of the network. Thus, suitable values for \(\lambda_{f}\) and \(\lambda_{b}\) help the network to simultaneously perform well on classification and reconstruction tasks. Based on the error in equation (5), the update \((\Delta\mathbf{y}_{l})\) in the representations associated with the \(l^{th}\) layer is given by \[\Delta\mathbf{y}_{l}=-\ell_{y}\frac{\delta\mathbf{E}_{y_{l}}}{\delta\mathbf{y}_{l}} \tag{6}\] where \(\ell_{y}\) denotes the learning rate for updating representations. The representations are updated using Equations (1)-(6) multiple times, as in the original PC algorithm [15]. In this paper, the representations are updated 20 times in DBPC-FCN. Since all the information required to compute the error in Equation (5) is available locally, the representations for all layers are updated in parallel.

Fig. 2: Visualization of the locally computed errors for representation learning and model learning in DBPC. The dotted and dashed rectangles represent the feedforward and feedback predictions, respectively. The circles represent the item-wise subtraction required to compute errors in feedforward and feedback predictions.
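The representation update can be written compactly by accumulating the local errors of equations (3)-(5) into a single objective whose gradient with respect to each \(\mathbf{y}_{l}\) reproduces equation (6), since each \(\mathbf{y}_{l}\) only appears in terms involving adjacent layers. A sketch for fully connected layers, using autograd for the derivatives; the hyperparameter values and names are illustrative, and at least one unclamped layer is assumed:

```python
import torch
import torch.nn.functional as F

def update_representations(y, W, lam_f=1.0, lam_b=1.0, lr_y=0.05, n_iter=20):
    """Eqs. (3)-(6): iteratively refine layer activities y_1..y_L.

    y : list of (batch, n_l) tensors; y[0] (input) and y[-1] (label) stay clamped.
    W : list of (n_{l+1}, n_l) weight matrices shared by both directions.
    """
    for _ in range(n_iter):
        ys = [t.detach().requires_grad_(0 < l < len(y) - 1)
              for l, t in enumerate(y)]
        E = torch.tensor(0.0)
        for l in range(len(W)):
            e_ff = (ys[l + 1] - F.relu(ys[l] @ W[l].T)) ** 2   # eq. (3)
            e_fb = (ys[l] - F.relu(ys[l + 1] @ W[l])) ** 2     # eq. (4)
            E = E + lam_f * e_ff.sum() + lam_b * e_fb.sum()    # eq. (5)
        E.backward()
        y = [t.detach() if t.grad is None else (t - lr_y * t.grad).detach()
             for t in ys]                                      # eq. (6)
    return y
```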
#### III-B2 Model Learning

The weights in a DBPCN are also updated using only locally available information. The weights \(\mathbf{W}_{l}\) between the \(l^{th}\) and \((l+1)^{th}\) layers of the network are updated to minimize the errors in predictions based on feedforward and feedback propagation involving \(\mathbf{W}_{l}\). Thus, \(\mathbf{W}_{l}\) is updated by performing gradient descent on the errors in Equations (3) and (4), given by \[\mathbf{E}_{W_{l}}=\beta_{c}\mathbf{e}_{l}^{ff}+\beta_{r}\mathbf{e}_{l}^{fb} \tag{7}\] where \(\beta_{c}\) and \(\beta_{r}\) denote the classification and reconstruction factors for updating weights, respectively. \(\beta_{c}\) controls the change in weights to improve the feedforward predictions and hence the classification performance of the DBPCN. Similarly, \(\beta_{r}\) determines the change in weights to improve the feedback predictions and hence the reconstruction performance of the network. Suitable values for \(\beta_{c}\) and \(\beta_{r}\) enable the network to simultaneously achieve good performance on classification and reconstruction tasks. Based on the error in equation (7), the change in \(\mathbf{W}_{l}\) is given by \[\Delta\mathbf{W}_{l}=-\ell_{w}\frac{\delta\mathbf{E}_{W_{l}}}{\delta\mathbf{W}_{l}} \tag{8}\] where \(\ell_{w}\) denotes the learning rate for updating weights. The locally computed error for updating weights ensures that all weights in the network can be updated in parallel. Algorithm 1 presents the pseudocode for the DBPC learning algorithm.

```
Input: samples and labels {\(\mathbf{x}_{k}\), \(\mathbf{c}_{k}\)}
for each epoch do
  for each sample do
    % Clamp the first and last layer to \(\mathbf{x}_{k}\) and \(\mathbf{c}_{k}\)
    % Feedforward propagation
    \(\hat{\mathbf{y}}_{l}^{ff}=f(\mathbf{W}_{l-1}\mathbf{y}_{l-1})\)
    % Feedback propagation
    \(\hat{\mathbf{y}}_{l}^{fb}=f(\mathbf{W}_{l}^{T}\mathbf{y}_{l+1})\)
    for each iteration do
      % Compute errors in Equations 3 and 4
      \(\mathbf{e}_{l-1}^{ff}=(\mathbf{y}_{l}-\hat{\mathbf{y}}_{l}^{ff})^{2}\)
      \(\mathbf{e}_{l}^{fb}=(\mathbf{y}_{l}-\hat{\mathbf{y}}_{l}^{fb})^{2}\)
      % Update representations
      \(\Delta\mathbf{y}_{l}=-\ell_{y}\frac{\delta\mathbf{E}_{y_{l}}}{\delta\mathbf{y}_{l}}\), \(\forall l\in[2,\cdots,(L-1)]\)
    endfor
    % Update weights
    \(\Delta\mathbf{W}_{l}=-\ell_{w}\frac{\delta\mathbf{E}_{W_{l}}}{\delta\mathbf{W}_{l}}\), \(\forall l\in[1,\cdots,(L-1)]\)
  endfor
endfor
```
**Algorithm 1** Learning algorithm for DBPC

During testing, only the activities of the neurons in the first layer are clamped to a given input. The activities of neurons in all the other layers of the network are estimated using representation learning. The predicted class is estimated based on the representations associated with the output layer neurons. Further, the estimated representations for any other layer can be used to reconstruct the given input using feedback propagation.

### _DBPC for Convolutional Neural Networks (DBPC-CNN)_

We have also developed a network architecture to use DBPC for Convolutional Neural Networks (DBPC-CNN). To enable feedforward and feedback propagation using the same kernels in DBPC-CNN, each layer employs a padding \((P)\), given by \[P=\frac{K-1}{2} \tag{9}\] where \(K\) is the size of the kernel and the stride in all layers is set to 1. Choosing the padding in this way ensures that both the input and output of a convolution operation have the same spatial shape, so that the convolution operation can be applied in both the feedforward and feedback directions using the same kernel.
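The padding rule of equation (9) is easy to verify directly: with stride 1 and \(P=(K-1)/2\), a convolution and its transpose map between the two layer shapes using a single shared kernel. A small PyTorch sketch (the tensor shapes are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 16, 28, 28)       # activity of a convolutional layer
kernel = torch.randn(32, 16, 3, 3)   # 16 -> 32 channels, K = 3
P = (3 - 1) // 2                     # eq. (9)

y_ff = F.relu(F.conv2d(x, kernel, padding=P))               # feedforward
x_fb = F.relu(F.conv_transpose2d(y_ff, kernel, padding=P))  # feedback, same kernel

assert y_ff.shape[-2:] == (28, 28) and x_fb.shape == x.shape
```

Here conv_transpose2d plays the role of the transposed weights \(\mathbf{W}_{l}^{T}\) in equation (2).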
## IV Experiments

This section presents the results of the performance evaluation of DBPC for classification and reconstruction tasks. The performance of DBPC is also compared with other existing algorithms for both tasks. The classification accuracy of DBPC is compared with other PC approaches, namely FIPC\({}_{3}\) [20], PC-1 [19] and PCN-E-1 [26]. In addition, the classification accuracy is also compared with the performance of classical DNNs, which include MobileNet-v2 [27] and GoogLeNet [27]. The reconstruction performance of DBPC is compared with the performance of FIPC\({}_{3}\), which is the only other PC approach that is simultaneously capable of classification and reconstruction using representations from any layer in the network.

The performance of DBPC is evaluated in terms of the number of network parameters and accuracy for classification. Given a confusion matrix \(C\), the classification accuracy (\(\eta_{c}\)) is given by \[\eta_{c}=\frac{\Sigma_{i\in\{1,\cdots,N_{C}\}}c_{ii}}{\Sigma_{i,j\in\{1,\cdots,N_{C}\}}c_{ij}} \tag{10}\] where \(c_{ij}\) represents the value in the \(i^{th}\) row and \(j^{th}\) column of the confusion matrix and \(N_{C}\) denotes the total number of classes. The Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM) are used for comparing performance on the reconstruction task. The PSNR [28] \((\eta_{r})\) is given by \[\eta_{r}=10\times\log_{10}\frac{MAX^{2}}{MSE} \tag{11}\] where \(MAX\) represents the maximum pixel intensity in the image and \(MSE\) is the mean squared error between the original image and the reconstructed image. The SSIM [29] \((\eta_{s})\) is given by \[\eta_{s}(x,y)=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})} \tag{12}\] where \(x\) and \(y\) are the original and the reconstructed images, respectively. \(\mu_{x}\) and \(\mu_{y}\) represent the mean pixel intensities of \(x\) and \(y\), respectively. \(\sigma_{x}\) and \(\sigma_{y}\) represent the standard deviations of the pixel intensities in \(x\) and \(y\), respectively. \(\sigma_{xy}\) denotes the covariance of pixel intensities across \(x\) and \(y\). \(C_{1}\) and \(C_{2}\) are constants to prevent division by zero.

The performance evaluation is conducted using the MNIST [30] and FashionMNIST [31] datasets. MNIST is a dataset that contains images of hand-written digits from 0 to 9. It has 60,000 grayscale images for training and 10,000 grayscale images for testing. The FashionMNIST dataset is more challenging and contains images of ten fashion items like T-shirts, trousers and bags. Similar to MNIST, FashionMNIST contains 60,000 grayscale images for training and 10,000 grayscale images for testing. Each image in both datasets is of size \(28\times 28\) pixels.

Table I shows the architectures for DBPC-FCN and DBPC-CNN used for the two datasets. A given row in the table shows the details of the corresponding layer in the network. The performance on the MNIST dataset has been evaluated using a fully connected network and a convolutional neural network. For the FashionMNIST dataset, the performance of DBPC is evaluated using only a convolutional neural network. The number of neurons in each layer of DBPC-FCN is shown using the prefix 'FC'. Similarly, the prefix 'Conv' is used to specify the number of channels in a particular convolutional layer of DBPC-CNN. All convolutional layers use a kernel, padding and stride of 3, 1 and 1, respectively. All models presented in this paper have been implemented in PyTorch and trained using an Nvidia V100 GPU. The training data is augmented using random rotation and affine transformations. A minibatch of 32 is used during training and stochastic gradient descent (SGD) is used to optimize the network parameters. The total number of epochs is set to 50 and 100 on the MNIST and FashionMNIST datasets, respectively.
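The three metrics of equations (10)-(12) are straightforward to compute. A short NumPy sketch is given below; note that this SSIM version uses global image statistics rather than the usual sliding window, and the constants follow the common choice \(C_{1}=(0.01)^{2}\), \(C_{2}=(0.03)^{2}\) for images with unit dynamic range, which is an assumption since the paper does not state its constants:

```python
import numpy as np

def accuracy(cm):
    """Eq. (10): classification accuracy from a confusion matrix."""
    return np.trace(cm) / cm.sum()

def psnr(x, y, max_val=1.0):
    """Eq. (11): peak signal-to-noise ratio between images x and y."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Eq. (12): SSIM computed from global image statistics."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```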
### _Performance Comparison for Classification_

Table II shows the results of the performance comparison between DBPC and other existing learning algorithms for classification on the MNIST dataset. Figure 3 shows how the classification accuracy of DBPC-FCN and DBPC-CNN evolves during training for the MNIST dataset. DBPC-FCN uses a network with 1.225 million parameters to achieve a classification accuracy of 97.67%, which is 1.2% lower than the best-performing method. FIPC\({}_{3}\) is the best-performing algorithm with an accuracy of 98.84%, but it uses twice the number of parameters used by DBPC-FCN. PC-1 uses the smallest network with 0.532 million parameters to achieve an accuracy of 98.00%. It may be noted that the representations estimated in both PC-1 and PC-2 cannot be used for reconstruction, whereas DBPC-FCN also supports reconstruction of inputs.

The performance of all the methods used for comparison is better using convolutional neural networks. DBPC-CNN employs a network with 0.425 million parameters to achieve an accuracy of 99.33%, which is similar to the performance of the other learning algorithms used for comparison. PCN-E-1 (tied) is the best-performing algorithm with an accuracy of 99.57%, and it uses a network with 0.07 million parameters. It may be noted that PCN-E-1 uses error-backpropagation for training, which relies on non-local information for learning and is not suitable for parallel training across layers in the network. DBPC-CNN allows reconstruction of inputs using representations estimated for any layer in the network. The ability of PCN-E-1 to reconstruct images using representations estimated for different layers has not been studied. The performance of DBPC-CNN is also similar to the classification accuracy of established networks like MobileNet-v2 [27] and GoogLeNet [27], which are not capable of reconstruction. Further, DBPC-CNN employs a network that is much smaller than those used by MobileNet-v2 and GoogLeNet.

\begin{table} \begin{tabular}{c|c|c} \hline Methods & Testing Accuracy \(\eta_{c}\) (\%) & Parameters \\ \hline \multicolumn{3}{c}{Fully Connected Networks} \\ \hline FIPC\({}_{3}\) [20] & 98.84 & 2.450M \\ \hline PC-1 [19] & 98.00 & 0.532M \\ \hline PC-2 [18] & 98.00 & 1.672M \\ \hline DBPC-FCN (Proposed work) & 97.67 & 1.225M \\ \hline \multicolumn{3}{c}{Convolutional Neural Networks} \\ \hline PCN-E-1 (tied) [26] & 99.57 & 0.070M \\ \hline MobileNet-v2 [27] & 99.43 & 13.600M \\ \hline GoogLeNet [27] & 99.47 & 49.700M \\ \hline DBPC-CNN (Proposed work) & 99.33 & 0.425M \\ \hline \end{tabular} \end{table} TABLE II: Performance comparison of DBPC with other methods on the MNIST dataset

\begin{table} \begin{tabular}{c|c|c|c} \hline Dataset & \multicolumn{2}{c|}{MNIST} & FashionMNIST \\ \hline Architecture & DBPC-FCN & DBPC-CNN & DBPC-CNN \\ \hline Input Size & \multicolumn{3}{c}{\((28\times 28)\)} \\ \hline \multirow{10}{*}{Number of neurons in layer} & FC-1000 & Conv-16 & Conv-16 \\ & FC-400 & Conv-32 & Conv-32 \\ & FC-100 & Conv-32 & Conv-32 \\ & & Conv-48 & Conv-48 \\ & & Conv-48 & Conv-48 \\ & & Conv-48 & Conv-48 \\ & & & Conv-64 \\ & & & Conv-64 \\ & & & Conv-96 \\ & & & Conv-96 \\ \hline Classification & \multicolumn{3}{c}{FC-10} \\ \hline \#Parameters & 1.225M & 0.425M & 1.004M \\ \hline \end{tabular} \end{table} TABLE I: Architecture for DBPC-FCN and DBPC-CNN

Fig. 3: Classification accuracy of DBPC-FCN and DBPC-CNN on MNIST after each epoch of training.
Table III shows a performance comparison of DBPC-CNN with other methods on the more challenging FashionMNIST dataset. It may be noted that only DBPC-CNN is used for the FashionMNIST dataset due to the higher complexity of this dataset. Figure 4 shows the changes in the classification accuracy of DBPC-CNN after each epoch of training for the FashionMNIST dataset. DBPC-CNN uses a network with 1.004 million parameters to achieve an accuracy of 91.61%. The performance of DBPC-CNN is 2.9% higher than the classification accuracy of PC-1. Furthermore, as highlighted above, the representations estimated in PC-1 cannot be used for reconstructing the inputs.

\begin{table} \begin{tabular}{c|c|c} \hline Methods & Testing Accuracy \(\eta_{c}\) (\%) & Parameters \\ \hline PC-1 [19] & 89.00 & 0.532M \\ \hline DBPC-CNN & 91.61 & 1.004M \\ \hline \end{tabular} \end{table} TABLE III: Performance comparison of DBPC-CNN with PC-1 on the FashionMNIST dataset

### _Performance Comparison for Reconstruction_

In this section, the performance of DBPC-FCN and DBPC-CNN is evaluated and compared for reconstruction problems using the MNIST and FashionMNIST datasets. Figures 5(a) and 5(b) show how the PSNR of the images reconstructed from each layer in DBPC-FCN and DBPC-CNN evolves during training, respectively. For both DBPC-FCN and DBPC-CNN, earlier layers achieved a higher PSNR compared to deeper layers in the network. Further, reconstructed images obtained using DBPC-CNN exhibited higher PSNR compared to DBPC-FCN. Similar results are also obtained for the SSIM based on reconstructed images obtained from DBPC-FCN and DBPC-CNN. Figure 6 shows the reconstructed images obtained using representations estimated for each layer in FIPC\({}_{3}\), DBPC-FCN and DBPC-CNN. The first column in each figure shows the original image from the dataset and the following columns show the reconstructions obtained from successively deeper layers in the network. These reconstructions are obtained by propagating backward from a given layer using the representations estimated in that layer. It may be observed that the quality of reconstructed images deteriorates as we go from earlier to deeper layers in all three algorithms. The deterioration in image quality is lowest for DBPC-CNN. It may be noted that FIPC\({}_{3}\) uses a separate set of weights for classification and reconstruction, which results in a network having a large number of parameters. Figure 7 shows the images reconstructed using representations associated with each layer of DBPC-CNN for the FashionMNIST dataset. The layout of Figure 7 and the method used for reconstructing these images is the same as that used for the MNIST dataset. Table IV provides a summary of the results presented above using a quantitative comparison of images reconstructed by DBPC-FCN and DBPC-CNN based on the PSNR and SSIM metrics on both datasets.

## Conclusion

This paper presented DBPC, a predictive coding framework in which each layer simultaneously predicts the activities of neurons in the previous and next layers, allowing a single network to perform both classification and reconstruction tasks. The performance of networks trained using DBPC has been evaluated for classification and reconstruction tasks using the MNIST and FashionMNIST datasets. The results of the performance comparison clearly indicate that the classification and reconstruction performance of DBPC is similar to other existing approaches but DBPC employs a significantly smaller network.
In addition, DBPC relies on locally available information for learning and employs in-parallel updates across all layers in the network, which results in a more efficient training protocol. Future work will focus on extending the reconstruction capabilities of DBPC to generate samples in the input space.
2305.17741
Monotonicity Anomalies in Scottish Local Government Elections
Single Transferable Vote (STV) is a voting method used to elect multiple candidates in ranked-choice elections. One weakness of STV is that it fails multiple fairness criteria related to monotonicity and no show paradoxes. We analyze 1,079 local government STV elections in Scotland to estimate the frequency of such monotonicity anomalies in real-world elections, and compare our results with prior empirical and theoretical research about the rates at which such anomalies occur. In 62 of the 1079 elections we found some kind of monotonicity anomaly. We generally find that the rates of anomalies are similar to prior empirical research and much lower than what most theoretical research has found. The STV anomalies we find are the first of their kind to be documented in real-world multiwinner elections.
David McCune, Adam Graham-Squire
2023-05-28T14:49:05Z
http://arxiv.org/abs/2305.17741v3
# Monotonicity anomalies in Scottish local government elections

###### Abstract.

The single transferable vote (STV) voting method is used to elect multiple candidates in ranked-choice elections. One weakness of STV is that it fails multiple fairness criteria related to monotonicity and no-show paradoxes. We analyze 1,079 local government STV elections in Scotland to estimate the frequency of such monotonicity anomalies in real-world elections, and compare our results with prior empirical and theoretical research about the rates at which such anomalies occur. In 41 of the 1079 elections we found some kind of monotonicity anomaly. We generally find that the rates of anomalies are similar to prior empirical research and much lower than what most theoretical research has found. Most of the STV anomalies we find are the first of their kind to be documented in real-world elections.

Key words and phrases: single transferable vote, monotonicity, empirical results 2010 Mathematics Subject Classification: Primary 91B10; Secondary 91B14

## 1. Introduction

The single transferable vote (STV) election procedure has been used for multiwinner elections in many countries since the early to mid-20th century. For example, members of the Australian Senate have been elected using STV since 1948, and members of the Dáil Éireann, the lower house of the Irish legislature, have been elected using STV since 1921. In the 21st century the method has experienced a surge in interest and usage. Many municipalities in the United States currently use the single-winner version of STV, often referred to as instant runoff voting (IRV), for local elections. Such elections include city council races in Minneapolis, MN, Oakland, CA, and San Francisco, CA, as well as primary races for city office in New York City. IRV was even used for the 2020 US Presidential election in the state of Maine. In Scotland, STV has been used for multiwinner local government elections in council areas since 2007, and IRV has been used for a handful of single-winner elections. While STV has its advantages as a voting method, such as its ability to achieve proportional representation in multiwinner elections, the method also has its drawbacks. One of its most serious weaknesses is that STV is non-monotonic, where a candidate might be worse off receiving more support from voters (an _upward monotonicity anomaly_), or a candidate might be better off receiving less support from voters (a _downward monotonicity anomaly_). That is, the following scenario is possible when using STV: a candidate \(X\) wins a seat but there exists a set of ballots such that if \(X\) were moved up the rankings on these ballots, \(X\) would not win a seat. Similarly, it is possible that \(X\) does not win a seat but there exists a set of ballots such that \(X\) would win a seat if they were moved down the rankings on these ballots. Other types of non-monotonicity are also possible. For example, it is possible that \(X\) does not win a seat in an election but if fewer seats were available then \(X\) would win a seat (a _committee size monotonicity anomaly_). Also, it is possible that a losing candidate \(X\) would have won a seat if some of \(X\)'s supporters had abstained from voting in the election (a _no-show anomaly_). The purpose of this article is to investigate how often such anomalies occur in real-world elections.
To that end, we collected and analyzed the freely available vote data from 1,079 Scottish local government elections, 30 single-winner and 1,049 multiwinner. All elections used STV (or IRV) to elect a set of winners. For each type of monotonicity anomaly mentioned above, we wrote Python code that searched the ballot data from each of the Scottish elections to try to determine how many of the elections demonstrated the anomaly. Our general finding is that monotonicity anomalies occur rarely in these elections, on the order of 1-2% for each type. As far as we are aware this paper is the largest empirical study of monotonicity to date, as the prior (mathematically-oriented) social choice literature has not analyzed this large database of Scottish STV elections. ## 2. Previous literature on the frequency of monotonicity anomalies Previous literature regarding the frequency with which STV can produce monotonicity anomalies mostly addresses only the single-winner upward case, and very little of this literature is empirical. One empirical analysis [10] considered IRV elections in San Francisco and Alameda County, California between 2008 and 2016, as well as the 2009 mayoral election in Burlington, Vermont. The study found an upward monotonicity anomaly rate of 0.74% (1/135) of all IRV elections, 2.71% (1/37) of IRV elections that went to at least a second round, and 7.7% (1/13) of competitive three-candidate IRV elections. The most comprehensive empirical analysis of US IRV elections that went to a second round [9] found anomaly rates of 2.2% (upward), 1.6% (downward) and 0.5% (no-show). Additional empirical work tends to focus on a single election of interest, which does not provide insight on anomaly rates [8], [18], [22]. Semi-empirical research (i.e., research that does not have access to complete ballot preference data) finds small percentages of elections demonstrating anomalies when considering all elections, with estimates of zero [2], 0.028% [1], 1.4% [20], and 1.5% [5]. For extremely close elections, [20] found that 33% of elections demonstrate a monotonicity failure, and this percentage increases as elections become more competitive. Both [1] and [2] address multiwinner STV elections, but [1] uses poll data in the absence of complete preference data and considers only very restricted kinds of monotonicity anomalies, and the methodology in [2] is not clear. In a semi-empirical analysis, [13] found that 20% of past French presidential elections likely demonstrated a monotonicity failure under the voting method of plurality runoff, which is similar to IRV. Theoretical research into three-candidate IRV elections tends to find a higher frequency of upward anomalies, although the prevalence varies depending on the assumptions of the model and the closeness of the election. Estimates that 1.76% to 4.51% of all elections would demonstrate upward anomalies are found in [15], where the percentage depends on which model of voter behavior is used. Between 4.5% and 6.9% was found in [25], whereas [23] finds a frequency of less than 1%. Using a different model of voter behavior and a broader definition of monotonicity, [25] found that the percentage of elections demonstrating anomalies tends to 100% as the number of candidates increases. In elections where the top three candidates all receive more than 25% of the first-place vote, estimates range from as low as 10% [20] to 51% in highly competitive elections where the top three candidates are in a virtual tie [22]. 
Some theoretical research has also examined the prevalence of downward and no-show anomalies in three-candidate IRV elections. For downward anomalies, estimates for a lower bound range from 1.97% [16] to 3.8% [20]. For no-show anomalies, [23] found rates of 0.38% to 0.47%, and [16] found rates about 10 times higher, between 4.1% and 5.6%. The former used a spatial model, and the latter utilized the impartial anonymous culture and impartial culture models. In empirical research, [10] found a rate of 0% for no-show anomalies in the 135 IRV elections analyzed. There has been no prior theoretical analysis of the frequency of committee size anomalies. As far as we are aware, there have been no prior documented monotonicity anomalies of any kind in real-world multiwinner elections, where by "documented" we mean that full preference data is available and a set of ballots can be found which demonstrate the given anomaly. The reason for the lack of examples is that the database of Scottish elections is the first large set of multiwinner elections with available preference data which has been searched for monotonicity anomalies. All prior documented instances of monotonicity anomalies have occurred in single-winner IRV political elections in the United States, which are listed below. * The 2009 mayoral election in Burlington, VT, which demonstrated an upward anomaly [20], [22]. * The 2020 board of supervisors election in the seventh ward of San Francisco, CA, which demonstrated a downward anomaly [9]. * The 2021 city council election in the second ward of Minneapolis, MN, which demonstrated upward and downward anomalies [18]. * The August 2022 Special Election for the US House of Representatives in Alaska, which demonstrated upward and no-show anomalies [8]. * The 2022 school director election in district 4 of Oakland, CA, which demonstrated upward and downward anomalies [17]. Our results (Table 9) significantly increase the number of documented monotonicity anomalies in real-world elections, and represent the first such documented anomalies in multiwinner elections. ## 3. Preliminaries: Single Transferable Vote and Monotonicity Anomalies The Scottish elections we study use the method of STV to choose the set of election winners. There are different voting methods which can be classified as \(STV\); we use the term "STV" to refer only to the Scottish STV rules, which we outline below. Let \(n\) denote the number of candidates in an election and let \(S\) denote the size of the winner set, which equals the number of available legislative seats. In an STV election, each voter casts a preference ballot where the voter provides a preference ranking of the candidates. In Scottish elections voters are not required to provide a complete ranking and thus it is common for voters to rank only a subset of the candidates, leaving some candidates off their ballots. The ballots are combined into a _preference profile_, which provides a count of how many different kinds of ballot were cast; the preference profile of each election is the data we collected and analyzed. Table 1 shows an example of a preference profile in an election with 501 voters and \(n=4\) candidates \(A\), \(B\), \(C\), and \(D\). The table shows that 19 voters rank \(A\) first, \(B\) second, and leave \(C\) and \(D\) off the ballot; the other numbers across the top row convey similar information about the number of voters who cast the corresponding ballot. 
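For illustration, such a profile is easy to encode in Python. The representation and helper below are our own sketch, not taken from the authors' programs; a shorter string simply encodes a truncated ballot. Tallying first choices reproduces the first-round totals that appear below in Table 2.

```python
# Table 1 as a list of (number of voters, ranking) pairs; "CA" is the
# truncated ballot C > A with B and D left unranked.
PROFILE = [(19, "AB"), (41, "ABCD"), (60, "ACD"), (15, "AD"),
           (73, "BCA"), (51, "BADC"), (19, "BDCA"),
           (57, "CA"), (12, "CBAD"), (40, "CDBA"),
           (8, "DAC"), (47, "DCB"), (59, "DB")]

def first_choice_totals(profile):
    """Tally first-place votes, the starting point of every STV count."""
    totals = {}
    for count, ranking in profile:
        totals[ranking[0]] = totals.get(ranking[0], 0) + count
    return totals

print(first_choice_totals(PROFILE))
# {'A': 135, 'B': 143, 'C': 109, 'D': 114} -- the Round 1 totals of Table 2
```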
When discussing a given ballot we use the notation \(\succ\) to denote that a candidate is ranked immediately above another candidate, so that 41 people cast the ballot \(A\succ B\succ C\succ D\), for example. An _election_ is an ordered pair \((P,S)\) where \(P\) is a preference profile. STV takes an election as input and outputs a winner set, which we denote \(W(P,S)\). It is difficult to provide a complete definition of STV in a concise fashion. Therefore, we provide a high-level description which we illustrate using examples with the preference profile in Table 1. The formal description of the rules can be found at [https://www.legislation.gov.uk/sdsi/2007/0110714245](https://www.legislation.gov.uk/sdsi/2007/0110714245).

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c} Num. Voters & 19 & 41 & 60 & 15 & 73 & 51 & 19 & 57 & 12 & 40 & 8 & 47 & 59 \\ \hline 1st Choice & A & A & A & A & B & B & B & C & C & C & D & D & D \\ 2nd Choice & B & B & C & D & C & A & D & A & B & D & A & C & B \\ 3rd Choice & & C & D & & A & D & C & & A & B & C & B & \\ 4th Choice & & D & & & & C & A & & D & A & & & \\ \end{tabular} \end{table} Table 1. An example of a preference profile with 501 voters.

The method of STV proceeds in rounds. In each round, either a candidate earns enough votes to be elected or no candidate is elected and the candidate with the fewest (first-place) votes is eliminated. The number of votes required to be elected is called the _quota_, and is calculated by \[\text{quota }=\left\lfloor\frac{\text{Number of Voters}}{S+1}\right\rfloor+1.\] If no candidate reaches quota in a given round then the candidate with the fewest first-place votes is eliminated, and this candidate's votes are transferred to the next candidate on their ballots who has not been elected or eliminated. If a candidate reaches quota, that candidate is elected and the votes they receive above quota (_surplus votes_) are transferred in a fashion similar to that of an eliminated candidate, except the surplus votes are transferred in proportion to the number of ballots on which each other candidate appears. To explain how these transfers work, suppose candidate \(A\) is elected with a total of \(a\) votes and a surplus of \(A_{s}\) votes (so that \(A_{s}=a-\text{quota}\)), and candidate \(B\) is the next eligible candidate on \(b\) of these ballots. Rather than receive \(b\) votes from the election of \(A\), candidate \(B\) receives \((A_{s}/a)b\) votes, resulting in a fractional vote transfer. The method continues in this fashion until \(S\) candidates are elected, or until some number \(S^{\prime}<S\) of candidates have been elected by surpassing quota and there are only \(S-S^{\prime}\) candidates remaining who have not been elected or eliminated. We illustrate this description using the preference profile in Table 1 and seat values of \(S=1\) and \(S=2\).

**Example 1**.: When \(S=1\) the quota is \(\lfloor 501/2\rfloor+1=251\) and a candidate must receive a majority of votes to win. No candidate initially receives a majority of first-place votes and thus \(C\), the candidate with the fewest first-place votes, is eliminated. As a result 57 votes are transferred to \(A\), 12 to \(B\), and 40 to \(D\), as displayed in the vote totals for the next round of votes in the left side of Table 2. None of the remaining candidates have reached quota and thus \(D\), who now has 154 votes, is eliminated, causing 8 votes to transfer to \(A\) and 146 votes to transfer to \(B\).
The STV method declares \(B\) the winner, as they have now surpassed quota. Thus, \(W(P,1)=\{B\}\). A transfer of surplus votes never occurs when \(S=1\). This changes when \(S=2\), as shown in the right table of Table 2. In this case the vote totals in the first two rounds are identical to the \(S=1\) case because no candidate achieves quota in the first round; however, \(A\) surpasses quota in the second round and their 24 surplus votes must be transferred. Since \(C\) has been eliminated, \(60(24/192)=7.5\) votes are transferred to \(B\), \(75(24/192)=9.375\) votes are transferred to \(D\), and \(57(24/192)=7.125\) votes are removed from the election because the 57 ballots of the form \(C\succ A\) do not indicate which candidate should receive these votes should \(A\) be elected or eliminated. Therefore, in the third round \(B\) has 162.500 votes and \(D\) has 163.375. \(B\) is eliminated, causing \(D\) to surpass quota with 233.375 votes. Thus, \(W(P,2)=\{A,D\}\). Note that if \(D\) were not to appear on any of the ballots that are transferred when \(B\) is eliminated then \(D\) would finish with only 163.375 votes, 4.625 votes shy of quota. Since there is still one seat left to fill, \(D\) would be elected because they are the only candidate left, and this would be an example where a candidate wins without achieving quota.

\begin{table} \begin{tabular}{c|c|c|c} \multicolumn{4}{c}{\(S=1\), quota = 251} \\ \hline \hline Cand. & \multicolumn{3}{c}{Votes By Round} \\ \hline \(A\) & 135 & 192 & 200 \\ \(B\) & 143 & 155 & **301** \\ \(C\) & 109 & & \\ \(D\) & 114 & 154 & \\ \hline \end{tabular} \quad \begin{tabular}{c|c|c|c|c} \multicolumn{5}{c}{\(S=2\), quota = 168} \\ \hline \hline Cand. & \multicolumn{4}{c}{Votes By Round} \\ \hline \(A\) & 135 & **192** & & \\ \(B\) & 143 & 155 & 162.500 & \\ \(C\) & 109 & & & \\ \(D\) & 114 & 154 & 163.375 & **233.375** \\ \hline \end{tabular} \end{table} Table 2. The left (respectively right) table shows the vote totals for each candidate by round, and eventual STV winners, for \(S=1\) (respectively \(S=2\)) seats. A bold number represents when a candidate is elected.

As mentioned in the introduction, we are interested in four types of monotonicity anomaly that can occur in STV elections. We now define each type, focusing on the multiwinner context since 97% of the elections in our database satisfy \(S>1\). Because we are concerned with how these anomalies manifest in our database of actual elections and because our work with these elections never produces ties, our definitions assume a unique winner set. A careful theoretical treatment of these anomalies, such as what appears in [3], must take ties into account and thus articles like [3] treat STV as a set-valued method that can output multiple sets of winners, and define the various monotonicity anomalies accordingly. We avoid the issue of ties, and the corresponding technical notation, due to the empirical nature of our work. Our first type of monotonicity, which we term _committee size monotonicity_ following terminology in [3], was first introduced in [29]. Committee size monotonicity requires that when we increase the number of seats available, every candidate who won a seat under the smaller seat size still wins a seat under the larger seat size.

**Definition 1**.: **(Committee Size Monotonicity)** Given an election \((P,S)\), for any \(1\leq i<S\) we have \(W(P,i)\subseteq W(P,S)\).
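The count just described is easy to script. The sketch below is our own simplified, self-contained implementation of the Scottish rules (exact fractions, no tie-breaking, and none of the official rounding details); it is not the authors' code. Re-running Example 1 with it recovers both winner sets and exhibits the failure of Definition 1 discussed next. Using exact `Fraction` arithmetic avoids the floating-point drift that fractional surplus transfers would otherwise introduce.

```python
from fractions import Fraction

# The profile of Table 1 again (same encoding as the previous sketch).
PROFILE = [(19, "AB"), (41, "ABCD"), (60, "ACD"), (15, "AD"),
           (73, "BCA"), (51, "BADC"), (19, "BDCA"),
           (57, "CA"), (12, "CBAD"), (40, "CDBA"),
           (8, "DAC"), (47, "DCB"), (59, "DB")]

def stv(profile, seats):
    """Simplified Scottish STV count: elect on quota, transfer every ballot
    in an elected candidate's pile at value surplus/total, and otherwise
    eliminate the candidate with the fewest votes."""
    ballots = [[Fraction(count), ranking] for count, ranking in profile]
    quota = sum(count for count, _ in profile) // (seats + 1) + 1
    hopeful = {c for _, ranking in profile for c in ranking}
    elected = []
    while len(elected) < seats:
        if len(hopeful) <= seats - len(elected):
            elected.extend(sorted(hopeful))  # remaining candidates fill the seats
            break
        # Each ballot counts, at its current value, for its highest-ranked
        # hopeful candidate; ballots with no hopeful candidate are exhausted.
        piles = {c: [] for c in hopeful}
        for ballot in ballots:
            for c in ballot[1]:
                if c in piles:
                    piles[c].append(ballot)
                    break
        totals = {c: sum(b[0] for b in pile) for c, pile in piles.items()}
        top = max(totals, key=totals.get)
        if totals[top] >= quota:             # elected: reweight the surplus
            elected.append(top)
            ratio = (totals[top] - quota) / totals[top]
            for ballot in piles[top]:
                ballot[0] *= ratio
            hopeful.remove(top)
        else:                                # nobody reached quota: eliminate
            hopeful.remove(min(totals, key=totals.get))
    return set(elected)

print(sorted(stv(PROFILE, 1)))  # ['B']
print(sorted(stv(PROFILE, 2)))  # ['A', 'D'] -- B drops out when a seat is added
```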
An election \((P,S)\) for which there exists \(1\leq S^{\prime}<S\) such that \(W(P,S^{\prime})\not\subseteq W(P,S)\) is said to demonstrate a _committee size monotonicity anomaly_. Such an anomaly is found in Example 1: note that \(W(P,1)=\{B\}\), which is not a subset of \(W(P,2)=\{A,D\}\). It seems paradoxical that \(B\) is simultaneously the "best" single candidate when \(S=1\), but not in the "top half" of candidates when \(S=2\). One of the reasons monotonicity anomalies are of interest to social choice theorists is that anomalies can demonstrate "harm" toward a political candidate or some voters, and that harm seems paradoxical. In this example, it is understandable if candidate \(B\), and voters who prefer that \(B\) receive a seat, feel treated unfairly by the outcome. In addition to candidates and voters feeling harmed, in partisan elections (i.e., elections in which candidates belong to a political party) it is also possible for political parties to be harmed. Suppose in this example \(B\) belongs to the Scottish Labour Party but \(A\) and \(D\) belong to the Scottish Conservative Party. Then Labour loses their only seat in moving from \(S=1\) to \(S=2\), and thus the party is harmed as well. Most of the previous literature on monotonicity anomalies implicitly studies non-partisan elections, choosing to focus only on the candidates, and sometimes the voters, affected by an anomaly. Since our study concerns partisan Scottish elections, we also discuss harm to political parties when presenting our results. We note that an empirical analysis of committee size paradoxes has limitations, in that we cannot know if voters would vote substantially differently if the number of seats available were different. If Example 1 were a real-world election with \(S=2\), we would need to conduct high-quality polls to know if \(B\) would be the IRV winner when \(S=1\). We do not have access to such poll data for the Scottish elections and thus we use the definition of committee size monotonicity from the previous literature, which assumes the same underlying vote data for each choice of \(S\). We now define the other three types of monotonicity, which have been studied primarily in a single-winner context in which it is assumed that each voter casts a ballot with a complete ranking of the candidates. Adapting these definitions to a real-world multiwinner context in which voters often cast partial ballots is not straightforward. First, we state how we handle partial ballots. We adopt the _weak order model_ [24] wherein we assume that a voter who casts a partial ballot is indifferent among candidates left off the ballot, all of which are ranked beneath candidates that appear on the ballot. We use only the preference information provided by the voter, and choose not to try to complete partial ballots using statistical inference. In this way we are similar to an office of elections, which does not infer any information on a ballot beyond what a voter communicated1. As discussed in [24] there are other ways to process partial ballots, but empirical studies regarding STV tend to interpret partial ballots as we do (see [10], [14], [19]), although some similar studies which also use real-world elections to generate simulated elections handle partial ballots in a variety of ways (see [24], for example).
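In code, the weak order model reduces a preference query on a (possibly truncated) ballot to a simple position test. The helper below is our own illustration, not part of the programs described in Section 5.

```python
def prefers(ranking, x, y):
    """True if the ballot `ranking` (a string of ranked candidates, read
    under the weak order model) expresses a strict preference for x over y:
    x must be ranked, and y must be unranked or ranked lower. Candidates
    left off the ballot are tied with each other, below every ranked one."""
    if x not in ranking:
        return False  # x is unranked: at best tied with y, never preferred
    return y not in ranking or ranking.index(x) < ranking.index(y)

print(prefers("CA", "A", "B"))  # True:  A is ranked, B is not
print(prefers("CA", "B", "D"))  # False: B and D are tied at the bottom
print(prefers("CA", "A", "C"))  # False: C is ranked above A
```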
Informally, upward monotonicity states that a candidate who wins a seat should not become a loser by gaining voter support, where that extra support consists of shifting the winning candidate up the rankings on some ballots and leaving the relative rankings of the other candidates unchanged. Because we use the weak order model for partial ballots, "shifting a winner up the rankings" includes scenarios where the winning candidate does not appear on the actual ballots and we place that winner at the first ranking on these ballots, shifting all other candidates down one ranking. We note that we choose the term "upward monotonicity" to accord with the literature for the single-winner case; this notion of monotonicity is also referred to as _candidate monotonicity_ in [3]. **Definition 2**.: **(Upward Monotonicity)** Given an election \((P,S)\), let \(X\in W(P,S)\) and let \(\mathcal{B}\) be a set of ballots from \(P\). If we construct a new preference profile \(P^{\prime}\) from \(P\) by moving \(X\) to a higher position in the ballots from \(\mathcal{B}\) but leave unchanged the relative positions of all other candidates on the ballots from \(\mathcal{B}\) then \(X\in W(P^{\prime},S)\). An election is said to demonstrate an _upward monotonicity anomaly_ if there exists a winning candidate \(X\) and a set of ballots \(\mathcal{B}\) such that moving \(X\) to a higher position on the ballots from \(\mathcal{B}\), but leaving the relative positions of the other candidates unchanged, creates a preference profile in which \(X\) loses. Informally, downward monotonicity states that a candidate who does not win a seat should not become a winner by losing voter support, where that lost support consists of shifting the candidate down the rankings on some ballots and leaving the relative rankings of the other candidates unchanged. Because of partial ballots, downward monotonicity is more difficult to define in a real-world context. For example, suppose candidate \(A\) does not win a seat but \(A\) would win a seat if we take 10 ballots with \(A\) ranked first and no other candidates listed on the ballot (we refer to such ballots as _bullet votes_ for \(A\)) and change those ballots to bullet votes for \(B\). Under the weak order model, shifting \(B\) up the rankings in such a manner changes the relative ordering of the candidates besides \(A\), and thus such an outcome would not count as a violation of downward monotonicity under a traditional definition. However, this scenario fits the spirit of a downward monotonicity violation. To deal with this issue of partial ballots, we adapt the classical single-winner definition of downward monotonicity into strong and weak forms, where the strong form insists that the relative rankings of candidates besides the affected losing candidate are unchanged (similar to the classical notion of downward monotonicity), whereas the weak form allows for situations in which we change bullet votes. **Definition 3**.: **(Downward Monotonicity)** Given an election \((P,S)\), let \(X\not\in W(P,S)\) and let \(\mathcal{B}\) be a set of ballots from \(P\) such that \(X\) appears on all ballots in \(\mathcal{B}\). * **Strong Downward Monotonicity**: If we construct a new preference profile \(P^{\prime}\) from \(P\) by moving \(X\) to a lower position in the ballots from \(\mathcal{B}\) but leave unchanged the relative positions of all other candidates on the ballots from \(\mathcal{B}\) then \(X\not\in W(P^{\prime},S)\). 
* **Weak Downward Monotonicity**: Let \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) be a partition of \(\mathcal{B}\) such that \(\mathcal{B}_{2}\) consists of bullet votes for \(X\). If we construct a new preference profile \(P^{\prime}\) from \(P\) by moving \(X\) to a lower position in the ballots from \(\mathcal{B}_{1}\) but leave the relative positions of all other candidates on the ballots from \(\mathcal{B}_{1}\) unchanged, and we change all ballots in \(\mathcal{B}_{2}\) to bullet votes for \(Y\) or to ballots of the form \(Y\succ X\) for some candidate \(Y\neq X\), then \(X\not\in W(P^{\prime},S)\). A _downward monotonicity anomaly_, either strong or weak, is defined similarly to an upward monotonicity anomaly. When \(S=2\), the election with the preference profile in Table 1 contains both an upward and a strong downward monotonicity anomaly. To demonstrate the upward anomaly, observe that if six voters who cast the ballot \(D\succ A\succ C\) move \(A\), who is a winner in the original election, up one ranking so that the 6 ballots become \(A\succ D\succ C\), then \(A\) no longer wins a seat. As illustrated in the left example of Table 3, even though \(A\) receives more votes initially, shifting \(A\) up on those 6 ballots causes \(D\) to be eliminated first instead of \(C\) and the winner set changes from \(\{A,D\}\) to \(\{B,C\}\). That is, as a result of 6 voters being persuaded that \(A\) is their favorite candidate rather than their second-favorite, \(A\) becomes a losing candidate because the order of elimination/election changes. Note that for this outcome to count as an anomaly we simply need \(A\) to drop from the winner set; the simultaneous removal of \(D\) is an unfortunate side effect for this candidate, but if moving \(A\) up on some ballots causes \(D\) to lose but \(A\) remains a winner, we do not say that an anomaly occurred. To demonstrate a strong downward monotonicity anomaly, suppose 6 voters who cast the ballot \(B\succ C\succ A\) in the original election cast the ballot \(C\succ B\succ A\) instead, moving \(B\) down one ranking. As illustrated in the right example of Table 3, \(D\) is eliminated first and the winner set is \(\{B,C\}\) for the modified election. If \(B\) were moved down one ranking on this handful of ballots, \(B\) would have been an election winner rather than a loser. We now define our final type of monotonicity, participation monotonicity, and its corresponding type of anomaly, a no-show anomaly (this is also sometimes referred to as an _abstention paradox_). Informally, participation monotonicity requires that voters are better off casting ballots than abstaining from the election. This is succinctly stated in [12]: "it should always be better to vote honestly than not to vote at all." The notion of a no-show anomaly has been formally defined in different ways in the context of single-winner elections. For example, [4] states (harkening back to the original definition in [21]), "The no-show paradox occurs whenever a group of identically minded voters is better off abstaining than by voting according to its preferences." In such a definition, the group of voters affected by the paradox must all cast the exact same ballot. Other definitions relax this assumption. 
\begin{table} \begin{tabular}{c|c|c|c} \multicolumn{4}{c}{\(S=2\), quota = 168} \\ \hline \hline Cand. & \multicolumn{3}{c}{Votes By Round} \\ \hline \(A\) & 141 & 143 & 151.58 \\ \(B\) & 143 & **202** & \\ \(C\) & 109 & 156 & **171.49** \\ \(D\) & 108 & & \\ \hline \end{tabular} \quad \begin{tabular}{c|c|c|c} \multicolumn{4}{c}{\(S=2\), quota = 168} \\ \hline \hline Cand. & \multicolumn{3}{c}{Votes By Round} \\ \hline \(A\) & 135 & 143 & 150.29 \\ \(B\) & 137 & **196** & \\ \(C\) & 115 & 162 & **174.29** \\ \(D\) & 114 & & \\ \hline \end{tabular} \end{table} Table 3. The left (respectively right) table demonstrates an upward (respectively downward) monotonicity anomaly for the election \((P,2)\) from Example 1.

Consider the definition from [11]: "if a candidate \(x\) is the winner in an initial election, then if we add to that scenario some new voters who rank \(x\) above \(y\), then the addition of these new voters should not make \(y\) the winner." Under this definition, the voters affected by the anomaly need not cast identical ballots; they merely must agree that they prefer \(x\) to \(y\). We are unaware of previous attempts to define participation monotonicity in a multiwinner context in which voters cast preference ballots. Definitions have been proposed for multiwinner elections which do not use preference ballots (see [27], for example), but such definitions do not easily translate to the STV setting. We choose to adapt the definition from [11], but multiwinner elections contain subtleties which complicate attempts to formalize the sentiment "it should always be better to vote honestly than not to vote at all." The reason is that, as argued in [26], a voter's preferences about winner sets cannot always be distilled into a preference ranking of the individual candidates. For example, suppose in a three-seat election a voter casts the ballot \(A\succ B\succ C\succ D\succ E\succ F\). From this ranking it is clear that the voter prefers a winner set of \(\{A,B,C\}\) to \(\{D,E,F\}\), but does this voter prefer \(\{A,C,F\}\) to \(\{B,C,E\}\)? Given only the voter's preference ranking of the candidates, we cannot say. A more pertinent question when trying to define a no-show anomaly is: does this voter prefer \(\{A,B,D\}\) to \(\{A,B,E\}\)? Suppose that when the voter participates in the election the winner set is \(\{A,B,E\}\) but when they abstain the winner set is \(\{A,B,D\}\); is the voter necessarily worse off when they cast a ballot? We choose to say the answer is Yes; however, it is conceivable that the voter would prefer \(\{A,B,E\}\) to \(\{A,B,D\}\), perhaps because of the group dynamics of the three candidates. In addition to the concerns outlined above, there are computational challenges when searching for no-show anomalies in actual data. For these reasons, we prefer to focus on winner changes among only the two candidates \(x\) and \(y\) from the definition in [11]. Thus, our definition of a no-show anomaly insists that if voters who prefer \(x\) to \(y\) abstain rather than vote, the only change to the winner set is that \(x\) replaces \(y\). Other definitions, either more or less restrictive, are also sensible.

**Definition 4**.: **(Participation Monotonicity)** Let \((P,S)\) be an election, with \(X\not\in W(P,S)\) and \(Y\in W(P,S)\). Let \(\mathcal{B}\) be a set of ballots on which \(X\) is ranked higher than \(Y\).
Then if we remove the ballots in \(\mathcal{B}\) from the election, it should not be the case that the resulting winner set is \((W(P,S)-\{Y\})\cup\{X\}\). A _no-show anomaly_ is said to occur in an election \((P,S)\) if there exists \(X\not\in W(P,S)\), \(Y\in W(P,S)\), and a set of ballots \(\mathcal{B}\) on which \(X\) is ranked higher than \(Y\) such that if the ballots from \(\mathcal{B}\) were removed from the preference profile then \(X\) replaces \(Y\) in the winner set. Given the potential ambiguity about whether a set of voters truly is better off not voting, when searching for no-show anomalies we look for instances of the anomaly that are unambiguous. Specifically, we try to find instances in which candidate \(X\) is ranked in the top \(S\) candidates on the affected voters' ballots, and \(Y\) is either not present on the ballots or is not ranked in the top \(S\) candidates. Such an outcome seems like the clearest way to demonstrate that voters would have created a more desirable electoral outcome by abstaining. Our running example \((P,2)\) demonstrates a no-show anomaly: if 35 voters who cast the ballot \(B\succ C\succ A\) are removed from the election, creating the preference profile \(P^{\prime}\), then \(W(P^{\prime},2)=\{A,C\}\). These 35 voters prefer \(C\) to \(D\), yet when they cast a ballot \(D\) is a winner, and when they abstain \(D\) is replaced by \(C\) in the winner set. In this example the voters removed from the election cast identical ballots but for our definition of a no-show anomaly, it is only relevant that the voters prefer \(C\) to \(D\). Furthermore, this seems like an unambiguous instance of a no-show anomaly, as these voters rank \(C\) in their top two and thus presumably they truly are worse off when \(D\) (who does not appear on their ballots) replaces \(C\) in the winner set. To conclude this section we note that these four types of monotonicity are logically independent, in the sense that an election which contains an upward anomaly may not contain a downward or a committee size anomaly, for example. An election such as our running example which demonstrates all four types of anomaly is most likely extremely rare. We found no examples of a Scottish election that exhibits all four anomalies, although one election demonstrates three of the four. Before providing our results about the frequency of monotonicity anomalies in real-world elections, we discuss our sources of data and how we searched the data for anomalies. ## 4. Data Sources: Scottish Local Government Elections For the purposes of local government, Scotland is partitioned into 32 council areas, each of which is governed by a council. The councils provide a range of public services that are typically associated with local governments, such as waste management, education, and building and maintaining roads. The council area is divided into wards, each of which elects a set number of councilors to represent the ward on the council. The number of councilors representing each ward is determined primarily by the ward's population, although other factors play a role2. Every five years each ward holds an election in which all seats available in the ward are filled using the method of STV. Footnote 2: For complete details about how the number of councilors for a ward is determined, see [https://boundaries.scot/reviews/fifth-statutory-reviews-electoral-arrangements](https://boundaries.scot/reviews/fifth-statutory-reviews-electoral-arrangements). 
Every Scottish ward has used STV for local government elections since 2007. Preference profiles from the 2007 elections are difficult to obtain; we contacted several council election offices and either received no response or were told that the 2007 data is not available. Thus there are no elections from 2007 in our database. We obtained preference profile data for the 2012 and 2017 ward elections from the Local Elections Archive Project [30], although some of this data is still available on various council websites. We obtained data for the 2022 preference profiles from the council websites. In addition to the regularly scheduled local government elections which occur on a five-year cycle, council areas sometimes hold off-schedule by-elections to fill a seat that is open due to the death or resignation of a councilor. These by-elections are almost always single-winner IRV elections. The data for many of these elections is not available because some councils hand-count these ballots, not using the STV tabulation software that is used for the regularly scheduled elections. We obtained preference profiles for the available by-elections from various council websites, and by request from several council election offices. In all, we collected the preference profile data of 1,079 STV elections, 30 single-winner and 1,049 multiwinner. While we would prefer to have preference data from all Scottish local government elections, including 2007 elections and all off-schedule by-elections, the database we use is large enough to support robust conclusions about the frequency of monotonicity anomalies in real-world STV elections. As mentioned in Section 2, this collection of actual ballot data is what sets our study apart from most of the prior empirical and semi-empirical research on monotonicity anomalies. For each election in our database we have a complete record of the preference ranking of candidates expressed by each voter, which means that we do not need to rely on surveys or other such tools to search for monotonicity anomalies. When we detect an anomaly, we can provide an exact set of ballots, and (in the case of an upward or downward anomaly) how to alter the ballots, to demonstrate it. We conclude this section by providing basic information about the number of voters, candidates, seats, and voter behavior in these Scottish elections. Across all elections the minimum number of voters3 in an election is 661, the maximum is 14,207, and the median is 4,790. Thus, the electorates under consideration are not tiny, but the size of an electorate in these Scottish elections tends to be much smaller than electorates in many other publicly accessible databases of elections that use preference ballots. For example, the city of Minneapolis, Minnesota uses IRV to elect a single city council member from each of its 13 wards. In the 2021 Minneapolis city council elections4 the median number of voters across the wards was 11,326, more than double the median from the Scottish elections. Electorates from other American IRV elections in places such as New York City or the state of Maine tend to be much larger. Footnote 3: When we refer to a “number of voters,” we mean the number of voters who cast a valid ballot. Ballots with errors are not counted in these elections. Footnote 4: The vote data for these elections can be found at [https://vote.minneapolismn.gov/results-data/election-results/2021/](https://vote.minneapolismn.gov/results-data/election-results/2021/). Table 4 (resp.
5) shows a breakdown of the number of elections by number of seats (resp. candidates). The number of seats for elections in the database tends to be 3 or 4; there was no election with \(S>5\). The number of candidates ranges from 3 to 14, although the majority of elections have 6, 7, or 8 candidates.

\begin{table} \begin{tabular}{l|c c c c c} Num. Seats & 1 & 2 & 3 & 4 & 5 \\ \hline Num. Elections & 30 & 5 & 549 & 492 & 3 \\ \end{tabular} \end{table} Table 4. The number of elections in the database of 1,079 elections with the given number of seats.

\begin{table} \begin{tabular}{l|c c c c c c c c c c c c} Num. Cands & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\ \hline Num. Elections & 3 & 39 & 119 & 212 & 289 & 205 & 113 & 63 & 22 & 8 & 5 & 1 \\ \end{tabular} \end{table} Table 5. The number of elections in the database of 1,079 elections with the given number of candidates.

In Scottish local government elections voters are not required to provide a complete ranking of all the candidates, and thus many ballots contain only a partial ranking (often referred to as _ballot truncation_). When we process the ballot data we assume that a voter prefers any candidate ranked on their ballot to any candidate not ranked on their ballot, and we make no inference as to how the voter would have ranked candidates left off their ballot. It is possible that our results would change if the ballots were processed differently; we handle the ballots as we do because we prefer to consider precisely the ranking information provided by the voters. We note that ballot truncation is more the norm than an aberration in Scottish elections. Specifically, the average voter casts a ballot which ranks fewer candidates than seats to be elected, and many fewer than the number of available candidates. Table 6 shows the average number of candidates ranked (which we refer to as _ballot length_) for elections with a given number of seats; the median ballot length was 3 for any number of seats. To get a sense of the relationship between average ballot length and the number of candidates, Table 7 shows that as the number of candidates increases in a 4-seat election, the average ballot length also generally increases. However, the growth is quite slow: in elections with 7 or more candidates, the average voter ranks less than half of the candidates. In 4-seat elections, the median ballot length was 3 for any number of candidates.

## 5. Methodology: How We Search for Monotonicity Anomalies

In this section we provide a high-level description of the code we created to search for monotonicity anomalies. The code is available at [7], and is adapted from programs used in [10]. Searching for committee size anomalies is straightforward: calculate \(W(P,S^{\prime})\) for \(1\leq S^{\prime}<S\) and check if \(W(P,S^{\prime})\subseteq W(P,S)\). If an election contains a committee size anomaly then such code definitely finds it. Searching for the other types of monotonicity anomaly in an election is much more difficult, as the code must search for a set of ballots which demonstrate the given anomaly. Unless \(S=1\) and \(n=3\) (which occurs in none of our elections) there are no known necessary and sufficient conditions for an election to demonstrate a given anomaly, and therefore if an anomaly exists we cannot guarantee that our code will find it.
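Given the `stv` routine sketched in Section 3, the committee size check just described takes only a few lines. This is again our own illustration, not the code at [7]; it assumes the `stv` function and the `PROFILE` from the earlier sketch are in scope.

```python
def committee_size_anomaly(profile, seats):
    """Return the seat counts S' < S at which some winner in W(P, S')
    fails to keep a seat in W(P, S); an empty list means no anomaly."""
    winners = stv(profile, seats)
    return [s for s in range(1, seats) if not stv(profile, s) <= winners]

print(committee_size_anomaly(PROFILE, 2))
# [1] -- W(P,1) = {'B'} is not contained in W(P,2) = {'A', 'D'}
```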
Our programs make a reasonable attempt to find anomalies, using the fact that for an anomaly to occur there must be a change in the order in which candidates are eliminated or elected. At each round of the election, the programs look for modifications to the preference profile (raising or lowering a candidate's ranking, or eliminating certain ballots) that could change the order of elimination or candidates being elected in the original election, and then the programs check to see if the modified profile would result in appropriately different winners. We provide a more detailed description of the upward monotonicity program below; the downward and no-show programs are conceptually similar.

\begin{table} \begin{tabular}{l|c c c c} Number of Seats & 2 & 3 & 4 & 5 \\ \hline Avg. Ballot Length & 2.79 & 2.99 & 3.28 & 3.54 \\ \end{tabular} \end{table} Table 6. Average number of rankings for the given number of seats in an election.

\begin{table} \begin{tabular}{l|c c c c c c c c c c} Num. Candidates & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\ \hline Avg. Ballot Length & 2.82 & 3.01 & 3.16 & 3.32 & 3.44 & 3.57 & 3.52 & 3.64 & 3.46 & 3.32 \\ \end{tabular} \end{table} Table 7. Average number of rankings with the given number of candidates in 4-seat elections.

The upward monotonicity program first runs the original STV election and calculates the winner set \(W(P,S)\) and the set \(E\) of eliminated candidates, in order of elimination. Let \(\mathcal{C}\) denote the set of candidates in the election, \(E_{1}\) the first eliminated candidate, \(E_{2}\) the second eliminated, and so on. The program then proceeds as follows: it chooses a winner \(W_{m}\in W(P,S)\), and a candidate \(C_{i}\) in \(\mathcal{C}-\{W_{m},E_{1}\}\). The program checks for ballots with \(C_{i}\) listed first where the following would happen: \(W_{m}\) could be raised higher in enough ballots so that \(C_{i}\) would be eliminated before \(E_{1}\), without first making \(W_{m}\) surpass quota. If such ballots exist, the program shifts \(W_{m}\) to the top of all such ballots and reruns the election with the modified profile \(P^{\prime}\). If \(W_{m}\) is not in \(W(P^{\prime},S)\), then the program reports an anomaly. The program then reverts to the original profile \(P\) and checks all other candidates \(C_{i}\) for the given \(W_{m}\), then chooses a different winner and repeats the process until all pairs of \(W_{m}\) and \(C_{i}\) have been exhausted at the level of \(n\) candidates. At this point, the program eliminates candidate \(E_{1}\) to get a new profile \(P_{n-1}\), and repeats the process above for the second eliminated candidate \(E_{2}\), remaining winners \(W_{m}\), and remaining candidates \(C_{i}\). The program continues eliminating candidates and checking all possible changes of elimination order until all eliminated candidates are exhausted. If an anomaly is reported at this stage then it is possible that the program has returned a false positive, which occurred a few times. While we cannot guarantee that we have found all anomalous elections, we did the following to test and double-check our work:

* All programs were tested on elections we created that had different anomalies to make sure the programs would find different varieties of how the anomalies present.
* All anomalies reported in this paper were discovered by our programs and then double-checked by hand to guarantee the anomalies actually occur.
* We looked at the votes-by-round tables (tables of the form provided in Table 2) for all 1,079 elections and attempted to find anomalies by hand for elections in which the vote totals in one of the rounds suggested that an anomaly might be present. We were unable to find any anomalous elections in this tedious, manual fashion beyond what our code found.
* Similar programs have been used to find anomalies in single-winner ranked choice voting, and no anomalous elections have been found beyond those discovered by the programs.

Thus we believe that we have found all, or almost all, of the Scottish STV elections which demonstrate a monotonicity anomaly.

## 6. Results

Of the 1,079 elections in the database we found a monotonicity anomaly of some type in 41 of them, 40 multiwinner and one single-winner. Table 9 summarizes our findings, providing a list of all elections which contain an anomaly and indicating which anomalies we are able to find in each election. Complete details of how each anomaly arises are available in the Appendix. Recall that these elections are _partisan_, meaning that each candidate runs as a member of a political party or runs as an independent, and thus we also provide information about when an anomaly affects a political party.

### Committee Size Monotonicity Anomalies

There are nine elections which demonstrate a committee size monotonicity anomaly, accounting for only \(9/1049=0.86\%\) of the multiwinner elections in the database. Since we can definitively check for instances of this anomaly for a given election, we conclude that such anomalies should occur very infrequently in practice. While nine is a small sample size, these elections lead to several observations about committee size monotonicity anomalies in actual elections. First, a political party is harmed by this anomaly in only four elections. For example, in the 2012 Dundee City Ward 5 election the candidate McIrvine of the Labour Party loses their seat in the increase from \(S=2\) to \(S=3\), but the Labour Party receives exactly one seat for both values of \(S\), and thus from the party's perspective it seems no harm was done. By contrast, in the 2017 East Dunbartonshire Ward 4 election Labour receives one seat when \(S=3\) but receives zero seats in the actual election when \(S=4\). From the perspective of political parties the rate of committee size anomalies is \(4/1049=0.38\%\), suggesting that this anomaly should not be of concern to parties in real-world elections. Second, in theory these anomalies can be quite extreme, in the sense that if an election contains enough candidates then it is possible that \(W(P,S-1)\) and \(W(P,S)\) are not only different, but also disjoint. We do not see such outlandish outcomes in the actual data, although we did find one election (2017 Moray Ward 3) where the IRV winner is not a member of the winner set \(W(P,3)\). Our findings suggest that in real-world elections, when this anomaly occurs a single candidate loses their seat when the number of seats is increased from \(S-1\) to \(S\). Third, our code did not find any other type of anomaly in these nine elections.

### Upward Monotonicity Anomalies

We found 21 elections which demonstrate an upward monotonicity anomaly, accounting for \(21/1079=1.95\%\) of the elections in the database.
Twenty of the elections are multiwinner, providing a rate of \(20/1049=1.91\%\) for elections with \(S\geq 2\), and only one of the elections is single-winner, providing a rate of \(1/30=3.33\%\) for IRV elections. When an election contains an upward anomaly, it is perhaps not clear that harm has been done to any particular candidate. The winning candidate \(X\) who would lose were they to be moved up on some ballots certainly is not harmed, as the anomaly benefits them in a paradoxical way. It seems that if any candidate is harmed, it is a losing candidate \(Y\) who would have won a seat if they had campaigned for \(X\), causing \(X\) to rise on some ballots and subsequently lose their seat in the resulting modified preference profile \(P^{\prime}\). We choose to say that such a candidate \(Y\) is harmed by an upward anomaly, and if a political party wins more seats in the modified election \((P^{\prime},S)\) than in the original election \((P,S)\), we say that this party has been harmed. We found thirteen elections in which a political party was harmed by an upward anomaly. For example, in the 2022 Highland Ward 13 election, if MacKintosh of the Green Party were ranked higher on some ballots then Fraser of the Labour Party would replace MacKintosh in the winner set, suggesting that Labour should have done some carefully targeted campaigning for the Green Party. None of the examples found were as extreme as the hypothetical example from Section 3. In that example, if 6 voters who cast the ballot \(D\succ A\succ C\) swapped \(A\) and \(D\) at the top of their ballots, then these voters would have caused both \(A\) and \(D\) to lose their seats, perhaps causing a party to lose two seats. We were unable to find any anomalies in the data where a set of voters would have caused their top \(K\geq 2\) favorite candidates to lose their seats if those candidates were rearranged at the top of the voters' ballots. We note that a monotonicity anomaly can sometimes illustrate just how "close" an election is. For the 2012 Aberdeenshire Ward 18 contest, in the original election candidate Samways received the fewest first-place votes and was eliminated in the second of nine rounds. However, if the winning candidate Christie were moved up on some ballots, then Christie would eventually lose a seat and be replaced by Samways in the winner set. It seems odd that a candidate seemingly as weak as Samways could end up winning a seat through an upward anomaly, which we interpret as a sign of this election's competitiveness. Of the 21 elections demonstrating an upward anomaly, 15 also demonstrate a no-show anomaly and four also demonstrate a downward anomaly. For only three of the 21 elections could we not find some other type of monotonicity anomaly. While 21 is a small sample size, this suggests that upward anomalies tend to occur in conjunction with other anomalies in real-world STV elections.

### Downward Monotonicity Anomalies

We found fifteen elections which demonstrate a downward monotonicity anomaly, seven strong and eight weak. All of these anomalies occur in multiwinner elections, and thus we obtain a rate of \(15/1049=1.43\%\) for downward anomalies when \(S\geq 2\), which drops to \(7/1049=0.67\%\) for strong anomalies. Four of the elections demonstrating downward anomalies also demonstrate upward anomalies, including one election which demonstrates upward, downward, and no-show anomalies. We could not find any other kind of anomaly in the other 11 elections demonstrating a downward anomaly.
In an election with a downward anomaly, it is clear which candidate and party (if any) have been harmed: if a candidate could have won a seat by being moved down on some ballots then this candidate is harmed by the anomaly, and if a party could have gained seats by having one of their candidates moved down on some ballots then the party is harmed as well. Of the 15 elections demonstrating downward anomalies, a political party was harmed in twelve of them. The Conservative Party seems to be the most affected by downward anomalies, with that party not winning a seat in six of the twelve elections as a result of this anomaly. For example, in the 2017 Argyll and Bute Ward 8 election, the Conservative Party did not win a seat in the original election but would have won a seat if their candidate Wallace were moved down on some ballots. As with the upward anomalies, none of the documented downward anomalies are as extreme as the hypothetical example from Section 3. We could not find any elections in which there exists a set of voters whose ballots start with \(A\succ B\) and both \(A\) and \(B\) do not win a seat, but if \(A\) were moved down on these ballots then both \(A\) and \(B\) win a seat. However, a few of the strong downward anomalies occur in a fashion we have not observed before. In a "typical" downward anomaly from prior literature, a losing candidate \(A\) loses in the penultimate round to another candidate \(B\), but when \(A\) is shifted down on some ballots then \(A\) is able to win by changing the elimination order so that \(A\) no longer faces \(B\) in that penultimate round. Our results show that downward anomalies in multiwinner elections can exhibit quite different dynamics. For example, in the 2022 Perth and Kinross Ward 4 election Murray loses to Williamson by approximately 13.4 votes in the penultimate round, as shown in Table 8. If we shift Murray down one ranking on 37 ballots of the form Murray \(\succ\) McDougall then Murray still faces Williamson in the penultimate round but now Murray beats Williamson by approximately 7.74 votes. This anomaly occurs by swapping McDougall and Metcalf in the elimination order, but otherwise the order of elimination and election remains the same. It is strange that eliminating McDougall in the fourth round and eliminating Metcalf in the sixth round results in Williamson winning a seat, but eliminating McDougall in the sixth round and eliminating Metcalf in the fourth round results in Murray winning a seat. Some other examples of downward anomalies in our data are similarly strange when compared to downward anomalies from prior literature. We do not have any insight into why strong downward anomalies occur with much lower frequency than upward anomalies in the Scottish data. This empirical finding is consonant with prior work such as [9], [15] and [20], which show that upward anomalies occur more frequently in IRV elections than strong downward anomalies5. Footnote 5: We note that [15] and [20] use the term “downward monotonicity,” which is equivalent to our notion of strong downward monotonicity.

### No-show Anomalies

We found 15 elections which demonstrate a no-show anomaly, accounting for \(15/1079=1.39\%\) of the elections in the database, and a political party was harmed in nine of them.
The Labour Party is the most affected by this anomaly, with six of the nine elections featuring a losing Labour candidate who would have won a seat if some of their supporters had abstained from voting. Fourteen of the fifteen elections are multiwinner; we found a no-show anomaly in only one of the single-winner elections. All fifteen elections also demonstrate an upward anomaly, and only one also demonstrates a downward anomaly. These findings suggest that no-show anomalies in multiwinner elections are very likely to occur in conjunction with upward anomalies, even though it is straightforward to construct hypothetical elections which demonstrate a no-show but not an upward anomaly. For twelve of the fourteen multiwinner elections demonstrating a no-show anomaly we could find a set of ballots to remove such that the affected candidate is ranked in the top \(S\) rankings on all removed ballots. The two elections in which we could not find such a set of ballots are marked in Table 9. For example, in the 2022 Fife Ward 10 election if we remove 93 ballots on which the losing candidate Smart is ranked above the winning candidate Leslie then Smart replaces Leslie in the winning set, but for some of these ballots Smart is not ranked in the voters' top four candidates. ## 7. Discussion: Close Elections In this section we discuss our results through an examination of how frequently anomalies arise in close multiwinner elections, since much of the prior literature focuses on the frequency of monotonicity anomalies in elections that are _close_ in some sense. For example, [21] and [22] examine the single-winner case with \(n=3\), and they define an election to be close if all three candidates receive more than 25% of the first-place votes. Both papers then argue that monotonicity anomalies are much more likely to occur in such close elections. To build on this literature, we investigate how much closeness matters for monotonicity anomalies in the 1,049 multiwinner Scottish elections.
The primary difficulty of such an investigation is that closeness is more difficult to define in the multiwinner setting with more than three candidates. We briefly define and examine three reasonable notions of closeness. **Closeness Notion 1**: If all \(S\) winners achieve quota in Round 1, we know without examining the ballot data that it is not possible for the election to demonstrate an upward, downward, or no-show anomaly. Such elections are analogous to single-winner elections in which a candidate achieves a majority of votes in the first round, which is a common occurrence in other election databases such as municipal IRV elections in the United States. Our first notion of closeness is that the election does not terminate after only one round, so that not all winners achieve quota initially. Of the 1,049 multiwinner elections in the database, 1,026 satisfy this notion of closeness, and thus it is rare for a Scottish election to terminate in the first round. Using a denominator of 1,026 rather than 1,049 does not significantly alter the percentages provided in the previous section. **Closeness Notion 2**: For this notion we strengthen the notion of closeness found in [21], which states that a three-candidate election is close if the candidate with the fewest first-place votes has at least half as many first-place votes as the candidate with the most. We say that an election is _close_ if there exists a round of the election and a three candidate subset of candidates who have not been eliminated or previously elected in this round such that (1) this subset of candidates contains at least one candidate who eventually wins a seat and one candidate who does not win a seat, and (2) the smallest of the vote totals for the three candidates in this round is at least 60% of the largest vote total. There are 723 such elections in the database, including the 40 multiwinner elections with anomalies we found. If we use a denominator of 723, we find that the anomalous elections account for 5.5% of close elections. \begin{table} \begin{tabular}{l|c|c|c|c|c} Election & \(S\) & Comm. Size & Upward & Downward & No-show \\ \hline 2012 Dundee Ward 5 & 3 & **Yes** & No & No & No \\ 2012 N-Lanarks Ward 3 & 4 & **Yes** & No & No & No \\ 2017 Dumgal Ward 12 & 3 & **Yes** & No & No & No \\ 2017 E-Duns Ward 4 & 4 & **Yes** & No & No & No \\ 2017 Moray Ward 3 & 4 & **Yes** & No & No & No \\ 2017 W-Duns Ward 3 & 4 & **Yes** & No & No & No \\ 2022 E-Duns Ward 4 & 4 & **Yes** & No & No & No \\ 2022 Edinburgh Ward 15 & 4 & **Yes** & No & No & No \\ 2022 S-Lanarks Ward 9 & 3 & **Yes** & No & No & No \\ 2012 Aberdeenshire Ward 18 & 4 & No & **Yes** & No & **Yes** \\ 2012 Eilean Siar Ward 5 & 3 & No & **Yes** & No & **Yes** \\ 2012 Eilean Siar Ward 7 & 4 & No & **Yes** & **Yes (S)** & No \\ 2012 Highland Ward 20 & 4 & No & **Yes** & **Yes (S)** & No \\ 2017 Argyll Bute Ward 8 & 3 & No & **Yes** & **Yes (S)** & No \\ 2017 E-Duns Ward 6 & 3 & No & **Yes** & No & No \\ 2017 Edinburgh Ward 4 & 4 & No & **Yes** & No & **Yes** \\ 2017 Fife Ward 12 & 3 & No & **Yes** & No & **Yes** \\ 2017 Glasgow Ward 5 & 4 & No & **Yes** & No & No \\ 2017 Glasgow Ward 9 & 4 & No & **Yes** & No & **Yes\({}^{\dagger}\)** \\ 2017 N-Lanarks Ward 3 & 4 & No & **Yes** & No & **Yes** \\ 2017 Perth-Kinross Ward \(10^{*}\) & 1 & – & **Yes** & No & **Yes** \\ 2022 Aberdeenshire Ward 18 & 4 & No & **Yes** & No & **Yes** \\ 2022 Dumgal Ward 7 & 3 & No & **Yes** & No & **Yes** \\ 2022 Edinburgh Ward 5 & 4 & No & **Yes** & No & **Yes\({}^{\dagger}\)** \\ 2022 Fife Ward 10 & 3 & No & **Yes** & No & **Yes\({}^{\dagger}\)** \\ 2022 Glasgow Ward 13 & 4 & No & **Yes** & No & **Yes** \\ 2022 Highland Ward 13 & 3 & No & **Yes** & No & **Yes** \\ 2022 Orkney Ward 5 & 3 & No & **Yes** & No & No \\ 2022 S-Lanarks Ward 12 & 3 & No & **Yes** & No & **Yes** \\ 2017 N-Ayrshire Ward 9 & 3 & No & No & **Yes (W)** & No \\ 2017 N-Lanarks Ward 16 & 3 & No & No & **Yes (S)** & No \\ 2017 N-Lanarks Ward 20 & 4 & No & No & **Yes (W)** & No \\ 2017 Stirling Ward 3 & 4 & No & No & **Yes (W)** & No \\ 2017 Renfrewshire Ward 6 & 3 & No & No & **Yes (W)** & No \\ 2022 Aberdeen City Ward 8 & 4 & No & No & **Yes (W)** & No \\ 2022 Aberdeenshire Ward 8 & 4 & No & No & **Yes (W)** & No \\ 2022 Argyll Bute Ward 8 & 3 & No & No & **Yes (W)** & No \\ 2022 Falkirk Ward 2 & 3 & No & No & **Yes (S)** & No \\ 2022 Glasgow Ward 23 & 4 & No & No & **Yes (W)** & No \\ 2022 Perth-Kinross Ward 4 & 3 & No & No & **Yes (S)** & No \\ \end{tabular} \end{table} Table 9. The one single-winner (out of 30) and 40 multiwinner (out of 1049) elections which demonstrate an anomaly. The last four columns denote the four types of monotonicity anomaly. The S (resp. W) in the Downward column denotes that the downward anomaly is strong (resp. weak). The * denotes this was a by-election. The \(\dagger\) denotes this no-show anomaly is weak in the sense that we could not find a set of ballots where the affected candidate is ranked in the top \(S\) candidates.
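Notion 2 is straightforward to check mechanically given round-by-round vote totals. The sketch below is our illustration, not the paper's code; the data layout (one dict of active-candidate totals per round, plus the eventual winner set) is an assumption.

```python
from itertools import combinations

def is_close_notion2(rounds, winners, threshold=0.60):
    """rounds: list of dicts, one per round, mapping each candidate still
    active in that round (neither eliminated nor already elected) to their
    vote total. winners: set of candidates eventually elected.
    Returns True if some round contains three active candidates, at least
    one an eventual winner and one an eventual loser, whose smallest vote
    total is at least `threshold` times the largest."""
    for totals in rounds:
        for triple in combinations(totals, 3):
            has_winner = any(c in winners for c in triple)
            has_loser = any(c not in winners for c in triple)
            votes = [totals[c] for c in triple]
            if has_winner and has_loser and min(votes) >= threshold * max(votes):
                return True
    return False
```

The analogous check for Notion 3 replaces triples by pairs and uses an 85% threshold.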
**Closeness Notion 3**: The previous notion focuses on closeness among three candidates, but we could also define closeness by focusing on two candidates. We say that an election is _close_ if there exists a round of the election and a two candidate subset of candidates who have not been eliminated or previously elected in this round such that (1) one of the candidates eventually wins a seat and the other does not win a seat, and (2) the smaller of the vote totals of the two candidates in this round is at least 85% of the larger. There are 590 such multiwinner elections in the database, including the 40 elections with anomalies we found. If we use a denominator of 590, we find that the anomalous elections account for 6.8% of close elections. There has been no prior theoretical work on closeness and the frequency of monotonicity anomalies for multiwinner STV elections, and thus we cannot directly compare our percentages to prior work. However, there has been substantial research related to closeness for 3-candidate IRV elections. Our percentages are much lower than what is predicted by [20] or [22], both of which give probabilities between 12.5% and 51% for an election to demonstrate an upward or downward anomaly in closely contested single-winner contests, with the highest percentages found for the most competitive elections. Our work confirms prior observations that the closeness of an election matters for the frequency of monotonicity anomalies, but we do not obtain the large probabilities predicted by some previous work. Under any of our notions of closeness the highest rate of monotonicity failure is 6.8% for all anomalies combined, which drops to \(31/590=5.3\%\) if we exclude the committee size anomalies. It is unsurprising that the percentages we find are much lower than what occurs under theoretical models, for two main reasons. First, the theoretical models tend to provide upper bounds for the frequency of an anomaly occurring. That is, theoretical models often provide the "worst-case" scenario because these models can produce elections which contain conflicted electorates at a higher proportion than what we see in practice. For example, under the random impartial culture and impartial anonymous culture models, IRV has a much larger tendency not to choose a Condorcet winner than we observe in actual elections (see [6] for a summary of the theoretical results, and [19] and [28] for an empirical analysis). Second, as noted previously, there is a very high rate of ballot truncation in the Scottish elections, which likely reduces the frequency of anomalies as compared to theoretical work which uses exclusively full ballots. It is unknown precisely what effect ballot truncation has on anomaly rates, however, which is an area for further study. For these reasons, it is entirely expected that our percentages are lower than those from the theoretical work. ## 8. Conclusion The 41 elections demonstrating monotonicity anomalies that we found, including the 32 elections which contain an upward or downward anomaly, seem to undermine the claims of [1], [2], and [5], which essentially state that monotonicity anomalies either do not occur in actual STV elections or occur extremely rarely and therefore monotonicity issues are of no practical concern. On the other hand, the anomaly
2310.02844
The heart fan of an abelian category
We apply convex geometry (cones, fans) to homological input (abelian categories, hearts of bounded t-structures) to construct a new invariant of an abelian category, its heart fan. This can be viewed as a `universal phase diagram' for Bridgeland stability conditions with the given heart. When the abelian category is the module category of a finite-dimensional algebra, the heart fan is complete and contains the g-fan as the subfan of full-dimensional cones. The heart fan is also closely related to the wall-and-chamber structure for King semistability.
Nathan Broomhead, David Pauksztello, David Ploog, Jon Woolf
2023-10-04T14:21:44Z
http://arxiv.org/abs/2310.02844v2
# The heart fan of an abelian category ###### Abstract. We apply convex geometry (cones, fans) to homological input (abelian categories, hearts of bounded t-structures) to construct a new invariant of an abelian category, its heart fan. When the abelian category is the module category of a finite-dimensional algebra, the heart fan is complete and contains the \(g\)-fan as the subfan of full-dimensional simplicial cones. The heart fan is also closely related to the wall-and-chamber structure for King semistability, and to the Hall algebra scattering diagram. Key words and phrases: Abelian category, triangulated category, bounded t-structure, heart, convex cone, face, fan, scattering diagram, g-fan. 2020 Mathematics Subject Classification: 18G80, 16E35, 14F08, 18E40, 52A20 ###### Contents * 1 The cone of an abelian category * 2 Faces and Serre subcategories * 3 The fan of a bounded heart * 4 \(g\)-fans * 5 King semistability and wall-and-chamber structures * 6 Scattering diagrams * A Homological algebra * B Convex geometry ## Introduction A large part of homological algebra is phrased in terms of abelian and triangulated categories, and there is now a large body of literature in which these are used to study algebraic geometry, the representation theory of finite-dimensional algebras, symplectic geometry and constructible topology. We apply convex geometry to study abelian and triangulated categories. In the first three sections we explain how to naturally construct a convex cone \(C(\mathsf{H})\) -- the _heart cone_ -- from an abelian category \(\mathsf{H}\) and a fan \(\Sigma(\mathsf{H})\) -- the _heart fan_ -- from the heart \(\mathsf{H}\) of a bounded t-structure (henceforth, a 'bounded heart' or often just 'heart') in a triangulated category \(\mathsf{D}\). (For the sake of simplicity, we gloss over an important technical detail in the introduction. In truth, \(\Sigma(\mathsf{H})\) is a 'dual face fan' which is a subtly different notion pertaining to sets of dual cones, see Appendix B.5. There is an associated 'naive heart fan' \(\Sigma^{\mathsf{n}}(\mathsf{H})\) which has the same maximal cones. In many, but not all, cases \(\Sigma(\mathsf{H})=\Sigma^{\mathsf{n}}(\mathsf{H})\). Where they differ, the cones in \(\Sigma(\mathsf{H})\) are more closely connected with the homological algebra of \(\mathsf{H}\).) When \(\mathsf{H}\) is the category of finite-dimensional representations of a finite-dimensional algebra the heart fan \(\Sigma(\mathsf{H})\) is closely related to three constructions which are the subject of intense current research: \(g\)-fans, wall-and-chamber structures, and Hall algebra scattering diagrams. In the final three sections we explain the precise relationships. Our running examples come from quivers with relations, and coherent sheaves on smooth projective varieties. However, we emphasise that our construction is very general, and applies to any bounded heart. From the algebraic perspective it can be viewed as a completion of the \(g\)-fan constructed using Happel-Reiten-Smalø tilting rather than silting, and so existing in contexts where there is no silting theory available. In more detail, the following result defines the heart cones and the heart fan they make up.
For simplicity, we assume in the introduction that the abelian and triangulated categories have Grothendieck groups that are free of finite rank; later, we weaken this requirement by fixing a finite rank free quotient \(\Lambda\) of the Grothendieck group. We denote the associated finite-dimensional real vector space by \(\Lambda_{\mathbb{R}}\) and its dual vector space by \(\Lambda_{\mathbb{R}}^{*}\). There is a standard partial order on bounded hearts in which \(\mathsf{H}\leq\mathsf{H}[1]\), namely we set \(\mathsf{H}\leq\mathsf{H}^{\prime}\) if there is an inclusion between the co-aisles of the associated t-structures. **Theorem A** (Corollary 3.2, Proposition 3.4 and Theorem 3.8).: _Let \(\mathsf{H}\) be an abelian category occurring as a bounded heart in a triangulated category \(\mathsf{D}\). Assume that the Grothendieck group \(\Lambda\) of \(\mathsf{H}\) is free of finite rank._ 1. _The set_ \(C(\mathsf{H})=\{v\in\Lambda_{\mathbb{R}}^{*}\mid v([h])\geq 0\ \forall h\in\mathsf{H}\}\) _is a closed, strictly convex cone in_ \(\Lambda_{\mathbb{R}}^{*}\)_._ 2. _There is a fan_ \(\Sigma(\mathsf{H})\) _in_ \(\Lambda_{\mathbb{R}}^{*}\) _whose set of maximal cones is_ \(\{C(\mathsf{K})\mid\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\}\)_._ 3. _Each cone in_ \(\Sigma(\mathsf{H})\) _has the form_ \(C(\mathsf{K}/\mathsf{S})\) _where_ \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) _and_ \(\mathsf{S}\) _is a 'face subcategory' of_ \(\mathsf{K}\)_, i.e. a special kind of Serre subcategory._ 4. _The fan_ \(\Sigma(\mathsf{H})\) _does not depend on the ambient triangulated category_ \(\mathsf{D}\)_._ 5. _The fan_ \(\Sigma(\mathsf{H})\) _is complete when_ \(\mathsf{H}\) _is a length category._ We illustrate this theorem with two examples: finite-dimensional modules over the Kronecker algebra, \(\mathsf{H}_{1}=\mathsf{mod}(\mathbf{k}(\bullet\rightrightarrows\bullet))\) and coherent sheaves on the projective line, \(\mathsf{H}_{2}=\mathsf{coh}(\mathbb{P}^{1})\). Their Grothendieck groups are free of rank two, so all cones live in \(\Lambda_{\mathbb{R}}^{*}\cong\mathbb{R}^{2}\). The heart cone \(C(\mathsf{H}_{1})\) is spanned by a basis whereas \(C(\mathsf{H}_{2})\) is a ray. The heart fans are depicted in Figure 1. Figure 1. Heart fans of \(\mathsf{H}_{1}\) (left) and of \(\mathsf{H}_{2}\) (right). The heart \(\mathsf{H}_{1}\) is length and as per Theorem A(5) its heart fan is complete even though it has infinitely many cones. The heart fan of the Noetherian but non-Artinian category \(\mathsf{H}_{2}\) is supported on a half-plane. See Figure 2 for further examples. ### Motivation and relation to other constructions Our initial motivation was the observation that the dual cones of tilts fit neatly into a fan, akin to how fans of toric varieties arise. From that initial insight we set out to explore the connection between homological algebra and convex geometry systematically. Apart from this intrinsic incentive, our work is also motivated by connections to current research. Let \(A\) be a finite-dimensional algebra. In the literature, there are three different approaches to attach a convex-geometric invariant, labelled by algebraic data, to \(A\): * The _\(\boldsymbol{g}\)-fan_ constructed via two-term silting complexes, or equivalently, support \(\tau\)-tilting pairs in the module category; see Section 4 and [2], [4], [13]. * The _wall-and-chamber structure_ defined by King-semistability of modules; see Section 5 and [10], [19].
* The _Hall algebra scattering diagram_, defined in [8]; see Section 6. In each case the convex-geometric object is closely related to the heart fan of \(\mathsf{mod}(A)\), as defined in Theorem A, but with a different indexing by algebraic data. The following statement outlines these relationships, which are explained in full detail in Sections 4, 5 and 6 respectively. **Theorem B** (Propositions 4.1, 5.7 and Subsection 6.3).: _Let \(A\) be a finite-dimensional algebra._ 1. _The_ \(g\)_-fan of_ \(A\) _is the subfan of the heart fan of_ \(\mathsf{mod}(A)\) _consisting of all full-dimensional, simplicial cones together with their faces. The cones are labelled by two-term presilting complexes in_ \(\mathsf{K}^{b}(\mathsf{proj}(A))\)_._ 2. _Each_ \(v\in\Lambda_{\mathbb{R}}^{*}\) _determines a full subcategory of_ \(v\)_-semistable modules in_ \(\mathsf{mod}(A)\) _which is constant on the relative interior of each cone of the heart fan; the stability space_ \(\mathcal{D}(M)=\{v\in\Lambda_{\mathbb{R}}^{*}\mid M\text{ is $v$-semistable}\}\) _of_ \(M\in\mathsf{mod}(A)\) _is the support of a subfan. The wall-and-chamber structure has walls the codimension one stability spaces and chambers the relative interiors of the full-dimensional, simplicial cones in the heart fan._ 3. _The support of the scattering diagram is the union_ \[\bigcup_{M\neq 0}\mathcal{D}(M)=\bigcup_{\mathsf{S}\neq 0}C(\mathsf{K}/\mathsf{S})\] _of stability spaces of non-zero modules, equivalently of cones in the heart fan indexed by non-zero face subcategories. The wall-crossing automorphism at a general point_ \(v\) _of the support is determined by the subcategory of_ \(v\)_-semistable modules._ For example, the \(g\)-fan of the Kronecker algebra \(\mathbf{k}(\bullet\rightrightarrows\bullet)\) is the heart fan shown above except for the highlighted ray which is a maximal cone but has codimension one. The wall-and-chamber structure and scattering diagrams consist of the rays in the heart fan, but in each case with a different labelling by algebraic data. For the derived category of representations of a Dynkin quiver, the heart fan can be identified with the tropical cluster variety. Our selection of hearts between \(\mathsf{H}\) and \(\mathsf{H}[1]\) is closely related to the Nagao-Plamondon section; see [9, §5.4]. **Future work.** In a sequel to this article, we construct a 'multifan' \(\Sigma(\mathsf{D})\) for a triangulated category \(\mathsf{D}\). Intuitively a multifan should be thought of as a 'fan over a vector space' rather than a 'fan in a vector space'; multifans formalise the gluing of compatible fans. To briefly explain, consider again the examples in Figure 1. As is well-known, \(\mathsf{D}^{b}(\mathsf{H}_{1})\cong\mathsf{D}^{b}(\mathsf{H}_{2})=:\mathsf{D}\). In particular, the ray corresponding to \(\mathsf{coh}(\mathbb{P}^{1})\) occurs in the heart fan of \(\mathsf{H}_{1}\), and is the highlighted diagonal ray emanating from the origin to the top left. In fact, the violet sections in each heart fan can be identified. The multifan is obtained by going through all possible bounded hearts of \(\mathsf{D}\) and identifying common heart cones. In addition, we define a tangent multifan \(T\Sigma(\mathsf{D})\) by gluing together the tangent spaces of each dual cone. The tangent multifan describes the local combinatorics and geometry of \(\Sigma(\mathsf{D})\).
Its realisation as a space can be interpreted as the 'space of lax stability functions' on all hearts in \(\mathsf{D}\), and we show that the Bridgeland stability space \(\operatorname{Stab}(\mathsf{D})\) embeds as the open subset consisting of the stability functions with support and Harder-Narasimhan properties. There are many other questions which it would be interesting to pursue. In particular, using the standard construction of toric geometry one can define a regular toric scheme from the subfan of simplicial cones in \(\Sigma(\mathsf{H})\), or indeed from \(\Sigma(\mathsf{D})\). In general the toric schemes will be neither separated nor Noetherian. Their orbit structure is closely connected to the exchange graph of hearts. _Acknowledgments._ It is a pleasure to thank Klaus Altmann, Lutz Hille, Andreas Hochenegger and Greg Stevenson for their comments. This project has been supported by EPSRC grant no. EP/V050524/1 of the second author. We dedicate this work to the memory of Helmut Lenzing who sadly passed away while this manuscript was being prepared. **Notation and terminology.** Fix a lattice, i.e. a free abelian group of finite rank, \(\Lambda\). Denote by \(\Lambda^{*}=\operatorname{Hom}(\Lambda,\mathbb{Z})\) the dual lattice, by \(\Lambda_{\mathbb{R}}=\Lambda\otimes\mathbb{R}\) and \(\Lambda_{\mathbb{R}}^{*}=\Lambda^{*}\otimes\mathbb{R}=\operatorname{Hom}(\Lambda,\mathbb{R})\) the associated real vector space and its dual respectively. Let \(\mathsf{H}\) be a category having a Grothendieck group \(K(\mathsf{H})\), e.g. \(\mathsf{H}\) is abelian or has a triangulated structure. We call \(\mathsf{H}\) a _lattice category_ if \(K(\mathsf{H})\) is a lattice, i.e. a finite rank, free abelian group. Throughout, we fix a surjective homomorphism \(\lambda\colon K(\mathsf{H})\to\Lambda\). If \(\mathsf{H}\) is a lattice category, \(\lambda\) will be taken as the identity unless explicitly stated otherwise. In many other settings, a natural choice is the numerical Grothendieck group, \(\Lambda=K(\mathsf{H})/\ker(\chi)\), obtained by factoring out the kernel of the Euler pairing \(\chi\). This makes sense when \(\chi\) is well-defined and its left and right kernels agree; see [25] for details. The datum \(\lambda\colon K(\mathsf{H})\to\Lambda\) is inspired by the theory of stability conditions. By a _category over \(\Lambda\)_, we mean \(\mathsf{H}\) together with a homomorphism \(\lambda\colon K(\mathsf{H})\to\Lambda\). Occasionally this is denoted \(\mathsf{H}/\Lambda\), trusting that no confusion with quotient categories is possible. Elements of \(\Lambda_{\mathbb{R}}^{*}\) are denoted \(v,w\) or similar. We write \(\lambda(h)\) for \(\lambda([h])\) where \(h\in\mathsf{H}\) and \([h]\) is its class in \(K(\mathsf{H})\), and often suppress \(\Lambda\) and especially \(\lambda\) from notation. In particular, we write \(v(h)\coloneqq v(\lambda([h]))\) for \(v\in\Lambda_{\mathbb{R}}^{*}\) and \(h\in\mathsf{H}\). In the examples, varieties and algebras are over a field \(\mathbf{k}\) which we assume for convenience to be algebraically closed (although this assumption can be weakened in places). By a 'module' we always mean a finite-dimensional left module. ## 1. The cone of an abelian category To an abelian category, we attach the real version of the Grothendieck monoid and its dual in the sense of convex geometry. Before this, we set up notation for cones generated by a set of vectors and their dual cones; see Appendix B.1.
For any subset \(S\subseteq\Lambda_{\mathbb{R}}\) we write \[E(S)\coloneqq\{a_{1}s_{1}+\cdots+a_{n}s_{n}\mid n\in\mathbb{N},a_{i}\in\mathbb{R}_{\geq 0},s_{i}\in S\}\subseteq\Lambda_{\mathbb{R}},\] \[C(S)\coloneqq E(S)^{\vee}=\{v\in\Lambda_{\mathbb{R}}^{*}\mid v(x)\geq 0\quad\forall x\in E(S)\}\subseteq\Lambda_{\mathbb{R}}^{*}\] for the cone generated by \(S\) inside \(\Lambda_{\mathbb{R}}\) and its dual cone, respectively. We denote by \(\overline{E}(S)\) the closure of \(E(S)\) in \(\Lambda_{\mathbb{R}}\) and by \(C^{\circ}(S)\) the relative interior of \(C(S)\), i.e. the topological interior of \(C(S)\) inside its linear span in \(\Lambda_{\mathbb{R}}^{*}\). **Definition 1.1**.: Let \(\mathsf{H}\) be an abelian category and \(\lambda\colon K(\mathsf{H})\to\Lambda\) a surjective homomorphism onto a lattice. For any collection \(\mathsf{S}\) of objects of \(\mathsf{H}\), denote \(E(\mathsf{S})\coloneqq E(\lambda(\mathsf{S}))\) and \(C(\mathsf{S})\coloneqq C(\lambda(\mathsf{S}))\). For the special case of all objects, these cones are called \[E(\mathsf{H})=\{a_{1}\lambda(h_{1})+\cdots+a_{n}\lambda(h_{n})\mid n\in\mathbb{N},a_{i}\in\mathbb{R}_{\geq 0},h_{i}\in\mathsf{H}\}\qquad\text{the \emph{effective cone} of $\mathsf{H}$;}\] \[C(\mathsf{H})=E(\mathsf{H})^{\vee}=\{v\in\Lambda_{\mathbb{R}}^{*}\mid v(x)\geq 0\quad\forall x\in E(\mathsf{H})\}\qquad\text{the \emph{heart cone} of $\mathsf{H}$.}\] **Remark 1.2**.: The terminology 'effective cone' is inspired by effective divisors or cycles in algebraic geometry. We call its dual the 'heart cone' because later on the abelian categories will occur as hearts in triangulated categories. The effective cone of an abelian category is a linearisation and drastic simplification of the Grothendieck monoid of the category. See [24] for a discussion of the Grothendieck monoid itself. **Remark 1.3**.: We write \(v|_{\mathsf{H}}\geq 0\) to mean \(v\in C(\mathsf{H})\), expanding on the notational shortcut \(v(h)=v(\lambda(h))\). It suffices to check non-negativity on generators of \(E(\mathsf{H})\). **Remark 1.4**.: Classes \([h]\in K(\mathsf{H})\) play a prominent role in this article. Consider the subcategory \(\mathsf{N}_{\mathsf{H}}\coloneqq\{h\in\mathsf{H}\mid 0=[h]\in K(\mathsf{H})\}\subseteq\mathsf{H}\). It can happen that \(\mathsf{N}_{\mathsf{H}}=\mathsf{H}\), e.g. a variant of the Eilenberg swindle shows that this occurs whenever \(\mathsf{H}\) allows countable sums. The categories we have in mind feature \(\mathsf{N}_{\mathsf{H}}=0\) instead but see Example 1.7 for an exception. Our main examples are: * \(\mathsf{H}=\mathsf{mod}(A)\), the category of finitely generated left \(A\)-modules over a finite-dimensional algebra \(A\). In this case, \(K(\mathsf{H})\) is a free abelian group generated by pairwise non-isomorphic simple \(A\)-modules. A module \(M\in\mathsf{H}\) with \(0=[M]\in K(\mathsf{H})\) has vanishing dimension vector, hence \(M=0\). The same reasoning applies to any length category \(\mathsf{H}\). * \(\mathsf{H}=\mathsf{coh}(X)\), the category of coherent sheaves on a smooth projective variety \(X\). Let \(K(X)=K(\mathsf{H})\) be the Grothendieck group of \(X\) and \(N(X)\) the numerical Grothendieck group, i.e. the quotient of \(K(X)\) by the radical of the Euler pairing.
The Chern character is a morphism \(K(X)\to\operatorname{CH}(X)\otimes\mathbb{Q}\) to the rational Chow ring, inducing an isomorphism \(K(X)\otimes\mathbb{Q}\cong\operatorname{CH}(X)\otimes\mathbb{Q}\) by the Grothendieck-Riemann-Roch theorem, and in turn \(N(X)\otimes\mathbb{Q}\cong\operatorname{CH}_{\operatorname{num}}(X)\otimes\mathbb{Q}\) where \(\operatorname{CH}_{\operatorname{num}}(X)\) is the numerical Chow ring. Hence if \(F\) is a coherent sheaf on \(X\) then: \(F=0\iff 0=[F]\in K(X)\iff 0=[F]\in N(X)\). The next lemma collects elementary properties of these constructions; see Appendix B.1 for terminology on cones in convex geometry. In part (1), \(\mathsf{N}_{\mathsf{H},\lambda}\coloneqq\{h\in\mathsf{H}\mid 0=\lambda(h)\in\Lambda\}\). **Lemma 1.5**.: _Let \(\mathsf{H}\) be an abelian category._ 1. _If_ \(E(\mathsf{H})\) _is strictly convex then_ \(\mathsf{N}_{\mathsf{H},\lambda}\) _is a Serre subcategory of_ \(\mathsf{H}\)_._ 2. \(C(\mathsf{H})\) _is closed, strictly convex and pointed._ Proof.: (1) Assume \(E(\mathsf{H})\) strictly convex. Let \(h\in\mathsf{N}_{\mathsf{H},\lambda}\) and \(0\to h^{\prime}\to h\to h^{\prime\prime}\to 0\) be a short exact sequence. Then \(\lambda(h^{\prime})+\lambda(h^{\prime\prime})=\lambda(h)=0\) and strict convexity of \(E(\mathsf{H})\) implies \(\lambda(h^{\prime})=\lambda(h^{\prime\prime})=0\). Thus \(\mathsf{N}_{\mathsf{H},\lambda}\) is closed under subobjects and quotients. As \(\mathsf{N}_{\mathsf{H},\lambda}\) is always closed under extensions, it is a Serre subcategory. (2) That \(C(\mathsf{H})\) is closed follows directly from the definition. It is strictly convex because \(\pm v\in C(\mathsf{H})\) implies \(v(h)\coloneqq v(\lambda(h))=0\) for all \(h\in\mathsf{H}\), hence \(v=0\) as the classes \(\lambda(h)\) for \(h\in\mathsf{H}\) span \(\Lambda_{\mathbb{R}}=\Lambda\otimes\mathbb{R}\). Since in addition \(0\in C(\mathsf{H})\), the dual cone is pointed. The next examples exhibit effective cones that are not closed, or contain a line. Moreover, they show that distinct hearts inside a derived category can have the same dual cone. **Example 1.6** (projective line).: Consider the category \(\mathsf{H}=\mathsf{coh}(\mathbb{P}^{1})\) of coherent sheaves on the projective line. Then \(K(\mathsf{H})\cong\mathbb{Z}^{2}\) and we fix \([\mathcal{O}],[\mathcal{O}_{p}]\) as an orthonormal basis, i.e. \([F]=(\operatorname{rk}{(F)},\deg(F))\) for \(F\in\mathsf{H}\). Here \(\mathcal{O}_{p}\) denotes the length one skyscraper sheaf at a closed point \(p\in\mathbb{P}^{1}\) and \(\mathcal{O}=\mathcal{O}_{\mathbb{P}^{1}}\) is the structure sheaf. We use the inner product resulting from the basis to identify \(\Lambda_{\mathbb{R}}\) and \(\Lambda_{\mathbb{R}}^{*}\). Because there is an infinite chain of line bundles, the effective cone \(E(\mathsf{H})\) is isomorphic to a half-plane missing a boundary ray: \(\{(x,y)\in\mathbb{R}^{2}\mid x>0\text{ or }x=0,y\geq 0\}\) and thus not closed. Its dual cone \(C(\mathsf{H})\) is the ray \(\mathbb{R}_{\geq 0}\times\{0\}\) in \(\Lambda_{\mathbb{R}}^{*}\cong\mathbb{R}^{2}\). See Example 1.9 for a generalisation.
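The computation in Example 1.6 can be checked numerically. The sketch below is our illustration, using Remark 1.3 (it suffices to test non-negativity on generators) and an arbitrary finite truncation \(|d|\leq N\) of the infinitely many classes of line bundles.

```python
import numpy as np

# Classes in K(coh P^1) = Z^2 written as (rank, degree): the line bundles
# O(d) give (1, d) for all d in Z, and the skyscraper O_p gives (0, 1).
# We truncate the infinite family of twists at |d| <= N for this check.
N = 1000
generators = [np.array([1, d]) for d in range(-N, N + 1)] + [np.array([0, 1])]

def in_dual_cone(v):
    """v lies in C(H) iff v pairs non-negatively with every generator of E(H)."""
    return all(np.dot(v, g) >= 0 for g in generators)

print(in_dual_cone(np.array([1.0, 0.0])))    # True: the rank functional
print(in_dual_cone(np.array([1.0, 0.01])))   # False: fails on O(-d) for large d
print(in_dual_cone(np.array([1.0, -0.01])))  # False: fails on the skyscraper class
```

Any \(v=(a,b)\) with \(b\neq 0\) is rejected once \(N\) is large enough, leaving exactly the ray \(\mathbb{R}_{\geq 0}\times\{0\}\).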
**Example 1.7** (curves tilted in points).: Let \(X\) be a smooth projective curve and \(M\subseteq X\) a subset of closed points. Let \(\mathsf{T}_{M}=\langle\mathcal{O}_{p}\mid p\in M\rangle\) be the subcategory generated by skyscraper sheaves at points in \(M\) and \(\mathsf{F}_{M}=\langle\operatorname{Pic}(X),\mathcal{O}_{q}\mid q\notin M\rangle\) be generated by all line bundles and the skyscraper sheaves at the remaining points. Then \((\mathsf{T}_{M},\mathsf{F}_{M})\) is a torsion pair, see Appendix A.3, and gives rise to a new heart \(\mathsf{H}_{M}\coloneqq\mathsf{F}_{M}[1]*\mathsf{T}_{M}\) in \(\mathsf{D}^{b}(X)\), see Appendix A.5. Moreover, \((\mathsf{T}_{M},\mathsf{F}_{M})\) is a _cotilting torsion pair_ in \(\mathsf{H}\), that is each object of \(\mathsf{H}\) is a quotient of an object of \(\mathsf{F}_{M}\). This implies that \(\mathsf{H}_{M}\) is the heart of a faithful bounded t-structure, i.e. \(\mathsf{D}^{b}(\mathsf{H}_{M})\cong\mathsf{D}^{b}(X)\); see [7, Prop. 5.4.3]. For later reference, we introduce the following notation and terminology: \[\mathsf{coh}^{\rtimes}(X)\coloneqq\mathsf{H}_{X},\qquad\text{the \emph{reversed geometric heart};}\] \[\mathsf{coh}^{\times}(X)\coloneqq\mathsf{H}_{M},\qquad\text{a \emph{mixed geometric heart}, where }\varnothing\neq M\subsetneq X.\] For our purposes, it will not matter precisely which points are chosen, so we don't specify the subset \(M\). If \(p\in M\) and \(q\in X\backslash M\) are linearly equivalent points then \(\mathcal{O}_{p}[-1],\mathcal{O}_{q}\in\mathsf{coh}^{\times}(X)\) sit in short exact sequences \(0\to\mathcal{O}_{p}[-1]\to\mathcal{O}_{X}(-p)\to\mathcal{O}_{X}\to 0\) and \(0\to\mathcal{O}_{X}(-q)\to\mathcal{O}_{X}\to\mathcal{O}_{q}\to 0\) with \(\mathcal{O}_{X}(-p)\cong\mathcal{O}_{X}(-q)\). Hence \([\mathcal{O}_{p}[-1]\oplus\mathcal{O}_{q}]=0\in K(\mathsf{coh}^{\times}(X))\) and the subcategory \(\mathsf{N}_{\mathsf{coh}^{\times}(X)}\) of Remark 1.4 is non-zero and not closed under direct summands. **Example 1.8** (projective line tilted in points).: We combine the previous two examples. The Auslander-Reiten quiver of \(\mathsf{coh}^{\rtimes}(\mathbb{P}^{1})\) looks like that of \(\mathsf{coh}(\mathbb{P}^{1})\), except that the skyscraper sheaves sit before the line bundles. In \(\mathsf{coh}^{\times}(\mathbb{P}^{1})\), only some of the skyscraper sheaves are tilted before the line bundles. The notation is supposed to suggest that all/some skyscraper sheaves have been moved from the right-hand end of the Auslander-Reiten quiver of the category to the left-hand end. The categories \(\mathsf{coh}(\mathbb{P}^{1}),\mathsf{coh}^{\rtimes}(\mathbb{P}^{1})\) and \(\mathsf{coh}^{\times}(\mathbb{P}^{1})\) have different effective cones: \(E(\mathsf{coh}^{\rtimes}(\mathbb{P}^{1}))=\{(x,y)\mid x>0\text{ or }x=0,y\leq 0\}\) and \(E(\mathsf{coh}^{\times}(\mathbb{P}^{1}))=\{(x,y)\mid x\geq 0\}\). However, they have identical dual cones: \(C(\mathsf{coh}(\mathbb{P}^{1}))=C(\mathsf{coh}^{\rtimes}(\mathbb{P}^{1}))=C(\mathsf{coh}^{\times}(\mathbb{P}^{1}))\). Note \(E(\mathsf{coh}^{\times}(\mathbb{P}^{1}))\) is not strictly convex as it contains the line \(x=0\), in accordance with Example 1.7 and Lemma 1.5. **Example 1.9** (coherent sheaves).: Let \(X\) be a connected, smooth and projective variety of dimension \(d\) and \(\mathsf{H}=\mathsf{coh}(X)\) the category of coherent sheaves on \(X\). Let \(\Lambda=N(X)\) be the numerical Grothendieck group, as in Remark 1.4.
Then \(\Lambda_{\mathbb{R}}\cong\operatorname{CH}_{\operatorname{num}}(X)\otimes\mathbb{R}\) is spanned by numerical equivalence classes of structure sheaves \([\mathcal{O}_{Z}]\) of irreducible closed subsets \(Z\subseteq X\). Let \(D\subset X\) be an effective ample divisor and \(Z\subseteq X\) a closed subset of positive dimension; we can assume that \(D\) intersects \(Z\) transversally. The sequence of sheaves \(\mathcal{O}_{Z}(nD)\), where \(n\in\mathbb{Z}\), generates an affine line in the effective cone \(E(\mathsf{H})\) parallel to \([\mathcal{O}_{Z}]\). In contrast, if \(p\in X\) is a point then only the ray \(\mathbb{R}_{\geq 0}[\mathcal{O}_{p}]\) lies in \(E(\mathsf{H})\). It follows that, as an abstract cone, the closure of the effective cone is \(\overline{E}(\mathsf{H})\cong\mathbb{R}^{r-1}\times\mathbb{R}_{\geq 0}\), where \(r\) is the rank of \(N(X)\). Hence the heart cone is a ray, \(C(\mathsf{H})\cong\mathbb{R}_{\geq 0}\). As a subset of \(\Lambda_{\mathbb{R}}^{*}=\operatorname{Hom}(N(X),\mathbb{R})\), this ray is generated by the rank function. The above example shows that the heart cones of smooth projective varieties are not interesting in themselves, as they are always rays regardless of the geometry. Example 1.11 below shows that the heart cones of finite-dimensional algebras are always orthants, i.e. full-dimensional and simplicial, likewise regardless of the algebra. These invariants become interesting when studied as part of the heart fans in Section 3. In the special case \(\Lambda=K(\mathsf{H})\), full (i.e. full-dimensional) cones correspond to an algebraic setting. Recall that an abelian category is algebraic if it is Noetherian and Artinian, and has finitely many simple objects up to isomorphism. **Proposition 1.10**.: _Let \(\mathsf{H}\) be an abelian lattice category. Then the following are equivalent:_ 1. _The full subcategory_ \(\mathsf{N}_{\mathsf{H}}=\{h\in\mathsf{H}\mid[h]=0\}\) _is Serre and_ \(\mathsf{H}/\mathsf{N}_{\mathsf{H}}\) _is algebraic._ 2. _The effective cone_ \(E(\mathsf{H})\) _is simplicial and full._ 3. _The dual cone_ \(C(\mathsf{H})\) _is simplicial and full._ 4. _The interior of_ \(C(\mathsf{H})\) _is non-empty._ Proof.: \((i)\Longrightarrow(ii)\): Suppose \(\mathsf{N}_{\mathsf{H}}\) is a Serre subcategory and \(\mathsf{H}/\mathsf{N}_{\mathsf{H}}\) is algebraic. The quotient functor induces a canonical isomorphism \(K(\mathsf{H})=K(\mathsf{H}/\mathsf{N}_{\mathsf{H}})\) identifying the effective cones of \(\mathsf{H}\) and \(\mathsf{H}/\mathsf{N}_{\mathsf{H}}\). Since \(\mathsf{H}/\mathsf{N}_{\mathsf{H}}\) is algebraic, the classes of its simple objects form an integral basis of \(K(\mathsf{H}/\mathsf{N}_{\mathsf{H}})=K(\mathsf{H})\). The effective cone \(E(\mathsf{H}/\mathsf{N}_{\mathsf{H}})=E(\mathsf{H})\) is generated by this basis, so is simplicial and of maximal dimension. \((ii)\Longrightarrow(iii)\): By definition, a simplicial, full cone is generated by a vector space basis. Its dual cone is generated by the dual basis, hence also simplicial and full. \((iii)\Longrightarrow(iv)\): A full cone has non-empty interior. \((iv)\Longrightarrow(i)\): This is the only place where we use \(\Lambda=K(\mathsf{H})\). We have \(C(\mathsf{H})^{\vee}=\overline{E}(\mathsf{H})\), i.e. the dual heart cone is the closure of the effective cone; since \(C(\mathsf{H})\) has non-empty interior by assumption (equivalently: \(C(\mathsf{H})\) is full), its dual \(\overline{E}(\mathsf{H})\) is strictly convex.
In particular \(E(\mathsf{H})\) itself is strictly convex, so that \(\mathsf{N}_{\mathsf{H}}=\mathsf{N}_{\mathsf{H},\mathrm{id}}\) is a Serre subcategory by Lemma 1.5. Now fix \(h\in\mathsf{H}\). Any subobject of \(h\) has class in the compact subset \(\overline{E}(\mathsf{H})\cap\big([h]-\overline{E}(\mathsf{H})\big)\subset\Lambda_{\mathbb{R}}=K(\mathsf{H})\otimes\mathbb{R}\). Since the number of lattice points in this region is finite, there are only finitely many possible classes for subobjects. Therefore the class of any ascending or descending chain of subobjects is eventually constant, and thus the quotients of successive terms in the chain are eventually in \(\mathsf{N}_{\mathsf{H}}\). We conclude that \(\mathsf{H}/\mathsf{N}_{\mathsf{H}}\) is a length abelian category. As the classes of simple objects form a basis of \(K(\mathsf{H})\), the category \(\mathsf{H}/\mathsf{N}_{\mathsf{H}}\) is even algebraic. **Example 1.11** (heart cones of algebraic categories are simplicial).: Let \(\mathsf{H}\) be an algebraic abelian category, such as \(\mathsf{H}=\mathsf{mod}(A)\) for a finite-dimensional algebra \(A\). Then \(K(\mathsf{H})\) is a free abelian group of finite rank \(r\). We put \(\Lambda=K(\mathsf{H})\), as always in this situation unless explicitly stated otherwise. By the proposition, \(C(\mathsf{H})\) is the cone generated by a basis of the dual lattice \(\Lambda^{*}=\operatorname{Hom}(\Lambda,\mathbb{Z})\), and in particular simplicial. Concretely, the heart cone of an algebraic category only depends on the rank \(r\) and is isomorphic to an orthant \((\mathbb{R}_{\geq 0})^{r}\) in \(\Lambda^{*}_{\mathbb{R}}\cong\mathbb{R}^{r}\). **Example 1.12** (a round cone).: Let \(\mathsf{H}\) be an abelian category generated by simple objects such that \(K(\mathsf{H})\cong\mathbb{Z}^{\infty}\) is free of countably infinite rank; e.g. \(\mathsf{H}=\mathsf{mod}(\mathbb{Z})\) or \(\mathsf{H}=\mathsf{mod}(\mathbb{R}^{\mathbb{Z}})\). Fix a closed cone \(\sigma\subseteq\Lambda^{*}_{\mathbb{R}}\) whose dual \(\sigma^{\vee}\) is full with dense rational points, i.e. \(\overline{S}=\sigma^{\vee}\) where \(S\coloneqq\sigma^{\vee}\cap\Lambda_{\mathbb{Q}}\). This holds if \(\sigma\) is full and strictly convex, e.g. for the round cone in \(\mathbb{R}^{3}\) given by \(x^{2}+y^{2}\leq z^{2}\) and \(z\geq 0\). Define \(\lambda\colon K(\mathsf{H})=\mathbb{Z}^{\infty}\to\Lambda\) to map the classes of simple objects onto \(S\cap\Lambda\). Then the effective cone \(E(\mathsf{H})\) is the cone generated by \(S\) and its dual is \(C(\mathsf{H})=S^{\vee}=\sigma^{\vee\vee}=\overline{\sigma}=\sigma\). ## 2. Faces and Serre subcategories Here we explain what categorical data parametrises the exposed faces of the effective cone of an abelian category. By duality, the same data parametrises the dual faces of the heart cone. A _face_ of a cone is a subset closed under sums and under summands while an _exposed face_ is a subset of the cone obtained as intersection with a supporting hyperplane; see Appendix B.2. Non-exposed faces can occur in our theory: **Example 2.1** (a non-exposed face).: Following up on Example 1.12, one can map the classes of the countably many simple objects into \(\Lambda_{\mathbb{R}}=\mathbb{R}^{3}\) such that the cross-section of the dual cone with the plane \(z=1\) is as shown on the right. The ray through the point \(A\) is then a face of \(C(\mathsf{H})\) which is not exposed.
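A standard cross-section exhibiting this phenomenon (our illustration; the figure in the paper may differ) is the 'stadium': take \[K=\operatorname{conv}\bigl(D((-1,0),1)\cup D((1,0),1)\bigr),\qquad A=(1,1),\] the convex hull of two unit discs. The segment from \((-1,1)\) to \((1,1)\) is an exposed edge of \(K\), and \(A\) is an extreme point of \(K\), hence \(\{A\}\) is a face; but the only supporting line of \(K\) at \(A\) is \(y=1\), which meets \(K\) in the whole edge, so \(\{A\}\) is not exposed. The ray over \(A\) in the cone over \(K\) is then a non-exposed face.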
We write \(\tau\preceq\sigma\) when \(\tau\) is a face of \(\sigma\) and denote by \(\operatorname{Faces}(\sigma)\) the partially ordered set of faces of \(\sigma\), and by \(\operatorname{ExFaces}(\sigma)\) the subset of exposed faces. Dualising faces yields monotone maps \[\operatorname{ExFaces}(E(\mathsf{H}))^{\operatorname{op}}\hookrightarrow\operatorname{ExFaces}(C(\mathsf{H})),\quad\tau\mapsto\tau^{\triangle}=C(\mathsf{H})\cap\tau^{\perp},\] \[\operatorname{ExFaces}(C(\mathsf{H}))\xrightarrow{\sim}\operatorname{ExFaces}(\overline{E}(\mathsf{H}))^{\operatorname{op}},\quad\kappa\mapsto\kappa^{\triangle}=\overline{E}(\mathsf{H})\cap\kappa^{\perp}.\] This statement is Proposition B.3 in our setting applied respectively to the cone \(E(\mathsf{H})\subseteq\Lambda_{\mathbb{R}}\) and the closed cone \(C(\mathsf{H})\subset\Lambda^{*}_{\mathbb{R}}\). ### Dual faces and face subcategories We introduce a third kind of faces of heart cones; the _dual faces_ of exposed faces of the effective cone. This is useful because there are convenient categorical descriptions of both exposed faces of \(E(\mathsf{H})\) and of their dual faces. We give those descriptions as definitions right away; later on we will justify them. **Definition 2.2**.: A _face subcategory_ of \(\mathsf{H}\) is one of the form \(\mathsf{S}^{v}=\{h\in\mathsf{H}\mid v(h)=0\}\) for some \(v\in C(\mathsf{H})\). Let \(\operatorname{Serre}_{\Lambda}(\mathsf{H})\) be the poset of face subcategories ordered by inclusion. The _dual face_ of \(C(\mathsf{H})\) corresponding to \(\mathsf{S}\in\operatorname{Serre}_{\Lambda}(\mathsf{H})\) is \(C(\mathsf{H}/\mathsf{S})\coloneqq C(\mathsf{H})\cap E(\mathsf{S})^{\perp}\). The next lemma says that face subcategories parametrise exposed faces of \(E(\mathsf{H})\). It also shows that \(C(\mathsf{H})\cap E(\mathsf{S})^{\perp}=E(\mathsf{S})^{\triangle}\) is a dual face of an exposed face. Writing \(C(\mathsf{H}/\mathsf{S})\) for the dual face can be treated as a mere notational shorthand; however, we show below that the dual cone of the abelian category \(\mathsf{H}/\mathsf{S}\) has intrinsic meaning (it is immediate that \(\mathsf{S}^{v}\subseteq\mathsf{H}\) is a Serre subcategory). The hurried reader may just accept the lemma and skip to Section 3. **Lemma 2.3**.: _The map \(\operatorname{Serre}_{\Lambda}(\mathsf{H})\to\operatorname{ExFaces}(E(\mathsf{H}))\), \(\mathsf{S}\mapsto E(\mathsf{S})\) is a monotone bijection with inverse \(E(\mathsf{H})\cap v^{\perp}\mapsto\mathsf{S}^{v}\)._ Proof.: We first show \(E(\mathsf{S}^{v})=E(\mathsf{H})\cap v^{\perp}\). The inclusion \(E(\mathsf{S}^{v})\subseteq E(\mathsf{H})\cap v^{\perp}\) follows from the definition of \(\mathsf{S}^{v}\). For the reverse inclusion, if \(\sum_{i}a_{i}\lambda(h_{i})\in E(\mathsf{H})\cap v^{\perp}\) then \(v(h_{i})=0\) because of \(v|_{\mathsf{H}}\geq 0\), hence \(h_{i}\in\mathsf{S}^{v}\) for all \(i\). Therefore, the map of the lemma is well-defined and surjective; it is clearly monotone. For injectivity, let \(v,w\in C(\mathsf{H})\) with \(\mathsf{S}^{v}\neq\mathsf{S}^{w}\). We may assume there is some \(h\in\mathsf{S}^{v}\backslash\mathsf{S}^{w}\). Then \(\lambda(h)\in E(\mathsf{H})\cap v^{\perp}\) but \(\lambda(h)\notin E(\mathsf{H})\cap w^{\perp}\). Hence \(E(\mathsf{S}^{v})=E(\mathsf{H})\cap v^{\perp}\neq E(\mathsf{H})\cap w^{\perp}=E(\mathsf{S}^{w})\). The description of the inverse map also follows from the identity \(E(\mathsf{H})\cap v^{\perp}=E(\mathsf{S}^{v})\).
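For orientation, here is how this parametrisation looks in the simplest case; this worked example is ours and follows immediately from the definitions. Let \(\mathsf{H}\) be a length category with simple objects \(S_{1},\dots,S_{n}\) and \(\Lambda=K(\mathsf{H})\cong\mathbb{Z}^{n}\), so \(E(\mathsf{H})=(\mathbb{R}_{\geq 0})^{n}\) and \(C(\mathsf{H})\) is the dual orthant. For \(v\in C(\mathsf{H})\) put \(I=\{i\mid v_{i}=0\}\); then \[\mathsf{S}^{v}=\langle S_{i}\mid i\in I\rangle,\qquad E(\mathsf{S}^{v})=\operatorname{cone}(e_{i}\mid i\in I),\qquad C(\mathsf{H}/\mathsf{S}^{v})=\{w\in C(\mathsf{H})\mid w_{i}=0\ \forall i\in I\},\] since \(v(h)=0\) forces every composition factor of \(h\) to have index in \(I\). Thus face subcategories correspond to subsets of the simples, matching the usual face lattice of the orthant.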
### Induced lattices and associated subcategories We turn to a more systematic study of faces and we begin by explaining how effective and dual cones of quotient categories \(\mathsf{H}/\mathsf{S}\) are subsets of the vector spaces \(\Lambda_{\mathbb{R}}\) and \(\Lambda_{\mathbb{R}}^{*}\) in a natural way. **Remark 2.4** (lattices and cones induced by quotient and subcategories).: Let \(\mathsf{H}\) be an abelian category over \(\Lambda\) and \(\mathsf{S}\subseteq\mathsf{H}\) a Serre subcategory. Denote by \(\mathsf{Q}\coloneqq\mathsf{H}/\mathsf{S}\) the associated abelian quotient category. We define \[\Lambda(\mathsf{S})\coloneqq\operatorname{im}\bigl(K(\mathsf{S})\to K(\mathsf{H})\twoheadrightarrow\Lambda\bigr)\qquad\text{the \emph{induced sublattice}},\] \[\Lambda(\mathsf{Q})\coloneqq\Lambda/\Lambda(\mathsf{S})\qquad\text{the \emph{induced quotient group}}.\] The finitely generated abelian group \(\Lambda(\mathsf{Q})\) may have torsion but this is irrelevant for the real vector spaces \(\Lambda(\mathsf{Q})_{\mathbb{R}}=\Lambda(\mathsf{Q})\otimes\mathbb{R}\) and \(\Lambda(\mathsf{Q})_{\mathbb{R}}^{*}=\operatorname{Hom}(\Lambda(\mathsf{Q}),\mathbb{R})\). The Grothendieck groups of the abelian categories \(\mathsf{S}\) and \(\mathsf{Q}\) have structure maps induced from \(\lambda\colon K(\mathsf{H})\to\Lambda\), fitting into the commutative diagram \[\begin{array}{ccccccccc} & & K(\mathsf{S}) & \longrightarrow & K(\mathsf{H}) & \longrightarrow & K(\mathsf{Q}) & \longrightarrow & 0\\ & & \downarrow{\scriptstyle\lambda^{\prime}} & & \downarrow{\scriptstyle\lambda} & & \downarrow{\scriptstyle\bar{\lambda}} & & \\ 0 & \longrightarrow & \Lambda(\mathsf{S}) & \longrightarrow & \Lambda & \longrightarrow & \Lambda(\mathsf{Q}) & \longrightarrow & 0. \end{array}\] Effective cones \(E(\mathsf{S})\subseteq\Lambda(\mathsf{S})_{\mathbb{R}}\), \(E(\mathsf{Q})\subseteq\Lambda(\mathsf{Q})_{\mathbb{R}}\) and their duals \(C(\mathsf{S})\subseteq\Lambda(\mathsf{S})_{\mathbb{R}}^{*}\), \(C(\mathsf{Q})\subseteq\Lambda(\mathsf{Q})_{\mathbb{R}}^{*}\) are given by Definition 1.1. The effective cone of the subcategory is naturally embedded into \(\Lambda_{\mathbb{R}}\); likewise both dual cones are naturally embedded into \(\Lambda_{\mathbb{R}}^{*}\): \[E(\mathsf{S})\subseteq\Lambda(\mathsf{S})_{\mathbb{R}}\subseteq\Lambda_{\mathbb{R}},\qquad E(\mathsf{Q})\subseteq\Lambda(\mathsf{Q})_{\mathbb{R}};\] \[C(\mathsf{S})\subseteq\Lambda_{\mathbb{R}}^{*},\qquad C(\mathsf{Q})\subseteq\Lambda(\mathsf{Q})_{\mathbb{R}}^{*}\subseteq\Lambda_{\mathbb{R}}^{*}.\] We always consider \(E(\mathsf{S})\subseteq\Lambda_{\mathbb{R}}\) and \(C(\mathsf{S}),C(\mathsf{Q})\subseteq\Lambda_{\mathbb{R}}^{*}\) unless explicitly saying otherwise. **Definition 2.5**.: Given subsets \(\tau\subseteq\Lambda_{\mathbb{R}}\) and \(\kappa\subseteq\Lambda_{\mathbb{R}}^{*}\), their _associated subcategories_ are \[\mathsf{S}_{\tau}=\{h\in\mathsf{H}\mid\lambda(h)\in\tau\}\qquad\text{and}\qquad\mathsf{S}^{\kappa}=\{h\in\mathsf{H}\mid v(h)=0\ \forall v\in\kappa\}.\] By Lemma 2.3, a face subcategory of \(\mathsf{H}\) is an associated subcategory \(\mathsf{S}_{\tau}\) for an exposed face \(\tau\) of \(E(\mathsf{H})\). Specifically, \(\mathsf{S}^{v}=\mathsf{S}_{\tau}=\mathsf{S}^{\tau^{\triangle}}\) for any \(v\in\operatorname{relint}(\tau^{\triangle})\); the last equality is Lemma 2.8(4). Thus the set of face subcategories is \(\operatorname{Serre}_{\Lambda}(\mathsf{H})=\{\mathsf{S}_{\tau}\mid\tau\in\operatorname{ExFaces}(E(\mathsf{H}))\}\). \(\mathsf{S}_{\tau}\) and \(\mathsf{S}^{\kappa}\) are full subcategories of \(\mathsf{H}\).
The notation is justified by the monotonicity behaviour: if \(\tau^{\prime}\subseteq\tau\subseteq\Lambda_{\mathbb{R}}\) are nested subsets then \(\mathsf{S}_{\tau^{\prime}}\subseteq\mathsf{S}_{\tau}\); if \(\kappa^{\prime}\subseteq\kappa\subseteq\Lambda_{\mathbb{R}}^{*}\) then \(\mathsf{S}^{\kappa^{\prime}}\supseteq\mathsf{S}^{\kappa}\). The following diagram collects the maps appearing in our setup and many of their properties. The top left map sends \(v\in C(\mathsf{H})\) to the exposed face \(E(\mathsf{H})\cap v^{\perp}\); we allow \(v=0\), i.e. we consider \(E(\mathsf{H})\) as an exposed face. This map is surjective by the definition of exposed faces. The horizontal composition sends \(v\in C(\mathsf{H})\) to the minimal exposed face of \(C(\mathsf{H})\) containing \(v\); see Proposition B.3. The vertical bijection was shown in Lemma 2.3. The dual cone \(C(\mathsf{H}/\mathsf{S})\subseteq\Lambda_{\mathbb{R}}^{*}\) induced by a Serre subcategory \(\mathsf{S}\subseteq\mathsf{H}\) was introduced above, and Lemma 2.6 identifies it with the dual face of Definition 2.2. This lemma also shows that the square starting in \(\operatorname{Serre}_{\Lambda}(\mathsf{H})\) is commutative. The square starting at \(\operatorname{ExFaces}(E(\mathsf{H}))\) commutes by Lemma 2.8(4). **Lemma 2.6**.: _If \(\mathsf{S}\subseteq\mathsf{H}\) is a Serre subcategory then \(C(\mathsf{H}/\mathsf{S})=C(\mathsf{H})\cap E(\mathsf{S})^{\perp}\) and, in particular, it is a dual face of \(C(\mathsf{H})\)._ Proof.: Put \(\mathsf{Q}\coloneqq\mathsf{H}/\mathsf{S}\). Then \(C(\mathsf{Q})=E(\mathsf{Q})^{\vee}=\{\bar{v}\in\Lambda(\mathsf{Q})_{\mathbb{R}}^{*}:\bar{v}|_{\mathsf{Q}}\geq 0\}\) is the dual cone of \(E(\mathsf{Q})\) inside \(\Lambda(\mathsf{Q})_{\mathbb{R}}\); see Remark 2.4. Then \(\{v\in\Lambda_{\mathbb{R}}^{*}:v|_{\mathsf{H}}\geq 0,v|_{\Lambda(\mathsf{S})}=0\}=C(\mathsf{H})\cap\Lambda(\mathsf{S})^{\perp}\) is the image of \(C(\mathsf{Q})\) under \(\Lambda(\mathsf{Q})_{\mathbb{R}}^{*}\hookrightarrow\Lambda_{\mathbb{R}}^{*}\). Because \(\Lambda(\mathsf{S})\) is the subgroup of \(\Lambda\) generated by all \(\lambda(h)\) with \(h\in\mathsf{S}\), we have \(\Lambda(\mathsf{S})^{\perp}=E(\mathsf{S})^{\perp}\subseteq\Lambda_{\mathbb{R}}^{*}\). Hence \(C(\mathsf{H}/\mathsf{S})=C(\mathsf{H})\cap E(\mathsf{S})^{\perp}\) is the dual face of \(C(\mathsf{H})\) given by \(E(\mathsf{S})\subseteq E(\mathsf{H})\). Exposed faces of \(E(\mathsf{H})\) correspond to face subcategories by Lemma 2.3. If \(\mathsf{S}\subseteq\mathsf{H}\) is an arbitrary Serre subcategory then \(E(\mathsf{S})\subseteq E(\mathsf{H})\) need not be a face although \(C(\mathsf{H}/\mathsf{S})\subseteq C(\mathsf{H})\) is always a face (even a dual face) by Lemma 2.6. That dual face of \(C(\mathsf{H})\) may arise from many Serre subcategories and among those, the face subcategory is the maximal one. It is characterised by the property: \(h\in\mathsf{S}\iff v(h)=0\ \forall v\in C(\mathsf{H}/\mathsf{S})\) (i.e. for all \(v\in C(\mathsf{H})\) with \(v|_{\mathsf{S}}=0\)). **Example 2.7**.: Consider \(\mathsf{H}=\mathsf{coh}(\mathbb{P}^{1})\) as in Example 1.6 and these three Serre subcategories: \(\mathsf{S}_{1}=0\); \(\mathsf{S}_{2}=\langle\mathcal{O}_{p}\rangle\) generated by a single skyscraper sheaf; torsion sheaves \(\mathsf{S}_{3}=\langle\mathcal{O}_{x}\mid x\in\mathbb{P}^{1}\rangle\). Then \(C(\mathsf{H})=C(\mathsf{H}/\mathsf{S}_{1})=C(\mathsf{H}/\mathsf{S}_{2})=C(\mathsf{H}/\mathsf{S}_{3})\) and the face subcategory is \(\mathsf{S}_{3}\).
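To see Example 2.7 concretely (a direct check from the definitions, using the basis of Example 1.6): \(C(\mathsf{H})=\mathbb{R}_{\geq 0}\times\{0\}\) and \(E(\mathsf{S}_{2})=\mathbb{R}_{\geq 0}\cdot(0,1)\), so \[E(\mathsf{S}_{2})^{\perp}=\{(v_{1},v_{2})\mid v_{2}=0\}\supseteq C(\mathsf{H})\quad\text{and hence}\quad C(\mathsf{H}/\mathsf{S}_{2})=C(\mathsf{H})\cap E(\mathsf{S}_{2})^{\perp}=C(\mathsf{H});\] the same computation applies to \(\mathsf{S}_{1}\) and \(\mathsf{S}_{3}\). The maximal Serre subcategory with this dual face is \(\mathsf{S}_{3}\): a sheaf \(F\) satisfies \(v(F)=0\) for all \(v\in C(\mathsf{H})\) exactly when \(\operatorname{rk}(F)=0\), i.e. when \(F\) is torsion.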
**Lemma 2.8**.: _Let \(\mathsf{H}\) be an abelian category, \(\tau\preceq E(\mathsf{H})\) a face of the effective cone and \(\kappa\subseteq\Lambda_{\mathbb{R}}^{*}\)._ 1. \(\mathsf{S}_{\tau}\subseteq\mathsf{H}\) _is a Serre subcategory and_ \(E(\mathsf{S}_{\tau})=\tau\)_,_ \(C(\mathsf{H}/\mathsf{S}_{\tau})=\tau^{\triangle}\)_._ 2. \(\mathsf{S}^{\kappa}\) _is unchanged when replacing_ \(\kappa\) _by its closure or the cone generated by_ \(\kappa\)_._ 3. _If_ \(\kappa\subseteq C(\mathsf{H})\) _then_ \(\mathsf{S}^{\kappa}\subseteq\mathsf{H}\) _is a Serre subcategory and_ \(E(\mathsf{S}^{\kappa})=\kappa^{\triangle}\)_,_ \(C(\mathsf{H}/\mathsf{S}^{\kappa})=\kappa^{\triangle\triangle}\)_._ 4. \(\mathsf{S}_{\tau}\subseteq\mathsf{S}^{\tau^{\triangle}}\) _with equality if_ \(\tau\) _is an exposed face._ Proof.: (1) If \(0\to h^{\prime}\to h\to h^{\prime\prime}\to 0\) is a short exact sequence in \(\mathsf{H}\), then in \(E(\mathsf{H})\) we have \(\lambda(h)=\lambda(h^{\prime})+\lambda(h^{\prime\prime})\). The face property of \(\tau\) implies \(\lambda(h)\in\tau\iff\lambda(h^{\prime}),\lambda(h^{\prime\prime})\in\tau\) which is precisely the condition for \(\mathsf{S}_{\tau}\) to be a Serre subcategory. By definition, \(E(\mathsf{S}_{\tau})\) consists of all non-negative linear combinations \(\sum_{i}a_{i}\lambda(h_{i})\) with \(\lambda(h_{i})\in\tau\). This immediately gives \(E(\mathsf{S}_{\tau})\subseteq\tau\), as \(\tau\) is a cone. The reverse inclusion follows from \(\tau\) being closed under summands. The last part follows from \(\tau=E(\mathsf{S}_{\tau})\) and Lemma 2.6 applied to \(\mathsf{S}=\mathsf{S}_{\tau}\) which yields \(C(\mathsf{H}/\mathsf{S}_{\tau})=C(\mathsf{H})\cap E(\mathsf{S}_{\tau})^{\perp}=C(\mathsf{H})\cap\tau^{\perp}=\tau^{\triangle}\). (2) As \(\kappa\subseteq\overline{\kappa}\) is contained in its closure inside \(\Lambda_{\mathbb{R}}^{*}\), we have \(\mathsf{S}^{\kappa}\supseteq\mathsf{S}^{\overline{\kappa}}\). Let \(h\in\mathsf{S}^{\kappa}\). The orthogonal subspaces \(\kappa^{\perp}=\overline{\kappa}^{\perp}\subseteq\Lambda_{\mathbb{R}}\) agree, so that \(v(h)=0\) for all \(v\in\overline{\kappa}\), hence \(h\in\mathsf{S}^{\overline{\kappa}}\). The analogous statement for the cone generated by \(\kappa\) is immediate. (3) Let \(0\to h^{\prime}\to h\to h^{\prime\prime}\to 0\) as in (1) and fix \(v\in\kappa\subseteq C(\mathsf{H})\). Then \(v(h^{\prime}),v(h),v(h^{\prime\prime})\geq 0\) and \(v(h)=v(h^{\prime})+v(h^{\prime\prime})\). We get \(v(h)=0\iff v(h^{\prime})=v(h^{\prime\prime})=0\). Thus \(\mathsf{S}^{\kappa}\) is closed under subobjects, quotients and extensions, hence is a Serre subcategory of \(\mathsf{H}\). It follows from the definition of \(\mathsf{S}^{\kappa}\) that \(E(\mathsf{S}^{\kappa})=E(\mathsf{H})\cap\kappa^{\perp}=\kappa^{\triangle}\) is the dual face of \(\kappa\). The formula for \(C(\mathsf{H}/\mathsf{S}^{\kappa})\) then follows from Lemma 2.6. (4) If \(\tau\preceq E(\mathsf{H})\), \(h\in\mathsf{S}_{\tau}\) and \(v\in\tau^{\triangle}=C(\mathsf{H})\cap\tau^{\perp}\) then \(\lambda(h)\in\tau\) and \(v(h)=0\). Hence \(\mathsf{S}_{\tau}\subseteq\mathsf{S}^{\tau^{\triangle}}\). If \(\tau=E(\mathsf{H})\cap w^{\perp}\) is an exposed face, where \(w\in C(\mathsf{H})\), then \(\mathsf{S}_{\tau}=\mathsf{S}^{w}\). Let \(h\in\mathsf{S}^{\tau^{\triangle}}\), i.e. \(v(h)=0\) for all \(v\in\tau^{\triangle}=C(\mathsf{H})\cap\tau^{\perp}\). As \(w\in\tau^{\triangle}\), we get \(w(h)=0\) and \(h\in\mathsf{S}_{\tau}\).
**Remark 2.9**.: Shunya Saito establishes in [24, §2.3] a bijection between Serre subcategories of an abelian (or exact) category and the faces of its Grothendieck monoid. So while the Grothendieck monoid classifies all Serre subcategories, we are dealing here with a much coarser, linearised invariant, hence only get certain Serre subcategories.

### Dual faces revisited

By definition, the set of dual faces of \(C(\mathsf{H})=E(\mathsf{H})^{\vee}\) consists of all \(\tau^{\triangle}\) where \(\tau\) ranges over exposed faces of \(E(\mathsf{H})\). This is a general convex-geometric notion; see Appendix B.3. Hence the partially ordered set of dual faces is just \(\operatorname{ExFaces}(E(\mathsf{H}))^{\operatorname{op}}\). In fact, we get the same set by taking duals of (not necessarily exposed) faces or even of arbitrary subsets; see Proposition B.3. Restricting to duals of exposed faces has the advantage of providing a bijective parametrisation of dual faces. We thus have two inclusions \[\operatorname{ExFaces}(E(\mathsf{H}))^{\operatorname{op}}\subseteq \operatorname{ExFaces}(C(\mathsf{H}))\subseteq\operatorname{Faces}(C(\mathsf{H}))\] that may be strict. If \(E(\mathsf{H})\) is a closed cone, the first inclusion is the identity. If \(E(\mathsf{H})\) is a polyhedral cone, all three face sets coincide; e.g. if \(\mathsf{H}\) is algebraic and \(\Lambda=K(\mathsf{H})\).

**Proposition 2.10**.: _The following are equivalent for a face \(\kappa\) of a heart cone \(C(\mathsf{H})\):_ 1. \(\kappa\) _is a dual face, i.e. in the image of_ \(\operatorname{ExFaces}(E(\mathsf{H}))^{\operatorname{op}}\hookrightarrow \operatorname{ExFaces}(C(\mathsf{H}))\)_;_ 2. \(\kappa=C(\mathsf{H}/\mathsf{S})\) _for a face subcategory_ \(\mathsf{S}\in\operatorname{Serre}_{\Lambda}(\mathsf{H})\)_;_ 3. \(\kappa=C(\mathsf{H})\cap\lambda(h)^{\perp}\) _for some_ \(h\in\mathsf{H}\)_;_ 4. \(\kappa=C(\mathsf{H})\cap e^{\perp}\) _for some_ \(e\in E(\mathsf{H})\)_._

Proof.: \((i)\Longrightarrow(ii)\) follows from Lemma 2.3 and \(\kappa=E(\mathsf{S})^{\vartriangle}=C(\mathsf{H})\cap E(\mathsf{S})^{\perp}=C (\mathsf{H}/\mathsf{S})\). \((ii)\Longrightarrow(iii)\) follows from Lemma B.4 as \(E(\mathsf{S})\) is a rational cone and \((iii)\Longrightarrow(iv)\) is trivial. For the remaining implication, let \(\kappa=C(\mathsf{H})\cap e^{\perp}\) for \(e\in E(\mathsf{H})\) with dual face \(\kappa^{\vartriangle}=\overline{E}(\mathsf{H})\cap\kappa^{\perp}\) in \(C(\mathsf{H})^{\vee}=\overline{E}(\mathsf{H})\). Clearly \(\tau:=\kappa^{\vartriangle}\cap E(\mathsf{H})\) is a face of \(E(\mathsf{H})\). Then on the one hand \(\kappa\subseteq\tau^{\vartriangle}\) from \(\kappa\subseteq\kappa^{\vartriangle\vartriangle}\subseteq\tau^{\vartriangle}\). On the other hand, \(\tau^{\vartriangle}\subseteq\kappa\) due to \(e\in\tau\), hence \(e^{\perp}\supseteq\tau^{\perp}\). Therefore \(\tau^{\vartriangle}=\kappa\) is in the image of \(\operatorname{ExFaces}(E(\mathsf{H}))^{\operatorname{op}}\hookrightarrow \operatorname{ExFaces}(C(\mathsf{H}))\).

## 3. The fan of a bounded heart

Let \(\mathsf{D}\) be a triangulated category and \(\lambda\colon K(\mathsf{D})\to\Lambda\) a surjective homomorphism onto a lattice. We think of \(\mathsf{D}\) as a category over \(\Lambda\) and sometimes write \(\mathsf{D}/\Lambda\). By a _heart \(\mathsf{H}\) in \(\mathsf{D}\)_, we always mean the heart of a bounded t-structure on \(\mathsf{D}\), although occasionally we write 'bounded heart' for emphasis. 
Since \(K(\mathsf{D})=K(\mathsf{H})\) canonically, \(\Lambda_{\mathbb{R}}=\Lambda\otimes\mathbb{R}\) and \(\Lambda_{\mathbb{R}}^{*}=\operatorname{Hom}(\Lambda,\mathbb{R})\) are compatible with the previous sections. There is a partial order on the set of hearts in which \(\mathsf{H}\leq\mathsf{H}[1]\) given by inclusion of co-aisles; see Appendix A.4. We will frequently use _Happel-Reiten-Smalø (HRS) tilting_ from homological algebra: given two hearts with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\), there is a unique torsion pair \((\mathsf{T},\mathsf{F})\) in \(\mathsf{H}\) such that \(\mathsf{K}=\mathsf{F}[1]*\mathsf{T}\); see Appendix A.5.

### The heart fan

Fix a bounded heart \(\mathsf{H}\subset\mathsf{D}\). Recall that a _fan_ in a vector space is a collection of cones such that (a) a face of a cone in the fan is in the fan and (b) the intersection of two cones in the fan is a common face of each; see Appendix B.4. In this section we show that the set of heart cones \(C(\mathsf{K})\) for \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) generates a fan in \(\Lambda_{\mathbb{R}}^{*}\), i.e. that the set of all faces of heart cones is a fan. We denote this fan by \(\Sigma^{\mathsf{n}}(\mathsf{H})\) and call it the _naive heart fan_. The construction of \(\Sigma^{\mathsf{n}}(\mathsf{H})\) requires \(\mathsf{H}\) to be a bounded heart inside a triangulated category \(\mathsf{D}\) because it utilises HRS tilting. _A posteriori_ we will see that \(\Sigma^{\mathsf{n}}(\mathsf{H})\) depends only on \(\mathsf{H}\) and not on \(\mathsf{D}\), which therefore may always be taken to be the bounded derived category. This justifies omitting \(\mathsf{D}\) from the notation; we usually also suppress the dependence on \(\lambda\colon K(\mathsf{D})\to\Lambda\). We deduce the existence of this naive heart fan from a subtly stronger result, namely that the set of _dual faces_ of the heart cones \(C(\mathsf{K})\) for \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) forms a _dual face fan_ in \(\Lambda_{\mathbb{R}}^{*}\). By this we mean that (a) the set of all dual faces of heart cones is closed under taking dual faces and (b) any two of these dual faces intersect in a common dual face; see Appendix B.5 for further details. We denote this dual face fan by \(\Sigma(\mathsf{H})\) and refer to it as the _heart fan_. As for the naive heart fan, the construction depends on the existence of an ambient triangulated category, but the result is independent of the particular choice. There are three reasons why we refer to the fan of all faces \(\Sigma^{\mathsf{n}}(\mathsf{H})\) as naive: 1. the notion of fan is more familiar, and more classical, than that of dual face fan; 2. \(\Sigma(\mathsf{H})\) being a dual face fan is stronger: it implies \(\Sigma^{\mathsf{n}}(\mathsf{H})\) is a fan; see Corollary B.7; 3. the cones in \(\Sigma(\mathsf{H})\) admit a categorical interpretation via face pairs -- see Proposition 3.4 -- but we are not aware of any such interpretation of the cones in \(\Sigma^{\mathsf{n}}(\mathsf{H})\). For these reasons we regard the heart fan \(\Sigma(\mathsf{H})\) as the more fundamental convex-geometric object, and focus on its properties in what follows. If all effective cones \(E(\mathsf{K})\) for \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) are polyhedral then \(\Sigma(\mathsf{H})=\Sigma^{\mathsf{n}}(\mathsf{H})\) because all faces are dual faces. However, non-polyhedral effective cones do occur, and \(\Sigma(\mathsf{H})\neq\Sigma^{\mathsf{n}}(\mathsf{H})\) may happen; see Example 3.17. 
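For the low-rank examples computed later, the fan axioms can be checked mechanically. The following sketch tests axiom (b) for pairs of simplicial cones in \(\mathbb{R}^{2}\); the cone data is copied from Example 3.13 below (the \(A_{2}\) quiver), and the reduction of the face test to counting shared boundary rays is an assumption valid only for strictly convex plane cones such as these.

```python
# A sketch of fan axiom (b) for the 2-dimensional simplicial cones
# of Example 3.13 below: the intersection of two maximal cones must
# be a common face, i.e. the origin or a single shared boundary ray.
from fractions import Fraction

def coords(v, g1, g2):
    # coefficients of v in the basis (g1, g2)
    det = g1[0] * g2[1] - g1[1] * g2[0]
    return (Fraction(v[0] * g2[1] - v[1] * g2[0], det),
            Fraction(g1[0] * v[1] - g1[1] * v[0], det))

def contains(cone, v):
    a, b = coords(v, *cone)
    return a >= 0 and b >= 0

cones = {                      # maximal cones of Sigma(kA_2)
    'H':    ((1, 0), (0, 1)),
    'K1':   ((-1, 1), (0, 1)),
    'K2':   ((-1, 0), (-1, 1)),
    'H[1]': ((-1, 0), (0, -1)),
    'Kss':  ((1, 0), (0, -1)),
}

for n1, c1 in cones.items():
    for n2, c2 in cones.items():
        if n1 >= n2:
            continue
        shared = {g for g in c1 if contains(c2, g)} | \
                 {g for g in c2 if contains(c1, g)}
        assert len(shared) <= 1, (n1, n2, shared)
print("pairwise intersections of the maximal cones are common faces")
```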
**Theorem 3.1**.: _Let \(\mathsf{H}\) be a bounded heart in a triangulated category \(\mathsf{D}\) over \(\Lambda\). Let \(\Sigma(\mathsf{H})\) be the set of all dual faces of heart cones \(C(\mathsf{K})\) for \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\). Then_ 1. \(\Sigma(\mathsf{H})\) _is a dual face fan in_ \(\Lambda_{\mathbb{R}}^{*}\)_;_ 2. \(\Sigma(\mathsf{H})\) _does not depend on the ambient triangulated category_ \(\mathsf{D}\) Proof.: By Proposition B.6, \(\Sigma(\mathsf{H})\) is a dual face fan if the intersection \(C(\mathsf{K})\cap C(\mathsf{K}^{\prime})\) of any two heart cones is a common dual face. Since \(\mathsf{H}\leq\mathsf{K},\mathsf{K}^{\prime}\leq\mathsf{H}[1]\), there are unique torsion pairs \((\mathsf{T},\mathsf{F})\) and \((\mathsf{T}^{\prime},\mathsf{F}^{\prime})\) in \(\mathsf{H}\) such that \(\mathsf{K}=\mathsf{F}[1]*\mathsf{T}\) and \(\mathsf{K}^{\prime}=\mathsf{F}^{\prime}[1]*\mathsf{T}^{\prime}\). Each \(t^{\prime}\in\mathsf{T}^{\prime}\) sits in a short exact sequence \(0\to t_{1}\to t^{\prime}\to f_{1}\to 0\) where \(t_{1}\in\mathsf{T}\) and \(f_{1}\in\mathsf{F}\cap\mathsf{T}^{\prime}\) because \(\mathsf{T}^{\prime}\) is closed under quotients, and similarly each \(f^{\prime}\in\mathsf{F}^{\prime}\) sits in a short exact sequence \(0\to t_{2}\to f^{\prime}\to f_{2}\to 0\) where \(t_{2}\in\mathsf{T}\cap\mathsf{F}^{\prime}\) and \(f_{2}\in\mathsf{F}\), using that \(\mathsf{F}^{\prime}\) is closed under subobjects. Then \[v\in C(\mathsf{K})\cap C(\mathsf{K}^{\prime}) \iff v|_{\mathsf{T}}\geq 0,v|_{\mathsf{F}}\leq 0\text{ and }v|_{\mathsf{T}^{\prime}}\geq 0,v|_{ \mathsf{F}^{\prime}}\leq 0\] \[\iff v\in C(\mathsf{K})\text{ and }v|_{\mathsf{F}\cap\mathsf{T}^{ \prime}}=0=v|_{\mathsf{T}\cap\mathsf{F}^{\prime}}\] \[\iff v\in C(\mathsf{K})\text{ and }v\in E^{\perp}\] where \(E\coloneqq E(\mathsf{T}\cap\mathsf{F}^{\prime},\mathsf{T}^{\prime}\cap \mathsf{F})\) is the cone in \(\Lambda_{\mathbb{R}}\) generated by the objects in \(\mathsf{T}\cap\mathsf{F}^{\prime}\) and \(\mathsf{T}^{\prime}\cap\mathsf{F}\). Hence \(C(\mathsf{K})\cap C(\mathsf{K}^{\prime})=C(\mathsf{K})\cap E^{\perp}\). This is a dual face of \(C(\mathsf{K})\) by Proposition B.3(1) and by symmetry it is also a dual face of \(C(\mathsf{K}^{\prime})\). For (2), note that the heart fan \(\Sigma(\mathsf{H})\) is determined by the heart cones. These have the form \(C(\mathsf{F}[1]*\mathsf{T})=\{v\in\Lambda_{\mathbb{R}}^{*}\mid v|_{\mathsf{T }}\geq 0\text{ and }v|_{\mathsf{F}}\leq 0\}\) for torsion pairs \(\mathsf{H}=\mathsf{T}*\mathsf{F}\). This subset of \(\Lambda_{\mathbb{R}}^{*}\) only depends on the torsion pair: the tilted hearts \(\mathsf{F}[1]*\mathsf{T}\) may depend on \(\mathsf{D}\) but their dual cones \(C(\mathsf{F}[1]*\mathsf{T})\) do not. Therefore \(\Sigma(\mathsf{H})\) is independent of \(\mathsf{D}\). **Corollary 3.2**.: _Let \(\mathsf{H}\) be a bounded heart in a triangulated category \(\mathsf{D}\) over \(\Lambda\). Then the set \(\Sigma^{\mathsf{n}}(\mathsf{H})\) of all faces of heart cones \(C(\mathsf{K})\) for \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) is a fan. It does not depend on the ambient triangulated category \(\mathsf{D}\)._ Proof.: This follows immediately from the fact that \(\Sigma(\mathsf{H})\) is a dual face fan, and is independent of the ambient triangulated category; see Corollary B.7. **Definition 3.3**.: Let \(\mathsf{H}\) be a heart in a triangulated category \(\mathsf{D}\). 
A _face pair_ \((\mathsf{K},\mathsf{S})\) of the heart fan \(\Sigma(\mathsf{H})\) consists of a heart \(\mathsf{K}\) in \(\mathsf{D}\) with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) and a face subcategory \(\mathsf{S}\in\operatorname{Serre}_{\Lambda}(\mathsf{K})\). The dual faces of the heart cone \(C(\mathsf{K})\) are parametrised by the set of face subcategories \(\operatorname{Serre}_{\Lambda}(\mathsf{K})\), and each dual face in \(\Sigma(\mathsf{H})\) is of the form \(C(\mathsf{K}/\mathsf{S})\) for a face pair \((\mathsf{K},\mathsf{S})\). Equality of dual faces in \(\Sigma(\mathsf{H})\) defines an equivalence relation on face pairs that we describe categorically:

**Proposition 3.4**.: _Fix a bounded heart \(\mathsf{H}\) in \(\mathsf{D}\). Let \((\mathsf{K},\mathsf{S})\) and \((\mathsf{K}^{\prime},\mathsf{S}^{\prime})\) be face pairs where \(\mathsf{H}\leq\mathsf{K},\mathsf{K}^{\prime}\leq\mathsf{H}[1]\). Then the following are equivalent:_ 1. \(C(\mathsf{K}/\mathsf{S})=C(\mathsf{K}^{\prime}/\mathsf{S}^{\prime})\) _in_ \(\Lambda_{\mathbb{R}}^{*}\)__ 2. \(\mathsf{thick}_{\mathsf{D}}(\mathsf{S})=\mathsf{thick}_{\mathsf{D}}(\mathsf{S }^{\prime})=:\mathsf{N}\) _and_ \(\mathsf{K}/\mathsf{S}=\mathsf{K}^{\prime}/\mathsf{S}^{\prime}\) _as hearts in_ \(\mathsf{D}/\mathsf{N}\)_._

Proof.: Suppose \(C(\mathsf{K}/\mathsf{S})=C(\mathsf{K}^{\prime}/\mathsf{S}^{\prime})\) in \(\Lambda_{\mathbb{R}}^{*}\). Let \(\mathsf{K}=\mathsf{F}[1]*\mathsf{T}\) and \(\mathsf{K}^{\prime}=\mathsf{F}^{\prime}[1]*\mathsf{T}^{\prime}\) for torsion pairs \(\mathsf{T}*\mathsf{F}=\mathsf{H}=\mathsf{T}^{\prime}*\mathsf{F}^{\prime}\). The common dual face \(C(\mathsf{K}/\mathsf{S})=C(\mathsf{K}^{\prime}/\mathsf{S}^{\prime})\) is contained in the intersection \(C(\mathsf{K})\cap C(\mathsf{K}^{\prime})\). This intersection is a dual face \(C(\mathsf{K}/\mathsf{S}^{\prime\prime})\) of \(C(\mathsf{K})\) for some face subcategory \(\mathsf{S}^{\prime\prime}\) of \(\mathsf{K}\). Recalling from the proof of Theorem 3.1 that \(C(\mathsf{K})\cap C(\mathsf{K}^{\prime})=C(\mathsf{K})\cap E^{\perp}\) where \(E\coloneqq E(\mathsf{F}^{\prime}\cap\mathsf{T},\mathsf{T}^{\prime}\cap\mathsf{F})\), we deduce \(\mathsf{F}^{\prime}\cap\mathsf{T},(\mathsf{T}^{\prime}\cap\mathsf{F})[1]\subseteq \mathsf{S}^{\prime\prime}\subseteq\mathsf{S}\). Similarly, \(\mathsf{T}^{\prime}\cap\mathsf{F},(\mathsf{F}^{\prime}\cap\mathsf{T})[1] \subseteq\mathsf{S}^{\prime}\). For any \(k\in\mathsf{K}\), there is a commutative diagram (1) whose rows and columns are triangles arising from the above torsion pairs: the row is the short exact sequence \(0\to f[1]\to k\to t\to 0\) in \(\mathsf{K}\), and the columns are the short exact sequences \(0\to t_{0}^{\prime}[1]\to f[1]\to f_{0}^{\prime}[1]\to 0\) in \(\mathsf{H}[1]\) and \(0\to t_{1}^{\prime}\to t\to f_{1}^{\prime}\to 0\) in \(\mathsf{H}\) respectively. We have \(t_{0}^{\prime}\in\mathsf{T}^{\prime}\cap\mathsf{F}\subseteq\mathsf{S}^{\prime}\) since \(f\in\mathsf{F}\) which is closed under subobjects; dually \(f_{1}^{\prime}\in\mathsf{T}\cap\mathsf{F}^{\prime}\subseteq\mathsf{S}^{\prime}[-1]\) as \(t\in\mathsf{T}\) which is closed under quotients. In particular \(v(t_{0}^{\prime})=0=v(f_{1}^{\prime})\) for all \(v\in C(\mathsf{K}^{\prime}/\mathsf{S}^{\prime})\). Now let \(k\in\mathsf{S}\) and \(v\in C(\mathsf{K}/\mathsf{S})\), i.e. \(v|_{\mathsf{S}}=0\). From the row of (1), we get \(v(t)=0=v(f)\). From the columns of (1) and \(C(\mathsf{K}/\mathsf{S})=C(\mathsf{K}^{\prime}/\mathsf{S}^{\prime})\), we get \(v(f_{0}^{\prime})=0=v(t_{1}^{\prime})\). 
Again using \(C(\mathsf{K}/\mathsf{S})=C(\mathsf{K}^{\prime}/\mathsf{S}^{\prime})\), we conclude \(f_{0}^{\prime}[1],t_{1}^{\prime}\in\mathsf{S}^{\prime}\), and so \(k\in\mathsf{thick}_{\mathsf{D}}(\mathsf{S}^{\prime})\). This shows the inclusion \(\mathsf{thick}_{\mathsf{D}}(\mathsf{S})\subseteq\mathsf{thick}_{\mathsf{D}}( \mathsf{S}^{\prime})\) and by symmetry they are equal; let \(\mathsf{N}\) be the common thick subcategory. Considering the diagram (1) one last time, we see \(k\in\mathsf{F}^{\prime}[1]*\mathsf{T}^{\prime}\) in the quotient \(\mathsf{D}/\mathsf{N}\). Therefore \(\mathsf{K}/\mathsf{S}\subseteq\mathsf{K}^{\prime}/\mathsf{S}^{\prime}\) are nested hearts in \(\mathsf{D}/\mathsf{N}\) and hence equal. This completes the forward implication. The converse is clear.

**Remark 3.5**.: Distinct hearts may have the same heart cone. An infinite family occurs in Example 1.8 where the mixed geometric hearts are obtained by tilting \(\mathsf{coh}(\mathbb{P}^{1})\). However, if \(0\) is a face subcategory of \(\mathsf{K}\) -- for example if \(\mathsf{K}\) is a length heart and \(\Lambda=K(\mathsf{K})\) -- then the previous result shows \(C(\mathsf{K})\) is not contained in \(C(\mathsf{K}^{\prime})\) for any other heart \(\mathsf{K}^{\prime}\). Therefore, if \(C(\mathsf{K})=C(\mathsf{K}^{\prime})\) for two length hearts \(\mathsf{H}\leq\mathsf{K},\mathsf{K}^{\prime}\leq\mathsf{H}[1]\) and \(\Lambda=K(\mathsf{H})\) then \(\mathsf{K}=\mathsf{K}^{\prime}\).

**Remark 3.6** (change of lattice).: Let \(\mathsf{H}\) be an abelian category and \(\Lambda\), \(\Gamma\) two lattices with surjective homomorphisms \(K(\mathsf{H})\to\Lambda\to\Gamma\). The heart cones \(C(\mathsf{H}/\Lambda)\subseteq\Lambda_{\mathbb{R}}^{*}\) and \(C(\mathsf{H}/\Gamma)\subseteq\Gamma_{\mathbb{R}}^{*}\) are related by \(C(\mathsf{H}/\Gamma)=C(\mathsf{H}/\Lambda)\cap\Gamma_{\mathbb{R}}^{*}\) via the canonical inclusion \(\Gamma_{\mathbb{R}}^{*}\hookrightarrow\Lambda_{\mathbb{R}}^{*}\). Hence the heart fan \(\Sigma(\mathsf{H}/\Gamma)\) in \(\Gamma_{\mathbb{R}}^{*}\) is obtained by intersecting all cones in \(\Sigma(\mathsf{H}/\Lambda)\) with \(\Gamma_{\mathbb{R}}^{*}\). See Example 3.21.

**Remark 3.7** (functoriality).: Let \(\mathsf{H}/\Lambda\) and \(\mathsf{H}^{\prime}/\Lambda^{\prime}\) be abelian categories over lattices and let \(\Phi\colon\mathsf{H}\xrightarrow{\sim}\mathsf{H}^{\prime}\) be an exact equivalence such that \(K(\Phi)\colon K(\mathsf{H})\to K(\mathsf{H}^{\prime})\) is compatible with the structure maps \(\lambda\colon K(\mathsf{H})\to\Lambda\) and \(\lambda^{\prime}\colon K(\mathsf{H}^{\prime})\to\Lambda^{\prime}\), i.e. if \(\lambda(h_{1})=\lambda(h_{2})\) for some \(h_{1},h_{2}\in\mathsf{H}\) then \([\Phi(h_{1})]=[\Phi(h_{2})]\in K(\mathsf{H}^{\prime})\); this is trivially met if \(\Lambda=K(\mathsf{H})\) and \(\Lambda^{\prime}=K(\mathsf{H}^{\prime})\). Then \(K(\Phi)\) induces a homomorphism \(\varphi\colon\Lambda\to\Lambda^{\prime}\). This map is surjective but need not be injective -- for example, the previous remark's change of lattice is part of this setup, with \(\Phi=\operatorname{id}\colon\mathsf{H}\to\mathsf{H}\) and \(\varphi\colon\Lambda\to\Gamma\) the given surjection of lattices. We consider the heart fans \(\Sigma(\mathsf{H}/\Lambda)\) and \(\Sigma(\mathsf{H}^{\prime}/\Lambda^{\prime})\) via the ambient categories \(\mathsf{D}\coloneqq\mathsf{D}^{b}(\mathsf{H})\) and \(\mathsf{D}^{\prime}\coloneqq\mathsf{D}^{b}(\mathsf{H}^{\prime})\), a choice justified by Theorem 3.1(2). 
The linear map \(\varphi_{\mathbb{R}}^{*}\colon\Lambda_{\mathbb{R}}^{\prime\,*}\hookrightarrow\Lambda_{\mathbb{R}}^{*}\) induces a map of fans \(\Sigma(\Phi)\colon\Sigma(\mathsf{H}^{\prime}/\Lambda^{\prime})\to\Sigma( \mathsf{H}/\Lambda)\) -- this means that for each \(\sigma^{\prime}\in\Sigma(\mathsf{H}^{\prime}/\Lambda^{\prime})\) there is \(\sigma\in\Sigma(\mathsf{H}/\Lambda)\) such that \(\varphi_{\mathbb{R}}^{*}(\sigma^{\prime})\subseteq\sigma\). We show this for a maximal cone \(\sigma^{\prime}=C(\mathsf{K}^{\prime})\), where \(\mathsf{H}^{\prime}\leq\mathsf{K}^{\prime}\leq\mathsf{H}^{\prime}[1]\). Then \(\mathsf{K}^{\prime}=\mathsf{F}^{\prime}[1]*\mathsf{T}^{\prime}\) for a torsion pair on \(\mathsf{H}^{\prime}\), and \(\mathsf{K}\coloneqq\Phi^{-1}(\mathsf{F}^{\prime})[1]*\Phi^{-1}(\mathsf{T}^{ \prime})\) is a heart in \(\mathsf{D}\) with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\). Unwinding the definitions shows \(\varphi_{\mathbb{R}}^{*}(C(\mathsf{K}^{\prime}/\Lambda^{\prime}))\subseteq C( \mathsf{K}/\Lambda)\). In the special case \(\mathsf{H}=\mathsf{H}^{\prime}\) and \(\Lambda=\Lambda^{\prime}\), the induced map \(\varphi\colon\Lambda\to\Lambda\) is an automorphism, and we get a right action of the group \(\operatorname{Aut}_{\Lambda}(\mathsf{H})\) of exact autoequivalences compatible with the structure map on the heart fan \(\Sigma(\mathsf{H}/\Lambda)\). See Examples 3.15 and 3.24.

### Properties of the heart fan

We discuss when heart fans are finite (the number of cones in the fan is finite) or complete (the union of the cones is the entire vector space). Our main results are summarised in the next theorem; the subsequent corollary applies them to representations of finite-dimensional algebras. The proof of the theorem relies on Lemmas 3.10 and 3.11 which we state and prove subsequently. Recall \(\mathsf{N}_{\mathsf{H}}=\{h\in\mathsf{H}\mid 0=[h]\in K(\mathsf{H})\}\).

**Theorem 3.8**.: _Let \(\mathsf{D}\) be a triangulated category over \(\Lambda\) and \(\mathsf{H}\) a bounded heart in \(\mathsf{D}\)._ 1. _If_ \(\mathsf{H}\) _is a length category then the heart fan_ \(\Sigma(\mathsf{H})\) _is complete._ 2. _If_ \(\mathsf{N}_{\mathsf{H}}=0\) _and_ \(\mathsf{D}\) _is a lattice category then the following conditions are equivalent:_ 1. _The heart fan_ \(\Sigma(\mathsf{H})\) _is finite and complete._ 2. \(\mathsf{H}\) _is length with finitely many torsion pairs._ _When these conditions hold, the heart fan is simplicial._

Proof of Theorem 3.8.: (1) This is precisely the content of Lemma 3.10 below: given \(v\in\Lambda_{\mathbb{R}}^{*}\), we are looking for a heart \(\mathsf{K}\) such that \(v\in C(\mathsf{K})\). The lemma constructs a torsion pair \((\mathsf{T},\mathsf{F})\) on \(\mathsf{H}\) such that \(\mathsf{K}=\mathsf{F}[1]*\mathsf{T}\) and \(v|_{\mathsf{T}}\geq 0\), \(v|_{\mathsf{F}}\leq 0\), i.e. \(v\in C(\mathsf{K})\). (2) \((i)\Longrightarrow(ii)\): As \(\Lambda_{\mathbb{R}}^{*}\) is covered by a finite number of cones, all maximal cones \(C(\mathsf{K})\) are full. Proposition 1.10 states that the abelian quotient categories \(\mathsf{K}/\mathsf{N}_{\mathsf{K}}\) are algebraic. In particular, this applies to \(C(\mathsf{H})\) where we in addition get \(\mathsf{H}\) algebraic because \(\mathsf{N}_{\mathsf{H}}=0\). Then \(\mathsf{N}_{\mathsf{K}}=0\) by Lemma 3.11. Thus by Remark 3.5, the finitely many maximal cones correspond to finitely many hearts \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\), hence there are only finitely many torsion pairs in \(\mathsf{H}\). 
\((ii)\Longrightarrow(i)\): The heart fan \(\Sigma(\mathsf{H})\) is complete by part (1). With \(\mathsf{H}\) only having finitely many torsion pairs, the set of maximal cones in \(\Sigma(\mathsf{H})\) is finite, hence the fan itself is finite. Finally, Proposition 1.10 implies that the fan is simplicial when these conditions hold.

**Corollary 3.9**.: _Let \(A\) be a finite-dimensional algebra, \(\mathsf{H}=\mathsf{mod}(A)\) and \(\mathsf{D}=\mathsf{D}^{b}(\mathsf{H})\) with \(\Lambda=K(A)\). Then the following conditions are equivalent:_ 1. _The heart fan_ \(\Sigma(\mathsf{H})\) _is finite._ 2. _The algebra_ \(A\) _is_ \(\tau\)_-tilting finite._

Proof of the corollary.: This result is classically phrased in terms of the \(g\)-fan; see Section 4. \(K(A)=K(\mathsf{H})\) is a lattice and \(\mathsf{N}_{\mathsf{H}}=0\). The fan \(\Sigma(\mathsf{H})\) is complete by Theorem 3.8(1). For a module category \(\mathsf{H}=\mathsf{mod}(A)\), the bijection between (bounded) hearts \(\mathsf{K}\) with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) and torsion pairs \((\mathsf{T},\mathsf{F})\) in \(\mathsf{H}\) restricts to a bijection between algebraic hearts \(\mathsf{K}\) with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) and functorially finite torsion pairs \((\mathsf{T},\mathsf{F})\) in \(\mathsf{H}\) by [21, proof of Prop. 3.1]. By [13, Thm. 1.2], the algebra \(A\) is \(\tau\)-tilting finite if and only if all torsion pairs in \(\mathsf{H}=\mathsf{mod}(A)\) are functorially finite. These facts imply the claim: \(A\) is \(\tau\)-tilting finite \(\iff\) all hearts \(\mathsf{K}\) are algebraic \(\iff\) \(\Sigma(\mathsf{H})\) is covered by full cones \(\iff\) \(\Sigma(\mathsf{H})\) is finite, using Theorem 3.8(2) in the last equivalence.

**Lemma 3.10**.: _Let \(\mathsf{H}\) be a length heart in \(\mathsf{D}\) and let \(v\in\Lambda_{\mathbb{R}}^{*}\). Then the following full subcategories of \(\mathsf{H}\) define a torsion pair in \(\mathsf{H}\):_ \[\mathsf{T}_{v} =\{h\in\mathsf{H}\mid v(h^{\prime\prime})\geq 0\text{ for all quotients }h\twoheadrightarrow h^{\prime\prime}\},\] \[\mathsf{F}_{v} =\{h\in\mathsf{H}\mid v(h^{\prime})<0\text{ for all non-zero subobjects }0\neq h^{\prime}\hookrightarrow h\}.\]

Proof.: First, by construction \(\mathsf{F}_{v}\) is closed under subobjects and \(\mathsf{T}_{v}\) is closed under quotients. Second, both subcategories are closed under extensions. We show this for \(\mathsf{F}_{v}\): let \(f_{1},f_{2}\in\mathsf{F}_{v}\) and consider an extension \(0\to f_{1}\to h\to f_{2}\to 0\) in \(\mathsf{H}\). If \(h^{\prime}\to h\) is a non-zero inclusion then pulling back the exact sequence gives rise to a commutative diagram with monic vertical arrows, comparing a row \(0\to f_{1}^{\prime}\to h^{\prime}\to f_{2}^{\prime}\to 0\) with the row \(0\to f_{1}\to h\to f_{2}\to 0\), where \(f_{1}^{\prime}=f_{1}\cap h^{\prime}\) and \(f_{2}^{\prime}\) is the image of \(h^{\prime}\) in \(f_{2}\). Then \(v(h^{\prime})=v(f^{\prime}_{1})+v(f^{\prime}_{2})<0\) from \(v(f^{\prime}_{1})\leq 0\) and \(v(f^{\prime}_{2})\leq 0\) with at least one of the two inequalities strict; hence \(h\in\mathsf{F}_{v}\). The proof for \(\mathsf{T}_{v}\) is dual. Third, the vanishing \(\operatorname{Hom}(\mathsf{T}_{v},\mathsf{F}_{v})=0\): given such a morphism \(\alpha\colon t\to f\), consider its factorisation \(t\to\operatorname{im}(\alpha)\to f\). The image is a quotient of \(t\) as well as a subobject of \(f\), so \(\operatorname{im}(\alpha)\in\mathsf{T}_{v}\cap\mathsf{F}_{v}\). This gives a contradiction \(0\leq v(\operatorname{im}(\alpha))<0\), unless \(\operatorname{im}(\alpha)=0\), i.e. \(\alpha=0\). Finally, we have to show \(\mathsf{H}=\mathsf{T}_{v}*\mathsf{F}_{v}\). This is the (only) place where we need \(\mathsf{H}\) to be length. 
If \(s\in\mathsf{H}\) is a simple object then either \(s\in\mathsf{T}_{v}\) (if \(v(s)\geq 0\)) or \(s\in\mathsf{F}_{v}\) (if \(v(s)<0\)). Let \(h\in\mathsf{H}\) be an arbitrary object. We want to exhibit a short exact sequence \(0\to t\to h\to f\to 0\) with \(t\in\mathsf{T}_{v}\) and \(f\in\mathsf{F}_{v}\). The proof goes by induction over the composition length of \(h\); with length one, i.e. \(h\) simple, already dealt with. Let now \(h\in\mathsf{H}\) be any object and assume that there is a non-zero subobject \(t\hookrightarrow h\) with \(t\in\mathsf{T}_{v}\). Let \(q=h/t\) be the quotient. Then by induction, \(q\) has a decomposition \(0\to t^{\prime\prime}\to q\to f^{\prime\prime}\to 0\) with \(t^{\prime\prime}\in\mathsf{T}_{v}\) and \(f^{\prime\prime}\in\mathsf{F}_{v}\). Consider the pull-back \(p\) of \(h\) and \(t^{\prime\prime}\) over \(q\), leading to a commutative diagram whose columns are short exact sequences; in particular \(p\) is an extension of \(t^{\prime\prime}\) by \(t\), and the middle column is \(0\to p\to h\to f^{\prime\prime}\to 0\). Then \(p\in\mathsf{T}_{v}\) because \(\mathsf{T}_{v}\) is closed under extensions, and the middle column is the sought-after decomposition of \(h\). Now assume that \(h\) has no non-zero subobjects from \(\mathsf{T}_{v}\). We then need to show that \(h\in\mathsf{F}_{v}\). For this, let \(h^{\prime}\hookrightarrow h\) be any non-zero subobject. Again by induction, \(h^{\prime}\) has a decomposition \(0\to t^{\prime}\to h^{\prime}\to f^{\prime}\to 0\) with \(t^{\prime}\in\mathsf{T}_{v}\) and \(f^{\prime}\in\mathsf{F}_{v}\). Then \(t^{\prime}\hookrightarrow h\), hence \(t^{\prime}=0\) and \(h^{\prime}=f^{\prime}\). Therefore \(v(h^{\prime})=v(f^{\prime})<0\), as required.

The following technical result was used in the proof of Theorem 3.8 and will be used again when we compare heart fans with \(g\)-fans in the next section. The lemma below applies, in particular, if \(\mathsf{H}\) is an algebraic category, for then \(K(\mathsf{D})=K(\mathsf{H})\) is a lattice. The full-dimensionality of \(C(\mathsf{K})\) is crucial: if \(\mathsf{H}\) is the module category of the Kronecker quiver, then one possible tilt is a mixed geometric heart \(\mathsf{K}=\mathsf{K}_{U}\) of Example 1.8 (with \(\varnothing\neq U\neq\mathbb{P}^{1}\)) which has \(\mathsf{N}_{\mathsf{K}}\neq 0\) but \(C(\mathsf{K})\) is not full.

**Lemma 3.11**.: _Let \(\mathsf{H}\) and \(\mathsf{K}\) be hearts in \(\mathsf{D}\) with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\). Assume that \(K(\mathsf{D})\) is a lattice. If \(C(\mathsf{K})\) is full and \(\mathsf{N}_{\mathsf{H}}=0\) then \(\mathsf{N}_{\mathsf{K}}=0\)._

Proof.: Let \((\mathsf{T},\mathsf{F})\) be the torsion pair in \(\mathsf{H}=\mathsf{T}*\mathsf{F}\) such that \(\mathsf{K}=\mathsf{F}[1]*\mathsf{T}\) is its tilt. Assume \(\mathsf{N}_{\mathsf{K}}\neq 0\), i.e. there is \(0\neq k\in\mathsf{K}\) with \(0=[k]\in K(\mathsf{K})\). The decomposition sequence of \(k\in\mathsf{K}\) with respect to the torsion pair \((\mathsf{F}[1],\mathsf{T})\) is \(0\to f[1]\to k\to t\to 0\), where \(f\in\mathsf{F}\) and \(t\in\mathsf{T}\). We have \([f]\neq 0\) and \([t]\neq 0\) in \(K(\mathsf{H})=K(\mathsf{K})\) by \(\mathsf{N}_{\mathsf{H}}=0\). Then the effective cone \(E(\mathsf{K})\) contains the opposite generators \([f[1]]=[k]-[t]=-[t]\) and \([t]\), so is not strictly convex, contradicting Proposition 1.10 as \(C(\mathsf{K})\) was assumed to be full.

### Examples of heart fans

We compute and draw a few fans for triangulated categories with lattice \(\Lambda\) of rank less than or equal to \(2\). Except where indicated otherwise \(\Lambda\) is the Grothendieck group. The pictures are collected in Figure 2. 
Following [1], vectors in \(V=\mathbb{R}^{2}\) are written \((x,y)\) and vectors in \(V^{*}\) as \([a,b]\). Our examples come from algebras or varieties. We employ the standard short-hands: we write \(\Sigma(A)\coloneqq\Sigma(\mathsf{mod}(A)/K(A))\) for the heart fan of modules over a finite-dimensional algebra \(A\). Similarly, we write \(\Sigma(X)\coloneqq\Sigma(\mathsf{coh}(X)/N(X))\) for the heart fan of coherent sheaves on a smooth projective variety \(X\) where the lattice \(\Lambda=N(X)\) is the numerical Grothendieck group.

**Example 3.12** (\(\Lambda\) of rank 1).: Two of the four pointed fans on \(\mathbb{R}\) occur as heart fans. If \(\mathsf{H}\neq 0\) and \(\mathsf{N}_{\mathsf{H}}=0\) then \(E(\mathsf{H})=\mathbb{R}_{\geq 0}\) is a ray, hence the dual cone \(C(\mathsf{H})=\mathbb{R}_{\geq 0}\) also. Here \(\Sigma(\mathsf{H})=\{\mathbb{R}_{\leq 0},\{0\},\mathbb{R}_{\geq 0}\}\) is the complete pointed fan on \(\mathbb{R}\). This applies to \(\mathsf{H}=\mathsf{mod}(\mathsf{k})\). The other possibility is \(E(\mathsf{H})=\mathbb{R}\); then \(C(\mathsf{H})=\{0\}\) and \(\Sigma(\mathsf{H})\) consists only of \(\{0\}\). This happens for \(\mathsf{H}=\mathsf{coh}^{\ltimes}(\mathbb{P}^{1})\) of Example 1.8 with \(\lambda\colon K(\mathsf{H})\to\Lambda=\mathbb{Z},\lambda(\mathcal{O})=0, \lambda(\mathcal{O}_{p})\neq 0\).

**Example 3.13** (\(A_{2}\) quiver).: Let \(\mathsf{H}=\mathsf{mod}(\mathsf{k}A_{2})\) be the category of representations of the quiver \(2\longrightarrow 1\). Denote the simple modules by \(S_{1}\) and \(S_{2}\), and let \(0\to S_{1}\to E\to S_{2}\to 0\) be the non-trivial extension. Fix \([S_{1}],[S_{2}]\) as orthonormal basis for \(\Lambda=K(\mathsf{H})\cong\mathbb{Z}^{2}\), so that \([E]\) corresponds to the vector \((1,1)\). There are five hearts between \(\mathsf{H}\) and \(\mathsf{H}[1]\), all of which are algebraic, and accordingly five \(2\)-dimensional cones in \(\Sigma(\mathsf{k}A_{2})\). The hearts are listed below, together with their simple objects, the torsion pair which tilts from \(\mathsf{H}\) towards the heart, and the cone generators. They are of \(A_{2}\) type, i.e. equivalent to \(\mathsf{H}\), except for \(\mathsf{K}_{\mathsf{ss}}\) which is semisimple. \begin{tabular}{l l l l l l} heart \(\mathsf{K}\) & simple objects & \(\mathsf{T}\) & \(\mathsf{F}\) & \(E(\mathsf{K})\) & \(C(\mathsf{K})\) \\ \hline \(\mathsf{H}\) & \(S_{1}\), \(S_{2}\) & \(\mathsf{H}\) & \(0\) & \((1,0)\), \((0,1)\) & \([1,0]\), \([0,1]\) \\ \(\mathsf{K}_{1}\) & \(E\), \(S_{1}[1]\) & \(\langle E,S_{2}\rangle\) & \(\langle S_{1}\rangle\) & \((1,1)\), \((-1,0)\) & \([-1,1]\), \([0,1]\) \\ \(\mathsf{K}_{2}\) & \(S_{2}\), \(E[1]\) & \(\langle S_{2}\rangle\) & \(\langle S_{1},E\rangle\) & \((0,1)\), \((-1,-1)\) & \([-1,0]\), \([-1,1]\) \\ \(\mathsf{H}[1]\) & \(S_{1}[1]\), \(S_{2}[1]\) & \(0\) & \(\mathsf{H}\) & \((-1,0)\), \((0,-1)\) & \([-1,0]\), \([0,-1]\) \\ \(\mathsf{K}_{\mathsf{ss}}\) & \(S_{1}\), \(S_{2}[1]\) & \(\langle S_{1}\rangle\) & \(\langle S_{2}\rangle\) & \((1,0)\), \((0,-1)\) & \([1,0]\), \([0,-1]\) \\ \end{tabular}

**Example 3.14** (\(2\)-dimensional semisimple algebra).: Let \(\mathsf{H}=\mathsf{mod}(\mathsf{k}^{2})\) be a semisimple category with two simple objects. Similarly to the previous example, but even easier, one can see that the heart fan \(\Sigma(\mathsf{k}^{2})\) has four maximal cones, all of which are simplicial.

**Example 3.15** (derived-discrete algebras).: Consider the bound path algebra of the quiver \(2\rightleftarrows 1\) with zero relations at each vertex. 
This algebra is known as the _derived-discrete algebra_ \(\Lambda(2,2,0)\); see [6]. It is representation-finite. More precisely, the Auslander-Reiten quiver of the module category is determined by the two exact sequences \(0\to S_{1}\to E\to S_{2}\to 0\) and \(0\to S_{2}\to F\to S_{1}\to 0\) where \(S_{1},S_{2}\) are the simple modules. \begin{tabular}{c c c c c c} heart \(\mathsf{K}\) & simple objects & \(\mathsf{T}\) & \(\mathsf{F}\) & \(E(\mathsf{K})\) & \(C(\mathsf{K})\) \\ \hline \(\mathsf{H}\) & \(S_{1}\), \(S_{2}\) & \(\mathsf{H}\) & \(0\) & \((1,0)\), \((0,1)\) & \([1,0]\), \([0,1]\) \\ \(\mathsf{K}_{1}\) & \(E\), \(S_{1}[1]\) & \(\langle E,S_{2}\rangle\) & \(\langle S_{1}\rangle\) & \((1,1)\), \((-1,0)\) & \([-1,1]\), \([0,1]\) \\ \(\mathsf{K}_{2}\) & \(S_{2}\), \(E[1]\) & \(\langle S_{2}\rangle\) & \(\langle S_{1},E\rangle\) & \((0,1)\), \((-1,-1)\) & \([-1,0]\), \([-1,1]\) \\ \(\mathsf{K}_{3}\) & \(S_{1}\), \(F[1]\) & \(\langle S_{1}\rangle\) & \(\langle S_{2},F\rangle\) & \((1,0)\), \((-1,-1)\) & \([0,-1]\), \([1,-1]\) \\ \(\mathsf{K}_{4}\) & \(F\), \(S_{2}[1]\) & \(\langle S_{1},F\rangle\) & \(\langle S_{2}\rangle\) & \((1,1)\), \((0,-1)\) & \([1,-1]\), \([1,0]\) \\ \(\mathsf{H}[1]\) & \(S_{1}[1]\), \(S_{2}[1]\) & \(0\) & \(\mathsf{H}\) & \((-1,0)\), \((0,-1)\) & \([-1,0]\), \([0,-1]\) \\ \end{tabular} We remark that the Auslander-Reiten quivers of the following two abelian categories truncate to the one explained for \(\Lambda(2,2,0)\) above and, in particular, these produce the same fan with six maximal cones: First, a homogeneous tube of rank two. This is an algebraic category but not the module category of a finite-dimensional algebra. Second, the derived-discrete algebra \(\Lambda(1,2,0)\) with a single zero relation. Unlike \(\Lambda(2,2,0)\), it is of finite global dimension. The identical heart fans \(\Sigma(\Lambda(2,2,0))=\Sigma(\Lambda(1,2,0))\) behave differently equivariantly: the algebra \(\Lambda(2,2,0)\) has an automorphism \(\alpha\) swapping the two simples which gives rise to an exact autoequivalence \(\alpha_{*}\in\operatorname{Aut}_{\Lambda}(\mathsf{H})\). By Remark 3.7, \(\alpha_{*}\) induces a non-trivial involution of the fan \(\Sigma(\alpha_{*})\) which fixes \(\pm C(\mathsf{H})\) and swaps \(C(\mathsf{K}_{1})\mapsto C(\mathsf{K}_{4})\) and \(C(\mathsf{K}_{2})\mapsto C(\mathsf{K}_{3})\). In contrast, the same fan automorphism of \(\Sigma(\Lambda(1,2,0))\) is not induced by an exact autoequivalence of \(\mathsf{mod}(\Lambda(1,2,0))\).

**Remark 3.16**.: The previous three examples show complete fans in \(\mathbb{R}^{2}\) with \(5\), \(4\) and \(6\) full cones, respectively; see Figure 2. According to [2, Thm. 4.13], each finite, complete and sign-coherent fan in \(\mathbb{R}^{2}\) can be realised as the \(g\)-fan of a \(\tau\)-tilting finite algebra; see Section 4 for details on the relationship between \(g\)-fans and heart fans. In particular, this means that any finite, complete and sign-coherent fan of rank \(2\) occurs as the heart fan of some finite-dimensional algebra of rank \(2\). A fan in \(\mathbb{R}^{2}\) is _sign-coherent_ if there is a basis of \(\mathbb{Z}^{2}\) such that each cone of the fan lies in a quadrant determined by the basis.

**Example 3.17** (infinite-dimensional semisimple algebra).: We consider the semisimple category \(\mathsf{H}=\mathsf{mod}(\mathbf{k}^{\mathbb{Z}})\) with countably many simple objects \(S_{i}\) for \(i\in\mathbb{Z}\). 
Let \(\Lambda=\mathbb{Z}^{2}\) and define the homomorphism \(\lambda\colon K(\mathsf{H})\to\Lambda\) by \(\lambda(S_{i})\coloneqq(i+1,1)\) and \(\lambda(S_{-i})\coloneqq(1,1+i)\) for \(i\in\mathbb{N}\). Subsets \(I\subseteq\mathbb{Z}\) are in bijection with torsion subcategories in \(\mathsf{H}\); the latter are \(\langle S_{i}\mid i\in I\rangle\). The corresponding tilt \(\mathsf{K}_{I}\) of \(\mathsf{H}\) is also semisimple with simple objects \(S_{i}\) for \(i\in I\) and \(S_{i}[1]\) for \(i\notin I\). It follows that \(C(\mathsf{K}_{\mathbb{Z}\setminus I})=-C(\mathsf{K}_{I})\). For example \(C(\mathsf{K}_{\mathbb{Z}})=C(\mathsf{H})\) is the first quadrant and \(C(\mathsf{K}_{\varnothing})=C(\mathsf{H}[1])\) is the third quadrant. The only other non-zero cones are those of the hearts \(\mathsf{K}_{\leq n}\coloneqq\mathsf{K}_{I}\) where \(I=\{i\in\mathbb{Z}\mid i\leq n\}\) and \(\mathsf{K}_{>n}\coloneqq\mathsf{K}_{I}\) where \(I=\{i\in\mathbb{Z}\mid i>n\}\). Here \(C(\mathsf{K}_{\leq n})\) is generated by \([n,1]\), \([n-1,1]\) if \(n<0\) and by \([-1,n+1]\), \([-1,n+2]\) if \(n\geq 0\), and \(C(\mathsf{K}_{>n})=-C(\mathsf{K}_{\leq n})\). In all other cases \(C(\mathsf{K}_{I})=0\); this does not contradict Proposition 1.10 because \(\Lambda\neq K(\mathsf{K}_{I})\). The effective cone \(E(\mathsf{H})=\{x,y>0\}\cup\{0\}\) is not polyhedral, and our two heart fans differ: \(\Sigma(\mathsf{H})\neq\Sigma^{\mathsf{n}}(\mathsf{H})\). The naive heart fan \(\Sigma^{\mathsf{n}}(\mathsf{H})\) contains four extra cones, namely the rays through \([\pm 1,0]\) and \([0,\pm 1]\), which are faces of \(C(\mathsf{H})\) or \(C(\mathsf{H}[1])\) but not dual faces; see Example B.2. The four rays are shown in red in Figure 2.

**Example 3.18** (Kronecker quiver).: Let \(\mathsf{H}=\mathsf{mod}(\mathbf{k}\tilde{A}_{1})\) be the category of representations of the Kronecker quiver \(2\rightrightarrows 1\). Denote the simple modules by \(S_{1}\), \(S_{2}\) and let \(M_{n}=[\mathbf{k}^{n-1}\rightrightarrows\mathbf{k}^{n}]\) for \(n\geq 1\) be the indecomposable modules in the postprojective component (\(M_{1}=S_{1}=P_{1}\) and \(M_{2}=P_{2}\)). Similarly, let \(N_{n}=[\mathbf{k}^{n}\rightrightarrows\mathbf{k}^{n-1}]\) for \(n\geq 1\) be the indecomposable modules in the preinjective component (\(N_{1}=S_{2}=I_{2}\) and \(N_{2}=I_{1}\)). Fix \([S_{1}],[S_{2}]\) as orthonormal basis for \(\Lambda=K(\mathsf{H})\cong\mathbb{Z}^{2}\). There are infinitely many hearts between \(\mathsf{H}\) and \(\mathsf{H}[1]\), of which two series \(\{\mathsf{K}_{n}\}\) and \(\{\mathsf{K}_{n}^{\prime}\}\) are algebraic. Their limit and inverse limit are the reversed geometric heart \(\mathsf{K}_{\infty}=\mathsf{coh}^{\ltimes}(\mathbb{P}^{1})\) and the geometric heart \(\mathsf{K}_{-\infty}=\mathsf{coh}(\mathbb{P}^{1})\), respectively. Between \(\mathsf{K}_{\infty}\) and \(\mathsf{K}_{-\infty}\), there are mixed hearts \(\mathsf{K}_{U}\) parametrised by subsets \(U\subseteq\mathbb{P}^{1}\); see Example 1.8. In particular, \(\mathsf{K}_{\infty}=\mathsf{K}_{\mathbb{P}^{1}}\) and \(\mathsf{K}_{-\infty}=\mathsf{K}_{\varnothing}\). Finally, there is one semisimple heart \(\mathsf{K}_{\mathsf{ss}}\). The table below summarises the simple objects in each heart, describes the torsion pair \((\mathsf{T},\mathsf{F})\) used to obtain the heart by tilting from \(\mathsf{H}\), the generators of the effective cone of the heart and the generators of the dual cone of the heart. 
\begin{tabular}{c c c c c c} heart \(\mathsf{K}\) & simple objects & \(\mathsf{T}\) & \(\mathsf{F}\) & \(E(\mathsf{K})\) & \(C(\mathsf{K})\) \\ \hline \(\mathsf{H}\) & \(M_{1}\), \(N_{1}\) & \(\mathsf{H}\) & \(0\) & \((1,0)\), \((0,1)\) & \([1,0]\), \([0,1]\) \\ \(\mathsf{K}_{1}\) & \(M_{2}\), \(M_{1}[1]\) & \(\langle M_{2},N_{1}\rangle\) & \(\langle M_{1}\rangle\) & \((2,1)\), \((-1,0)\) & \([0,1]\), \([-1,2]\) \\ \(\mathsf{K}_{2}\) & \(M_{3}\), \(M_{2}[1]\) & \(\langle M_{3},N_{1}\rangle\) & \(\langle M_{1},M_{2}\rangle\) & \((3,2)\), \((-2,-1)\) & \([-1,2]\), \([-2,3]\) \\ \(\vdots\) & & & & & \\ \(\mathsf{K}_{\infty}\) & \(R_{p}\colon p\in\mathbb{P}^{1}\) & \(\langle R_{p},N_{n\geq 1}\rangle\) & \(\langle M_{n\geq 1}\rangle\) & & \([-1,1]\) ray \\ \(\mathsf{K}_{U}\) & \(R_{p},R_{q}[1]\colon p\in U,q\in\mathbb{P}^{1}\backslash U\) & \(\langle R_{p},N_{n\geq 1}\rangle\) & \(\langle M_{n\geq 1},R_{q}\rangle\) & & \([-1,1]\) ray \\ \(\mathsf{K}_{-\infty}\) & \(R_{q}[1]\colon q\in\mathbb{P}^{1}\) & \(\langle N_{n\geq 1}\rangle\) & \(\langle M_{n\geq 1},R_{q}\rangle\) & & \([-1,1]\) ray \\ \(\vdots\) & & & & & \\ \(\mathsf{K}_{2}^{\prime}\) & \(N_{2}\), \(N_{3}[1]\) & \(\langle N_{2},N_{1}\rangle\) & \(\langle M_{1},N_{3}\rangle\) & \((1,2)\), \((-2,-3)\) & \([-2,1]\), \([-3,2]\) \\ \(\mathsf{K}_{1}^{\prime}\) & \(N_{1}\), \(N_{2}[1]\) & \(\langle N_{1}\rangle\) & \(\langle M_{1},N_{2}\rangle\) & \((0,1)\), \((-1,-2)\) & \([-1,0]\), \([-2,1]\) \\ \(\mathsf{H}[1]\) & \(M_{1}[1]\), \(N_{1}[1]\) & \(0\) & \(\mathsf{H}\) & \((0,-1)\), \((-1,0)\) & \([0,-1]\), \([-1,0]\) \\ \(\mathsf{K}_{\mathsf{ss}}\) & \(M_{1}\), \(N_{1}[1]\) & \(\langle M_{1}\rangle\) & \(\langle N_{1}\rangle\) & \((1,0)\), \((0,-1)\) & \([1,0]\), \([0,-1]\) \\ \end{tabular}

**Example 3.19** (projective line).: Let \(\mathsf{H}=\mathsf{coh}(\mathbb{P}^{1})\) and write \(\Lambda=K(\mathsf{H})=\mathbb{Z}[\mathcal{O}]\oplus\mathbb{Z}[\mathcal{O}_{p}]\) as in Example 1.6. Let \((\mathsf{T},\mathsf{F})\) be a torsion pair in \(\mathsf{H}\). If \(\mathsf{T}\) contains no line bundles, then it is of the form \(\mathsf{T}_{M}=\langle\mathcal{O}_{q}:q\in M\rangle\) for a subset \(M\subseteq\mathbb{P}^{1}\); the corresponding tilts are the hearts of Example 1.7, and they all share a single heart cone, so they contribute no new maximal cones to the fan. If \(\mathsf{T}\) contains all line bundles then \(\mathsf{T}=\mathsf{H}\). Thus assume \(\mathsf{T}\) contains a line bundle \(\mathcal{O}(n)\) of minimal degree \(n\). Then all skyscraper sheaves are in \(\mathsf{T}\) because torsion classes are closed under surjections. Hence \(\mathsf{T}\) includes the line bundles \(\mathcal{O}(n+1),\mathcal{O}(n+2),\ldots\) as torsion classes are closed under extensions. Write \(\mathsf{T}_{n}=\langle\mathcal{O}(n),\mathcal{O}_{x}\rangle\) for this torsion class. The corresponding torsion-free class is \(\mathsf{F}_{n}=\langle\mathcal{O}(m)\mid m<n\rangle\). Denote by \(\mathsf{K}_{n}\) the heart obtained from the positive tilt of \(\mathsf{H}\) at \((\mathsf{T}_{n},\mathsf{F}_{n})\). The simple objects of \(\mathsf{K}_{n}\) are \(\mathcal{O}(n)\) and \(\mathcal{O}(n-1)[1]\). Therefore the effective cone \(E(\mathsf{K}_{n})\) is generated by \((1,n)\) and \((-1,1-n)\) and its dual cone \(C(\mathsf{K}_{n})\) has generators \([-n,1]\) and \([1-n,1]\). All this implies that the maximal cones of the fan \(\Sigma(\mathbb{P}^{1})\) are the opposite rays \(C(\mathsf{H})\) and \(C(\mathsf{H}[1])\) together with a series of \(2\)-dimensional cones \(C(\mathsf{K}_{n})\). 
These fill out a half-plane in \(\Lambda_{\mathbb{R}}^{*}\). The fan \(\Sigma(\mathbb{P}^{1})\) is incomplete because \(v(\mathcal{O}_{x})\geq 0\) for any heart \(\mathsf{K}\) and any \(v\in C(\mathsf{K})\). Moreover, the tilted hearts from Example 1.7 form an infinite family with the same heart fan as \(\mathsf{coh}(\mathbb{P}^{1})\).

**Example 3.20** (elliptic curve).: Let \(\mathsf{H}=\mathsf{coh}(X)\) for an elliptic curve \(X\). The Grothendieck group \(K(X)=K(\mathsf{H})\cong\mathbb{Z}^{2}\oplus\mathrm{Pic}^{0}(X)=\mathbb{Z}^{2}\oplus X\) is large. Let \(\Lambda=N(X)\cong\mathbb{Z}^{2}\) be the numerical Grothendieck group, i.e. the quotient of \(K(X)\) by the kernel of the Euler pairing, and let \(\lambda\colon K(X)\to N(X)\), \(\lambda(A)=(\mathrm{rk}\,(A),\deg(A))\). We are going to show that the heart fan consists of all rays in the upper half-plane, together with the origin: \(\Sigma(X)=\{e^{\pi i\theta}\mathbb{R}_{\geq 0}\mid\theta\in[0,1]\}\cup\{\{0\}\}\). Denote the slope of a coherent sheaf by \(\mu(A)=\deg(A)/\mathrm{rk}(A)\in\mathbb{Q}\cup\{\infty\}\), and consider the full subcategories \(\mathsf{H}_{\mu}=\mathsf{coh}(X)_{\mu}\) of semistable sheaves of slope \(\mu\). Two general facts about semistable sheaves are: \(\mathsf{H}_{\mu}\) is a finite length abelian category; these categories are ordered: \(\mathrm{Hom}(A_{1},A_{2})=0\) if \(A_{1}\) and \(A_{2}\) are semistable with \(\mu(A_{1})>\mu(A_{2})\). Peculiar to the elliptic curve case is that then \(\mathrm{Hom}(A_{2},A_{1})\neq 0\) holds; moreover, indecomposable sheaves are semistable, and \(\mathsf{H}_{\mu}\cong\mathsf{H}_{\infty}\) for all \(\mu\in\mathbb{Q}\). As \(\mathsf{H}_{\infty}\) is the subcategory of torsion sheaves, splitting as a direct sum indexed by points of \(X\), each \(\mathsf{H}_{\mu}\) is a direct sum of rank one tubes. See [22, II.14] for details. For _real_ numbers \(\delta\in\mathbb{R}\), the additive subcategories \(\mathsf{T}_{\delta}=\langle\mathsf{H}_{\mu}\mid\mu>\delta\rangle\) and \(\mathsf{F}_{\delta}=\langle\mathsf{H}_{\mu}\mid\mu\leq\delta\rangle\) define mutually different torsion pairs in \(\mathsf{H}\). Let \(\mathsf{K}_{\delta}=\mathsf{F}_{\delta}[1]*\mathsf{T}_{\delta}\) be the corresponding tilts of \(\mathsf{H}\). The cone \(E(\mathsf{K}_{\delta})\) is a half-plane containing \(\{0\}\times\mathbb{R}_{\geq 0}\) whose boundary line has slope \(\delta\) and so the heart cone \(C(\mathsf{K}_{\delta})=E(\mathsf{K}_{\delta})^{\vee}\) is a ray of slope \(-1/\delta\) in \(\Lambda_{\mathbb{R}}^{*}\cong\mathbb{R}^{2}\). This extends to \(\delta=\pm\infty\) with \(\mathsf{K}_{-\infty}=\mathsf{H}=\mathsf{K}_{\infty}[-1]\) and \(C(\mathsf{K}_{-\infty})=\mathbb{R}_{\geq 0}\times\{0\}=-C(\mathsf{K}_{\infty})\). We are going to explain that these are all heart cones in \(\Sigma(X)\). They fill the upper half-plane completely with rays, as claimed. So let \((\mathsf{T},\mathsf{F})\) be an arbitrary torsion pair in \(\mathsf{H}\) with tilt \(\mathsf{K}=\mathsf{F}[1]*\mathsf{T}\). As the subcategories \(\mathsf{H}_{\mu}\) are ordered by slope, there is some \(\delta\in\mathbb{R}\cup\{\pm\infty\}\) with \(\mathsf{H}_{\mu}\subseteq\mathsf{T}\) for all \(\mu>\delta\) and \(\mathsf{H}_{\mu}\subseteq\mathsf{F}\) for all \(\mu<\delta\). The effective cone \(E(\mathsf{K})\) is a half-plane with the same slope as \(E(\mathsf{K}_{\delta})\). The boundary of \(E(\mathsf{K})\) depends on whether some, all or no tubes from \(\mathsf{H}_{\delta}\) belong to \(\mathsf{T}\), as in Example 1.7. Regardless of this, the heart cone is always the same ray \(C(\mathsf{K})=C(\mathsf{K}_{\delta})\) of slope \(-1/\delta\). 
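The cone computations of Example 3.19 are easy to confirm numerically. The sketch below assumes the (rank, degree) coordinates on \(\Lambda\) and the evident pairing of a functional \([a,b]\) with a class \((r,d)\); it checks that the claimed generators of \(C(\mathsf{K}_{n})\) form the basis dual to the classes of the simples of \(\mathsf{K}_{n}\), and that consecutive cones share a boundary ray, so that together they tile a half-plane.

```python
# A numerical check of Example 3.19, assuming classes are written
# (rank, degree) and [a, b] pairs with (r, d) as a*r + b*d.
def pair(v, e):
    return v[0] * e[0] + v[1] * e[1]

for n in range(-20, 21):
    simples = [(1, n), (-1, 1 - n)]    # [O(n)] and [O(n-1)[1]]
    gens = [(-n, 1), (1 - n, 1)]       # claimed generators of C(K_n)
    # each generator vanishes on one simple and is 1 on the other,
    # i.e. the generators form the basis dual to the simple classes
    matrix = [[pair(v, e) for e in simples] for v in gens]
    assert sorted(map(tuple, matrix)) == [(0, 1), (1, 0)], (n, matrix)
    # consecutive cones C(K_n) and C(K_{n+1}) share the ray [-n, 1]
    next_gens = [(-(n + 1), 1), (-n, 1)]
    assert (-n, 1) in gens and (-n, 1) in next_gens
print("C(K_n) is dual to the simples of K_n; the cones tile a half-plane")
```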
**Example 3.21**.: We continue the previous example \(\mathsf{H}=\mathsf{coh}(X)\) for an elliptic curve with \(\Lambda=N(X)\cong\mathbb{Z}^{2}\) by considering the smaller lattice \(K(X)\to\Gamma\coloneqq\mathbb{Z}\), \((r,d)\mapsto r\). Then \(E(\mathsf{H}/\Gamma)=\mathbb{R}_{\geq 0}\), and \(\Sigma(\mathsf{H}/\Gamma)\) is the complete pointed fan on \(\mathbb{R}\), although \(\mathsf{H}\) is not a length category. This is an illustration of general lattice change as in Remark 3.6 and also shows that the implication in Theorem 3.8(1) is not reversible.

**Example 3.22** (wild quivers).: The heart fan of the elliptic curve contains a region entirely filled by rays. This behaviour occurs regularly, for instance, for all path algebras of wild quivers; see [14, Prop. 3.32]. It is related to having a dense region in the phase diagram. Specifically, and nice to draw due to their rank \(2\) Grothendieck groups, this holds for the \(n\)-Kronecker quivers \(Q_{n}\) with two vertices and \(n\geq 3\) parallel arrows; see [12]. Let \(\mathsf{H}_{n}=\mathsf{mod}(\mathsf{k}Q_{n})\). By Kac's Theorem, the class in \(K(\mathsf{H}_{n})=\mathbb{Z}^{2}\) of an indecomposable representation of \(Q_{n}\) is a positive root \((a,b)\in\mathbb{N}^{2}\) of the associated Euler form \(q(a,b)=a^{2}-nab+b^{2}\). The real roots \(q(a,b)=1\) correspond to indecomposables in the preprojective and preinjective components of the Auslander-Reiten quiver (there is one indecomposable up to isomorphism for each positive real root). Let \(P_{i}\) be the indecomposable objects in the preprojective component with \(P_{1}\) and \(P_{2}\) being the indecomposable projectives with respective classes \((0,1)\) and \((1,n)\). The Auslander-Reiten sequences \(0\to P_{i}\to P_{i+1}^{n}\to P_{i+2}\to 0\) imply that \([P_{i}]=(a_{i-1},a_{i})\) where the \(a_{i}\) satisfy the recurrence \(a_{i+2}=na_{i+1}-a_{i}\) with \(a_{0}=0\), \(a_{1}=1\). There is a dual picture for the preinjective component where \([I_{i}]=(a_{i},a_{i-1})\). The imaginary roots \(q(a,b)<0\) correspond to indecomposable objects in the regular component (there are infinitely many indecomposables for each positive imaginary root). Rays through positive imaginary roots are dense in the cone \[n-\sqrt{n^{2}-4}\leq 2x/y\leq n+\sqrt{n^{2}-4}.\] The sequences \(a_{i+1}/a_{i}\) and \(a_{i}/a_{i+1}\) converge to the roots \((n\pm\sqrt{n^{2}-4})/2\) of \(x^{2}-nx+1\). If \(n=2\) then \(a_{i}=i\) and the classes of the regular modules lie on the ray of slope \(1\). For \(n>2\), the classes of the regular modules lie in a cone with non-empty interior.

**Question 3.23**.: Could a heart be isolated in its fan (assuming \(\dim\Lambda_{\mathbb{R}}\geq 2\)), i.e. can \(\Sigma(\mathsf{H})\) have only \(C(\mathsf{H})\) and \(C(\mathsf{H}[1])=-C(\mathsf{H})\) as maximal cones? Algebraically: can a heart have no non-trivial torsion pairs? This question is inspired by the convexity of the support of the heart fan in all our examples.

**Example 3.24** (coherent sheaves).: Let \(X\) be a connected, smooth and projective variety, \(\mathsf{H}=\mathsf{coh}(X)\) the category of coherent sheaves on \(X\) and \(\Lambda=N(X)\) the numerical Grothendieck group of \(X\) as in Example 1.9. Since \(\mathsf{H}\) is Noetherian, all torsion pairs in \(\mathsf{H}\) are of the form \((\mathsf{T},\mathsf{T}^{\perp})\) where \(\mathsf{T}\subseteq\mathsf{H}\) is an additive subcategory closed under quotients and extensions. 
If \(\mathsf{T}\neq 0\) then it must contain the skyscraper sheaf \(\mathcal{O}_{p}\) of a point \(p\in X\), hence the effective cone of the tilted heart \(\mathsf{K}=\mathsf{T}^{\perp}[1]\ast\mathsf{T}\) contains the point class \([\mathcal{O}_{p}]\in E(\mathsf{K})\) (as \(X\) is connected, all points have the same class in \(N(X)\)). Choose an orthonormal basis for \(\Lambda_{\mathbb{R}}=N(X)\otimes\mathbb{R}\) containing the point class and denote its dual basis vector by \(v_{0}\in\Lambda_{\mathbb{R}}^{*}\). Then \(C(\mathsf{K})\) is contained in the half-space of \(\Lambda_{\mathbb{R}}^{*}\) given by \(v_{0}\geq 0\). With \(C(\mathsf{H}[1])=-C(\mathsf{H})\) being a ray by Example 1.9, this shows that the support of the heart fan \(\Sigma(X)\) is contained in a half-space. A line bundle on \(X\) induces an exact autoequivalence of \(\mathsf{coh}(X)\) and hence, according to Remark 3.7, an automorphism of the fan \(\Sigma(X)\). In this way, the quotient group \(\operatorname{Pic}(X)/\operatorname{Pic}^{0}(X)\) acts faithfully on the fan. For example, the action of \(\operatorname{Pic}(\mathbb{P}^{1})\) on \(\Sigma(\mathbb{P}^{1})\) fixes the two rays \(\pm C(\mathsf{H})\) and acts transitively on the remaining maximal cones: twisting by \(\mathcal{O}(1)\) maps \(C(\mathsf{K}_{i})\mapsto C(\mathsf{K}_{i+1})\) for all \(i\in\mathbb{Z}\), using the notation of Example 3.19.

Figure 2. Heart fans of abelian categories \(\mathsf{K}\) over \(\Lambda=\mathbb{Z}^{2}\). Some maximal cones \(C(\mathsf{K})\) are labelled by the heart \(\mathsf{K}\). Faces which are not dual faces are shown as red dashed lines.

## 4. \(g\)-fans

In the remaining sections, we compare geometric structures attached to algebraic categories to our heart fans. Let \(A\) be a finite-dimensional algebra over an algebraically closed field \(\mathbf{k}\). We write \(\Sigma(A)\) for the heart fan of the abelian category of finite-dimensional \(A\)-modules \(\mathsf{mod}(A)\) over \(\Lambda\coloneqq K(A)\cong\mathbb{Z}^{n}\). In order to define the \(g\)-fan of \(A\) we require some silting theory.

### Projective-simple duality

Without loss of generality, we may assume that \(A\) is basic, i.e. \(A\cong P(1)\oplus\cdots\oplus P(n)\), where the \(P(i)\) are a complete set of pairwise non-isomorphic indecomposable projective \(A\)-modules. We consider two Hom-finite triangulated categories associated to \(A\) together with their Grothendieck groups: \[\begin{aligned} \mathsf{D}^{p}&\coloneqq\mathsf{K}^{b}(\mathsf{proj}(A)) && \text{the perfect derived category of }A,\\ \mathsf{D}^{b}&\coloneqq\mathsf{D}^{b}(\mathsf{mod}(A)) && \text{the bounded derived category of }A,\\ K(A)&\coloneqq K(\mathsf{D}^{b})=K(\mathsf{mod}(A)) && \text{the Grothendieck group of }A,\\ K^{\mathrm{split}}(A)&\coloneqq K(\mathsf{D}^{p})=K(\mathsf{proj}(A)) && \text{the split Grothendieck group of }A. \end{aligned}\] The classes \([P(1)],\ldots,[P(n)]\) form a basis of \(K^{\mathrm{split}}(A)\) and the classes of simple modules \([S(1)],\ldots,[S(n)]\) form a basis of \(K(A)\). These bases are dual under the non-degenerate pairing \[K^{\mathrm{split}}(A)\times K(A)\to\mathbb{Z},\quad(P,M)\mapsto\dim\mathrm{Hom}_{A}(P,M).\] Having set \(\Lambda\coloneqq K(A)\), we identify \(\Lambda^{*}=K^{\mathrm{split}}(A)\).

### Two-term (pre)silting objects

Consider the full subcategory of two-term complexes of projective \(A\)-modules, \(\mathsf{C}\coloneqq\mathsf{K}^{[-1,0]}(\mathsf{proj}(A))=\mathsf{add}(A)* \mathsf{add}(A)[1]\subset\mathsf{D}^{p}\). 
It is an extriangulated category with \(K(\mathsf{C})=K(\mathsf{D}^{p})\). A complex \(P_{\bullet}\in\mathsf{C}\) is _presilting_ if \(\mathrm{Hom}_{\mathsf{D}^{p}}(P_{\bullet},P_{\bullet}[n])=0\) for all \(n>0\). If additionally \(\mathsf{thick}_{\mathsf{D}^{p}}(P_{\bullet})=\mathsf{D}^{p}\), or equivalently \(|P_{\bullet}|=n=\mathrm{rk}\,K^{\mathrm{split}}(A)\), then \(P_{\bullet}\) is called _silting_. Sometimes these complexes are called 'two-term (pre)silting' and \(\mathsf{C}\) is called the 'two-term category'. We drop the adjective but we stress that we are working in \(\mathsf{C}\) throughout, so that our complexes have non-zero terms in homological degrees \(1\) and \(0\) only.

### \(g\)-vectors, \(g\)-cones and the \(g\)-fan

The _\(g\)-vector_ of \(P_{\bullet}\in\mathsf{C}\) is its class \([P_{\bullet}]\in K(\mathsf{C})\); writing \(P_{\bullet}=(P_{1}\to P_{0})\) and decomposing \(P_{0}=\bigoplus_{i=1}^{n}P(i)^{a_{i}}\) and \(P_{1}=\bigoplus_{i=1}^{n}P(i)^{b_{i}}\), this is the classical \(g^{P_{\bullet}}=(a_{1}-b_{1},\ldots,a_{n}-b_{n})\in\mathbb{Z}^{n}\cong K( \mathsf{C})\), identified using the basis of indecomposable projective modules. For \(P_{\bullet}\in\mathsf{C}\) presilting with indecomposable summands \(Q_{\bullet}^{1},\ldots,Q_{\bullet}^{t}\), the \(g\)-vectors \([Q_{\bullet}^{1}],\ldots,[Q_{\bullet}^{t}]\) are linearly independent by [13] and generate the _\(g\)-cone_ \[C(P_{\bullet})=\{a_{1}[Q_{\bullet}^{1}]+\cdots+a_{t}[Q_{\bullet}^{t}]\mid a_ {i}\geq 0\}\subset\Lambda_{\mathbb{R}}^{*}=K(\mathsf{C})\otimes\mathbb{R}.\] They form a basis if \(t=n\), i.e. if \(P_{\bullet}\) is silting. The _\(g\)-fan_ of \(A\) is \(\Sigma_{A}^{g}\coloneqq\{C(P_{\bullet})\mid P_{\bullet}\in\mathsf{C}\text{ is presilting}\}\). We show that it coincides with the subfan of the heart fan \(\Sigma(A)\) consisting of full cones, i.e. those coming from algebraic hearts, and their faces: \[\Sigma(A)^{\mathrm{full}}\coloneqq\{\kappa\in\Sigma(A)\mid\exists\sigma\in \Sigma(A)\text{ full},\kappa\preceq\sigma\}=\{C(\mathsf{K}/\mathsf{S})\in\Sigma(A)\mid\mathsf{K }\text{ algebraic}\}.\] By Proposition 1.10, a full cone is simplicial; the proposition applies in view of Lemma 3.11. Recall that \(\Sigma(A)\) was defined as a dual face fan but since \(\Sigma(A)^{\mathrm{full}}\) is generated by dual cones of the polyhedral cones \(E(\mathsf{K})\), all faces of \(C(\mathsf{K})\) are dual faces.

**Proposition 4.1**.: _The \(g\)-fan of the finite-dimensional algebra \(A\) is \(\Sigma_{A}^{g}=\Sigma(A)^{\mathrm{full}}\)._

**Remark 4.2**.: According to [4, Thm. 4.7], the \(g\)-fan \(\Sigma_{A}^{g}\) of a finite-dimensional algebra \(A\) is complete if and only if it is finite, which happens if and only if there are finitely many silting complexes in \(\mathsf{C}\). By [21, Prop. 3.1], this implies that all hearts \(\mathsf{K}\) with \(\mathsf{H}=\mathsf{mod}(A)\leq\mathsf{K}\leq\mathsf{H}[1]\) are algebraic and \(\Sigma_{A}^{g}=\Sigma(A)\); cf. Corollary 3.9. In Theorem 3.8, we showed that the heart fan of any length category is complete. Thus we can think of the heart fan of \(\mathsf{mod}(A)\) as a natural completion of the \(g\)-fan. Moreover, heart fans can be seen as generalisations of \(g\)-fans to contexts without silting theory, like tube categories. To prove the proposition, we need to describe the connection of silting theory to Serre subcategories via simple-minded collections and \(c\)-vectors. 
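Before turning to the proof, here is a small numerical illustration of Proposition 4.1 for \(A=\mathbf{k}A_{2}\), using the heart and cone data of Example 3.13. By the generalised projective-simple duality recalled below, the \(g\)-vectors of the silting complex attached to a heart \(\mathsf{K}\) form the basis of \(\Lambda^{*}\) dual to the classes of the simple objects of \(\mathsf{K}\); the sketch computes these dual bases and compares them with the heart cones. The tabulated data is taken as given.

```python
# An illustration of Proposition 4.1 for A = kA_2: the g-cone of the
# silting complex attached to each heart K of Example 3.13 is spanned
# by the basis dual to the classes of the simples of K, and therefore
# reproduces the heart cone C(K).
from fractions import Fraction

def dual_basis(c1, c2):
    # solve <g_i, c_j> = delta_ij for g_1, g_2 in the plane
    det = c1[0] * c2[1] - c1[1] * c2[0]
    g1 = (Fraction(c2[1], det), Fraction(-c2[0], det))
    g2 = (Fraction(-c1[1], det), Fraction(c1[0], det))
    return g1, g2

hearts = {   # heart: (classes of its two simples, generators of C(K))
    'H':    ([(1, 0), (0, 1)],   [(1, 0), (0, 1)]),
    'K1':   ([(1, 1), (-1, 0)],  [(0, 1), (-1, 1)]),
    'K2':   ([(0, 1), (-1, -1)], [(-1, 1), (-1, 0)]),
    'H[1]': ([(-1, 0), (0, -1)], [(-1, 0), (0, -1)]),
    'Kss':  ([(1, 0), (0, -1)],  [(1, 0), (0, -1)]),
}

for name, (simples, cone_gens) in hearts.items():
    g = dual_basis(*simples)
    assert {tuple(map(int, v)) for v in g} == set(cone_gens), (name, g)
print("the g-cones match the heart cones of Example 3.13")
```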
### Koenig-Yang correspondences and Serre subcategories

In [20], Steffen Koenig and Dong Yang obtain correspondences between algebraic hearts \(\mathsf{K}\) in \(\mathsf{D}^{b}\) and silting subcategories of \(\mathsf{D}^{p}\) which restrict to two-term versions by [11]. Recall that \(\{X_{1},\ldots,X_{r}\}\) is a _simple-minded collection_ in a \(\mathbf{k}\)-linear triangulated category if (a) \(\operatorname{Hom}^{<0}(X_{i},X_{j})=0\) for all \(i,j\) and (b) \(\operatorname{Hom}(X_{i},X_{i})=\mathbf{k}\), \(\operatorname{Hom}(X_{i},X_{j})=0\) for \(i\neq j\) and (c) the collection generates the triangulated category; see [20, §3.2].

**Proposition 4.3** ([20, Thm. 6.1] & [11, Cor. 4.3]).: _Let \(A\) be a finite-dimensional algebra and \(\mathsf{H}=\mathsf{mod}(A)\). There are bijections between the following sets:_ 1. _silting objects in_ \(\mathsf{C}\)_, up to additive equivalence;_ 2. _algebraic hearts_ \(\mathsf{K}\) _with_ \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\)_;_ 3. _simple-minded collections in_ \(\mathsf{H}[1]\ast\mathsf{H}\)_._

The map from (2) to (3) sends \(\mathsf{K}\mapsto\{\text{simple objects of $\mathsf{K}$}\}\); the inverse map from (3) to (2) sends a simple-minded collection \(\{X_{1},\ldots,X_{n}\}\mapsto\langle X_{1},\ldots,X_{n}\rangle\), its extension closure in \(\mathsf{D}^{b}\). The objects in (1) and (3) obey a _generalised projective-simple duality_: let \(P_{\bullet}=\bigoplus_{i=1}^{n}Q_{\bullet}^{i}\) be a silting object and \(\{X_{1},\ldots,X_{n}\}\) the corresponding simple-minded collection, then \[\operatorname{Hom}(Q_{\bullet}^{i},X_{i})=\mathbf{k}\qquad\text{and}\qquad \operatorname{Hom}(Q_{\bullet}^{i},X_{j})=0\text{ for }i\neq j. \tag{2}\] Moreover, \(\mathsf{K}\simeq\mathsf{mod}(\operatorname{End}(P_{\bullet}))\). In particular, by Proposition 4.3 and [16, Prop. 5.3], there are bijections \[\{\text{direct summands of $P_{\bullet}$}\}\stackrel{{ 1-1}}{{ \longleftrightarrow}}\{\text{subsets of $\{X_{1},\ldots,X_{n}\}$}\}\stackrel{{ 1-1}}{{\longleftrightarrow}}\{\text{Serre subcategories of $\mathsf{K}$}\}.\]

### c-vectors

Following [3, p. 5035], given a simple-minded collection \(\{X_{1},\ldots,X_{n}\}\), the \(c\)_-vector_ of \(X_{i}\) is its class \([X_{i}]\in K(A)\). Expressed in the basis of \(K(A)\) of simple modules, this gives the classical \(c^{X_{i}}\in\mathbb{Z}^{n}\). Moreover, since \(\{X_{1},\ldots,X_{n}\}\) are a complete list of non-isomorphic simple objects for an algebraic heart \(\mathsf{K}\) in \(\mathsf{D}^{b}\), the \(c\)-vectors \(\{[X_{1}],\ldots,[X_{n}]\}\) form a basis of \(\Lambda_{\mathbb{R}}\). By the generalised projective-simple duality, \(([Q_{\bullet}^{i}],[X_{j}])=\delta_{ij}\), and the \(c\)-vector basis is dual to the \(g\)-vector basis of \(\Lambda^{*}\) coming from the silting object \(P_{\bullet}=\bigoplus_{i=1}^{n}Q_{\bullet}^{i}\) corresponding to \(\{X_{1},\ldots,X_{n}\}\). We now assemble the ingredients from silting theory to prove Proposition 4.1.

Proof of Proposition 4.1.: Let \(\mathsf{K}\) be an algebraic heart with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\), write \(P_{\bullet}=\bigoplus_{i=1}^{n}Q_{\bullet}^{i}\) for the corresponding two-term silting complex in \(\mathsf{C}\), and \(\{X_{1},\ldots,X_{n}\}\) for the corresponding two-term simple-minded collection in \(\mathsf{H}[1]\ast\mathsf{H}\) under the bijections in Proposition 4.3. Then \(C(\mathsf{K})=C(P_{\bullet})\) follows from the definitions and the duality \(([Q_{\bullet}^{i}],[X_{j}])=\delta_{ij}\). 
Now, suppose \(\mathsf{S}\subseteq\mathsf{K}\) is a Serre subcategory. By [16, Prop. 5.3], we can re-order the two-term simple-minded collection \(\{X_{1},\ldots,X_{n}\}\) so that \(\mathsf{S}=\langle X_{1},\ldots,X_{t}\rangle\) for some \(1\leq t\leq n\). We order the summands of the corresponding two-term silting complex \(P_{\bullet}\) accordingly and write \(Q_{\bullet}=\bigoplus_{i=t+1}^{n}Q_{\bullet}^{i}\). It follows immediately from the generalised projective-simple duality (2) that \(C(\mathsf{K}/\mathsf{S})=C(Q_{\bullet})\), using \(C(\mathsf{K}/\mathsf{S})=C(\mathsf{K})\cap E(\mathsf{S})^{\perp}\) from Lemma 2.6. ## 5. King semistability and wall-and-chamber structures The heart fan \(\Sigma(\mathsf{H})\) of an algebraic abelian category \(\mathsf{H}\) can be described in terms of Alastair King's notion of semistability [19], and we relate the heart fan to the wall-and-chamber structure of [10]. Throughout, \(\Lambda=K(\mathsf{H})\) and \(\mathsf{D}=\mathsf{D}^{b}(\mathsf{H})\). **Definition 5.1**.: Let \(\mathsf{H}\) be an abelian category and \(v\in\Lambda_{\mathbb{R}}^{*}=\operatorname{Hom}(K(\mathsf{H}),\mathbb{R})\). 1. \(h\in\mathsf{H}\) is \(v\)_-semistable_ if \(v(h)=0\) and \(v(h^{\prime})\leq 0\) for all subobjects \(h^{\prime}\subseteq h\). 2. \(h\in\mathsf{H}\) is \(v\)_-stable_ if it is \(v\)-semistable and every subobject \(h^{\prime}\hookrightarrow h\) with \(v(h^{\prime})=0\) is \(0\) or \(h\). 3. \(\mathsf{H}^{\mathsf{ss}}(v)\) denotes the full subcategory of \(v\)-semistable objects in \(\mathsf{H}\). **Remark 5.2**.: This is [19, Def. 1.1], although for compatibility with [8] we have reversed the sign convention, which in [19] was: \(h\) is \(v\)-semistable \(\iff v(h)=0\) and \(v(h^{\prime})\geq 0\) for all \(h^{\prime}\subseteq h\). The subcategory \(\mathsf{H}^{\mathsf{ss}}(v)\) is a wide subcategory of \(\mathsf{H}\), i.e. closed under extensions, kernels and cokernels. In particular, it is an abelian category. The next proposition says that in the heart fan of an _algebraic_ abelian category \(\mathsf{H}\), among all face pairs describing the same cone there is a unique one closest to the reference heart \(\mathsf{H}\). Recall from Definition 3.3 that a face pair \((\mathsf{K},\mathsf{S})\) consists of a heart \(\mathsf{K}\) with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) and a face subcategory \(\mathsf{S}\in\operatorname{Serre}_{\mathsf{A}}(\mathsf{K})\). By Theorem 3.8(1) the heart fan is complete, i.e. \(|\Sigma(\mathsf{H})|=\Lambda_{\mathbb{R}}^{*}\). **Proposition 5.3**.: _Let \(\mathsf{H}\) be an algebraic abelian category and \(\Lambda=K(\mathsf{H})\) its Grothendieck group._ _For each \(v\in\Lambda_{\mathbb{R}}^{*}\) there is a unique face pair \((\mathsf{K}^{v},\mathsf{S}^{v})\) in \(\mathsf{D}=\mathsf{D}^{b}(\mathsf{H})\) such that_ 1. \(C(\mathsf{K}^{v}/\mathsf{S}^{v})\) _is the minimal dual face containing_ \(v\)_,_ 2. \(\mathsf{K}^{v}\) _is minimal amongst hearts_ \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) _with_ \(v\in C(\mathsf{K})\) _and_ 3. \(\mathsf{S}^{v}=\mathsf{H}^{\mathsf{ss}}(v)\subseteq\mathsf{H}\cap\mathsf{K}^{v}\) _is the subcategory of_ \(v\)_-semistable objects in_ \(\mathsf{H}\)_._ _Moreover, if \(w\in C^{\circ}(\mathsf{K}^{v}/\mathsf{S}^{v})\) then \((\mathsf{K}^{w},\mathsf{S}^{w})=(\mathsf{K}^{v},\mathsf{S}^{v})\), and there is a distinguished choice of face pair for each cone in \(\Sigma(\mathsf{H})\)._ We call face pairs of the form \((\mathsf{K}^{v},\mathsf{S}^{v})\) _distinguished_.
Choosing the distinguished pair determines a section of the map \((\mathsf{K},\mathsf{S})\mapsto C(\mathsf{K}/\mathsf{S})\) from face pairs to cones in the heart fan. While \(\mathsf{S}^{v}\subseteq\mathsf{K}^{v}\) is a face subcategory in the sense of Definition 2.2, here the ambient abelian category \(\mathsf{K}^{v}\) also depends on \(v\). Proof.: Let \(\mathsf{H}=\mathsf{T}_{v}*\mathsf{F}_{v}\) be the torsion pair of Lemma 3.10, i.e. \(\mathsf{T}_{v}\coloneqq\{h\in\mathsf{H}\mid v(h^{\prime\prime})\geq 0\ \forall h\twoheadrightarrow h^{\prime\prime}\}\) and \(\mathsf{F}_{v}\coloneqq\{h\in\mathsf{H}\mid v(h^{\prime})<0\ \forall 0\neq h^{\prime}\hookrightarrow h\}\). Consider the tilted heart \(\mathsf{K}^{v}\coloneqq\mathsf{F}_{v}[1]*\mathsf{T}_{v}\) and the face subcategory \(\mathsf{S}^{v}\coloneqq\{t\in\mathsf{T}_{v}\mid v(t)=0\}\subseteq\mathsf{K}^{v}\). Then \(C(\mathsf{K}^{v}/\mathsf{S}^{v})\) is the minimal dual face of \(C(\mathsf{K}^{v})\) containing \(v\). If \(v\in C(\mathsf{K})\) with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) then \(\mathsf{K}=\mathsf{F}[1]*\mathsf{T}\) for a torsion pair \(\mathsf{H}=\mathsf{T}*\mathsf{F}\) where \(v|_{\mathsf{T}}\geq 0\). As \(\mathsf{T}\) is closed under quotients, we get \(\mathsf{T}\subseteq\mathsf{T}_{v}\), hence \(\mathsf{F}_{v}\subseteq\mathsf{F}\), i.e. \(\mathsf{K}^{v}\leq\mathsf{K}\). Thus \(\mathsf{K}^{v}\) is minimal amongst hearts \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) with \(v\in C(\mathsf{K})\). In particular, \(v\) determines \(\mathsf{K}^{v}\) uniquely. Now we show that the subcategory \(\mathsf{S}^{v}\subseteq\mathsf{T}_{v}\subseteq\mathsf{H}\) is the subcategory of \(v\)-semistable objects. Suppose \(h\in\mathsf{S}^{v}\). If \(h^{\prime}\hookrightarrow h\) is a subobject then \(0\leq v(h/h^{\prime})=v(h)-v(h^{\prime})=-v(h^{\prime})\) by \(h\in\mathsf{T}_{v}\), and hence \(h\in\mathsf{H}^{\mathsf{ss}}(v)\). Conversely, if \(h\in\mathsf{H}^{\mathsf{ss}}(v)\) then it decomposes with respect to the torsion pair via a short exact sequence \(0\to t\to h\to f\to 0\) in \(\mathsf{H}\) with \(t\in\mathsf{T}_{v}\) and \(f\in\mathsf{F}_{v}\). Since \(h\) is semistable, we have \(v(t)\leq 0\) for its subobject \(t\). On the other hand, \(v(t)\geq 0\) for all objects of \(\mathsf{T}_{v}\). Therefore \(v(t)=0\), and \(v(f)=v(h)-v(t)=0\) too, so that \(f=0\) by definition of \(\mathsf{F}_{v}\). Hence \(h=t\in\mathsf{S}^{v}\). Finally, if \(C(\mathsf{K}^{v}/\mathsf{S}^{v})\) is the minimal dual face containing \(w\) then \(\mathsf{K}^{w}=\mathsf{K}^{v}\) by minimality. Thus \(\mathsf{S}^{w}=\mathsf{S}^{v}\) too, as each is the face subcategory corresponding to the dual face \(C(\mathsf{K}^{v}/\mathsf{S}^{v})\). **Remark 5.4**.: If the minimal dual face containing both \(v\) and \(w\) is \(C(\mathsf{K}/\mathsf{S})\) for some \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) then the subcategories of semistable objects \(\mathsf{H}^{\mathsf{ss}}(v)=\mathsf{H}^{\mathsf{ss}}(w)\) are the same. The converse does not hold in general; for example \(\mathsf{H}^{\mathsf{ss}}(v)=0\) for any \(v\) in the interior of a full cone in \(\Sigma(\mathsf{H})\). **Example 5.5** (distinguished face pair in the Kronecker quiver heart fan).: Consider the heart fan of the module category over the Kronecker quiver, \(\mathsf{H}=\mathsf{mod}(\mathbf{k}(\bullet\rightrightarrows\bullet))\). This fan contains a limiting ray denoted \(C(\mathsf{K}_{\infty})\) in Example 3.18.
This ray is the dual cone of the infinitely many hearts of \(\mathsf{D}^{b}(\mathsf{H})\) listed in Example 1.8, among them the geometric heart \(\mathsf{coh}(\mathbb{P}^{1})\). If \(0\neq v\in C(\mathsf{K}_{\infty})\) then the distinguished face pair \((\mathsf{K}^{v},\mathsf{S}^{v})\) of Proposition 5.3 is \(\mathsf{coh}(\mathbb{P}^{1})\) with its torsion subcategory: note that the mixed and reversed geometric hearts are positive tilts of \(\mathsf{coh}(\mathbb{P}^{1})\), so this heart is indeed minimal amongst \(\mathsf{K}\) with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\) and \(v\in C(\mathsf{K})\). **Lemma 5.6**.: _An object of \(\mathsf{H}^{\mathsf{ss}}(v)\) is stable if and only if it is simple as an object of \(\mathsf{K}^{v}\)._ Proof.: Suppose \(h\in\mathsf{H}^{\mathsf{ss}}(v)\). If \(0\to h^{\prime}\to h\to h/h^{\prime}\to 0\) is short exact in \(\mathsf{K}^{v}\) then, since \(\mathsf{H}^{\mathsf{ss}}(v)=\mathsf{S}^{v}\) is a Serre subcategory of \(\mathsf{K}^{v}\), both \(h^{\prime}\) and \(h/h^{\prime}\) are in \(\mathsf{H}^{\mathsf{ss}}(v)\), so the sequence is also short exact in \(\mathsf{H}\). It follows that if \(h\) is stable it is simple in \(\mathsf{K}^{v}\). Conversely, if \(h^{\prime\prime}\) is a subobject of \(h\) in \(\mathsf{H}\) with \(v(h^{\prime\prime})=0\) then there is a short exact sequence \(0\to h^{\prime\prime}\to h\to h/h^{\prime\prime}\to 0\) in \(\mathsf{H}\) with all terms in \(\mathsf{H}^{\mathsf{ss}}(v)\), which is therefore also short exact in \(\mathsf{K}^{v}\). Thus if \(h\) is simple in \(\mathsf{K}^{v}\) then it is stable. ### Wall-and-chamber structures Wall-and-chamber structures were defined in [10] for finite-dimensional algebras; we explain how to reconstruct walls and chambers from the heart fan \(\Sigma(\mathsf{H})\) of an algebraic abelian category \(\mathsf{H}\). The _stability space_ of an object \(M\in\mathsf{H}\) is [10, Def. 3.2] \[\mathcal{D}(M)\coloneqq\{v\in\Lambda_{\mathbb{R}}^{*}\mid M\in\mathsf{H}^{\mathsf{ss}}(v)\}.\] This is a _wall_ if \(\operatorname{codim}_{\Lambda_{\mathbb{R}}^{*}}\mathcal{D}(M)=1\). We write \(\mathcal{D}\coloneqq\bigcup_{M\neq 0}\mathcal{D}(M)\) for the union of stability spaces of non-zero objects. A _chamber_ is a connected component of the complement \(\Lambda_{\mathbb{R}}^{*}\backslash\overline{\mathcal{D}}\). The _wall-and-chamber_ structure is the set of stability spaces \(\mathcal{D}(M)\) for indecomposable modules \(M\) together with the set of chambers. **Proposition 5.7**.: _The stability space \(\mathcal{D}(M)\) of \(M\in\mathsf{H}\) is a rational, polyhedral cone. Moreover,_ \[\Delta(M)\coloneqq\{C(\mathsf{K}/\mathsf{S})\mid(\mathsf{K},\mathsf{S})\text{ distinguished face pair with }M\in\mathsf{S}\}\] _is a set of cones in \(\Lambda_{\mathbb{R}}^{*}\) forming a dual face subfan of \(\Sigma(\mathsf{H})\) with support \(|\Delta(M)|=\mathcal{D}(M)\)._ _The chambers are the open cones \(C^{\circ}(\mathsf{K})\) for the algebraic hearts \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\). In particular each chamber is the interior of a simplicial cone._ **Remark 5.8**.: This provides an alternative proof of [10, Prop. 3.15] and [4, Thm. 13.7] for \(\mathsf{H}=\mathsf{mod}(A)\) for a finite-dimensional algebra \(A\), which combined show that the chambers are in natural bijection with \(\tau\)-tilting pairs, i.e. with algebraic hearts in the interval between \(\mathsf{H}\) and \(\mathsf{H}[1]\).
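Continuing the \(A_{2}\) illustration \(\mathsf{H}=\mathsf{mod}\,\mathbf{k}(1\to 2)\) (a standard computation, included for concreteness): write \(v=(v_{1},v_{2})\) in the basis dual to the simples \(([S_{1}],[S_{2}])\) and let \(P_{1}\) be the projective cover of \(S_{1}\), whose subobjects are \(0\), \(S_{2}=\operatorname{rad}(P_{1})\) and \(P_{1}\). Then \[\mathcal{D}(S_{1})=\{v_{1}=0\},\qquad\mathcal{D}(S_{2})=\{v_{2}=0\},\qquad\mathcal{D}(P_{1})=\{v_{1}+v_{2}=0,\ v_{2}\leq 0\}=\mathbb{R}_{\geq 0}\cdot(1,-1).\] These three walls cut \(\Lambda_{\mathbb{R}}^{*}\cong\mathbb{R}^{2}\) into five chambers, matching the five full \(g\)-cones of the \(A_{2}\) example above. Definition 5.1 is also directly algorithmic once the finitely many subobject classes are known; the following minimal sketch (illustrative code, not from the text, with the subobject lists supplied by hand as an assumption) checks \(v\)-semistability and \(v\)-stability for these modules.

```python
import numpy as np

# Classes in K(H) ~ Z^2 for H = mod k(1 -> 2), in the basis of simples [S1], [S2].
# The complete lists of subobject classes are supplied by hand (an assumption
# of this sketch); each list ends with the class of the module itself.
SUBOBJECT_CLASSES = {
    "S1": [(0, 0), (1, 0)],
    "S2": [(0, 0), (0, 1)],
    "P1": [(0, 0), (0, 1), (1, 1)],  # subobjects 0, S2 = rad(P1), P1
}

def is_semistable(v, module, tol=1e-9):
    """King semistability (Definition 5.1): v([M]) = 0 and v([M']) <= 0
    for every subobject M' of M."""
    classes = SUBOBJECT_CLASSES[module]
    if abs(np.dot(v, classes[-1])) > tol:   # v([M]) must vanish
        return False
    return all(np.dot(v, c) <= tol for c in classes)

def is_stable(v, module, tol=1e-9):
    """v-stable: semistable, and v([M']) != 0 for proper non-zero M' <= M."""
    if not is_semistable(v, module, tol):
        return False
    return all(abs(np.dot(v, c)) > tol for c in SUBOBJECT_CLASSES[module][1:-1])

v = np.array([1.0, -1.0])              # a point on the wall D(P1)
print(is_semistable(v, "P1"))          # True:  v(P1) = 0 and v(S2) = -1 <= 0
print(is_stable(v, "P1"))              # True:  the proper subobject S2 has v(S2) != 0
print(is_semistable(v, "S1"))          # False: v(S1) = 1 != 0
```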
Note that \(\Delta(M)\subseteq[M]^{\perp}\) but need not contain all dual faces in \(\Sigma(A)\) in this hyperplane; it only contains those in heart cones \(C(\mathsf{K})\) where \(M\in\mathsf{K}\). Proof.: _Stability spaces are rational, polyhedral cones:_ The set \(S\coloneqq\{[M^{\prime}]\mid M^{\prime}\subseteq M\}\subset K(\mathsf{H})\) of classes of subobjects of \(M\) is finite because \(\mathsf{H}\) is a length category, and therefore it generates a rational, polyhedral cone \(E(S)\subset\Lambda_{\mathbb{R}}\). Hence its dual cone \(C(S)\subset\Lambda_{\mathbb{R}}^{*}\) is rational, polyhedral as well (this is Gordan's lemma in convex geometry), as is \(\mathcal{D}(M)=-C(S)\cap[M]^{\perp}\). \(\Delta(M)\) _is a dual face subfan:_ Let \((\mathsf{K},\mathsf{S})\) be a distinguished face pair with \(M\in\mathsf{S}\). Then \(M\) is \(v\)-semistable for any \(v\in C^{\circ}(\mathsf{K}/\mathsf{S})\) by Proposition 5.3; as semistability is a closed condition, this holds for any \(v\in C(\mathsf{K}/\mathsf{S})\). Hence every dual face of \(C(\mathsf{K}/\mathsf{S})\) in \(\Sigma(\mathsf{H})\) is also in \(\Delta(M)\). Because \(\Delta(M)\subseteq\Sigma(\mathsf{H})\) is a subset anyway, it is a dual face subfan. _Stability space as support:_ The support \(|\Delta(M)|\subseteq\Lambda_{\mathbb{R}}^{*}\) is the union of all cones in \(\Delta(M)\). We have \(\mathcal{D}(M)\supseteq|\Delta(M)|\) by the previous step. Conversely, suppose \(v\in\mathcal{D}(M)\). Then \(M\) is \(v\)-semistable and, as \(\Sigma(\mathsf{H})\) is complete, there is a minimal dual face \(C(\mathsf{K}/\mathsf{S})\) containing \(v\) where \((\mathsf{K},\mathsf{S})\) is a distinguished face pair with \(M\in\mathsf{S}=\mathsf{H}^{\mathsf{ss}}(v)\) by Proposition 5.3, and hence \(\mathcal{D}(M)\subseteq|\Delta(M)|\). _Chambers are interiors of maximal cones:_ Suppose \(\mathsf{K}\) is an algebraic heart with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\). Then \(\operatorname{codim}_{\Lambda_{\mathbb{R}}^{*}}C(\mathsf{K})=0\), so the relative interior \(C^{\circ}(\mathsf{K})\subseteq\Lambda_{\mathbb{R}}^{*}\backslash\mathcal{D}\) is an open subset. Each proper face of \(C(\mathsf{K})\) has the form \(C(\mathsf{K}/\mathsf{S})\) for \(\mathsf{S}\neq 0\) and is therefore contained in \(\mathcal{D}\). Therefore \(C^{\circ}(\mathsf{K})\) is a chamber. Conversely, suppose \(v\in\Lambda_{\mathbb{R}}^{*}\) is in a chamber \(\mathcal{C}\). As \(\Sigma(\mathsf{H})\) is complete, \(v\) is contained in a minimal dual face \(C(\mathsf{K}/\mathsf{S})\) where \((\mathsf{K},\mathsf{S})\) is a distinguished face pair with \(\mathsf{S}=\mathsf{H}^{\mathsf{ss}}(v)=0\). It suffices to show that \(C^{\circ}(\mathsf{K})\) is open in \(\Lambda_{\mathbb{R}}^{*}\), for then \(\mathsf{K}\) is algebraic by Proposition 1.10, and \(\mathcal{C}=C^{\circ}(\mathsf{K})\) by the first part. Fix a norm on \(\Lambda_{\mathbb{R}}^{*}\). There is \(\varepsilon>0\) with \(B_{\varepsilon}(v)\subset\mathcal{C}\), hence \(\mathsf{S}^{w}=\mathsf{H}^{\mathsf{ss}}(w)=0\) for all \(w\in B_{\varepsilon}(v)\). We claim \(\mathsf{T}_{w}=\mathsf{T}_{v}\) for all \(w\in B_{\varepsilon}(v)\). Suppose for a contradiction that \(\mathsf{T}_{w}\neq\mathsf{T}_{v}\) for some \(w\in B_{\varepsilon}(v)\). As \(\mathsf{H}\) is algebraic, we may assume \(t\in\mathsf{T}_{v}\backslash\mathsf{T}_{w}\), and that every proper quotient of \(t\) is in \(\mathsf{T}_{w}\), i.e.
\(v(t)\geq 0>w(t)\) and \(v(t^{\prime\prime}),w(t^{\prime\prime})\geq 0\) for every proper quotient \(t^{\prime\prime}\) of \(t\). Thus there is \(u\in B_{\varepsilon}(v)\) on the line segment between \(v\) and \(w\) with \(u(t)=0\) and \(u(t^{\prime\prime})\geq 0\) for all quotients \(t^{\prime\prime}\) of \(t\). Then \(u(t^{\prime})=u(t)-u(t/t^{\prime})=-u(t/t^{\prime})\leq 0\) for all subobjects \(t^{\prime}\) of \(t\), hence \(0\neq t\in\mathsf{H}^{\mathsf{ss}}(u)\), contradicting \(u\in B_{\varepsilon}(v)\subset\mathcal{C}\). We conclude that \(\mathsf{T}_{w}=\mathsf{T}_{v}\) for all \(w\in B_{\varepsilon}(v)\) after all. Therefore \(\mathsf{K}^{w}=\mathsf{K}\) for all \(w\in B_{\varepsilon}(v)\), i.e. \(B_{\varepsilon}(v)\subset C^{\circ}(\mathsf{K})\). Thus \(C^{\circ}(\mathsf{K})\) is open and we are done. **Remark 5.9**.: Define the _stability fan_ of \(\mathsf{H}\) to be the dual face subfan \[\Sigma^{\mathsf{ss}}(\mathsf{H})\coloneqq\{C(\mathsf{K}/\mathsf{S})\in\Sigma(\mathsf{H})\mid(\mathsf{K},\mathsf{S})\text{ distinguished face pair with }\mathsf{S}\neq 0\}\] of the heart fan. We give it this name because its support \(|\Sigma^{\mathsf{ss}}(\mathsf{H})|=\bigcup_{M\neq 0}\mathcal{D}(M)\) is the union of stability spaces of non-zero modules. Note that this is a union of rational polyhedral cones, even though \(\Sigma^{\mathsf{ss}}(\mathsf{H})\) may contain cones which are neither rational nor polyhedral. ## 6. Scattering diagrams Let \(A\) be a finite-dimensional algebra over \(\mathbb{C}\). Since we are only interested in invariants of the category \(\mathsf{H}=\mathsf{mod}(A)\) of finite-dimensional modules we may assume without loss of generality that \(A=\mathbb{C}Q/I\) where \((Q,I)\) is a quiver with relations. We explain how to construct the Hall algebra scattering diagram defined in [8, Thm. 6.5], more precisely a representative of its equivalence class, from the heart fan \(\Sigma(A)\). As usual we take \(\Lambda=K(\mathsf{mod}(A))\), and define \(\Lambda^{+}=\Lambda\cap E(\mathsf{H})\) to be the submonoid of classes of representations. Below, codimension one faces of cones will be important; these are called _facets_. Roughly, a scattering diagram \(\mathfrak{D}=(\mathfrak{S},\varphi)\) consists of a fan \(\mathfrak{S}\) together with a choice of element \(\varphi(\mathfrak{d})\) in a pro-nilpotent Lie algebra for each facet \(\mathfrak{d}\) in the fan. In the particular case of the Hall algebra scattering diagram, the pro-nilpotent Lie algebra is the completion with respect to the natural grading by total dimension of the Hall algebra \(H(A)\), or rather of the Lie algebra given by its commutator bracket. ### The Hall algebra We review the construction of the Hall algebra and its completion. Essentially this is a summary of [8, §4 and §5], to which we refer the reader for further details. The objects of \(\mathsf{H}=\mathsf{mod}(A)\) are parametrised by an Artin stack \(\mathcal{M}\), locally of finite type over \(\mathbb{C}\) and with affine diagonal. The objects of \(\mathcal{M}\) over a scheme \(S\) are the algebra homomorphisms \(A\to\operatorname{End}_{S}(\mathcal{E})\) for locally free \(\mathcal{O}_{S}\)-modules \(\mathcal{E}\) of finite rank, i.e. representations of \((Q,I)\) in vector bundles on \(S\). This stack decomposes as a disjoint union \[\mathcal{M}=\bigsqcup_{\lambda\in\Lambda^{+}}\mathcal{M}_{\lambda} \tag{3}\] of open and closed substacks parametrising the representations of each fixed dimension vector.
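For the \(A_{2}\) quiver \(1\to 2\) and \(\lambda=(1,1)\), for instance, a representation is a single linear map and base change acts by scaling, giving \[\mathcal{M}_{(1,1)}\cong\big[\mathbb{A}^{1}/\,\mathbb{G}_{m}^{2}\big],\qquad(g_{1},g_{2})\cdot a=g_{2}\,a\,g_{1}^{-1},\] where \(\mathbb{A}^{1}=\operatorname{Hom}_{\mathbb{C}}(\mathbb{C},\mathbb{C})\) parametrises the arrow (a standard instance, spelled out for concreteness).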
Each \(\mathcal{M}_{\lambda}\) is a stack quotient of an affine variety by a linear algebraic group. There is also a stack \(\mathcal{M}^{\mathsf{ses}}\) parametrising short exact sequences \(0\to M_{0}\to M_{1}\to M_{2}\to 0\) in \(\mathsf{mod}(A)\), with morphisms \(\pi_{0},\pi_{1},\pi_{2}\) sending a short exact sequence to its three objects. **Definition 6.1**.: The _Hall algebra_ \(H(A)\) is the \(\mathbb{C}(t)\)-algebra whose underlying vector space has basis the isomorphism classes \([X\to\mathcal{M}]\) of algebraic stacks \(X\) over \(\mathcal{M}\) of finite type over \(\mathbb{C}\) and with affine stabilisers, subject to the following relations: 1. \([X\to\mathcal{M}]=[Y\to\mathcal{M}]+[X\backslash Y\to\mathcal{M}]\) for every closed substack \(Y\subset X\); 2. \([Y_{1}\to\mathcal{M}]=[Y_{2}\to\mathcal{M}]\) whenever \(Y_{1}\to X\) and \(Y_{2}\to X\) are locally trivial fibrations in the Zariski topology with isomorphic fibres; 3. \([X\times Y\to\mathcal{M}]=P_{t}(Y_{\mathrm{an}})\cdot[X\to\mathcal{M}]\) when \(Y\) is a smooth projective variety over \(\mathbb{C}\). Here \(P_{t}(Y_{\mathrm{an}})\) is the Poincare polynomial of the corresponding compact complex manifold. The algebra structure on \(H(A)\) is given by the convolution product \[[X\to\mathcal{M}]\ast[Y\to\mathcal{M}]=[Z\to\mathcal{M}]\] where \(Z\) is the fibre product in the Cartesian diagram \[\begin{array}{ccc}Z&\longrightarrow&\mathcal{M}^{\mathsf{ses}}\\ \downarrow&&\downarrow{\scriptstyle(\pi_{0},\pi_{2})}\\ X\times Y&\longrightarrow&\mathcal{M}\times\mathcal{M}\end{array}\] with structure morphism \(Z\to\mathcal{M}^{\mathsf{ses}}\xrightarrow{\pi_{1}}\mathcal{M}\). The decomposition (3) induces a grading \(H(A)=\bigoplus_{\lambda\in\Lambda^{+}}H(A)_{\lambda}\) where \(H(A)_{\lambda}\) is defined in the same way as the Hall algebra, but working over the substack \(\mathcal{M}_{\lambda}\). Let \(d(\lambda)\) denote the dimension, as a complex vector space, of any representation with dimension vector \(\lambda\in\Lambda^{+}\). For each \(k\in\mathbb{N}\) the subspace \(\bigoplus_{d(\lambda)>k}H(A)_{\lambda}\) is an ideal. The quotient \[\mathfrak{g}_{\leq k}\coloneqq H(A)\bigg{/}\!\!\bigoplus_{d(\lambda)>k}\!\!H(A)_{\lambda}\] is a nilpotent Lie algebra under the commutator bracket. The Lie correspondence yields a corresponding unipotent algebraic Lie group \(G_{\leq k}\) whose exponential map \(\exp\colon\mathfrak{g}_{\leq k}\to G_{\leq k}\) is a bijection. Under this identification the group operation on \(G_{\leq k}\) is given by the Baker-Campbell-Hausdorff formula. The completion \(\widehat{\mathfrak{g}}_{\mathrm{Hall}}\coloneqq\varprojlim\mathfrak{g}_{\leq k}\) is a pro-nilpotent Lie algebra, with corresponding pro-unipotent algebraic Lie group \(\widehat{G}_{\mathrm{Hall}}\coloneqq\varprojlim G_{\leq k}\). We construct a map \(\Lambda^{*}_{\mathbb{R}}\backslash\{0\}\to\widehat{\mathfrak{g}}_{\mathrm{Hall}}\), \(v\mapsto 1^{\mathsf{ss}}(v)\), using the subcategory \(\mathsf{H}^{\mathsf{ss}}(v)\) of \(v\)-semistable objects. Let \(\mathcal{M}^{\mathsf{ss}}(v)\subset\mathcal{M}\) be the open substack parametrising \(v\)-semistable modules. Indeed, \[\mathcal{M}^{\mathsf{ss}}(v)\subset\bigsqcup_{v(\lambda)=0}\mathcal{M}_{\lambda} \tag{4}\] because if \(M\in\mathsf{mod}(A)\) is \(v\)-semistable then \(v([M])=0\). This substack is not of finite type, so does not define an element of the Hall algebra \(H(A)\). However, writing \(\mathcal{M}_{\leq k}\coloneqq\bigsqcup_{d(\lambda)\leq k}\mathcal{M}_{\lambda}\), the intersection \(\mathcal{M}^{\mathsf{ss}}(v)\cap\mathcal{M}_{\leq k}\) is of finite type with affine stabilisers, for each \(k\in\mathbb{N}\).
Thus there are compatible elements \[1^{\mathsf{ss}}_{\leq k}(v)\coloneqq[\mathcal{M}^{\mathsf{ss}}(v)\cap\mathcal{M}_{\leq k}\hookrightarrow\mathcal{M}_{\leq k}]\in\mathfrak{g}_{\leq k}\] which define an element \(1^{\mathsf{ss}}(v)\in\widehat{\mathfrak{g}}_{\mathrm{Hall}}\). This illustrates why we need to consider the completion, and not just the Hall algebra itself. ### Scattering diagrams We review the definition of scattering diagram as given in [8, §2]. For any cone \(\sigma\subseteq\Lambda^{*}_{\mathbb{R}}\) we define a Lie subalgebra \[\mathfrak{g}_{\leq k}(\sigma)\coloneqq\bigoplus_{0\neq\lambda\in\Lambda^{+}\cap\sigma^{\perp}}(\mathfrak{g}_{\leq k})_{\lambda}\] where \((\mathfrak{g}_{\leq k})_{\lambda}\) denotes the image in the quotient of the graded piece \(H(A)_{\lambda}\). **Definition 6.2**.: A \(\mathfrak{g}_{\leq k}\)_-complex_ \(\mathfrak{D}=(\mathfrak{S},\varphi)\) consists of a finite rational polyhedral fan \(\mathfrak{S}\) in \(\Lambda^{*}_{\mathbb{R}}\) together with a choice of element \(\varphi(\mathfrak{d})\in\mathfrak{g}_{\leq k}(\mathfrak{d})\) for each facet \(\mathfrak{d}\in\mathfrak{S}\). The _support_ \(\operatorname{supp}(\mathfrak{D})\) is the support \(|\mathfrak{S}|\) of the underlying fan. There are notions of consistency and equivalence for \(\mathfrak{g}_{\leq k}\)-complexes defined as follows. A smooth path \(\gamma\colon[0,1]\to\Lambda^{*}_{\mathbb{R}}\) is _\(\mathfrak{D}\)-generic_ if its endpoints do not lie in \(\operatorname{supp}(\mathfrak{D})\), it does not meet any cone of codimension two or higher, and it meets every facet transversely. For any such path there is a finite sequence of points \(0<t_{1}<\cdots<t_{m}<1\) for which \(\gamma(t_{i})\) lies in a facet \(\mathfrak{d}_{i}\). Therefore we can define the product \[\Phi_{\mathfrak{D}}(\gamma)\coloneqq\exp\varphi(\mathfrak{d}_{m})^{\varepsilon_{m}}\cdots\exp\varphi(\mathfrak{d}_{1})^{\varepsilon_{1}}\in G_{\leq k}\] where \(\varepsilon_{i}\in\{\pm 1\}\) depends on the orientation of the crossing. Then * a \(\mathfrak{g}_{\leq k}\)-complex is _consistent_ if \(\Phi_{\mathfrak{D}}(\gamma)=1\) for any \(\mathfrak{D}\)-generic loop \(\gamma\) and * two \(\mathfrak{g}_{\leq k}\)-complexes \(\mathfrak{D}_{1}\) and \(\mathfrak{D}_{2}\) are _equivalent_ if \(\Phi_{\mathfrak{D}_{1}}(\gamma)=\Phi_{\mathfrak{D}_{2}}(\gamma)\) for any path \(\gamma\) which is both \(\mathfrak{D}_{1}\)-generic and \(\mathfrak{D}_{2}\)-generic. **Definition 6.3**.: A \(\widehat{\mathfrak{g}}\)_-complex_ or _scattering diagram_ \(\mathfrak{D}\) is a sequence \((\mathfrak{D}_{k})\) of \(\mathfrak{g}_{\leq k}\)-complexes for \(k\geq 1\) such that \(\mathfrak{D}_{i}=(\mathfrak{S}_{i},\varphi_{i})\) and \((\pi^{ji})_{*}\mathfrak{D}_{j}\coloneqq(\mathfrak{S}_{j},\pi^{ji}\circ\varphi_{j})\) are equivalent for all \(i<j\), where \(\pi^{ji}\colon\mathfrak{g}_{\leq j}\to\mathfrak{g}_{\leq i}\) is the canonical quotient homomorphism. The _support_ of the scattering diagram \(\mathfrak{D}\) is \(\operatorname{supp}(\mathfrak{D})\coloneqq\bigcup_{i\in\mathbb{N}}\operatorname{supp}(\mathfrak{D}_{i})\). A scattering diagram is consistent if each \(\mathfrak{D}_{k}\) is consistent, and two scattering diagrams are equivalent if their defining \(\mathfrak{g}_{\leq k}\)-complexes are equivalent for all \(k\geq 1\). ### Hall algebra scattering diagram The Hall algebra scattering diagram \(\mathfrak{D}_{\mathrm{Hall}}\) is constructed from semistability data.
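The consistency condition can be made tangible in a toy model. The sketch below (illustrative code, not from [8]; the \(3\times 3\) strictly upper-triangular matrices, i.e. the Heisenberg Lie algebra, stand in for \(\mathfrak{g}_{\leq k}\), and the walls and their elements are chosen by hand) computes the path-ordered product \(\Phi_{\mathfrak{D}}(\gamma)\) of a loop crossing two transversal full-line walls, and shows that one extra ray carrying the commutator is needed to achieve \(\Phi_{\mathfrak{D}}(\gamma)=1\).

```python
import numpy as np

def expm_nilpotent(N):
    """Exact exponential of a nilpotent matrix with N^3 = 0."""
    return np.eye(N.shape[0]) + N + N @ N / 2.0

# Toy stand-in for g_{<=k}: strictly upper-triangular 3x3 matrices
# (the Heisenberg Lie algebra); [E12, E23] = E13 is central.
E12 = np.zeros((3, 3)); E12[0, 1] = 1.0
E23 = np.zeros((3, 3)); E23[1, 2] = 1.0
A, B = E12, E23              # hand-chosen wall elements on the lines x = 0, y = 0
comm = A @ B - B @ A         # [A, B] = E13

# Path-ordered product of a loop around the origin: it crosses each of the
# two lines twice, with opposite orientation signs epsilon = +-1.
loop = (expm_nilpotent(A) @ expm_nilpotent(B)
        @ expm_nilpotent(-A) @ expm_nilpotent(-B))

print(np.allclose(loop, expm_nilpotent(comm)))               # True: loop = exp([A,B])
# One additional ray carrying -[A,B] restores consistency:
print(np.allclose(expm_nilpotent(-comm) @ loop, np.eye(3)))  # True
```

This is the basic mechanism by which consistent completions acquire new rays; the Hall algebra scattering diagram \(\mathfrak{D}_{\mathrm{Hall}}\) is a consistent diagram of exactly this kind.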
Theorem 6.5 of [8] characterises it uniquely, up to equivalence, by the two properties 1. its support is the union \(\bigcup_{0\neq M}\mathcal{D}(M)\) of stability spaces of non-zero modules; 2. the wall-crossing automorphism at a general point \(v\) of the support is \(1^{\mathsf{ss}}(v)\). Intuitively, \(\mathfrak{D}_{\mathrm{Hall}}\) consists of the stability fan \(\Sigma^{\mathsf{ss}}(A)=\{C(\mathsf{K}/\mathsf{S})\mid(\mathsf{K},\mathsf{S})\text{ distinguished},\ \mathsf{S}\neq 0\}\) together with the assignments \(C(\mathsf{K}/\mathsf{S})\mapsto 1^{\mathsf{ss}}(v)\) for (any) \(v\) in the relative interior of \(C(\mathsf{K}/\mathsf{S})\). Naively, one could try to make this precise by constructing the latter as the limit over the subfans \[\Sigma^{\mathsf{ss}}_{\leq k}(A)\coloneqq\{C(\mathsf{K}/\mathsf{S})\mid(\mathsf{K},\mathsf{S})\text{ distinguished, }\mathsf{S}\neq 0,\ d(M)\leq k\text{ for all stable }M\in\mathsf{S}\}.\] However, technically this fails because the above fan need not be finite, or rational, or polyhedral. The issue is that it is too fine a decomposition; instead we need to consider an alternative refinement of its support, which takes account only of the semistability of modules of dimension at most \(k\) rather than of all modules. What makes this possible is the fact -- a consequence of Proposition 5.7 -- that the support \(|\Sigma^{\mathsf{ss}}_{\leq k}(A)|=\bigcup_{0<d(M)\leq k}\mathcal{D}(M)\) is a union of finitely many rational, polyhedral cones. The alternative refinement is constructed, following [8, Ex. 2.5], using the cones \[\sigma(P_{-},P_{0},P_{+})\coloneqq\{v\in\Lambda^{*}_{\mathbb{R}}\mid v(M)=0\ \forall\,M\in P_{0}\text{ and }\pm v(M)\geq 0\ \forall M\in P_{\pm}\}\] where \(P_{-}\cup P_{0}\cup P_{+}=\{M\mid d(M)\leq k\}\) is a partition into unions of Grothendieck group classes. By construction each such cone is rational and polyhedral, and the set of semistable modules \(M\) with \(d(M)\leq k\) is constant on its relative interior. **Example 6.4**.: The set of cones \(\sigma(P_{-},P_{0},P_{+})\) for which there exists a non-zero \(M\) with \(d(M)\leq k\) that is \(v\)-semistable for \(v\) in the relative interior forms a fan. We denote it by \(\mathfrak{S}_{\leq k}\). By [8, Lemma 6.2], or alternatively by Proposition 5.7, its support is \(\bigcup_{0<d(M)\leq k}\mathcal{D}(M)\). For each facet \(\mathfrak{d}\) in \(\mathfrak{S}_{\leq k}\), set \(\varphi_{\leq k}(\mathfrak{d})\coloneqq 1^{\mathsf{ss}}_{\leq k}(v)\) for (any) \(v\) in the relative interior of \(\mathfrak{d}\). It follows from (4) that \(1^{\mathsf{ss}}_{\leq k}(v)\in\mathfrak{g}_{\leq k}(\mathfrak{d})\). Hence the pair \(\mathfrak{D}_{\leq k}\coloneqq(\mathfrak{S}_{\leq k},\varphi_{\leq k})\) is a \(\mathfrak{g}_{\leq k}\)-complex. The sequence \((\mathfrak{D}_{\leq k})\) of \(\mathfrak{g}_{\leq k}\)-complexes defines a consistent scattering diagram, the _Hall algebra scattering diagram_ \(\mathfrak{D}_{\mathrm{Hall}}\); see [8, §6]. ## Appendix A Homological algebra A good source emphasising the similarity of abelian and triangulated categories is [5, Ch. I]. Categories are assumed to be essentially small, i.e. to have a set of objects up to isomorphism. Subcategories are assumed to be full and strict, i.e. closed under isomorphisms. The \(n\)-fold shift (or: translation or suspension) functor of triangulated categories is denoted by \([n]\). ### Abelian categories Let \(\mathsf{H}\) be an abelian category; it is _length_ if each object of \(\mathsf{H}\) admits a finite composition series (whose factors are simple objects).
Equivalently, \(\mathsf{H}\) is Noetherian (all ascending chains of subobjects stabilise) and Artinian (all descending chains stabilise). If, in addition, \(\mathsf{H}\) has finitely many isomorphism classes of simple objects, it is _algebraic_. Let \(\mathsf{U}\subseteq\mathsf{H}\) be a full subcategory; it is an abelian subcategory if it is closed under kernels and cokernels. It is a _wide subcategory_ if it is closed under kernels, cokernels and extensions. It is a _Serre subcategory_ if it is closed under subobjects, quotients and extensions; then the quotient category \(\mathsf{H}/\mathsf{U}\) is abelian. \(\operatorname{Serre}(\mathsf{H})\) denotes the set of Serre subcategories. In Definition 2.2, we will define \(\operatorname{Serre}_{\mathsf{A}}(\mathsf{H})\) as the subset of face subcategories. ### Extension closure Let \(\mathsf{C}\) be an abelian or triangulated category, so we can speak of extensions in \(\mathsf{C}\). If \(M\) is a collection of objects of \(\mathsf{C}\) then \(\langle M\rangle\) denotes the _extension closure_ of \(M\), i.e. the smallest full subcategory of \(\mathsf{C}\) that (a) contains \(M\) and (b) is closed under extensions. If \(M_{1},M_{2}\) are two subcategories of \(\mathsf{C}\), their _ordered extension closure_ \(M_{1}*M_{2}\) is the full subcategory of all extensions of objects of \(M_{2}\) by objects of \(M_{1}\). If, for example, \(\mathsf{C}\) is triangulated then \(M_{1}*M_{2}\) is the full subcategory of objects \(c\in\mathsf{C}\) sitting in exact triangles \(m_{1}\to c\to m_{2}\to m_{1}[1]\) with \(m_{1}\in M_{1},m_{2}\in M_{2}\). For \(\mathsf{H}\subset\mathsf{C}\) a heart in a triangulated category and \(M\subseteq\mathsf{H}\), the two notions of \(\langle M\rangle\) agree. ### Torsion pairs Two full subcategories \((\mathsf{T},\mathsf{F})\) of an abelian category \(\mathsf{H}\) form a _torsion pair_ if (a) \(\operatorname{Hom}(\mathsf{T},\mathsf{F})=0\) and (b) \(\mathsf{H}=\mathsf{T}*\mathsf{F}\). In this case, \(\mathsf{T}\) is called the _torsion class_ and \(\mathsf{F}\) the _torsion-free class_ of the pair. Torsion classes \(\mathsf{T}\) are closed under quotients and extensions; torsion-free classes \(\mathsf{F}\) are closed under subobjects and extensions. For a torsion pair \((\mathsf{T},\mathsf{F})\), we have \(\mathsf{F}=\mathsf{T}^{\perp}\coloneqq\{h\in\mathsf{H}\mid\operatorname{Hom}(\mathsf{T},h)=0\}\) and \(\mathsf{T}={}^{\perp}\mathsf{F}\); e.g. \(\mathsf{F}\subseteq\mathsf{T}^{\perp}\) by (a) and \(\mathsf{F}\supseteq\mathsf{T}^{\perp}\) by (b). If \(\mathsf{H}\) is Noetherian and \(\mathsf{T}\subseteq\mathsf{H}\) is an additive subcategory closed under quotients and extensions then \(\mathsf{T}\) defines a torsion pair \((\mathsf{T},\mathsf{T}^{\perp})\). Dually, if \(\mathsf{H}\) is Artinian and \(\mathsf{F}\subseteq\mathsf{H}\) is closed under subobjects and extensions then \(({}^{\perp}\mathsf{F},\mathsf{F})\) is a torsion pair. ### T-structures Two subcategories \((\mathsf{T},\mathsf{F})\) of a triangulated category \(\mathsf{D}\) form a _t-structure_ if (a) \(\operatorname{Hom}(\mathsf{T},\mathsf{F})=0\) and (b) \(\mathsf{D}=\mathsf{T}*\mathsf{F}\) and (c) \(\mathsf{T}[1]\subseteq\mathsf{T}\). The t-structure is _bounded_ if moreover (d) \(\bigcup_{n\in\mathbb{Z}}\mathsf{T}[n]=\mathsf{D}\). Any t-structure \((\mathsf{T},\mathsf{F})\) on \(\mathsf{D}\), bounded or not, induces an abelian subcategory of \(\mathsf{D}\) as \(\mathsf{H}\coloneqq\mathsf{T}\cap\mathsf{F}[1]\) called the _heart_. If the t-structure is bounded then \(\mathsf{T}\) and \(\mathsf{F}\) can be reconstructed from the heart \(\mathsf{H}\) as \(\mathsf{T}=\langle\mathsf{H}[\geq 0]\rangle\) and \(\mathsf{F}=\langle\mathsf{H}[<0]\rangle\). In this article, _heart_ always means the _heart of a bounded t-structure_. Setting \((\mathsf{T},\mathsf{F})\leq(\mathsf{T}^{\prime},\mathsf{F}^{\prime})\iff\mathsf{F}\subseteq\mathsf{F}^{\prime}\) defines a partial order on the bounded t-structures of a triangulated category \(\mathsf{D}\). Equivalently, this gives a partial order on hearts in \(\mathsf{D}\) via \(\mathsf{H}\leq\mathsf{H}^{\prime}\iff\langle\mathsf{H}[\leq 0]\rangle\subseteq\langle\mathsf{H}^{\prime}[\leq 0]\rangle\).
The convention we follow here makes \(\mathsf{H}\leq\mathsf{H}[1]\). ### Tilting Let \(\mathsf{D}\) be a triangulated category, \(\mathsf{H}\) a heart in \(\mathsf{D}\) and \((\mathsf{T},\mathsf{F})\) a torsion pair in the abelian category \(\mathsf{H}\), so that \(\mathsf{H}=\mathsf{T}*\mathsf{F}\). From this setup we get two new hearts of \(\mathsf{D}\): \[\mathsf{F}[1]*\mathsf{T},\ \text{the \emph{positive tilt} of }\mathsf{H};\qquad\mathsf{F}*\mathsf{T}[-1],\ \text{the \emph{negative tilt} of }\mathsf{H}.\] The terminology is justified by \(\mathsf{H}[-1]\leq\mathsf{F}*\mathsf{T}[-1]\leq\mathsf{H}\leq\mathsf{F}[1]*\mathsf{T}\leq\mathsf{H}[1]\). Positive/negative tilts are also called left/right. The following basic fact is crucial for us, see [22, Lemma 1.1.2]: If \(\mathsf{K}\) and \(\mathsf{H}\) are two hearts in a triangulated category \(\mathsf{D}\) with \(\mathsf{H}\leq\mathsf{K}\leq\mathsf{H}[1]\), then there is a unique torsion pair \((\mathsf{T},\mathsf{F})\) on \(\mathsf{H}=\mathsf{T}*\mathsf{F}\) with \(\mathsf{K}=\mathsf{F}[1]*\mathsf{T}\); it is given by \(\mathsf{T}=\mathsf{H}\cap\mathsf{K}\) and \(\mathsf{F}=\mathsf{H}\cap\mathsf{K}[-1]\). ## Appendix B Convex geometry We collect notions from convex geometry in a self-contained manner as we are not aware of a source covering our needs: [15, §1] is inspired by applications in toric geometry, so it discusses fans, but cones are assumed to be polyhedral; [17] is motivated by optimisation and treats a wider class of convex sets and cones (although still often assumed to be closed) but no fans. We need to pay more attention than usual as our effective cones may be neither polyhedral nor closed, even in important examples. The notion of 'dual face fan' is novel. Throughout, \(V\) is a finite-dimensional \(\mathbb{R}\)-vector space with dual vector space \(V^{*}=\operatorname{Hom}(V,\mathbb{R})\). ### Cones A _cone_ in \(V\) is a subset \(\sigma\subseteq V\) closed under (a) sums and (b) multiples by positive real numbers, i.e. \(\sigma+\sigma\subseteq\sigma\) and \(\mathbb{R}_{>0}\cdot\sigma\subseteq\sigma\). \(\operatorname{Cones}(V)\) denotes the set of all cones in \(V\). Intersections and sums of cones are again cones, as are the topological closure and interior of a cone in \(V\); we allow the empty set as a cone. The _linear hull_ of \(\sigma\) is \(\mathbb{R}\sigma=\sigma-\sigma=\sigma^{\perp\perp}\) and its dimension is \(\dim(\sigma)\coloneqq\dim_{\mathbb{R}}(\mathbb{R}\sigma)\). A cone is _full_ if it is full-dimensional, i.e. \(\dim(\sigma)=\dim_{\mathbb{R}}(V)\); equivalently \(\sigma\) has non-empty interior: \(\operatorname{int}(\sigma)\neq\varnothing\). The _relative interior_ \(\sigma^{\circ}=\operatorname{relint}(\sigma)\) of a cone \(\sigma\subseteq V\) is the interior of \(\sigma\) as a subset of its linear hull; it is again a cone. Cones are convex. A cone \(\sigma\) is _strictly convex_ if it contains no non-trivial linear subspace, i.e. \(\sigma\cap(-\sigma)\subseteq\{0\}\). It is _pointed_ if \(\sigma\cap(-\sigma)=\{0\}\). The cone _generated_ by a subset \(S\subseteq V\) is the convex hull of the rays \(\mathbb{R}_{\geq 0}\cdot v\) for \(v\in S\); we denote it \(E(S)\). We write \(\overline{E}(S)\) for the closure of \(E(S)\) in \(V\). A cone is _polyhedral_ if it is generated by a finite subset and _simplicial_ (or: simple) if it is generated by a linearly independent subset. The _dual cone_ (also: polar cone) of a cone \(\sigma\subseteq V\) is \(\sigma^{\vee}\coloneqq\{w\in V^{*}\mid w(v)\geq 0\ \forall v\in\sigma\}\), i.e. the set of functionals \(w\colon V\to\mathbb{R}\) with \(w|_{\sigma}\geq 0\); this is a closed cone in \(V^{*}\). Note \(\sigma^{\vee\vee}=\overline{\sigma}\) in \(V^{**}=V\), using the hyperplane separation theorem for the inclusion \(\sigma^{\vee\vee}\subseteq\overline{\sigma}\). Clearly \(\overline{\sigma}^{\vee}=\sigma^{\vee}\).
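For polyhedral cones these notions are easy to compute with. A minimal numerical sketch (illustrative code; the cone and the inward-normal construction are hand-picked assumptions valid for this planar example):

```python
import numpy as np

def in_dual_cone(w, generators, tol=1e-9):
    """w lies in E(S)^v iff w(v) >= 0 on every generator v of S; checking
    generators suffices since w is linear and E(S) consists of their
    non-negative combinations."""
    return all(np.dot(w, v) >= -tol for v in generators)

# sigma = E({(1,0), (1,2)}), a polyhedral cone in V = R^2.
sigma_gens = [np.array([1.0, 0.0]), np.array([1.0, 2.0])]

# In the plane, each generator of the dual cone is a normal to one boundary
# ray of sigma, signed so that it pairs non-negatively with the other
# generator:
rot = np.array([[0.0, -1.0], [1.0, 0.0]])                # rotation by 90 degrees
dual_gens = [rot @ sigma_gens[0], -rot @ sigma_gens[1]]  # (0,1) and (2,-1)

print(in_dual_cone(np.array([1.0, 0.0]), sigma_gens))       # True
print(in_dual_cone(np.array([-1.0, 1.0]), sigma_gens))      # False, fails on (1,0)
print(all(in_dual_cone(w, sigma_gens) for w in dual_gens))  # True
```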
For a subset \(S\subseteq V\), we denote \(C(S)\coloneqq E(S)^{\vee}\subseteq V^{*}\) and \(C^{\circ}(S)\coloneqq\operatorname{relint}(C(S))\). Given a lattice \(\Lambda\subset V\), i.e. a free abelian group of maximal rank, we write \(V=\Lambda\otimes\mathbb{R}=\Lambda_{\mathbb{R}}\) and \(V^{*}=\Lambda^{*}\otimes\mathbb{R}=\operatorname{Hom}(\Lambda,\mathbb{R})= \Lambda^{*}_{\mathbb{R}}\) with dual lattice \(\Lambda^{*}=\operatorname{Hom}(\Lambda,\mathbb{Z})\subset V^{*}\). A cone in \(V\) is _rational_ if it is generated by lattice elements. The same notion applies to cones in \(V^{*}\) via \(\Lambda^{*}\). ### Faces A _face_ of a cone \(\sigma\) is a non-empty subset \(\tau\subseteq\sigma\) such that for any \(v,v^{\prime}\in\sigma\), we have: \(v+v^{\prime}\in\tau\iff v,v^{\prime}\in\tau\). In particular \(\tau\) is itself a cone. We denote by \(\tau\preceq\sigma\) that \(\tau\) is a face of \(\sigma\) and by \(\tau\prec\sigma\) that it is a proper face, i.e. \(\tau\neq\sigma\). A _facet_ of a cone is a face of codimension one. For faces of a cone, inclusion and face relation coincide: if \(\tau,\tau^{\prime}\preceq\sigma\) then \(\tau\subseteq\tau^{\prime}\iff\tau\preceq\tau^{\prime}\). An _exposed face_ of \(\sigma\) is a subset of the form \(\sigma\cap w^{\perp}\) for \(w\in\sigma^{\vee}\). When \(w\neq 0\) the exposed face is cut out by intersecting with the _supporting hyperplane_\(w^{\perp}\subset V\); when \(w=0\) the corresponding exposed face is \(\sigma\) itself. Every exposed face is a face and the two notions coincide if \(\sigma\) is polyhedral. See Example 2.1 for a non-exposed face. The following properties are elementary: If \(\tau\prec\sigma\) is a proper face then \(\tau\cap\sigma^{\circ}=\varnothing\) and \(\dim(\tau)<\dim(\sigma)\). In particular, a maximal chain of proper faces has length at most \(\dim(V)\). A non-exposed face is a subset of an exposed face, hence there is a finite decreasing sequence of faces of the original cone, with each face exposed in the previous one, ending at the non-exposed face. Therefore faces are closed subsets of the cone, and the relative interior of a cone is the complement of the union of its proper exposed faces. \(\operatorname{Faces}(\sigma)\) denotes the set of all faces of a cone \(\sigma\) and \(\operatorname{ExFaces}(\sigma)\) the subset of exposed faces. The sets \(\operatorname{ExFaces}(\sigma)\subseteq\operatorname{Faces}(\sigma)\subset \operatorname{Cones}(V)\) are partially ordered by the \(\preceq\) relation. Recall that in order theory, a lattice is a partially ordered set such that each pair of elements has an infimum and a supremum. This use of the word 'lattice' is unrelated to a free abelian group of finite rank. Downward closures in the poset of faces and upward closures in the poset of exposed faces have simple internal descriptions: **Lemma B.1**.: _Let \(\sigma\subseteq V\) be a cone with \(0\in\sigma\); let \(\tau^{\prime}\preceq\sigma\) be a face and \(\tau\preceq\sigma\) an exposed face._ 1. \(\operatorname{ExFaces}(\sigma)\) _and_ \(\operatorname{Faces}(\sigma)\) _are lattices._ 2. \(\{\kappa\in\operatorname{Faces}(\sigma)\mid\kappa\preceq\tau^{\prime}\}= \operatorname{Faces}(\tau^{\prime})\)_._ 3. \(\{\kappa\in\operatorname{ExFaces}(\sigma)\mid\kappa\succeq\tau\}\cong \operatorname{ExFaces}(\sigma/\tau)\)_._ Proof.: (1) The infimum of two faces is their intersection which is non-empty by \(0\in\sigma\). 
The intersection of exposed faces \(\sigma\cap w^{\perp}_{1}\) and \(\sigma\cap w^{\perp}_{2}\) is the exposed face \(\sigma\cap(w_{1}+w_{2})^{\perp}\), using \(w_{1}(v)=w_{2}(v)=0\iff(w_{1}+w_{2})(v)=0\) for any \(v\in\sigma\), because of \(w_{1}|_{\sigma}\geq 0\) and \(w_{2}|_{\sigma}\geq 0\). The supremum of two faces of \(\sigma\) is the unique minimal face containing both. This exists because \(\sigma\) is a face of itself. Given two exposed faces, the smallest face containing them may not be exposed, but it is contained in a unique minimal exposed face. In particular, \(\operatorname{ExFaces}(\sigma)\) is not necessarily a sublattice of \(\operatorname{Faces}(\sigma)\). (2) This follows immediately from the definitions. (3) \(\sigma/\tau\in\operatorname{Cones}(V/\mathbb{R}\tau)\) is the image of \(\sigma\) under the projection \(V\to V/\mathbb{R}\tau\). The bijection arises from the straightforward formula \((\sigma/\tau)\cap w^{\perp}=(\sigma\cap w^{\perp})/\tau\) for \(w\in\sigma^{\vee}\cap\tau^{\perp}\). The map \(\{\kappa\in\operatorname{ExFaces}(\sigma)\mid\kappa\succeq\tau\}\to\operatorname{ExFaces}(\sigma/\tau)\), \(\kappa=\sigma\cap w^{\perp}\mapsto\kappa/\tau\) is surjective by the formula and injective by \(\tau\subseteq\kappa\). Formula (2) of the lemma can fail for exposed faces: see Example 2.1 where the non-exposed face is an exposed face of an exposed face. Likewise, (3) may fail for non-exposed faces: if \(\sigma=C(\mathsf{H})\) is the cone in Example 2.1 and \(\tau\) is the non-exposed face given by the ray through \(A\) then \(\sigma/\tau\) is a closed half-plane, and \(\{\kappa\in\operatorname{Faces}(\sigma)\mid\kappa\succeq\tau\}\not\cong\operatorname{Faces}(\sigma/\tau)\) because the respective cardinalities are \(2\) and \(3\). ### Dual faces For a subset \(\tau\subseteq\sigma\), we define a subset of the dual cone by \[\tau^{\triangle}\coloneqq\sigma^{\vee}\cap\tau^{\perp}=\{w\in V^{*}\colon w|_{\sigma}\geq 0,w|_{\tau}=0\};\] this is called the _dual face_ (also: conjugated face) and is in fact even an exposed face. We denote the set of dual faces by \(\operatorname{ExFaces}(\sigma)^{\operatorname{op}}\), identifying the domain of the order-reversing inclusion \(\operatorname{ExFaces}(\sigma)\hookrightarrow\operatorname{ExFaces}(\sigma^{\vee})\) with its image. This set can be equivalently defined as the set of duals of: exposed faces of \(\sigma\); faces of \(\sigma\); subsets of \(\sigma\). We write \(\operatorname{ExFaces}(\sigma)^{\operatorname{op}}\) because this gives a one-to-one parametrisation of dual faces. See Proposition B.3 for these facts. As \(\operatorname{ExFaces}(\sigma)\) is a lattice by Lemma B.1, so is the poset \(\operatorname{ExFaces}(\sigma)^{\operatorname{op}}\) of dual faces. Faces and exposed faces are defined for arbitrary cones whereas dual faces require the cone to be given as the dual of another cone. We always have the following inclusions which are equalities if \(\sigma\) is polyhedral (\(\sigma^{\vee}\) polyhedral is not enough): \[\operatorname{ExFaces}(\sigma)^{\operatorname{op}}\subseteq\operatorname{ExFaces}(\sigma^{\vee})\subseteq\operatorname{Faces}(\sigma^{\vee}).\] **Example B.2**.: Let \(\sigma=\{(x,y)\ |\ x,y>0\text{ or }x=y=0\}\) be the open quadrant in the plane together with the origin. Its dual is the closed quadrant \(\sigma^{\vee}=(\mathbb{R}_{\geq 0})^{2}\) which has four faces (all exposed): the origin, the whole cone and two rays. However, only \(\{0\}\) and \(\sigma^{\vee}\) are dual faces: every non-zero \(v\in\sigma\) has \(v_{1},v_{2}>0\), so the only \(w\in\sigma^{\vee}\) with \(w(v)=0\) is \(w=0\), and hence \(\tau^{\triangle}\in\{\sigma^{\vee},\{0\}\}\) for every non-empty subset \(\tau\subseteq\sigma\).
**Proposition B.3**.: _Let \(\sigma\subseteq V\) be a cone with \(0\in\sigma\) and let \(\tau\subseteq\sigma\) be a non-empty subset._ 1. \(\tau^{\vartriangle}\subseteq\sigma^{\vee}\) _is an exposed face, and_ \(\tau^{\vartriangle}=\sigma^{\vee}\cap v^{\perp}\) _for any_ \(v\in\operatorname{relint}(E(\tau))\)_._ 2. _If_ \(\nu\preceq\sigma\) _is the minimal exposed face containing_ \(\tau\) _then_ \(\tau^{\vartriangle}=\nu^{\vartriangle}\)_._ 3. _If_ \(\tau=\sigma\cap w^{\perp}\preceq\sigma\) _is an exposed face, where_ \(w\in\sigma^{\vee}\)_, then_ \(\tau^{\vartriangle\vartriangle}=\overline{\sigma}\cap w^{\perp}\subseteq\overline{\sigma}=\sigma^{\vee\vee}\)_._ 4. _If_ \(w\in\sigma^{\vee}\) _then_ \((\sigma\cap w^{\perp})^{\vartriangle}\preceq\sigma^{\vee}\) _is the minimal exposed face containing_ \(w\)_._ 5. \(\operatorname{ExFaces}(\sigma)\to\operatorname{ExFaces}(\sigma^{\vee})^{\operatorname{op}}\)_,_ \(\tau\mapsto\tau^{\vartriangle}\) _is a monotone and injective map._ 6. _The map from (5) is bijective if_ \(\sigma\) _is closed._ Proof.: See [18, §2] for some statements and proofs in a similar but slightly less general context; we give a self-contained proof for the convenience of the reader. 1. Without loss of generality, we assume that \(\tau\) is closed, in view of \(\overline{\tau}^{\perp}=\tau^{\perp}\). Since \(V\) is a separable metric space, so is its subset \(\tau\), i.e. there is a dense sequence \((v_{i})\) in \(\tau\). Then \(\tau^{\perp}=\overline{\{v_{0},v_{1},\ldots\}}^{\perp}=\{v_{0},v_{1},\ldots\}^{\perp}=\bigcap_{i\geq 0}v_{i}^{\perp}=v_{0}^{\perp}\cap\cdots\cap v_{n}^{\perp}\) for some \(n\), using \(\dim(V)<\infty\) in the last equality. We get \(\sigma^{\vee}\cap\tau^{\perp}=\sigma^{\vee}\cap v_{0}^{\perp}\cap\cdots\cap v_{n}^{\perp}=\sigma^{\vee}\cap v_{\tau}^{\perp}\) with \(v_{\tau}\coloneqq v_{0}+\cdots+v_{n}\) because of \(w(v_{\tau})=0\iff w(v_{0})=\cdots=w(v_{n})=0\) for any \(w\in\sigma^{\vee}\). Denote by \(\tau^{\prime}\) the common relative interior of \(E(\tau)\) and \(E(\overline{\tau})\). If \(w\in\sigma^{\vee}\) and \(v\in\tau^{\prime}\) then \(w(v)=0\iff w|_{\tau^{\prime}}=0\iff w|_{\tau}=0\). We may assume \(v_{\tau}\in\tau^{\prime}\) by choosing \(v_{i}\in\tau^{\prime}\) above. Varying \(\varepsilon>0\) and \(v^{\prime}\in\tau^{\prime}\), the set of \(v\coloneqq\varepsilon v_{\tau}+v^{\prime}\) covers \(\tau^{\prime}\). Now \(\sigma^{\vee}\cap v^{\perp}=\sigma^{\vee}\cap\tau^{\perp}=\tau^{\vartriangle}\). 2. From \(\tau\subseteq\nu\) we get \(\tau^{\vartriangle}\supseteq\nu^{\vartriangle}\). For the other inclusion, let \(w\in\tau^{\vartriangle}\), i.e. \(w\in V^{*}\) with \(w|_{\sigma}\geq 0\) and \(w|_{\tau}=0\). Then the exposed face \(\sigma\cap w^{\perp}\) cut out by \(w\) contains \(\tau\). As \(\nu\) is the minimal exposed face over \(\tau\), we have \(\nu\subseteq\sigma\cap w^{\perp}\). Taking dual faces again, we get \(\nu^{\vartriangle}\supseteq(\sigma\cap w^{\perp})^{\vartriangle}\ni w\). 3. First, let \(v\in\tau^{\vartriangle\vartriangle}=\overline{\sigma}\cap(\sigma^{\vee}\cap\tau^{\perp})^{\perp}\subseteq V\). From \(w\in\tau^{\vartriangle}=\sigma^{\vee}\cap\tau^{\perp}\), we get \(w(v)=0\), i.e. \(v\in\overline{\sigma}\cap w^{\perp}\). For the reverse inclusion, let \(v\in\overline{\sigma}\cap w^{\perp}\) and \(f\in\sigma^{\vee}\cap\tau^{\perp}\), i.e. \(f|_{\sigma}\geq 0\) and \(0=f|_{\tau}=f|_{\sigma\cap w^{\perp}}\). Because \(f\) is linear, hence continuous, this implies \(f|_{\overline{\sigma}\cap w^{\perp}}=0\).
Therefore \(f(v)=0\) and so \(v\in\tau^{\vartriangle\vartriangle}\). 4. Let \(\kappa=\sigma^{\vee}\cap v^{\perp}\) for some \(v\in\overline{\sigma}\) be an exposed face containing \(w\). We show \((\sigma\cap w^{\perp})^{\vartriangle}\subseteq\kappa\). Let \(f\in(\sigma\cap w^{\perp})^{\vartriangle}\), i.e. \(f\in V^{*}\) with \(f|_{\sigma}\geq 0\) and \(f|_{\sigma\cap w^{\perp}}=0\). We have \(w(v)=0\) from \(w\in\kappa\), hence \(v\in\overline{\sigma}\cap w^{\perp}\). Again by continuity of \(f\), we find \(f(v)=0\), thus \(f\in\kappa\), as claimed. 5. The map \(\tau\mapsto\tau^{\vartriangle}\) is clearly inclusion-reversing. Among (exposed) faces, inclusion equals the face relation, so \(\operatorname{ExFaces}(\sigma)\to\operatorname{ExFaces}(\sigma^{\vee})^{\operatorname{op}}\) is monotone. To check injectivity, we iterate face duality and consider the composition \(\operatorname{ExFaces}(\sigma)\to\operatorname{ExFaces}(\overline{\sigma})\), \(\sigma\cap w^{\perp}\mapsto\overline{\sigma}\cap w^{\perp}\), using part (3) above. The composition is injective, hence so is its first component. 6. If \(\sigma\) is closed then the composition of the previous step, \(\operatorname{ExFaces}(\sigma)\to\operatorname{ExFaces}(\sigma)\), \(\tau\mapsto\tau^{\vartriangle\vartriangle}\) is the identity, and so is the composition \(\operatorname{ExFaces}(\sigma^{\vee})\to\operatorname{ExFaces}(\sigma)\to\operatorname{ExFaces}(\sigma^{\vee})\). The dual face \(\tau^{\vartriangle}\) of a subset \(\tau\subseteq\sigma\) is itself a dual cone in a linear subspace, in view of \(\tau^{\vartriangle}=\sigma^{\vee}\cap\tau^{\perp}=(\sigma/\tau)^{\vee}\). Here \(\sigma/\tau\subseteq V/\mathbb{R}\tau\) and we consider its dual cone \((\sigma/\tau)^{\vee}\subseteq(V/\mathbb{R}\tau)^{*}=\tau^{\perp}\) as a cone in \(V^{*}\). The diagram (5) commutes; in particular dual faces of dual faces of \(\sigma^{\vee}\) are themselves dual faces of \(\sigma^{\vee}\). **Lemma B.4**.: _If \(V=\Lambda_{\mathbb{R}}\) for a lattice \(\Lambda\) and \(\tau\) is a rational cone inside a cone \(\sigma\subseteq\Lambda_{\mathbb{R}}\) then \(\sigma^{\vee}\cap\tau^{\perp}=\sigma^{\vee}\cap\lambda^{\perp}\) for some \(\lambda\in\sigma\cap\Lambda\)._ Proof.: \(\tau\) is rational in \(V=\Lambda_{\mathbb{R}}\), i.e. generated by a subset of \(\Lambda\). Then the subset \(\tau\cap\Lambda_{\mathbb{Q}}\) is dense in \(\tau\). Therefore we can choose the sequence of the proof of Proposition B.3(1) to be of the form \(v_{i}=\lambda_{i}\in\Lambda\). Following the previous argument, \(\sigma^{\vee}\cap\tau^{\perp}=\sigma^{\vee}\cap(\lambda_{0}+\cdots+\lambda_{n})^{\perp}\), as claimed. ### Fans A _fan_ in \(V\) is a subset \(\Sigma\subset\operatorname{Cones}(V)\) such that (a) \(\Sigma\) is closed under taking faces and (b) any intersection of two cones in \(\Sigma\) is a face of each. The _support_ of a fan \(\Sigma\) is the union \(|\Sigma|\coloneqq\bigcup_{\sigma\in\Sigma}\sigma\) of all cones. \(\Sigma\) is _complete_ if \(|\Sigma|=V\). A fan is called _polyhedral_ or _simplicial_ or _rational_ if all cones in the fan have this property. It is called _finite_ if it is a finite collection of cones. Given a fan \(\Sigma\), we denote by \(\Sigma^{\text{full}}\) the subfan consisting of all simplicial, full cones in \(\Sigma\) together with their faces. The next result shows that a fan \(\Sigma\) is determined by its subset of _maximal cones_, i.e. the \(\sigma\in\Sigma\) not contained in a larger cone in \(\Sigma\).
**Lemma B.5**.: _Let \(\Delta\) be a set of cones in \(V\). Then the following are equivalent:_ 1. _The set_ \(\Sigma\) _of all faces of cones in_ \(\Delta\) _is a fan._ 2. _Any two cones in_ \(\Delta\) _intersect in a common face._ Proof.: If \(\Sigma\) is a fan, any two cones in \(\Delta\) intersect in a common face. To prove the converse we use the following two properties: (IF) the intersection of two faces of a cone is a face and (FF) the faces of a face of a cone are precisely the faces of the cone contained in that face. By (FF), \(\Sigma\) is closed under taking faces. Suppose \(\sigma^{\prime}\) and \(\tau^{\prime}\) are faces respectively of \(\sigma,\tau\in\Delta\). By assumption \(\sigma\cap\tau\) is a common face of \(\sigma\) and of \(\tau\). Thus by (IF) \(\sigma^{\prime}\cap\tau=\sigma^{\prime}\cap(\sigma\cap\tau)\) is a face of \(\sigma\). Since it is contained in \(\sigma^{\prime}\) and \(\sigma\cap\tau\), it is a face of each by (FF). Applying (FF) again shows that it is also a face of \(\tau\). Now consider \(\sigma^{\prime}\cap\tau^{\prime}=(\sigma^{\prime}\cap\tau)\cap\tau^{\prime}\). By the above \(\sigma^{\prime}\cap\tau\) is a face of \(\tau\), and therefore by (IF) so is \(\sigma^{\prime}\cap\tau^{\prime}\). Thus by (FF) \(\sigma^{\prime}\cap\tau^{\prime}\) is a face of \(\tau^{\prime}\) and of \(\sigma^{\prime}\cap\tau\). By the first part \(\sigma^{\prime}\cap\tau\) is a face of \(\sigma^{\prime}\), so by (FF) \(\sigma^{\prime}\cap\tau^{\prime}\) is a face of \(\sigma^{\prime}\) too. The analogue for exposed faces is false because property (FF) may fail: an exposed face of an exposed face of a cone need not be exposed in the cone. ### Dual face fans A _dual face fan_ is a set \(\Sigma\) of dual faces of dual cones such that (a) \(\Sigma\) is closed under taking dual faces and (b) any intersection of two cones in \(\Sigma\) is a dual face of each. In Section 3 we generate a dual face fan from a subset \(\mathcal{E}\subset\operatorname{Cones}(V)\) such that the set of all dual faces \(\Sigma\coloneqq\{\tau^{\triangle}\mid\tau\preceq\sigma\in\mathcal{E}\}\subset\operatorname{Cones}(V^{*})\) satisfies (a) and (b); we do not know if every dual face fan arises in this way. Dual face fans are determined by their maximal cones: **Lemma B.6**.: _Let \(\Delta\) be a set of dual faces of dual cones in \(V^{*}\). Then the following are equivalent:_ 1. _The set_ \(\Sigma\) _of all dual faces of cones in_ \(\Delta\) _is a dual face fan._ 2. _Any two cones in_ \(\Delta\) _intersect in a common dual face._ Proof.: The proof is identical to that of Lemma B.5, but using instead the facts that (IF) the intersection of two dual faces of a dual cone is a dual face, because dual faces form a lattice (the dual lattice of exposed faces of the original cone), and (FF) the dual faces of a dual face \((\sigma/\tau)^{\vee}\) of \(\sigma^{\vee}\) are the dual faces of \(\sigma^{\vee}\) contained in \((\sigma/\tau)^{\vee}\), by Lemma B.1(3) and diagram (5). **Corollary B.7**.: _Let \(\Sigma\) be a dual face fan. Then the set \(\Sigma^{\prime}\) of all faces of cones in \(\Sigma\) is a fan._ Proof.: Let \(\Delta\) be the set of maximal cones in \(\Sigma\). Clearly, the set of all faces of cones in \(\Delta\) is the set of all faces of cones in \(\Sigma\). By Lemma B.6 the intersection of any two cones in \(\Delta\) is a common dual face, in particular a common face. Therefore by Lemma B.5 the set of all faces of cones in \(\Delta\) is a fan.
Clearly, \(\Sigma\subseteq\Sigma^{\prime}\) with the same maximal cones and hence the same support. If each maximal cone in \(\Sigma\) is the dual of a polyhedral cone then \(\Sigma=\Sigma^{\prime}\) because each face of a cone in \(\Sigma\) is a dual face by Proposition B.3(6). This occurs, for example, with the fans defining toric varieties. **Remark B.8**.: The converse is false: there are sets \(\Delta\) of dual cones whose faces form a fan, but whose dual faces do not form a dual face fan. For example, consider \(\Delta=\{\sigma^{\vee},\tau^{\vee}\}\) where \(\sigma=\{x<0,y>0\}\) and \(\tau=\{x\geq 0,y\geq 0\}\) are cones in \(\mathbb{R}^{2}\). Here \(\sigma^{\vee}\cap\tau^{\vee}=\{x=0,y\geq 0\}\) is a face of \(\sigma^{\vee}\), but not a dual face: for any non-zero \(v\in\sigma\), the only \(w\in\sigma^{\vee}\) with \(w(v)=0\) is \(w=0\), so the only dual faces of \(\sigma^{\vee}\) are \(\{0\}\) and \(\sigma^{\vee}\) itself.
2305.06894
Reinterpreting causal discovery as the task of predicting unobserved joint statistics
If $X,Y,Z$ denote sets of random variables, two different data sources may contain samples from $P_{X,Y}$ and $P_{Y,Z}$, respectively. We argue that causal discovery can help infer properties of the `unobserved joint distributions' $P_{X,Y,Z}$ or $P_{X,Z}$. The properties may be conditional independences (as in `integrative causal inference') or also quantitative statements about dependences. More generally, we define a learning scenario where the input is a subset of variables and the label is some statistical property of that subset. Sets of jointly observed variables define the training points, while unobserved sets are possible test points. To solve this learning task, we infer, as an intermediate step, a causal model from the observations that then entails properties of unobserved sets. Accordingly, we can define the VC dimension of a class of causal models and derive generalization bounds for the predictions. Here, causal discovery becomes more modest and more accessible to empirical tests than usual: rather than trying to find a causal hypothesis that is `true', a causal hypothesis is {\it useful} whenever it correctly predicts statistical properties of unobserved joint distributions. This way, a sparse causal graph that omits weak influences may be more useful than a dense one (despite being less accurate) because it is able to reconstruct the full joint distribution from marginal distributions of smaller subsets. Within such a `pragmatic' application of causal discovery, some popular heuristic approaches become justified in retrospect. It is, for instance, allowed to infer DAGs from partial correlations instead of conditional independences if the DAGs are only used to predict partial correlations.
Dominik Janzing, Philipp M. Faller, Leena Chennuru Vankadara
2023-05-11T15:30:54Z
http://arxiv.org/abs/2305.06894v1
# Reinterpreting causal discovery as the task of predicting unobserved joint statistics ###### Abstract If \(\mathbf{X},\mathbf{Y},\mathbf{Z}\) denote sets of random variables, two different data sources may contain samples from \(P_{\mathbf{X},\mathbf{Y}}\) and \(P_{\mathbf{Y},\mathbf{Z}}\), respectively. We argue that causal discovery can help infer properties of the 'unobserved joint distributions' \(P_{\mathbf{X},\mathbf{Y},\mathbf{Z}}\) or \(P_{\mathbf{X},\mathbf{Z}}\). The properties may be conditional independences (as in 'integrative causal inference') or also quantitative statements about dependences. More generally, we define a learning scenario where the input is a subset of variables and the label is some statistical property of that subset. Sets of jointly observed variables define the training points, while unobserved sets are possible test points. To solve this learning task, we infer, as an intermediate step, a causal model from the observations that then entails properties of unobserved sets. Accordingly, we can define the VC dimension of a class of causal models and derive generalization bounds for the predictions. Here, causal discovery becomes more modest and more accessible to empirical tests than usual: rather than trying to find a causal hypothesis that is 'true' (which is a problematic term when it is unclear how to define interventions), a causal hypothesis is _useful_ whenever it correctly predicts statistical properties of unobserved joint distributions. This way, a sparse causal graph that omits weak influences may be more useful than a dense one (despite being less accurate) because it is able to reconstruct the full joint distribution from marginal distributions of smaller subsets. Within such a 'pragmatic' application of causal discovery, some popular heuristic approaches become justified in retrospect. It is, for instance, allowed to infer DAGs from partial correlations instead of conditional independences if the DAGs are only used to predict partial correlations. We further sketch why our pragmatic view on causality may even cover the usual meaning in terms of interventions and sketch why predicting the impact of interventions can sometimes also be phrased as a task of the above type. ## 1 Introduction The difficulty of inferring causal relations from purely observational data lies in the fact that observations drawn from a joint distribution \(P_{\mathbf{X}}\) with \(\mathbf{X}:=\{X_{1},\ldots,X_{n}\}\) are supposed to imply statements about how the system behaves under _interventions_ (Pearl, 2000; Spirtes et al., 1993). Specifically, one may be interested in the new joint distribution induced by _setting_ a subset \(\tilde{\mathbf{X}}\subset\mathbf{X}\) of the variables to some specific values. Under this interventional definition of causality, assessing the performance of causal discovery algorithms is highly challenging, primarily due to the absence of datasets with established causal ground truth or even the causal equivalent of a validation set. This issue stems largely from the fact that conducting experiments or interventions is often infeasible, impractical, unethical, or ill-defined to begin with. For example, it is unclear what it means to intervene on the age of a person or the Gross Domestic Product of a country (see Janzing and Mejia (2022) and references therein for a more elaborate discussion of the ill-definedness of interventions).
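For reference, setting \(\tilde{\mathbf{X}}\subset\mathbf{X}\) to values \(\tilde{\mathbf{x}}\) acts on a causal Bayesian network \(p(x_{1},\ldots,x_{n})=\prod_{i}p(x_{i}\mid pa_{i})\) by the standard truncated factorization of Pearl (2000), \[p^{do(\tilde{\mathbf{X}}=\tilde{\mathbf{x}})}(x_{1},\ldots,x_{n})=\prod_{i:\,X_{i}\notin\tilde{\mathbf{X}}}p(x_{i}\mid pa_{i})\] for values consistent with \(\tilde{\mathbf{x}}\) (and \(0\) otherwise), where \(pa_{i}\) denotes the parents of \(X_{i}\) in the DAG.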
**Utility of causal information without reference to interventions.** The utility of causal models goes beyond the sole objective of predicting system behavior under interventions. For example, causal information can be useful in facilitating knowledge transfer across datasets from different distributions (Scholkopf et al., 2012). Among the numerous other ways in which causal models can be useful, we particularly emphasize the utility of causal models in predicting statistical properties of _unobserved joint distributions_,1 by leveraging multiple heterogeneous datasets with overlapping variables. Assume we have access to datasets \(D_{1},\ldots,D_{k}\) containing observations from different, but overlapping sets \(S_{1},\ldots,S_{k}\subset\{X_{1},X_{2},\cdots,X_{n}\}\) of variables. Joint causal models over the variables \(\{X_{1},X_{2},\cdots,X_{n}\}\) learned from datasets \(S_{1},\ldots,S_{k}\) then entail statistical properties such as conditional independences over subsets of variables for which no joint observations are available. For instance, various methods under the umbrella of integrative causal inference learn such joint causal models by first applying causal discovery algorithms independently to datasets \(S_{1},\ldots,S_{k}\) to learn _marginal causal models_ and subsequently constructing a joint causal model that is consistent with the marginal causal models (Danks, 2005; Danks et al., 2008; Claassen and Heskes, 2010; Tsamardinos et al., 2012; Triantafillou and Tsamardinos, 2015; Huang et al., 2020). ### A Pragmatic approach to validating causal discovery Drawing inspiration from this application scenario, we provide a pragmatic approach to validating causal discovery methods. We reframe the problem of causal discovery as the prediction of statistical properties of unobserved joint distributions. Specifically, the learning problem is sketched in Schema (1) (see Figure 1). Regardless of what kind of statistical properties are meant, under this _statistical paradigm_, causal models entail statements that can be empirically tested without referring to an interventional scenario. Consequently, we drop the ambitious demand of finding 'the true' causal model and replace it with a more pragmatic and modest goal of finding causal models that correctly predict unseen joint statistics. **Remark 1**.: _The joint causal model in Schema (1) could be inferred by first inferring marginal causal models (as in integrative causal inference) or directly from the statistical properties of marginal distributions. This distinction is irrelevant for our discussion._ ### Why causal models? It is not obvious why inferring properties of unobserved joint distributions from observed ones should take the 'detour' via causal models as visualized in (1). (Figure 1: Schema depicting the reframing of causal discovery as a statistical learning problem.) One could also define a class of _statistical_ models (that is, a class of joint distributions without any causal interpretation) that is sufficiently small to yield definite predictions for the desired properties. However, causal models can naturally incorporate causal prior knowledge or causal inductive biases and thereby yield stronger predictions than what may be possible via models without causal semantics. To motivate this idea, let us consider a simple example. **Example 1** (Why causal models are helpful).: _Assume we are given variables \(X,Y,Z\) where we observed \(P_{X,Y}\) and \(P_{Y,Z}\). The extension to \(P_{X,Y,Z}\) is heavily underdetermined. 
Now assume that we have the additional causal information that \(X\) causes \(Y\) and \(Y\) causes \(Z\) (see Figure 2, left), in the sense that both pairs are causally sufficient (see Remark 2). In other words, neither \(X\) and \(Y\) nor \(Y\) and \(Z\) have a common cause. This information can be the result of some bivariate causal discovery algorithm that is able to exclude confounding. Given that there is, for instance, an additive noise model from \(Y\) to \(Z\) (Kano and Shimizu, 2003; Hoyer et al., 2009), a confounder is unlikely because it would typically destroy the independence of the additive noise term._ **Entire causal structure:**_We can then infer the entire causal structure to be the causal chain \(X\to Y\to Z\) for the following reasons. First we show that \(X,Y,Z\) is a causally sufficient set of variables: A common cause of \(X\) and \(Z\) would be a common cause of \(Y\) and \(Z\), too. The pairs \((X,Y)\) and \((Y,Z)\) both have no common causes by assumption. One checks easily that no DAG with \(3\) arrows leaves both pairs unconfounded. Checking all DAGs on \(X,Y,Z\) with \(2\) arrows that have a path from \(X\) to \(Y\) and from \(Y\) to \(Z\), we end up with the causal chain in Figure 2, middle, as the only option._ **Resulting joint distribution:**_This implies \(X\perp\!\!\!\perp Z\,|Y.\) Therefore, \(P_{X,Y,Z}=P_{X,Y}P_{Z|Y}.\)_ Despite its simplicity, Example 1 demonstrates how incorporating causal prior knowledge into the class of causal models can yield particularly strong predictions about the joint distribution. (Figure 2: A simple example where causal information allows to 'glue' two distributions to a unique joint distribution.) It is worth noting here that statistical models lack these implications for the joint distribution, as it remains unclear how they can naturally leverage causal prior knowledge to 'glue' marginal distributions. **Remark 2**.: _Note that we have neglected a subtle issue in discussing Example 1. There are several different notions of what it means that \(X\) causes \(Y\) in a causally sufficient way: We have above used the purely graphical criterion asking whether there is some variable \(Z\) having directed paths to \(X\) and \(Y\). An alternative option for defining that \(X\) influences \(Y\) in a causally sufficient way would be to demand that \(P_{Y}^{do(X=x)}=P_{Y|X=x}\). This condition is called 'interventional sufficiency' in Peters et al. (2017), a condition that is testable by interventions on \(X\) without referring to a larger background DAG in which \(X\) and \(Y\) are embedded. This condition, however, is weaker than the graphical one and not sufficient for the above argument. This is because one could add the link \(X\to Z\) to the chain \(X\to Y\to Z\) and still observe that \(P_{Z}^{do(Y=y)}=P_{Z|Y=y}\), as detailed by Example 9.2 in Peters et al. (2017). Therefore, we stick to the graphical criterion of causal sufficiency and justify this by the fact that for 'generic' parameter values it coincides with interventional sufficiency (which would actually be the more reasonable criterion)._ 
Causal marginal problem. The idea that causal constraints can impose meaningful biases on the class of causal models can be supported by a more general motivation of the _causal marginal problem_. Given marginal distributions \(P_{S_{1}},\ldots,P_{S_{k}}\) on sets of variables \(S_{1},\ldots,S_{k}\), the problem of existence and uniqueness of the joint distribution \(P_{S_{1}\cup\cdots\cup S_{k}}\) that is consistent with the marginals is usually referred to as the _(probabilistic) marginal problem_ (Vorob'ev, 1962; Kellerer, 1964). Janzing (2018)2 introduces the causal marginal problem as follows. Given marginal causal models \(M_{1},\ldots,M_{k}\) over distinct but overlapping sets of variables \(S_{1},\ldots,S_{k}\) respectively, is there a joint causal model \(M\) over \(S_{1}\cup\cdots\cup S_{k}\) that is _consistent_ with the marginal causal models? A joint causal model \(M\) is counterfactually (interventionally) consistent with a marginal model \(M_{i}\) when they agree on all counterfactual (interventional) distributions. Such counterfactual or interventional consistency constraints can impose causal inductive biases over classes of causal models which clearly do not apply to statistical models. For instance, without formalizing this claim, Example 1 suggests that the causal marginal problem may have a unique solution even when the (probabilistic) marginal problem does not (Janzing, 2016) - subject to some genericity assumption explained above. Gresele et al. (2022) formally study the special case where three binary variables \(X,Y,Z\) are linked via a V-structure and marginal distributions for \(X,Y\) and \(Y,Z\) are given. They show that in this scenario the counterfactual consistency constraints restrict the space of possible joint causal models. Guo et al. (2023) introduce the term 'out-of-Variable (OOV) generalization' for an agent's ability to handle new situations that involve variables never jointly observed before. As a toy example of OOV generalization, they study prediction from distinct yet overlapping causal parents, that is, a scenario where causal directions are known. In contrast, we focus on causal discovery where inferring causal directions is an essential part of the learning task. With this motivation in mind, we can now summarize the main contributions of this work. ### Our contributions The primary contribution of this work is the reinterpretation of the problem of causal discovery as the prediction of joint statistics of unobserved random variables in the sense described in Section 1.1. This allows us to formalize and study this problem in the framework of statistical learning theory. More explicitly, 1. We formalize the problem of causal discovery as a standard prediction task where the input is a subset (or an ordered tuple) of variables for which we want to test some statistical property. The output is a statistical property of that subset (or tuple). This way, each observed variable set defines a _training_ point for inferring the causal model while the unobserved variable sets are the _test_ instances. (Section 2). 2. After reinterpreting causal discovery this way, classes of causal models become function classes whose richness can be measured via VC dimension. The problem then becomes directly accessible to statistical learning theory in the following sense. Assume we have found a causal model that is consistent with the statistical properties of a large number of observed subsets. We can then hope that it also correctly predicts properties of unobserved subsets, provided that the causal model has been taken from a sufficiently 'small' class to avoid overfitting on the set of observed statistical properties (see Remark 3). 
This 'radical empirical' point of view can be developed even further: rather than asking whether some statistical property like statistical independence is 'true', we only ask whether the test at hand rejects or accepts it.3 In our reinterpretation we need not ascribe a 'metaphysical' character to statistical independences. Hence we can replace the term 'statistical properties' in the scheme (1) with 'test results'. In fact, we do not even have to assume that there is a 'true' underlying causal model. We use causal models (a priori) just as predictors of statistical properties. (Section 3). Footnote 3: Asking whether two variables are 'in fact' statistically independent does not make sense for an empirical sample unless the sample is thought to be part of an infinite sample – which is problematic in our finite world. 3. By straightforward application of VC learning theory, we then derive error bounds for the predicted statistical properties and discuss how they can be used as guidance for constructing causal hypotheses from not-too-rich classes of hypotheses. (Section 4). 4. We also provide an experimental evaluation of two scenarios with simulated data where we compare prediction errors with our error bounds. (Section 5). 5. Finally, we revisit some conceptual and practical problems of causal discovery and argue that our pragmatic view offers a slightly different perspective that is potentially helpful. (Section 6). **Remark 3** (**Benign overfitting**).: _If there is no label noise, say if all the conditional independences are correctly prescribed in the observed datasets, then models that overfit to the observed data (for example, the true DAG) may be superior to those that do not. Generalization properties for such estimators in the absence of label noise are very well studied (Haussler, 1992; Bartlett and Long, 2021). However, in our empirical viewpoint, we can safely omit this discussion. Further note that recent work has shown that the phenomenon of benign overfitting - overfitting to the training data and yet achieving close-to-optimal generalization - can be observed even in the presence of label noise when learning with overparameterized model classes, due to some form of implicit regularization (Belkin et al., 2019a,b; Liang and Rakhlin, 2020; Tsigler and Bartlett, 2020; Bartlett et al., 2020; Muthukumar et al., 2020). In this paper, we focus on the classical underparameterized regime and refrain from the discussion of generalization in overparameterized systems._ ### Related work The field of integrative causal inference (Tsamardinos et al., 2012) is the work that is closest to the present paper. Tsamardinos et al. (2012) provide algorithms that use causal inference to combine knowledge from heterogeneous data sets and to predict conditional independences between variables that have not been jointly observed. In contrast, our main contribution is the conceptual task of framing causal discovery as a predictive problem, which allows causal models to be empirically testable in the iid scenario, thereby setting the scene for learning theory. Furthermore, in contrast to Tsamardinos et al. (2012), the term 'statistical properties' in our setting need not necessarily refer to conditional independences. There is a broad variety of new approaches that infer causal directions from statistical properties other than conditional independences: Kano and Shimizu (2003); Sun et al. (2006); Hoyer et al. (2009); Zhang and Hyvarinen (2009); Daniusis et al. (2010); Janzing et al. 
(2009); Stegle et al. (2010); Peters et al. (2010); Mooij et al. (2016); Marx and Vreeken (2017). On the other hand, the causal model inferred from the observations may entail statistical properties other than conditional independences - subject to the model assumptions on which the above-mentioned inference procedures rely. ## 2 The formal setting Below we will usually refer to some given set of variables \(S:=\{X_{1},\ldots,X_{n}\}\) whose subsets are considered. Whenever this cannot cause any confusion, we will not carefully distinguish between the _set_ \(S\) and the _vector_ \(\mathbf{X}:=(X_{1},\ldots,X_{n})\) and also use the term 'joint distribution \(P_{S}\)' although the order of variables certainly matters. ### Statistical properties Statistical properties are the crucial concept of this work. On the one hand, they are used to infer causal structure. On the other hand, causal structure is used to predict statistical properties. **Definition 1** (statistical property).: _Let \(S=\{X_{1},\ldots,X_{n}\}\) be a set of variables. A statistical property \(Q\) with range \(\mathcal{Y}\) is given by a function_ \[Q:\Delta_{S}\rightarrow\mathcal{Y},\] _where \(\Delta_{S}\) denotes the space of joint distributions of \(k\)-tuples of variables \(Y_{1},\ldots,Y_{k}\in S\) and \(\mathcal{Y}\) denotes some output space. Often we will consider binary or real-valued properties, that is, \(\mathcal{Y}=\{0,1\}\), \(\mathcal{Y}=\{-1,+1\}\), or \(\mathcal{Y}=\mathbb{R}\), respectively._ By slightly abusing terminology, the term 'statistical property' will sometimes refer to the value in \(\mathcal{Y}\) that is the output of \(Q\) or to the function \(Q\) itself. This will, hopefully, cause no confusion. Here, \(Q\) may be defined for fixed size \(k\) or for general \(k\). Moreover, we will consider properties that depend on the ordering of the variables \(Y_{1},\ldots,Y_{k}\), those that do not depend on it, or those that are invariant under some permutations of the \(k\) variables. This will be clear from the context. We will refer to \(k\)-tuples for which part of the order matters as 'partly ordered tuples'. To give an impression of the variety of statistical properties we conclude the section with a list of examples. We start with an example of a binary property that does not refer to an ordering: **Example 2** (statistical independence).: \[Q(P_{Y_{1},\ldots,Y_{k}})=\left\{\begin{array}{ll}1&\mbox{for $Y_{j}$ jointly independent}\\ 0&\mbox{otherwise}\end{array}\right.\] The following binary property allows for some permutations of variables: **Example 3** (conditional independence or partial uncorrelatedness).: \[Q(P_{Y_{1},\ldots,Y_{k}})=\left\{\begin{array}{ll}1&\mbox{for $Y_{1}\perp\!\!\!\perp Y_{2}\,|\,Y_{3},\ldots,Y_{k}$}\\ 0&\mbox{otherwise}\end{array}\right.\] _Likewise, \(Q(P_{Y_{1},\ldots,Y_{k}})\) could indicate whether \(Y_{1}\) and \(Y_{2}\) have zero partial correlation, given \(Y_{3},\ldots,Y_{k}\) (that is, whether they are uncorrelated after linear regression on \(Y_{3},\ldots,Y_{k}\))._ To emphasize that our causal models are not only used to predict conditional independences but also other statistical properties, we also mention linear additive noise models (Kano and Shimizu, 2003): **Example 4** (existence of linear additive noise models).: \(Q(P_{Y_{1},\ldots,Y_{k}})=1\) _if and only if there is a lower triangular matrix \(A\) with entries \(A_{ij}\) such that_ \[Y_{i}=\sum_{j<i}A_{ij}Y_{j}+N_{i}, \tag{1}\] _where \(N_{1},\ldots,N_{k}\) are jointly independent noise variables. 
If no such linear additive noise model exists, we set \(Q(P_{Y_{1},\ldots,Y_{k}})=0\)._ Lower triangularity means that there is a DAG such that \(A\) has non-zero entries \(A_{ij}\) whenever there is an arrow from \(j\) to \(i\). Here, the entire order of variables matters. Then (1) is a linear structural equation. Whenever the noise variables \(N_{j}\) are non-Gaussian, linear additive noise models allow for the unique identification of the causal DAG (Kano and Shimizu, 2003) if one assumes that the true generating process has been linear. Then, \(Q(P_{Y_{1},\ldots,Y_{k}})=1\) holds for those orderings of variables that are compatible with the true DAG. This way, we have a statistical property that is directly linked to the causal structure (subject to a strong assumption, of course). The following simple binary property will also play a role later: **Example 5** (sign of correlations).: _Whether a pair of random variables is positively or negatively correlated defines a simple binary property in a scenario where all variables are correlated:_ \[Q(P_{Y_{1},Y_{2}})=\left\{\begin{array}{cl}1&\mbox{ if }\mathrm{cov}(Y_{1},Y_{2})>0\\ -1&\mbox{ if }\mathrm{cov}(Y_{1},Y_{2})<0\end{array}\right.\] Note that positivity of the covariance matrix already restricts \(Q\), but we will later see a causal model class that restricts \(Q\) even further beyond this constraint. Finally, we mention a statistical property that is not binary but positive-semidefinite matrix-valued: **Example 6** (covariances and correlations).: _For \(k\) variables \(Y_{1},\ldots,Y_{k}\) let \(\mathcal{Y}\) be the set of positive semi-definite matrices. Then define_ \[Q:P_{Y_{1},\ldots,Y_{k}}\mapsto\Sigma_{Y_{1},\ldots,Y_{k}},\] _where \(\Sigma_{Y_{1},\ldots,Y_{k}}\) denotes the joint covariance matrix of \(Y_{1},\ldots,Y_{k}\). For \(k=2\), one can also get a real-valued property by focusing on the off-diagonal term. One may then define a map \(Q\) by_ \[Q(P_{Y_{1},Y_{2}}):=\mathrm{cov}(Y_{1},Y_{2}),\] _or alternatively, if one prefers correlations, define_ \[Q(P_{Y_{1},Y_{2}}):=\mathrm{corr}(Y_{1},Y_{2}).\] ### Statistical and causal models The idea of this paper is that causal models are used to predict statistical properties, but a priori, the models need not be causal. One can use Bayesian networks, for instance, to encode conditional statistical independences with or without interpreting the arrows as formalizing causal influence. For the formalism introduced in this section it does not matter whether one interprets the models as causal or not. Example 1, however, suggested that model classes that come with a causal semantics are particularly intuitive regarding the statistical properties they predict. We now introduce our notion of 'models': **Definition 2** (models for a statistical property).: _Given a set \(S:=\{X_{1},\ldots,X_{n}\}\) of variables and some statistical property \(Q\), a model \(M\) for \(Q\) is a class of joint distributions \(P_{X_{1},\ldots,X_{n}}\) that coincide regarding the output of \(Q\), that is,_ \[Q(P_{Y_{1},\ldots,Y_{k}})=Q(P^{\prime}_{Y_{1},\ldots,Y_{k}})\quad\forall P_{Y_{1},\ldots,Y_{k}},P^{\prime}_{Y_{1},\ldots,Y_{k}}\in M,\] _where4 \(Y_{1},\ldots,Y_{k}\in S\). Accordingly, the property \(Q_{M}\) predicted by the model \(M\) is given by a function_ Footnote 4: To avoid threefold indices we use \(Y_{j}\) instead of \(X_{i_{j}}\) here. 
\[(Y_{1},\ldots,Y_{k})\mapsto Q_{M}\left[(Y_{1},\ldots,Y_{k})\right]:=Q(P_{Y_{1},\ldots,Y_{k}}),\] _for all \(P_{X_{1},\ldots,X_{n}}\) in \(M\), where \((Y_{1},\ldots,Y_{k})\) runs over all allowed input (partly ordered) tuples of \(Q\)._ Formally, the 'partly ordered tuples' are equivalence classes in \(S^{k}\), where equivalence corresponds to irrelevant reorderings of the tuple. To avoid cumbersome formalism, we will just refer to these equivalence classes as 'the allowed inputs'. Later, such a model will be, for instance, a DAG \(G\), and the property \(Q\) formalizes all conditional independences that hold for the respective Markov equivalence class. To understand the above terminology, note that \(Q\) receives a distribution as input and the output of \(Q\) tells us the respective property of the distribution (e.g. whether independence holds). In contrast, \(Q_{M}\) receives a set of nodes (variables) of the DAG as input and tells us the property entailed by \(M\). The goal will be to find a model \(M\) for which \(Q_{M}\) and \(Q\) coincide for the majority of observed tuples of variables. We now describe a few examples of causal models as predictors of statistical properties that we have in mind. Our most prominent one reads: **Example 7** (DAG as model for conditional independences).: _Let \(G\) be a DAG with nodes \(S:=\{X_{1},\ldots,X_{n}\}\) and \(Q\) be the set of conditional independences as in Example 3. Then, let \(Q_{G}\) be the function on \(k\)-tuples from \(S\) defined by_ \[Q_{G}\left[(Y_{1},\ldots,Y_{k})\right]:=0\] _if and only if the Markov condition implies \(Y_{1}\perp\!\!\!\perp Y_{2}\,|\,Y_{3},\ldots,Y_{k}\), and_ \[Q_{G}\left[(Y_{1},\ldots,Y_{k})\right]:=1\] _otherwise._ Note that \(Q_{G}(.)=1\) does not mean that the Markov condition implies dependence; it only says that it does not imply independence. However, if we think of \(G\) as a causal DAG, the common assumption of causal faithfulness (Spirtes et al., 1993) states that all dependences that are allowed by the Markov condition occur in reality. Adopting this assumption, we will therefore interpret \(Q_{G}\) as a function that predicts dependence or independence, instead of making no prediction if the Markov condition allows dependence. 
We also mention a particularly simple class of DAGs that will appear as an interesting example later: **Example 8** (DAGs consisting of a single colliderfree path).: _Let \(\mathcal{G}\) be the set of DAGs that consist of a single colliderfree path_ \[X_{\pi(1)}-X_{\pi(2)}-X_{\pi(3)}-\cdots-X_{\pi(n)},\] _where the directions of the arrows are such that there is no variable with two arrowheads. Colliderfree paths have the important property that any dependence between two non-adjacent nodes is screened off by any variable that lies between the two nodes, that is,_ \[X_{j}\perp\!\!\!\perp X_{k}\,|\,X_{l},\] _whenever \(X_{l}\) lies between \(X_{j}\) and \(X_{k}\). If one assumes, in addition, that the joint distribution is Gaussian, the partial correlation between \(X_{j}\) and \(X_{k}\), given \(X_{l}\), vanishes. This implies that the correlation coefficient of any two nodes is given by the product of pairwise correlations along the path:_ \[\mathrm{corr}(X_{j},X_{k})=\prod_{i=\pi^{-1}(j)}^{\pi^{-1}(k)-1}\mathrm{corr}(X_{\pi(i)},X_{\pi(i+1)})=:\prod_{i=\pi^{-1}(j)}^{\pi^{-1}(k)-1}r_{i}. \tag{2}\] _This follows easily by induction because \(\mathrm{corr}(X,Z)=\mathrm{corr}(X,Y)\,\mathrm{corr}(Y,Z)\) for any three variables \(X,Y,Z\) with \(X\perp\!\!\!\perp Z\,|\,Y\). Therefore, such a DAG, together with all the correlations between adjacent nodes, predicts all pairwise correlations. We therefore specify our model by \(M:=(\pi,r)\), that is, the ordering of nodes and the correlations of adjacent nodes._ The following example shows that a DAG can entail also properties that are more sophisticated than just conditional independences and correlations: **Example 9** (DAGs and linear non-Gaussian additive noise).: _Let \(G\) be a DAG with nodes \(S:=\{X_{1},\ldots,X_{n}\}\) and \(Q\) be the linear additive noise property in Example 4. Let \(Q_{G}\) be the function on \(k\)-tuples from \(S\) defined by_ \[Q_{G}((Y_{1},\ldots,Y_{k})):=1\] _if and only if the following two conditions hold: (1) \(Y_{1},\ldots,Y_{k}\) is a causally sufficient subset of \(S\) in \(G\), that is, no two different \(Y_{i},Y_{j}\) have a common ancestor in \(G\), and (2) the ordering \(Y_{1},\ldots,Y_{k}\) is consistent with \(G\), that is, \(Y_{j}\) is not an ancestor of \(Y_{i}\) in \(G\) for any \(i<j\)._ In contrast to \(Q\) from Example 4, \(Q_{G}\) predicts from the graphical structure whether the joint distribution of some subset of variables admits a linear additive noise model. The idea is the following. Assuming that the entire joint distribution of all \(n\) variables has been generated by a linear additive noise model (Kano and Shimizu, 2003), any \(k\)-tuple \((Y_{1},\ldots,Y_{k})\) also admits a linear additive noise model provided that (1) and (2) hold. This is because marginalizations of linear additive noise models remain linear additive noise models whenever one does not marginalize over common ancestors.5 Hence, conditions (1) and (2) are clearly sufficient. For generic parameter values of the underlying linear model the two conditions are also necessary because linear non-Gaussian models render causal directions uniquely identifiable and also admit the detection of hidden common causes (Hoyer et al., 2008). Footnote 5: Note that the class of _non-linear_ additive noise models (Hoyer et al., 2009) is not closed under marginalization. 
### Testing properties on data So far we have introduced statistical properties as mathematical properties of distributions. In real-world applications, however, we want to predict the outcome of a test on empirical data. The task is no longer to predict whether some set of variables is 'really' conditionally independent; we just want to predict whether the statistical test at hand accepts independence. Whether or not the test is appropriate for the respective mathematical property \(Q\) is not relevant for the generalization bounds derived later. If one infers DAGs, for instance, by partial correlations and uses these DAGs only to infer partial correlations, it does not matter that non-linear relations actually prohibit replacing conditional independences with partial correlations. The reader may get confused by these remarks because now there seems to be no requirement on the tests at all if a test is not supposed to be a good test for the mathematical property \(Q\). This is a difficult question. One can say, however, that for a test that is entirely unrelated to some property \(Q\) we have no guidance as to what outcomes of our test a causal hypothesis should predict. The fact that partial correlations, despite all their limitations, approximate conditional independence does provide some justification for expecting vanishing partial correlations in many cases where there is d-separation in the causal DAG. We first specify the information provided by a data set. **Definition 3** (data set).: _Each data set \(D_{j}\) is an \(l_{j}\times k_{j}\) matrix of observations, where \(l_{j}\) denotes the sample size and \(k_{j}\) the number of variables. Further, the dataset contains a \(k_{j}\)-tuple of values from \(\{1,\ldots,n\}\) specifying the \(k_{j}\) variables \(Y_{1},\ldots,Y_{k_{j}}\subset\{X_{1},\ldots,X_{n}\}\) the samples refer to._ To check whether the variables under consideration in fact satisfy the property predicted by the model we need some statistical test (in the case of binary properties) or an estimator (in the case of real-valued or other properties). Let us say that we are given some test or estimator for a property \(Q\), formally defined as follows: **Definition 4** (statistical test / estimator for \(Q\)).: _A test (respectively, an estimator for non-binary properties) for the statistical property \(Q\) with range \(\mathcal{Y}\) is a map_ \[Q_{T}:D\mapsto Q_{T}(D)\in\mathcal{Y},\] _where \(D\) is a data set that involves the observed instances of \(Y_{1},\ldots,Y_{k}\), where \((Y_{1},\ldots,Y_{k})\) is a partly ordered tuple that defines an allowed input of \(Q\). \(Q_{T}(D)\) is thought to indicate the outcome of the test or the estimated value, respectively._ ### Phrasing the task as a standard prediction problem Our learning problem now reads: given the data sets \(D_{1},\ldots,D_{l}\) with the \(k\)-tuples \(S_{1},\ldots,S_{l}\) of variables, find a model \(M\) such that \(Q_{M}(S_{j})=Q_{T}(D_{j})\) for all data sets \(j=1,\ldots,l\) or, less demanding, for most of the data sets. However, more importantly, we would like to choose \(M\) such that \(Q_{M}(S_{l+1})=Q_{T}(D_{l+1})\) will most probably also hold for a _future_ data set \(D_{l+1}\). The problem of constructing a causal model now becomes a standard learning problem where the training as well as the test examples are _data sets_. Note that also Lopez-Paz et al. 
(2015) phrased a causal discovery problem as a standard learning problem. There, the task was to classify two variables as 'cause' and 'effect' after getting a large number of cause-effect pairs as training examples. Here, however, the data sets refer to observations from different subsets of variables that are assumed to follow a joint distribution over the union of all variables occurring in any of the data sets. Having phrased our problem as a standard prediction scenario whose inputs are subsets of variables, we now introduce the usual notion of empirical error on the training data accordingly: **Definition 5** (empirical error).: _Let \(Q\) be a statistical property, \(Q_{T}\) a statistical test, and \(D:=\{D_{1},\ldots,D_{k}\}\) a collection of data sets referring to the variable tuples \(S_{1},\ldots,S_{k}\). Then the empirical training error of model \(M\) is defined by_ \[L(M):=\frac{1}{k}\sum_{j=1}^{k}|Q_{T}(D_{j})-Q_{M}(S_{D_{j}})|.\] Note that our theory does not prefer one model over another if they agree in terms of the predictions they make. For example, with the independence property as defined in Example 7, all DAGs in the same Markov equivalence class are also equivalent with respect to the empirical error \(L(M)\). This further emphasizes that we are not necessarily looking for a _true_ model. To see why this paradigm change can be helpful, assume there was a ground truth model \(M\) for which the test \(Q_{T}\) correctly outputs the statistical properties. Then \(M\) will certainly be one of the optimal models. Yet, if we are bound to make errors (either in evaluating the property via \(Q_{T}\) or in estimating \(M\)), it becomes less obvious which model to pick. Graphical metrics like the structural Hamming distance (SHD) (Tsamardinos et al., 2006) are frequently used to benchmark causal discovery, although it is still an open debate how to quantify the quality of causal models (Gentzel et al., 2019). In our framework the focus is shifted. Here, causal models are seen as predictors of statistical properties. Therefore the best model is the one that predicts the statistical properties most accurately. In this sense, we do not even need to reference a 'true' model. 
Consider the following example. **Example 10** (Ground truth vs. prediction).: _Assume there is a true causal model for the variables \(X,Y,Z\), such that \(X\) causes \(Y\) and \(Z\) is a confounder, as visualized in Fig. 3(a). Further assume that the confounding effect of \(Z\) is very weak, to the extent that the statistical independence test \(Q_{T}\) outputs \(X\perp\!\!\!\perp Z\) and \(Z\perp\!\!\!\perp Y\). Let \(\hat{G}_{1}\) be a graph that reflects these independences, as shown in Fig. 3(b). In the sense of our framework, \(\hat{G}_{1}\) is a good predictor of \(Q_{T}\), as for every tuple of variables (i.e. dataset) it correctly predicts the output of the independence tests. Now consider \(\hat{G}_{2}\) in Fig. 3(c). \(\hat{G}_{2}\) is closer to the ground truth \(G\) with respect to SHD. Yet, it reflects the observed independences quite poorly. So if we are interested in the output of the independence tests and the data at hand, \(\hat{G}_{1}\) is more 'useful' than \(\hat{G}_{2}\). We do not claim, though, that \(\hat{G}_{1}\) is generally better or worse than \(\hat{G}_{2}\); the statement is to be understood with respect to the statistical tests considered._ Finding a model \(M\) for which the training error is small does not guarantee, however, that the error will also be small for future test data. If \(M\) has been chosen from a 'too rich' class of models, the small training error may be a result of overfitting. Fortunately, we have phrased our learning problem in a way that the richness of a class of causal models can be quantified by standard concepts from statistical learning theory. This will be discussed in the following section. ## 3 Capacity of classes of causal models We have formally phrased our problem as a prediction problem where the task is to predict the outcome in \(\mathcal{Y}\) of \(Q_{T}\) for some test \(T\) applied to an unobserved variable set. We now assume that we are given a class of models \(\mathcal{M}\) defining statistical properties \((Q_{M})_{M\in\mathcal{M}}\) that are supposed to predict the outcomes of \(Q_{T}\). ### Binary properties (Figure 3: Let \(G\) be the true model. If \(Q_{T}\) falsely outputs \(X\perp\!\!\!\perp Z\) and \(Z\perp\!\!\!\perp Y\), then \(\hat{G}_{1}\) is a better predictor of \(Q_{T}\) than \(\hat{G}_{2}\), even though \(\hat{G}_{2}\) is graphically closer to the ground truth \(G\).) Given some binary statistical property, we can straightforwardly apply the notion of VC dimension (Vapnik, 1998) to classes \(\mathcal{M}\) and define: **Definition 6** (VC dimension of a model class for binary properties).: _Let \(S\) be a set of variables \(X_{1},\ldots,X_{n}\) and \(Q\) be a binary property. Let \(\mathcal{M}\) be a class of models for \(Q\), that is, each \(M\in\mathcal{M}\) defines a map_ \[Q_{M}:(Y_{1},\ldots,Y_{k})\mapsto Q_{M}\left[(Y_{1},\ldots,Y_{k})\right]\in\{0,1\}.\] _Then the VC dimension of \(\mathcal{M}\) is the largest number \(h\) such that there are \(h\) allowed inputs \(S_{1},\ldots,S_{h}\) for \(Q_{M}\) such that the restriction of all \(M\in\mathcal{M}\) to \(S_{1},\ldots,S_{h}\) runs over all \(2^{h}\) possible binary functions._ Since our model classes are thought to be given by causal hypotheses, the following class is our most important example, although we will later further restrict the class to get stronger generalization bounds: **Lemma 1** (VC dimension of conditional independences entailed by DAGs).: _Let \(\mathcal{G}\) be the set of DAGs with nodes \(X_{1},\ldots,X_{n}\). For every \(G\in\mathcal{G}\), we define \(Q_{G}\) as in Example 7. Then the VC dimension \(h\) of \((Q_{G})_{G\in\mathcal{G}}\) satisfies_ \[h\leq n\log_{2}n+n(n-1)/2\in O(n^{2}). \tag{3}\]
Proof.: The number \(N_{n}\) of DAGs on \(n\) labeled nodes can easily be upper bounded by the number of orderings times the number of choices to draw an edge or not. This yields \(N_{n}<n!2^{n(n-1)/2}\). Using Stirling's formula we obtain \[n!<e^{1/(12n)}\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}<n^{n},\] and thus \(N_{n}<n^{n}2^{n(n-1)/2}\). Since the VC dimension of a class cannot be larger than the binary logarithm of the number of elements it contains, (3) easily follows. It would actually be desirable to find a bound for the VC dimension that is smaller than the logarithm of the number of classifiers, since that would be a more powerful application of VC theory. We leave this to future work. Note that the number of possible conditional independence tests of the form \(Y_{1}\perp\!\!\!\perp Y_{2}\,|\,Y_{3}\) already grows faster than the VC dimension, namely with the third power.6 Therefore, the class of all DAGs already defines a restriction since it is not able to explain all possible patterns of conditional (in)dependences, even when one conditions on one variable only. Nevertheless, the set of all DAGs may be too large for the number of data sets at hand. We therefore mention the following more restrictive class given by so-called polytrees, that is, DAGs whose skeleton is a tree (hence they contain no undirected cycles). **Lemma 2** (VC dimension of cond. independences entailed by polytrees).: _Let \(\mathcal{G}\) be the set of polytrees with nodes \(X_{1},\ldots,X_{n}\). For every \(G\in\mathcal{G}\), we define \(Q_{G}\) as in Example 7. Then the VC dimension \(h\) of \((Q_{G})_{G\in\mathcal{G}}\) satisfies_ \[h\leq n(\log_{2}n+1). \tag{4}\] Proof.: According to Cayley's formula, the number of trees with \(n\) nodes reads \(n^{n-2}\) (Aigner and Ziegler, 1998). The number of Markov equivalence classes of polytrees with a given skeleton can be bounded from above by \(2^{n-1}-n+1\) (Radhakrishnan et al., 2017). Thus the number of Markov equivalence classes of polytrees is upper bounded by \[n^{n-2}(2^{n-1}-n+1)\leq n^{n-2}2^{n}. \tag{5}\] Again, the bound follows by taking the logarithm. We will later use the following result: **Lemma 3** (VC dimension of sign of correlations along a path).: _Consider the set of DAGs on \(X_{1},\ldots,X_{n}\) that consist of a single colliderfree path as in Example 8 and assume multivariate Gaussianity. The sign of pairwise correlations is then determined by the permutation \(\pi\) that aligns the graph and the signs of the correlations of all adjacent pairs. We thus parameterize a model by \(M:=(\pi,s)\) where the vector \(s:=(s_{1},\ldots,s_{n})\) denotes the signs of the correlations of adjacent nodes. The full model class \(\mathcal{M}\) is obtained when \(\pi\) runs over the entire group of permutations and \(s\) over all combinations in \(\{-1,+1\}^{n}\). Let \(Q\) be the property indicating the sign of the correlation of any two variables as in Example 5. Then the VC dimension of \((Q_{M})_{M\in\mathcal{M}}\) is at most \(n\)._ Proof.: Defining \[s_{j}:=\prod_{i=1}^{\pi^{-1}(j)-1}\operatorname{sign}(\operatorname{corr}(X_{\pi(i)},X_{\pi(i+1)}))\] we obtain \[\mathrm{sign}(\mathrm{corr}(X_{i},X_{j}))=s_{i}s_{j},\] due to (2). Therefore, the signs of all pairwise correlations can be computed from \(s_{1},\ldots,s_{n}\). Since there are \(2^{n}\) possible assignments for these values, \(\mathcal{M}\) thus induces at most \(2^{n}\) functions, and thus the VC dimension is at most \(n\). 
### Real-valued statistical properties We also want to obtain quantitative statements about the strength of dependences and therefore also consider the correlation as an example of a real-valued property. **Lemma 4** (correlations along a path).: _Let \(\mathcal{M}\) be the model class whose elements \(M\) are colliderfree paths together with a list of all correlations of adjacent pairs of nodes, see Example 8. Assuming also multivariate Gaussianity, \(M\), again, defines all pairwise correlations and we can thus define the model-induced property_ \[Q_{M}\left[(X_{j},X_{k})\right]:=\mathrm{corr}_{M}(X_{j},X_{k}),\] _where the term on the right hand side denotes the correlation determined by the model \(M:=(\pi,r)\) as introduced in Example 8. Then the VC dimension of \((Q_{M})_{M\in\mathcal{M}}\) is in \(O(n)\)._ Proof.: We assume, for simplicity, that all correlations are non-zero. To specify the absolute value of the correlation between adjacent nodes we define the parameters \[\beta_{i}:=\log|\mathrm{corr}_{M}(X_{\pi(i-1)},X_{\pi(i)})|.\] To specify the sign of those correlations we define the binary values \[g_{i}:=\left\{\begin{array}{ll}1&\mbox{ for }\mathrm{corr}_{M}(X_{\pi(i-1)},X_{\pi(i)})<0\\ 0&\mbox{ otherwise}\end{array}\right.,\] for all \(i\geq 2\). It will be convenient to introduce the parameters \[\alpha_{j}:=\sum_{i=2}^{j}\beta_{i},\] which are cumulative versions of the 'adjacent log correlations' \(\beta_{i}\). Likewise, we introduce the binaries \[s_{j}:=\left(\sum_{i=2}^{j}g_{i}\right)\operatorname{mod}2,\] which indicate whether the number of negative correlations along the chain from its beginning is odd or even. This way, the correlations between any two nodes can be computed from \(\alpha\) and \(s\): \[\operatorname{corr}_{M}(X_{j},X_{k})=(-1)^{s_{\pi^{-1}(j)}+s_{\pi^{-1}(k)}}\,e^{-|\alpha_{\pi^{-1}(j)}-\alpha_{\pi^{-1}(k)}|}.\] For technical reasons we define corr formally as a function of _ordered_ pairs of variables although it is actually symmetric in \(j\) and \(k\). We are interested in the VC dimension of the family \(F:=(f_{M})_{M\in\mathcal{M}}\) of real-valued functions defined by \[f_{M}(j,k):=\operatorname{corr}_{M}(X_{j},X_{k})=:\rho_{j,k}^{M}.\] Its VC dimension is defined as the VC dimension of the set of classifiers \(C:=(c_{M}^{\gamma})_{M,\gamma}\) with \[c_{M}^{\gamma}(j,k):=\left\{\begin{array}{ll}1&\text{ for }\rho_{j,k}^{M}\geq\gamma\\ 0&\text{ otherwise}\end{array}\right..\] To estimate the VC dimension of \(C\) we compose it from classifiers whose VC dimension is easier to estimate. We first define the family of classifiers given by \(C^{>}:=(c_{\alpha}^{>\theta})_{\alpha\in\mathbb{R}^{n},\theta\in\mathbb{R}}\) with \[c_{\alpha}^{>\theta}(j,k):=\left\{\begin{array}{ll}1&\text{ for }\alpha_{\pi^{-1}(j)}-\alpha_{\pi^{-1}(k)}\geq\theta\\ 0&\text{ otherwise}\end{array}\right..\] Likewise, we define \(C^{<}:=(c_{\alpha}^{<\theta})_{\alpha\in\mathbb{R}^{n},\theta\in\mathbb{R}}\) with \[c_{\alpha}^{<\theta}(j,k):=\left\{\begin{array}{ll}1&\text{ for }\alpha_{\pi^{-1}(j)}-\alpha_{\pi^{-1}(k)}<\theta\\ 0&\text{ otherwise}\end{array}\right..\] The VC dimensions of \(C^{>}\) and \(C^{<}\) are at most \(n+1\) because they are given by linear functions on the space of all possible \(\alpha\in\mathbb{R}^{n}\) (Vapnik, 1995), Section 3.6, Example 1. 
Further, we define a set of classifiers that classify only according to the sign of the correlations: \[S:=(c_{+}^{M})\cup(c_{-}^{M}),\] where \[c_{+}^{M}(j,k):=\left\{\begin{array}{ll}1&\mbox{ if }\rho_{j,k}^{M}\geq 0\\ 0&\mbox{ otherwise}\end{array}\right..\] Likewise, we set \[c_{-}^{M}(j,k):=\left\{\begin{array}{ll}1&\mbox{ if }\rho_{j,k}^{M}<0\\ 0&\mbox{ otherwise}\end{array}\right..\] Since both components of \(S\) have VC dimension \(n\) at most, the VC dimension of \(S\) is in \(O(n)\). For \(\gamma>0\), \(\rho_{j,k}^{M}\geq\gamma\) is equivalent to \[(\rho_{j,k}^{M}\geq 0)\wedge(\alpha_{\pi^{-1}(j)}-\alpha_{\pi^{-1}(k)}\geq\log\gamma)\wedge(\alpha_{\pi^{-1}(k)}-\alpha_{\pi^{-1}(j)}\geq\log\gamma).\] Therefore, \[c_{M}^{\gamma}\in S\sqcap C^{>}\sqcap C^{<},\] for all \(\gamma>0\), where \(\sqcap\) denotes the intersection of 'concept classes' (van der Vaart and Wellner, 2009) given by \[C_{1}\sqcap C_{2}:=(c_{1}\cap c_{2})_{c_{1}\in C_{1},c_{2}\in C_{2}}.\] Likewise, the union of concept classes is given by \[C_{1}\sqcup C_{2}:=(c_{1}\cup c_{2})_{c_{1}\in C_{1},c_{2}\in C_{2}},\] as opposed to the set-theoretic unions and intersections. For \(\gamma<0\), \(\rho_{j,k}^{M}\geq\gamma\) is equivalent to \[(\rho_{j,k}^{M}\geq 0)\vee\left\{(\alpha_{\pi^{-1}(j)}-\alpha_{\pi^{-1}(k)}\geq\log|\gamma|)\wedge(\alpha_{\pi^{-1}(k)}-\alpha_{\pi^{-1}(j)}\geq\log|\gamma|)\right\}.\] Hence, \[c_{M}^{\gamma}\in S\sqcup[C^{>}\sqcap C^{<}],\] for all \(\gamma<0\). We then obtain: \[C\subset(S\sqcap C^{>}\sqcap C^{<})\cup(S\sqcup[C^{>}\sqcap C^{<}]).\] Hence, \(C\) is obtained from concept classes with VC dimension in \(O(n)\) via finitely many concept-class intersections, concept-class unions, and set-theoretic unions. Therefore, \(C\) has VC dimension in \(O(n)\) (van der Vaart and Wellner, 2009). ## 4 Generalization bounds ### Binary properties After we have seen that in our scenario causal models like DAGs define classifiers in the sense of standard learning scenarios, we can use the usual VC bounds like Theorem 6.7 in Vapnik (2006) to guarantee generalization to future data sets. To this end, we need to assume that the data sets are sampled from some distribution of data sets, an assumption that will be discussed at the end of this section. **Theorem 1** (VC generalization bound).: _Let \(Q_{T}\) be a statistical test for some binary statistical property and \(\mathcal{M}\) be a model class with VC dimension \(h\) defining some model-induced property \(Q_{M}\). Let \(k\) data sets \(D_{1},\ldots,D_{k}\) be sampled according to a distribution \(P_{D}\). Then_ \[\mathbb{E}\left[|Q_{T}(D)-Q_{M}(D)|\right]\leq\frac{1}{k}\sum_{j=1}^{k}|Q_{T}(D_{j})-Q_{M}(S_{D_{j}})|+2\sqrt{\frac{h\left(\ln\frac{2k}{h}+1\right)-\ln\frac{\eta}{9}}{k}} \tag{6}\] _with probability \(1-\eta\)._ It thus suffices to increase the number of data sets slightly faster than the VC dimension. 
To illustrate how to apply Theorem 1 we recall the class of polytrees in Lemma 2. An interesting property of polytrees is that every pair of non-adjacent nodes can already be rendered conditionally independent by one appropriate intermediate node. This is because there is always at most one (undirected) path connecting them. Moreover, for any two nodes \(X,Y\) that are not too close together in the DAG, there is a realistic chance that some randomly chosen \(Z\) satisfies \(X\perp\!\!\!\perp Y\,|\,Z\). Therefore, we consider the following scenario: 1. Draw \(k\) triples \((Y_{1},Y_{2},Y_{3})\) uniformly at random and check whether \(Y_{1}\perp\!\!\!\perp Y_{2}\,|\,Y_{3}\). 2. Search for a polytree \(G\) that is consistent with the \(k\) observed (in)dependences. 3. Predict conditional independences for unobserved triples via \(G\). Since the number of points in the training set should increase slightly faster than the VC dimension (which is \(O(n\log n)\), see Lemma 2), we know that a small fraction of the possible independence tests (whose number grows with the third power of \(n\)) is already sufficient to predict further conditional independences. The red curve in Figure 4 provides a rough estimate of how \(k\) needs to grow if we want to ensure that the term \(\sqrt{\cdot}\) in (6) is below \(0.1\) for \(\eta=0.1\). The blue curve shows how the number of possible tests grows, which significantly exceeds the required ones after \(n=40\). For more than \(100\) variables, only a fraction of about \(1/4\) of the possible tests is needed to predict that also the remaining ones will hold with high probability. (Figure 4: The red curve shows how the number of tests required by the VC bound grows with the number of variables, while the blue one shows how the number of possible tests grows. Thus, the bounds are getting useful at about \(50\) nodes, where the curves cross.) In Section 5.1 we will also look at a more practical example of how to apply Theorem 1 to conditional independence tests. While conditional independences have been used for causal inference for decades already, more recently it became popular to use other properties of distributions to infer causal DAGs. In particular, several methods have been proposed that distinguish between cause and effect from bivariate distributions, e.g., Kano and Shimizu (2003); Hoyer et al. (2009); Zhang and Hyvarinen (2009); Daniusis et al. (2010); Peters et al. (2011); Lopez-Paz et al. (2015); Mooij et al. (2016). It is tempting to do multivariate causal inference by finding DAGs that are consistent with the bivariate causal direction test. This motivates the following example. **Lemma 5** (bivariate directionality test on DAGs).: _Let \(\mathcal{G}\) be the class of DAGs on \(n\) nodes. Define a model-induced property \(Q_{G}\) by_ \[Q_{G}(X_{i},X_{j}):=\begin{cases}1\text{ if there is a directed path }X_{i}\to X_{j}\text{ in }G\\ 0\text{ else}\end{cases}\] _The VC dimension of \((Q_{G})_{G\in\mathcal{G}}\) is at most \(n-1\)._ Proof.: The VC dimension is the maximal number \(h\) of pairs of variables for which the causal directions can be oriented in all \(2^{h}\) possible ways. If we take \(n\) or more pairs, the undirected graph defined by connecting each pair contains a cycle \[(X_{1},X_{2}),(X_{2},X_{3}),\ldots,(X_{l-1},X_{l}),(X_{l},X_{1}),\] with \(l\leq n\). 
Then, however, not all \(2^{l}\) causal directions are possible because \[X_{1}\to X_{2}\to\cdots\to X_{l}\to X_{1}\] would be a directed cycle. Thus the VC dimension is smaller than \(n\). This result can be used to infer causal directions for pairs that have not been observed together: 1. Apply the bivariate causality test \(Q_{T}\) to \(k\) randomly chosen ordered pairs, where \(k\) needs to grow slightly faster than \(n\). 2. Search for a DAG \(G\in\mathcal{G}\) that is consistent with many of the outcomes. 3. Infer the outcome of further bivariate causality tests from \(G\). It is remarkable that the generalization bound holds regardless of how bivariate causality is tested and whether one understands which statistical features are used to infer the causal direction. Solely the fact that a causal hypothesis from a class of low VC dimension matches the majority of the bivariate tests ensures that it generalizes well to future tests. ### Real-valued properties The VC bounds in Subsection 4.1 referred to binary statistical properties. To also cover real-valued properties, note that the VC dimension of a class of real-valued functions \((f_{\lambda})_{\lambda\in\Lambda}\) with \(f:\mathcal{X}\to\mathbb{R}\) is defined as the VC dimension of the set of binary functions (see Section 3.6 of Vapnik (1995)): \[\left(f_{\lambda}^{-1}\left((-\infty,r]\right)\right)_{\lambda\in\Lambda,r\in\mathbb{R}}.\] By combining (3.15) with (3.14) and (3.23) in Vapnik (1995) we obtain: **Theorem 2** (VC bound for real-valued statistical properties).: _Let \((Q_{M})_{M\in\mathcal{M}}\) be a class of \([A,B]\)-valued model-induced properties with VC dimension \(h\). Let \(k\) data sets \(D_{1},\ldots,D_{k}\) be sampled from some distribution \(P_{D}\). Then_ \[\mathbb{E}[|Q_{T}(D)-Q_{M}(D)|]\leq\frac{1}{k}\sum_{j=1}^{k}|Q_{T}(D_{j})-Q_{M}(S_{D_{j}})|+(B-A)\sqrt{\frac{h\left(\ln\frac{k}{h}+1\right)-\ln\frac{\eta}{4}}{k}}\] _with probability at least \(1-\eta\)._ This bound can easily be applied to the prediction of correlations via colliderfree paths: due to Lemma 4, we then have \(h\in O(n)\). Since correlations lie in \([-1,1]\), we can set \(B-A=2\). ### Remarks on exchangeability required for learning theory In practical applications, the scenario is usually somewhat different because one does not choose 'observed' and 'unobserved' subsets randomly in a way that justifies exchangeability of data sets. Instead, the observed sets are defined by the available data sets. One may object that the above considerations are therefore inapplicable. There is no formal argument against this objection. However, there may be reasons to believe that the observed variable sets at hand are not substantially different from the unobserved ones whose properties are supposed to be predicted, apart from the fact that they are observed. Based on this belief, one may still use the above generalization bounds as guidance on the richness of the class of causal hypotheses that is allowed in order to obtain good generalization properties. ## 5 Experiments In this section we use simulated toy scenarios to illustrate how statistical properties of subsets of variables can be predicted via the detour of inferring a causal model from a small class. In the first scenario, we want to interpret a DAG as a model for conditional independences as in Example 7 and use the classical PC algorithm (Spirtes et al., 1993) to estimate a DAG from data. 
In this setting Theorem 1 provides guarantees for the accuracy of our model on conditional independence tests that have not been used for the construction of the DAG. In the second scenario, we interpret polytrees as models for the admissibility of additive noise models, similar to Example 9. Again, we could interpret this scenario such that a causal discovery algorithm has _in principle_ access to all pairs of variables but does not use all of them (e.g. due to computational constraints). Alternatively, we can interpret this scenario as the problem of merging marginal distributions in the sense of 'integrative causal inference' (Tsamardinos et al., 2012), also when marginal distributions of some subsets are unavailable due to missing data. ### Predicting independences In Example 7 we interpreted a DAG as a model of conditional independences, i.e. we defined \(Q_{G}[(Y_{1},\ldots,Y_{k})]=1\) if the Markov condition implies \(Y_{1}\perp\!\!\!\perp Y_{2}\mid Y_{3},\ldots,Y_{k}\), and else \(0\). Given that there is a graph \(G^{*}\) such that the joint distribution of the data \(P_{\mathbf{X}}\) is Markovian to \(G^{*}\) and the tests \(Q_{T}\) correctly output the independences, the empirical risk \(\mathbb{E}[|Q_{T}(D)-Q_{G}(D)|]\) becomes zero for any \(G\) in the Markov equivalence class of \(G^{*}\). In order to find this equivalence class, one could conduct all possible independence tests and construct the equivalence class from them. The PC algorithm is more efficient and can recover the underlying equivalence class with a polynomial number of conditional independence tests for sparse graphs (Kalisch and Buhlman, 2007). Further, in the limit of infinite data the result of the PC algorithm will perfectly represent the conditional independences used during the algorithm. In this sense we want to interpret the PC algorithm as an ERM algorithm that aims to minimize the empirical risk \[\frac{1}{k}\sum_{i=1}^{k}|Q_{T}(D_{i})-Q_{G}(D_{i})|, \tag{7}\] where \(k\) is bounded by a polynomial in \(n\). It is important to note that, technically, Theorem 1 does not hold in this scenario, as the samples \(D_{1},\ldots,D_{k}\) are not chosen independently. We see this as an opportunity to test the conjecture that in this case the available variable sets do not substantially differ from the unseen ones, similar to what we have described in Section 4.3. **Data generation.** In our experiments we synthetically generated linear structural models as ground truth. First, we uniformly choose an order \(\pi:\{1,\ldots,n\}\to\{1,\ldots,n\}\) of the variables, and for each pair of nodes \(X_{i},X_{j}\) with \(\pi(i)<\pi(j)\) we add an edge \(X_{i}\to X_{j}\) with probability \(p\). For each edge \(X_{i}\to X_{j}\) we draw a structural coefficient \(a_{i,j}\) uniformly from \([0.1,1)\) and set all other \(a_{i^{\prime},j^{\prime}}=0\). Then every value \(x_{j}\) of a variable \(X_{j}\) is a linear combination of the values of previous variables and some noise, \[x_{j}=n_{j}+\sum_{\pi(i)<\pi(j)}a_{i,j}\cdot x_{i},\] where the noise terms \(n_{j}\) are all drawn independently from a standard normal distribution. In all experiments we choose \(p\) such that the expected degree of a node, \(\frac{2}{n}\cdot\frac{p(n^{2}-n)}{2}\), equals \(1.5\). 
We randomly generate 10 datasets with 10, 20 and 40 nodes respectively as described above, each one with 30,000 samples.7 We conducted the Fisher-\(Z\) test for partial correlations for all triplets \((X_{i},X_{j},X_{k})\) and all pairs \((X_{i},X_{k},\emptyset)\) and compared the result with the graphical ground truth. For each dataset we calculated the \(F_{1}\)-score and chose the confidence level with the maximal average score, which was 0.001 in this case.

Footnote 7: Note that even for this large sample size we still cannot hope to always decide correctly, as there is a non-negligible chance to have 'almost' non-faithful distributions (Uhler et al., 2013)

Experimental setup. In this experiment, we want to see how close the empirical performance (the performance on the tuples \(D_{i}=(Y_{1},\ldots,Y_{k})\) used to construct the DAG) is to the expected performance (the performance on all possible tuples). Due to computational constraints, we restrict ourselves to the case where \(k\leq 3\), i.e. we condition on at most one variable. We calculate the empirical loss as in Eq. (7), where \(Q_{T}\) denotes the statistical test results and \(Q_{G}\) denotes \(d\)-separation in \(G\). As the PC algorithm only outputs a partially directed graph, we randomly draw a DAG from the corresponding Markov equivalence class to get a model \(G\). It can happen, though, that the output of the PC algorithm is a PDAG that does not describe an equivalence class of DAGs. In this case we randomly orient conflicting edges. For the expected error we conduct the conditional independence test on all possible triples and tuples (i.e. \(k\in\{2,3\}\)). The differences between empirical risk and expected risk for different datasets are plotted in Fig. 5. For \(n=20,40,100\) there are 20 datasets respectively. We also plotted the theoretical error bound from (6). Note that we rescaled the bounds by the factor \(0.6\) for visualization purposes. We can see that the empirical risk is closer to the expected risk for instances with larger numbers of nodes, where the PC algorithm uses more conditional independence tests to construct the graph.

Figure 5: The difference between empirical error and expected error (\(|\frac{1}{k}\sum_{i=1}^{k}|Q_{T}(D_{i})-Q_{G}(D_{i})|-\mathbb{E}|Q_{T}(D)-Q_{G}(D)||\)) versus the VC error bound when graphs with \(n=20,40,100\) nodes are used as predictors for conditional independences.

### Predicting existence of additive noise models (ANMs)

In the next experiment we want to present a concrete example that constructs a simple DAG, namely a polytree, based on bivariate information (motivated by Lemma 5) which is then used to infer bivariate statistical properties. We will use non-linear additive noise models (Hoyer et al., 2009) since they have achieved reasonable results in bivariate causal discovery (Mooij et al., 2016). We recall that \(P_{X,Y}\) is said to admit an additive noise model from \(X\) to \(Y\) if there exists a function \(f\) such that the residual \(Y-f(X)\) is independent of \(X\).

Generative model. In this experiment we will generate the joint distribution via generalized additive models, i.e. we assume that for each node there are non-linear functions \(f_{i,j}\) such that the values \(x_{i}\) of \(X_{i}\) are given by \[x_{i}=n_{i}+\sum_{j\in pa(i)}f_{i,j}(x_{j}),\] where \(pa(i)\) denotes the parents of \(X_{i}\) in the ground-truth graph and the noise terms \(n_{i}\) are drawn independently.

**Lemma 6**.: _Let \(Y=f_{Y}(X)+N_{Y}\) and \(Z=f_{Z}(Y)+N_{Z}\) be additive noise models. If_ \[f_{Z}\text{ is non-linear at some point }f_{Y}(x)+n_{y}\text{ with }(x,n_{y})\in\mathcal{X}\times\mathcal{N}_{Y}\text{ at which }f_{Y}\text{ is not constant,} \tag{8}\] _where \(\mathcal{X}\) and \(\mathcal{N}_{Y}\) denote the support of \(X\) and \(N_{Y}\) respectively, then there is no additive noise model from \(X\) to \(Z\)._

The proof can be found in Appendix A. Intuitively, Eq. (8) states that \(f_{Z}\) is non-linear. It also encodes the subtlety that this non-linearity must occur at a point where additionally \(f_{Y}\) is not constant and that is in the support of \(Y\). In the following, we will assume without proof that concatenating more than two ANMs generically also does not result in an ANM (generalizing Lemma 6). We further assume that variables \(X_{i}\) and \(X_{j}\) connected by a common cause do not admit an ANM, following known identifiability results for multivariate ANMs in Peters et al. (2011) with appropriate genericity conditions. Under this premise, a polytree contains an edge \(X_{i}\to X_{j}\) if and only if \(Q_{T}(X_{i},X_{j})=1\). Motivated by this connection, we define \[Q_{G}(X_{i},X_{j}):=\begin{cases}1\text{ if there is an edge }X_{i}\to X_{j}\text{ in }G\\ 0\text{ else}\end{cases}\]

The reader might wonder why we state assumptions about the generative process of the data in the section above, even though we have repeatedly emphasized that our theory does not need to reference a 'true' causal model. Note that we primarily used Lemma 6 to render polytrees a well-defined model for \(Q_{T}\) in the sense of Definition 2. For the validity of Theorem 1 it does not matter whether the data has actually been generated by an additive noise model (although we might be able to achieve a lower empirical risk in that case).

Causal discovery algorithm. We then estimate a graph \(G\) with the following procedure (a code sketch of the bivariate test follows below):

1. Apply the bivariate causality test \(Q_{T}\) to \(k\) randomly chosen ordered pairs. To estimate the functions \(f_{i}\), we use the Gaussian Process implementation from sklearn (Pedregosa et al., 2011), and to test independences we use the kernel independence test (Zhang et al., 2011) as implemented by Blöbaum et al. (2022).
2. We add an edge if \(Q_{T}(X_{i},X_{j})=1\).
3. If the resulting graph is not a tree, in each undirected cycle we remove an edge \(X_{i}\to X_{j}\) that would close an undirected circle.

To generate the generalised additive mechanisms \(f_{i,j}\) we use neural networks with a single hidden layer with 20 nodes, \(\tanh\) activation function and uniformly random weights from \([0.1,1)\). Moreover, we used uniform instead of Gaussian noise. All causal graphs in the experiment contain 20 nodes.
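For illustration, a simplified version of the bivariate test \(Q_{T}\) can be sketched as follows: we fit \(f\) with the sklearn Gaussian Process and, as a stand-in for the kernel independence test of Zhang et al. (2011), use a basic HSIC permutation test implemented inline. This is a sketch under our own simplifications, not the exact implementation used in the experiments:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def _rbf_gram(v):
    """RBF Gram matrix with median-heuristic bandwidth."""
    d2 = (v[:, None] - v[None, :]) ** 2
    bandwidth = np.median(d2[d2 > 0])
    return np.exp(-d2 / bandwidth)

def hsic_p_value(x, y, n_perm=200, rng=None):
    """Permutation p-value of the (biased) HSIC statistic tr(KHLH)/n^2."""
    rng = np.random.default_rng(rng)
    n = len(x)
    K, L = _rbf_gram(x), _rbf_gram(y)
    H = np.eye(n) - np.ones((n, n)) / n
    stat = np.trace(K @ H @ L @ H) / n ** 2
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(n)
        null.append(np.trace(K @ H @ L[perm][:, perm] @ H) / n ** 2)
    return np.mean(np.array(null) >= stat)

def anm_test(x, y, alpha=0.05):
    """Q_T sketch: fit f with a Gaussian Process, then accept an additive
    noise model x -> y iff the residual y - f(x) passes the HSIC test."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(x.reshape(-1, 1), y)
    residual = y - gp.predict(x.reshape(-1, 1))
    return int(hsic_p_value(x, residual) > alpha)
```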
We draw 600 samples and use 0.05 as the confidence level. For each dataset, we draw \(k\) tuples of variables and use the algorithm described above to estimate a polytree. The plot in Figure 6 shows the difference between the empirical error and the expected error for increasing \(k\) on the same dataset, as well as the theoretical generalization bound from Eq. (6). Note that the expectation is to be understood w.r.t. tuples of variables. This means that the expected error is simply the prediction error evaluated on all possible tuples of variables. Also note that we rescaled the bound by the factor 0.2 for visualization purposes. We repeated the causal discovery 20 times for each joint dataset (but with different randomly drawn marginals) and plotted the mean and the 90% empirical quantile. The difference turns out to be small when the number of training tuples is large, in agreement with statistical learning theory. In Appendix B we provide additional plots with other datasets drawn according to the above procedure, to demonstrate that the results in Figure 6 are not due to a peculiar ground truth model.

Figure 6: The absolute difference between empirical error \(\frac{1}{k}\sum_{i=1}^{k}|Q_{T}(D_{i})-Q_{G}(D_{i})|\) and expected error \(\mathbb{E}(|Q_{T}(D)-Q_{G}(D)|)\) for graphs with \(n=20\) nodes when \(k\) tuples of variables are used to fit a polytree.

**Remark 4**.: _Unfortunately, inferring causal directions using the class of additive noise models raises the following dilemma: the class is not closed under marginalization, e.g., if there is a (non-linear) ANM from \(X\) to \(Y\) and one from \(Y\) to \(Z\), there is, in the generic case, no ANM from \(X\) to \(Z\), as we argued in Lemma 6. For this reason, it would not be mathematically consistent to use the ANM condition for inferring whether there is a directed path from a variable \(X_{i}\) to \(X_{j}\). Instead, checking ANM infers whether \(X_{i}\) is a direct cause of \(X_{j}\). The model class ANM thus suggests an absolute distinction between 'direct' and 'indirect' influence, while in nature the distinction always refers to the set of observed variables (since we can always zoom into the mechanism transmitting the information between the variables). We will, however, accept this artificial distinction between direct and indirect to get a mathematically consistent toy example._

## 6 Revisiting Common Problems of Causal Discovery

We now explain how our interpretation of causal models as predictors of statistical properties provides a slightly different perspective, both on conceptual and practical questions of causal discovery.

Predicting impact of interventions by merging distributions. We have argued that causal hypotheses provide strong guidance on how to merge probability distributions and thus become empirically testable without resorting to interventions. One may wonder whether this view on causality is completely disconnected from interventions. Here we argue that it is not. In some sense, estimating the impact of an intervention can also be phrased as the problem of inferring properties of unobserved joint distributions. Assume we want to test whether the causal hypothesis \(X\to Y\) is true. We would then check how the distribution of \(Y\) changes under randomized interventions on \(X\). Let us formally introduce a variable \(F_{X}\) (Pearl, 2000) that can attain all possible values \(x\) of \(X\) (indicating the value to which \(X\) is set) or the value idle (if no intervention is made).
Whether \(X\) influences \(Y\) is then equivalent to \[F_{X}\not\perp Y. \tag{9}\] If we demand that this causal relation is unconfounded (as is usually intended by the notation \(X\to Y\)), we have to test the condition \[P_{Y|F_{X}=x}=P_{Y|X=x}. \tag{10}\] Before the intervention is made, both conditions (9) and (10) refer to the unobserved distribution \(P_{Y,F_{X}}\). Inferring whether \(X\to Y\) is true thus amounts to inferring the unobserved distribution \(P_{Y,F_{X}}\) from \(P_{X,Y}\) plus the additional background knowledge regarding the statistical and causal relation between \(F_{X}\) and \(X\) (which is just based on the knowledge that the action we made has in fact been the desired intervention). In applications it can be a non-trivial question why some action can be considered an intervention on a target variable at hand (for instance in complex gene-gene interactions). If one assumes that this knowledge is based on purely observational data (perhaps recorded earlier in the past), we have reduced the problem of predicting the impact of interventions entirely to the problem of merging joint distributions.

Linear causal models for non-linear relations. Our perspective justifies applying multivariate Gaussian causal models to data sets that are clearly non-Gaussian: assume a hypothetical causal graph is inferred from the conditional independence pattern obtained via _partial correlation tests_ (which is correct only for multivariate Gaussians), as done by the common causal inference software TETRAD. Even if one knows that the graph only represents partial correlations correctly, but not conditional independences, it may still predict the partial correlations of unseen variable sets well. This way, the linear causal model can be helpful when the goal is only to predict linear statistics. This is good news particularly because general conditional independence tests remain a difficult issue (Shah and Peters, 2020).

Tuning of confidence levels. There is also another heuristic solution of a difficult question in causal inference that can be justified: inferring causal DAGs based on the causal Markov condition and causal faithfulness (Spirtes et al., 1993) relies on setting the confidence levels for accepting conditional dependence. In practice, one will usually adjust the level such that enough independences are accepted and enough are rejected for the sample size at hand. Too few independences will result in a maximal DAG, too many in a graph with no edges. Adjusting the confidence level is problematic, however, from the perspective of the common justification of causal faithfulness: if one rejects causal hypotheses with accidental conditional independences because they occur 'with measure zero' (Meek, 1995a), it becomes questionable to set the confidence level high enough just because one wants to get some independences accepted.8

Footnote 8: For a detailed discussion of how causal conclusions of several causal inference algorithms may repeatedly change after increasing the sample size see (Kelly and Mayo-Wilson, 2010).

Here we argue as follows instead: assume we are given any arbitrary confidence level as threshold for the conditional independence tests. Further assume we have found a DAG \(G\) from a sufficiently small model class that is consistent with all the outcomes 'reject/accept' of the conditional independence tests on a large number of subsets \(S_{1},\ldots,S_{k}\).
It is then justified to assume that \(G\) will correctly predict the outcomes of this test for unobserved variable sets \(\tilde{S_{1}},\ldots,\tilde{S_{l}}\subset S_{1}\cup\cdots\cup S_{k}\) for this particular confidence level. This is because \(G\) predicts the outcomes of the tests, not the properties themselves, just as in Example 10.

Methodological justification of causal faithfulness. In our learning scenarios, DAGs are used to predict for some choice of variables \(X_{j_{1}},X_{j_{2}},\ldots,X_{j_{k}}\) whether \[X_{j_{1}}\perp\!\!\!\perp X_{j_{2}}\,|X_{j_{3}},\ldots,X_{j_{k}}.\] Without faithfulness, the DAG can only entail _in_dependence, but never entail dependence. Rather than stating that 'unfaithful distributions are unlikely' we need faithfulness simply to obtain a definite prediction in the first place. This way, we avoid discussions about whether violations of faithfulness occur with probability zero (relying on assuming probability densities in parameter space (Meek, 1995b), which has been criticized by Lemeire and Janzing (2012)). After all, the argument is problematic for finite data because distributions with weak dependences are not unlikely for DAGs with many nodes (Uhler et al., 2013). Regardless of whether one believes that distributions in nature are faithful with respect to the 'true DAG', any DAG that explains a sufficiently large set of dependences and independences is likely to also predict future (in)dependences.

## 7 Conclusions

We have described different scenarios where causal models can be used to infer statistical properties of joint distributions of variables that have never been observed together. If the causal models are taken from a class of sufficiently low VC dimension, this can be justified by generalization bounds from statistical learning theory. This opens a new pragmatic and context-dependent perspective on causality where the essential empirical content of a causal model may consist in its prediction regarding how to merge distributions from overlapping data sets. Such a pragmatic use of causal concepts may be helpful for domains where the interventional definition of causality raises difficult questions (if one claims that the age of a person causally influences his/her income, as assumed in Mooij et al. (2016), it is unclear what it means to intervene on the variable 'Age'). We have, moreover, argued that our pragmatic view of causal models is related to the usual concept of causality in terms of interventions.

Acknowledgements. Thanks to Robin Evans for correcting remarks on an earlier version. Part of this work was done while Philipp Faller was an intern at Amazon Research.
2307.02253
Multivariate Time Series Classification: A Deep Learning Approach
This paper investigates different methods and various neural network architectures applicable in the time series classification domain. The data is obtained from a fleet of gas sensors that measure and track quantities such as oxygen and sound. With the help of this data, we can detect events such as occupancy in a specific environment. At first, we analyze the time series data to understand the effect of different parameters, such as the sequence length, when training our models. These models employ Fully Convolutional Networks (FCN) and Long Short-Term Memory (LSTM) for supervised learning and Recurrent Autoencoders for semi-supervised learning. Throughout this study, we spot the differences between these methods based on metrics such as precision and recall, identifying which technique best suits this problem.
Mohamed Abouelnaga, Julien Vitay, Aida Farahani
2023-07-05T12:50:48Z
http://arxiv.org/abs/2307.02253v1
# Multivariate Time Series Classification: A Deep Learning Approach

###### Abstract

This paper investigates different methods and various neural network architectures applicable in the time series classification domain. The data is obtained from a fleet of gas sensors that measure and track quantities such as oxygen and sound. With the help of this data, we can detect events such as occupancy in a specific environment. At first, we analyze the time series data to understand the effect of different parameters, such as the sequence length, when training our models. These models employ Fully Convolutional Networks (FCN) and Long Short-Term Memory (LSTM) for supervised learning and Recurrent Autoencoders for semi-supervised learning. Throughout this study, we spot the differences between these methods based on metrics such as precision and recall, identifying which technique best suits this problem.

###### Contents

* 1 Introduction
* 1.1 Motivation
* 1.2 Methods
* 1.2.1 Fully Convolutional Network
* 1.2.2 InceptionTime
* 1.2.3 Long Short-Term Memory
* 1.2.4 Recurrent Autoencoder
* 1.3 Software Setup
* 2 Experiments
* 2.1 Cleaning Data
* 2.2 Features Reduction
* 2.3 Under Sampling
* 2.4 Sequence Labeling and Normalization
* 2.5 Benchmarking Architectures
* 2.6 Minimized Architecture and Sequence Length
* 2.7 Hyperparameter Optimization
* 2.8 Predictions Distribution and Features Visualization
* 2.9 Encoder Classifier
* 3 Conclusion

## 1 Introduction

A time series is a collection of data points ordered in time (Adhikari & Agrawal, 2013). The analysis of this data is very beneficial in many domains, such as weather forecasting (Shumway et al., 2000). An important application when we talk about time series classification is anomaly detection, which is applicable in many domains; e.g., with the help of time series data such as velocity and acceleration, dangerous driving behaviors can be detected (Kieu et al., 2018).

### Motivation

Our motivation for this paper is to harness the time series data obtained from a fleet of gas sensors deployed by Corant GmbH / Air-Q company 1 in many homes and companies to detect events that can't be measured directly by these sensors. The primary function of these sensors is to measure and track many chemicals and quantities, such as O2, CO2, NO2, pressure, and sound. With the help of machine learning, we extend these sensors' functionality by detecting more events, such as whether a specific environment is occupied within a particular time range. Also, we can see whether the windows of the place are open. We can notify the users of these events, which helps them to have more control over their environments and raises the safety level. Moreover, the investigated methods can be tailored to similar problems in the domain of time series analysis.

Footnote 1: [https://www.air-q.com](https://www.air-q.com)

### Methods

Event detection in time series data can be done using various deep-learning architectures. We exploit the power of Fully Convolutional Networks (FCN) and Long Short-Term Memory (LSTM) in supervised learning. Also, we introduce a simple Recurrent Autoencoder, which uses the unlabeled data in semi-supervised learning. We mainly treat our problem as a multi-label classification in which we have two primary classes {'person', 'window_open'} that can be detected simultaneously, with a binary cross-entropy loss function (Liu et al., 2017).
We also experiment with a separate network for each class in a single-label classification manner with softmax as an output layer (Qi et al., 2017).

#### 1.2.1 Fully Convolutional Network

Our problem deals with multivariate time series data, so FCN can be applied to grasp each input channel's local and global features. FCN has no pooling operations. Therefore it is used in other applications, such as semantic segmentation, to produce a pixel-wise output (Long et al., 2015). FCN performs as a feature extractor in our settings, as shown in Fig. 1. FCN has several convolutional blocks, each consisting of a convolutional layer, followed by a batch normalization layer, with the Rectified Linear Unit (ReLU) as an activation function. The batch normalization helps to improve the overfitting and speed up the convergence (Ioffe & Szegedy, 2015). As shown in (Wang et al., 2017), FCN originally stacks three convolutional blocks with 1-D kernel sizes of {8, 5, 3} and filter counts of {128, 256, 128} respectively. The number of filters and kernel sizes can be optimized to better suit the problem, especially on small data sets. Instead of directly applying a fully connected layer, the last convolutional block is followed by a Global Average Pooling (GAP) layer to average its output over the time dimension. This enables a drastic reduction of the parameters. To preserve the time series length after each convolutional block, the convolution operations have zero padding and a stride equal to 1. One main advantage of FCN is that it can handle time sequences with different sizes, unlike the standard Recurrent Neural Networks that struggle with long-term dependencies.

#### 1.2.2 InceptionTime

InceptionTime (Ismail Fawaz et al., 2020) is a state-of-the-art architecture that achieves very high accuracy when applied to time series classification. It is an ensemble of five Inception Networks that are initialized with different random weights, with two residual blocks, as opposed to ResNet (He et al., 2016), which has three residual blocks, as shown in Fig. 2. The residual connections fight the vanishing gradient problem (Hochreiter, 1991). Each residual block is comprised of three Inception modules. After the second block, a Global Average Pooling (GAP) is applied instead of directly using a fully connected layer. As shown in Fig. 2, the core component of each Inception module is applying m filters with a stride equal to 1 and a length of 1. The result is called the bottleneck layer. This layer significantly reduces the dimension of the time series input and the model complexity. This technique allows for a longer filter with almost the same number of parameters as ResNet. After that, several convolutions with different sizes are applied simultaneously on the bottleneck layer. To mitigate the perturbations, another MaxPooling layer is applied and concatenated with the output of the previous convolutions.

#### 1.2.3 Long Short-Term Memory

Recurrent Neural Network (RNN) is an essential architecture when dealing with time series data, as the output depends on a history of inputs ordered in time. However, RNNs suffer from difficulties detecting long-term dependencies due to the application of Back Propagation Through Time (BPTT) over a specific horizon (Graves and Graves, 2012).

Figure 1: Fully Convolutional Network (FCN).

In contrast, the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cell uses a state which represents a "memory" or a "context" besides the inputs and the outputs to overcome this issue.
LSTM contains three gates to control the dependencies: an input gate to select the inputs, a forget gate to free some part of the memory, and an output gate to control the output, as shown in Fig. 3. We use an LSTM in our supervised method with only one hidden layer, as shown in Fig. 3.

#### 1.2.4 Recurrent Autoencoder

Supervised learning algorithms require a lot of labeled data to train the model, especially if we have multivariate time series data. However, obtaining annotated data is a challenging and usually expensive task. According to the Vapnik-Chervonenkis theorem (Vapnik, 1999), the generalization error depends significantly on the amount of data the model is trained on, not only on the complexity of the model. As unlabeled data, in contrast, is cheap to obtain, we can combine it with a small amount of labeled data to get good accuracy in semi-supervised learning (Van Engelen and Hoos, 2020). Also, the random initialization of the parameters can lead to a longer training time. Therefore, as shown in Fig. 4, we can apply a Recurrent Autoencoder on the unlabeled data and minimize the reconstruction error, which is based on the Mean Squared Error (MSE) (Madiraju, 2018). Then we use only the Encoder, with frozen parameters, together with a shallow classifier on the labeled data, resulting in fewer parameters with a good initialization.

Figure 2: Top: InceptionTime Network, Bottom: Single Inception Module. Source: (Ismail Fawaz et al., 2020)

### Software Setup

For applying the previous neural network architectures, we used the "Tsai" library (Oguiza, 2022), which is based on "PyTorch" (Paszke et al., 2019) and "Fastai" (Howard et al., 2018). For data manipulation and analysis, we used "Pandas" (McKinney, 2010), "NumPy" (Harris et al., 2020), and "Scikit-Learn" (Buitinck et al., 2013). For plotting the graphs, we used "Matplotlib" (Hunter, 2007) and "Plotly" (Inc., 2015).

## 2 Experiments

Before applying any method, we need to understand the data first. The data contains 17 features, which are {pressure, temperature, sound, tvoc, oxygen, humidity, humidity_abs, co2, co, so2, no2, o3, pm2_5, pm10, pm1, sound_max, dewpt}, and two classes {person, window_open}. The sensors measure a sample every two minutes.

Figure 3: Top: LSTM Cell, Bottom: LSTM Network. Source: [https://towardsdatascience.com/lstm-recurrent-neural-networks-how-to-teach-a-network-to-remember-the-past-55e54c2ff22e](https://towardsdatascience.com/lstm-recurrent-neural-networks-how-to-teach-a-network-to-remember-the-past-55e54c2ff22e)

The labeled data is collected using only one device from _July 2022_ to _December 2022_, while the unlabeled data is collected using 740 sensors over two years.

### Cleaning Data

For a visualization of the labeled data (see Fig. 5), we used only two features {o2, co2} and the labels, for the sake of readability, to spot how the classes are distributed over time. We note that no data was present in _August_ and most of _September_. To better understand the distribution of labels, we can find in Fig. 6 that the class person has a minimal number of labels when more than one person exists in the environment.

Figure 4: Semi-supervised Learning using a Recurrent Autoencoder and a Shallow Classifier.

Figure 5: Visualization of the labeled data.

Therefore, we merge all labels in which a person is found into one label, as sketched below. Hence we have two binary classes {person, window_open}.
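A minimal pandas sketch of this label merging (the file and column names are our assumptions):

```python
import pandas as pd

# Merge all "n persons present" labels into a single binary 'person'
# class and binarize 'window_open'.
df = pd.read_csv("sensor_log.csv", parse_dates=["timestamp"])
df["person"] = (df["person"] > 0).astype(int)
df["window_open"] = (df["window_open"] > 0).astype(int)
```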
An important aspect when dealing with data is cleaning it from missing values; however, we shouldn't delete the missing values directly in time series data, as that may affect the series frequency. After we know the missing values in our data, as shown in Fig. 7, we can interpolate these values to keep the same timeline. Fortunately, a maximum of 20 missing values exist in our data, all in the beginning, so it was safe to delete them directly. However, we generally use linear interpolation to substitute for the missing values.

Figure 6: Distribution of original labels in the labeled data.

### Features Reduction

We have a small data set (below 70,000 samples), which can lead to low performance and high generalization errors. Also, not all features are equally crucial for accurate classification. One way to deal with that is to use fewer independent features. We used the Pearson correlation coefficient (Cohen et al., 2009) to build a correlation matrix, as shown in Fig. 8, to eliminate the most correlated features. We obtained nine features {humidity, temperature, tvoc, oxygen, co2, co, pressure, o3, sound} which correlate well with the classes.

### Under Sampling

After merging the labels of the data set, we can see an obvious imbalance of the labels as in Fig. 9. This can lead to false metrics when comparing different methods; e.g., for detecting a person, a model may produce high accuracy although it performs poorly, as the ratio of the "Person" label to "No Person" is very low. We use under-sampling to overcome the imbalance problem and reflect more accurate metrics. As the counts of a person's existence and open windows are very low, we choose a specific number of labels before and after every detection of a person or an open window. This leads to more balanced data, as shown in Fig. 10. We note that we should apply the sliding operation on each segment resulting from under-sampling to obtain time sequences, then perform concatenation to maintain the time of the data.

To compare the results of under-sampling with the unbalanced data as shown in Table. 1, we perform under-sampling with a size of 30 and train FCN with default parameters on 70% of the normalized data using a standard scaler, while keeping 20% for validation and 10% for testing, as shown in Fig. 9 and Fig. 10. This splitting is done randomly with a sequence length equal to 15, with a stride equal to one, and labeling at the end of each sequence. We used only ten epochs with cosine learning rate scheduling (Loshchilov and Hutter, 2016). Also, we compared the usage of all features against the minimized features we deduced in the previous section.

\begin{table} \begin{tabular}{c|c|c|c|c|c} & _No. of_ & _Training time_ & _Test_ & _F1 score_ & _F1 score_ \\ & _parameters_ & _(s)_ & _accuracy_ & _(person)_ & _(window)_ \\ \hline _All features, no_ & 275,970 & 60 & 99\% & 0.93 & 0.94 \\ _sampling_ & & & & & \\ \hline _All features, sampling_ & 275,970 & 11 & 90\% & 0.90 & 0.94 \\ \hline _Minimal features, no_ & 269,698 & 63 & 98\% & 0.90 & 0.93 \\ _sampling_ & & & & & \\ \hline _Minimal features, sampling_ & 269,698 & 11 & 90\% & 0.91 & 0.93 \\ \hline \end{tabular} \end{table} Table 1: Performance of under-sampling and minimized features

Figure 7: Distribution of missing values in the labeled data.

Figure 8: Correlation Matrix of features and classes.

We can see from the results that using unbalanced data for training consumes (\(\approx\) 6x) the training time
compared to under-sampling, with similar F1 scores. Hence, we can use under-sampling to save time. Also, we can safely choose a minimal number of features.

Figure 9: Left: Distribution of the unbalanced data set, Right: Distribution of the unbalanced labels.

### Sequence Labeling and Normalization

We segment samples into sequences of a specific length for time series data and label each sequence; a code sketch of this step follows the tables below. This label can be the first label, the last label, or the mean value of the labels of all samples in each sequence. In Table. 2, we compare the performance of FCN with the same settings as before while using different sequence labeling. We can find that labeling the sequences from the start or taking the mean value would result in better performance. Also, Table. 3 compares applying a standard scaler and a min-max scaler (Raju et al., 2020) when normalizing the data, with the mean value for sequence labeling. We obtain similar results. Hence we will use a standard scaler from now on.

\begin{table} \begin{tabular}{c|c|c|c|c|c} & _Train loss_ & _Valid loss_ & _Test_ & _F1 score (person)_ & _F1 score (window)_ \\ \hline _First label_ & 0.1 & 0.1 & 94\% & 0.94 & 0.96 \\ \hline _Mean label_ & 0.07 & 0.06 & 96\% & 0.96 & 0.97 \\ \hline _Last label_ & 0.14 & 0.12 & 90\% & 0.91 & 0.93 \\ \hline \end{tabular} \end{table} Table 2: Performance of different Sequence Labeling

Figure 10: Left: Distribution of data set, Right: Distribution of labels [After applying Under Sampling].

\begin{table} \begin{tabular}{c|c|c|c|c|c} & _Train loss_ & _Valid loss_ & _Test_ & _F1 score (person)_ & _F1 score (window)_ \\ \hline _Standard scaler_ & 0.08 & 0.06 & 95\% & 0.96 & 0.97 \\ \hline _Min-Max scaler_ & 0.07 & 0.05 & 96\% & 0.96 & 0.97 \\ \hline \end{tabular} \end{table} Table 3: Standard Scaler vs. Min-Max Scaler
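The segmentation and labeling step can be sketched as follows (a minimal numpy version; names are ours):

```python
import numpy as np

def make_sequences(values, labels, seq_len=15, stride=1, labeling="mean"):
    """Sliding-window segmentation of a multivariate time series.

    values : (n_samples, n_features) array of sensor readings
    labels : (n_samples, n_classes) binary label array
    Returns (n_sequences, n_features, seq_len) inputs and one label per
    sequence, taken from the first sample, the last sample, or a
    per-class majority vote over the window ("mean").
    """
    xs, ys = [], []
    for start in range(0, len(values) - seq_len + 1, stride):
        window = values[start:start + seq_len]
        win_labels = labels[start:start + seq_len]
        if labeling == "first":
            y = win_labels[0]
        elif labeling == "last":
            y = win_labels[-1]
        else:  # "mean"
            y = (win_labels.mean(axis=0) >= 0.5).astype(int)
        xs.append(window.T)   # channels-first, as expected by tsai/PyTorch
        ys.append(y)
    return np.stack(xs), np.stack(ys)
```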
Also, we would keep considering our problem as a multi-label classification as it is more realistic as a person and an open window can be detected simultaneously; also, the results are comparable with single-label classification, e.g., if we take each sequence with a length equals to 10 with a mean value labeling, we will obtain F1 scores of (0.94, 0.84) for person and window classes respectively. ### Hyperparameter Optimization We use Optuna (Akiba et al., 2019), an automatic hyperparameter optimization framework to optimize hyperparameters. The search space for the minimized FCN would be the number of filters from 8 to 32 with a step of 4. And for LSTM, we optimize the hidden size from 10 to 30 with a step of 2, also the \begin{table} \begin{tabular}{c|c|c|c|c|c} & _No. of_ & _Training time_ & _Valid_ & _Train loss_ & _Valid loss_ \\ & _parameters_ & _(s)_ & _Accuracy_ & & \\ \hline _InceptionTime_ & 456,258 & 19 & 98\% & 0.05 & 0.04 \\ \hline _FCN_ & 269,698 & 9 & 97\% & 0.08 & 0.07 \\ \hline _Uni-directional_ & 43,802 & 8 & 97\% & 0.07 & 0.09 \\ _LSTM_ & & & & \\ \hline _Bi-directional_ & 87,602 & 9 & 97\% & 0.09 & 0.1 \\ _LSTM_ & & & & \\ \hline \end{tabular} \end{table} Table 4: Benchmarking FCN, LSTM, and InceptionTime ## 6 Conclusion In this paper, we proposed a new method for generating the _F1 score (person)_ and _F1 score (window)_ in the following way. We proposed a _F1 score (window)_ in the following way: * _F1 score (window)_ in the following way: dropout from 0.1 to 0.5 with a step of 0.1. We maximize the F1 score for optimization using 100 Optuna trials. That leads to an FCN of 2,306 parameters with two convolutional blocks, with 32,8 filters of sizes 5,3, respectively. Moreover, we get a one-layer uni-directional LSTM of 2,950 parameters with a hidden size of 26 and a dropout of 0.2. We present the results in Table. 6, with precision and recall metrics (Davis & Goadrich, 2006) beside the F1 score to reflect the contribution of false positives and false negatives separately. For better records of true positives, true negatives, false positives, and false negatives, Fig. 12 and Fig. 13 show the confusion matrices in case of FCN and LSTM, respectively, for both classes {person, window_open}. ### Predictions Distribution and Features Visualization As precision and recall don't reflect the distribution of predictions over time, we visualize this distribution in Fig. 14 and Fig. 15 for FCN and LSTM, respectively. Also, we visualize the feature space using Principle Component Analysis (PCA) (Abdi & Williams, 2010) for FCN and LSTM as in 16 and Fig. 17, respectively, on our separate labeled test set. We can also see, as in Fig. 18 when using unlabeled data from a different sensor, it does not follow the same distribution as in the labeled test set. \begin{table} \begin{tabular}{c|c|c} & _Precision, Recall, F1 score_ & _Precision, Recall, F1 score_ \\ & _(person)_ & _(person)_ \\ \hline _FCN_ & 0.91, 1.0, 0.95 & 1.0, 0.97, 0.98 \\ \hline _LSTM_ & 0.90, 1.0, 0.94 & 0.82, 1.0, 0.90 \\ \hline \end{tabular} \end{table} Table 6: FCN vs. LSTM after hyperparameter optimization Figure 12: Confusion matrices of FCN. Figure 14: Distribution of predictions for FCN. Figure 13: Confusion matrices of LSTM. Figure 16: PCA for FCN with labeled data. Figure 15: Distribution of predictions for LSTM. Figure 17: PCA for LSTM with labeled data. Figure 18: PCA for FCN with labeled and unlabeled data. 
### Encoder Classifier

Now we can use around 8,000,000 unlabeled sequences to train the recurrent autoencoder, then use the trained encoder with a shallow classifier consisting of a fully connected layer of 100 neurons and train it on the labeled data. The encoder consists of three LSTM layers with sizes of {128, 64, latent_size} respectively, and the decoder also consists of three LSTM layers with sizes of {latent_size, 64, 128} respectively, where "latent_size" represents the latent space size. We experiment with three different latent space sizes of {2, 10, 16} with parameter counts of {244,449; 249,825; 254,865} respectively.

After testing the different encoder classifiers on the labeled test set we used before, we can see in Table. 7, which shows the scores of the encoder classifiers, and in Fig. 19, Fig. 20, and Fig. 21, which show the distribution of predictions, that an embedding size of two is too shallow to compress the 17 features efficiently. Therefore we will conduct the next experiments using the embedding size of 10, as it has similar results to 16 but with fewer parameters. We also show in Fig. 22 the confusion matrices for the encoder classifier with latent_size = 10; Fig. 23 shows the PCA of the latent space of the Encoder.

\begin{table} \begin{tabular}{c|c|c} _Latent\_size_ & _Precision, Recall, F1 score (person)_ & _Precision, Recall, F1 score (window)_ \\ \hline **2** & 1.0, 0.72, 0.83 & 0.70, 1.0, 0.82 \\ \hline **10** & 0.82, 1.0, 0.90 & 0.80, 1.0, 0.89 \\ \hline **16** & 0.77, 1.0, 0.87 & 0.77, 1.0, 0.87 \\ \hline \end{tabular} \end{table} Table 7: Performance of encoder classifiers with various latent space sizes

Figure 19: Distribution of predictions for encoder classifier with latent_size = 2.

Figure 20: Distribution of predictions for encoder classifier with latent_size = 10.

Figure 21: Distribution of predictions for encoder classifier with latent_size = 16.

Figure 22: Confusion matrices of encoder classifier with latent_size = 10.

Figure 23: PCA for encoder classifier with latent_size = 10 with labeled data.

Also, we can see in Fig. 24, which shows the PCA of the encoder classifier when applied to the same unlabeled test set we used before in the PCA of FCN, that it follows the distribution of the feature space, in contrast to FCN. We can also smooth the predictions by rectifying the spikes with different widths that are considered errors, as shown in Fig. 25.

Figure 24: PCA for encoder classifier with latent_size = 10 with labeled and unlabeled data.

Figure 25: Smoothed distribution of predictions for encoder classifier with latent_size = 10.

At last, we include some smoothed predictions, as shown in Fig. 26, using the encoder classifier when applied to various unlabeled test sets collected from different sensors, combined with only three signals {o2, co2, humidity_abs} for better visualization. The signals are included to show the correlation with the predictions over time.
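A minimal PyTorch sketch of this encoder-decoder and the frozen-encoder classifier; how the latent code is expanded over time in the decoder is our assumption (it is repeated at every time step):

```python
import torch.nn as nn

class RecurrentAE(nn.Module):
    """Three-layer LSTM autoencoder sketch matching the sizes above."""
    def __init__(self, n_features=17, latent_size=10):
        super().__init__()
        self.enc1 = nn.LSTM(n_features, 128, batch_first=True)
        self.enc2 = nn.LSTM(128, 64, batch_first=True)
        self.enc3 = nn.LSTM(64, latent_size, batch_first=True)
        self.dec1 = nn.LSTM(latent_size, latent_size, batch_first=True)
        self.dec2 = nn.LSTM(latent_size, 64, batch_first=True)
        self.dec3 = nn.LSTM(64, 128, batch_first=True)
        self.out = nn.Linear(128, n_features)

    def encode(self, x):                      # x: (batch, seq_len, features)
        h, _ = self.enc1(x)
        h, _ = self.enc2(h)
        _, (z, _) = self.enc3(h)              # last hidden state as the code
        return z.squeeze(0)                   # (batch, latent_size)

    def forward(self, x):
        z = self.encode(x)
        h = z.unsqueeze(1).repeat(1, x.size(1), 1)
        h, _ = self.dec1(h)
        h, _ = self.dec2(h)
        h, _ = self.dec3(h)
        return self.out(h)                    # reconstruction; MSE target x

# Frozen encoder plus the shallow 100-neuron classifier on labeled data:
ae = RecurrentAE()
for p in ae.parameters():
    p.requires_grad = False
classifier = nn.Sequential(nn.Linear(10, 100), nn.ReLU(), nn.Linear(100, 2))
```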
Also, we examined the usage of a semi-supervised learning technique by training a recurrent autoen Figure 26: Smoothed distribution of predictions and some signals when applying encoder classifier with latent_size = 10 on unlabeled test sets from different sensors. coder on the unlabeled data, then using the trained encoder with a shallow classifier on the labeled data. This allows using less labeled data as we train only the classifier while freezing the encoder. We should take care of some practices before dealing with time series data, such as cleaning data by interpolating missing values and not directly removing them to reserve the timeline. Also, choosing a sequence length and the labeling position for each sequence are two important factors. There is no significant difference between standard and min-max scalars as long as the normalization step is performed before training. Also, analyzing the feature space for the used architecture and visualizing the distribution of predictions give more insights into the best-fitting solution. Ultimately, we can get more robust results if we use more data, in-depth hyperparameter optimization, or even different architectures. One important future architecture to examine is the self-supervised learning technique using Transformers (Vaswani et al., 2017).
2305.14248
Improved rates of convergence for the multivariate Central Limit Theorem in Wasserstein distance
We provide new bounds for the rate of convergence of the multivariate Central Limit Theorem in Wasserstein distances of order $p \geq 2$. In particular, we obtain what we conjecture to be the asymptotically optimal rate whenever the density of the summands admits a non-zero continuous component and has a non-zero third moment.
Thomas Bonis
2023-05-23T17:02:42Z
http://arxiv.org/abs/2305.14248v4
# Improved rates of convergence for the multivariate Central Limit Theorem in Wasserstein distance

###### Abstract

We provide new bounds for rates of convergence of the multivariate Central Limit Theorem in Wasserstein distances of order \(p\geq 2\). In particular, we obtain an asymptotic bound for measures with a continuous component which we conjecture to be optimal.

## 1 Introduction and main results

Let \(X_{1},\ldots,X_{n}\) be i.i.d. random variables drawn from a measure \(\nu\) on \(\mathbb{R}^{d}\) and such that \(\mathbb{E}[X_{1}]=0\) and \(\mathbb{E}[X_{1}X_{1}^{T}]=I_{d}\). By the Central Limit Theorem, we know that the measure \(\nu_{n}\) of \(S_{n}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_{i}\) converges to the \(d\)-dimensional normal distribution \(\gamma\) on \(\mathbb{R}^{d}\). In this work, we quantify this convergence. The metrics we consider are the Wasserstein distances of order \(p\geq 2\), defined between any two measures \(\nu\) and \(\mu\) on \(\mathbb{R}^{d}\) by \[W_{p}(\nu,\mu)^{p}=\inf_{\pi}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\|y-x\|^{p}\,d\pi(x,y),\] where \(\pi\) has marginals \(\mu\) and \(\nu\) and \(\|.\|\) is the traditional Euclidean norm.

In recent years, multiple works provided non-asymptotic bounds for Wasserstein distances. For instance, as long as the \((X_{i})_{1\leq i\leq n}\) admit a fourth moment, Theorem 1 of [2] provides the following bound \[W_{2}(\nu_{n},\gamma)\leq C\sqrt{\frac{\sqrt{d}\|\mathbb{E}[X_{1}X_{1}^{T}\|X_{1}\|^{2}]\|_{HS}}{n}} \tag{1}\] for some constant \(C>0\), with similar results being obtained for \(W_{p}\) distances [2, 4]. Better bounds have also been obtained in [3] whenever \(\nu\) is log-concave and satisfies a Poincaré inequality with constant \(K\geq 1\). In this case, one has \[W_{2}(\nu_{n},\gamma)\leq C\sqrt{\frac{(K-1)d}{n}}\] and similar results are obtained for \(W_{p}\) distances [6].

An important difference between these two bounds is their scaling with respect to the dimension. While the first bound scales at least linearly with respect to the dimension, the second bound can sometimes scale with the square root of the dimension. Some insight on the conditions required to obtain this improved dependency on the dimension in the general case can be obtained from Proposition 1.2 of [10], which states that, if \(X_{1}\) takes values in the lattice \(\beta\mathbb{Z}^{d}\), then \[\liminf_{n\to\infty}\sqrt{n}W_{2}(\nu_{n},\gamma)\geq\frac{\sqrt{d}\beta}{4}.\] In particular, if \(\beta\) is of order \(\sqrt{d}\), then \(W_{2}(\nu_{n},\gamma)\geq\frac{Cd}{\sqrt{n}}\). On the other hand, if one wants the bound to scale with the square root of the dimension, one would require \(\beta\) to be independent of \(d\). Such a result is not surprising when compared to known asymptotic results in the univariate setting obtained in Theorem 1.2 of [9]. Indeed, if \(X_{1}\) takes values in \(\beta\mathbb{Z}\), then, for any \(p\in]1,2]\), \[\liminf_{n\to\infty}\sqrt{n}W_{p}(\nu_{n},\gamma)=\frac{1}{6}\|\mathbb{E}[X_{1}^{3}](Z^{2}-1)+\beta U\|_{p}, \tag{2}\] where \(Z\sim\gamma\) and \(U\) is a uniform random variable on \([-1/2,1/2]\) independent of \(Z\), and \(\|.\|_{p}=\mathbb{E}[\|.\|^{p}]^{1/p}\). On the other hand, as long as \(X_{1}\) is not distributed on a lattice, one has \[\liminf_{n\to\infty}\sqrt{n}W_{p}(\nu_{n},\gamma)=\frac{1}{6}\|\mathbb{E}[X_{1}^{3}](Z^{2}-1)\|_{p}. \tag{3}\]
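As a small worked example of these two regimes (ours): let \(X_{1}\) take the values \(-\sqrt{2},0,\sqrt{2}\) with probabilities \(1/4,1/2,1/4\). Then \(\mathbb{E}[X_{1}]=0\), \(\mathbb{E}[X_{1}^{2}]=1\) and, by symmetry, \(\mathbb{E}[X_{1}^{3}]=0\), so the right-hand side of (3) would vanish; but \(X_{1}\) takes values in the lattice \(\sqrt{2}\,\mathbb{Z}\), so (2) applies instead and gives the non-zero limit \[\liminf_{n\to\infty}\sqrt{n}W_{p}(\nu_{n},\gamma)=\frac{1}{6}\|\sqrt{2}\,U\|_{p}=\frac{\sqrt{2}}{6}\|U\|_{p},\] illustrating that the small-scale (lattice) term can dominate when the third moment vanishes.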
Furthermore, faster rates of convergence have been obtained whenever the first moments of \(\nu\) and \(\gamma\) are equal and \(\nu\) satisfies Cramér's condition [1]. Therefore, one can expect the rate of convergence for the central limit theorem in Wasserstein distance in a high-dimensional setting to not only be determined by the moments of \(X_{1}\) but to also depend on whether the measure is lattice-distributed. In other words, along with the large-scale behaviour of \(\nu\), described by its moments, we expect a tight bound to include a term corresponding to the small-scale behaviour of \(\nu\). In this work, we provide a first instance of such a bound in the multidimensional setting. In particular, we obtain the following asymptotic bounds.

**Corollary 1**.: _Let \(p\geq 2\) and \(X_{1},\ldots,X_{n}\) be i.i.d. centered random variables on \(\mathbb{R}^{d}\) with identity covariance matrix and finite moment of order \(p+2\). Suppose there exists \(\beta>0\) such that_ \[\mathbb{E}[(X_{2}-X_{1})(X_{2}-X_{1})^{T}1_{\|X_{2}-X_{1}\|^{2}\leq\beta}]\] _is positive-definite. Then,_ \[W_{p}(\nu_{n},\gamma)\leq\frac{1}{6\sqrt{n}}\|\mathbb{E}[X_{1}^{\otimes 3}](Z^{\otimes 2}-I_{d})\|_{p}+C\sqrt{\frac{d\beta}{n}}+o\left(\frac{1}{\sqrt{n}}\right),\] _where \(C>0\) is a generic constant, \(Z\) is a Gaussian random variable and \(\mathbb{E}[X_{1}^{\otimes 3}](Z^{\otimes 2}-I_{d})\) is a vector whose \(i\)-th coordinate is given by_ \[(\mathbb{E}[X_{1}^{\otimes 3}](Z^{\otimes 2}-I_{d}))_{i}\coloneqq\sum_{j,k}\mathbb{E}[(X_{1})_{i}(X_{1})_{j}(X_{1})_{k}](Z_{j}Z_{k}-(I_{d})_{j,k}).\] _Furthermore, if the density of \(\nu\) has a non-zero continuous component, then_ \[W_{p}(\nu_{n},\gamma)\leq\frac{1}{6\sqrt{n}}\|\mathbb{E}[X_{1}^{\otimes 3}](Z^{\otimes 2}-I_{d})\|_{p}+o\left(\frac{1}{\sqrt{n}}\right).\]

In particular, if the measure of \(X_{1}\) admits a continuous component, we conjecture our asymptotic rate to be optimal as it matches (3). On the other hand, if \(X_{1}\) is distributed on \(\beta\mathbb{Z}^{d}\), our bound is close to matching (2) but can still be improved. Finally, obtaining the optimal rate for discrete but non-lattice distributed random variables is still an open problem.

This corollary is derived from a non-asymptotic bound obtained in Theorem 1, which can also deal with non-identically distributed random variables. Our result is derived by refining a diffusion interpolation approach developed in [2]. These refinements might be of interest in other contexts.

## 2 Notations

Let \(d\) be a positive integer. For any \(k\in\mathbb{N}\), let \((\mathbb{R}^{d})^{\otimes k}\) be the set of elements of the form \((x_{j})_{j\in\{1,\ldots,d\}^{k}}\in\mathbb{R}^{d^{k}}\).
For \(x\in\mathbb{R}^{d}\) and \(k\in\mathbb{N}\), we denote by \(x^{\otimes k}\) the element of \((\mathbb{R}^{d})^{\otimes k}\) such that \[\forall j\in\{1,\ldots,d\}^{k},(x^{\otimes k})_{j}=\prod_{i=1}^{k}x_{j_{i}}.\] For any \(x,y\in(\mathbb{R}^{d})^{\otimes k}\), we denote by \(<x,y>\) the Hilbert-Schmidt scalar product between \(x\) and \(y\) defined by \[<x,y>=\sum_{i\in\{1,\ldots,d\}^{k}}x_{i}y_{i},\] and, by extension, we write \[\|x\|^{2}=<x,x>.\] Furthermore, for any \(x\in(\mathbb{R}^{d})^{\otimes(k+1)}\) and \(y\in(\mathbb{R}^{d})^{\otimes k}\), let \(xy\) be the vector defined by \[\forall i\in\{1,\ldots,d\},(xy)_{i}=\sum_{j\in\{1,\ldots,d\}^{k}}x_{i,j}y_{j}.\]

We denote by \(\mathcal{C}^{k}\) the set of functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}\) with partial derivatives of order \(k\in\mathbb{N}\) and by \(\mathcal{C}^{k}_{c}\) the set of such functions with compact support. For any \(k\in\mathbb{N}\), any \(\phi\in\mathcal{C}^{k}\) and any \(x\in\mathbb{R}^{d}\), we denote by \(\nabla^{k}\phi(x)\in(\mathbb{R}^{d})^{\otimes k}\) the \(k\)-th gradient of \(\phi\) at \(x\): \[\forall j\in\{1,\ldots,d\}^{k},(\nabla^{k}\phi(x))_{j}=\frac{\partial^{k}\phi}{\partial x_{j_{1}}\ldots\partial x_{j_{k}}}(x).\] For any \(k\in\mathbb{N}\), let \(H_{k}\) be the \(d\)-dimensional Hermite polynomial, defined by \[\forall x\in\mathbb{R}^{d},H_{k}(x)=e^{\frac{\|x\|^{2}}{2}}\nabla^{k}e^{-\frac{\|x\|^{2}}{2}}.\] Finally, for any random variable \(X\), we denote by \(\|X\|_{p}\) the \(L_{p}\) norm of \(X\), that is \[\|X\|_{p}\coloneqq\mathbb{E}[\|X\|^{p}]^{1/p}.\]

## 3 Main Result

Let \(n>0\) and \(W_{1},\ldots,W_{n}\) be independent random variables such that

* \(\forall i\in\{1,\ldots,n\},\mathbb{E}[W_{i}]=0\);
* \(\sum_{i=1}^{n}\mathbb{E}[W_{i}^{\otimes 2}]=I_{d}\).

Furthermore, let \(D_{i}=W_{i}^{\prime}-W_{i}\), where \(W_{i}^{\prime}\) is an i.i.d. copy of \(W_{i}\). For any \(t\geq 0,q\geq 0\), let us define

* \(M_{q}\coloneqq\sum_{i=1}^{n}\mathbb{E}[W_{i}^{\otimes q}]\);
* \(L_{q}(t)\coloneqq\sum_{i=1}^{n}\mathbb{E}[\|D_{i}\|^{q}1_{\|D_{i}\|^{2}\geq t}]\);
* \(L_{q}^{\prime}\coloneqq\sum_{i=1}^{n}\left\|\mathbb{E}[W_{i}^{\otimes 2}]\right\|^{q/2}\mathbb{E}[\|D_{i}\|^{q}]\);
* \(L_{q}^{\prime\prime}(t)\coloneqq\sum_{i=1}^{n}\|\mathbb{E}[W_{i}^{\otimes 2}\|D_{i}\|^{q-2}1_{\|D_{i}\|^{2}\geq t}]\|\).

Finally, for any \(\beta>0\), let \(D_{i,\beta}=D_{i}1_{\|D_{i}\|\leq\beta}\). If \(\mathbb{E}\left[\sum_{i=1}^{n}D_{i,\beta}^{\otimes 2}\right]\) is positive-definite, we write \[\beta_{q}=\sum_{i=1}^{n}\mathbb{E}\left[\left\|\mathbb{E}\left[\sum_{i=1}^{n}D_{i,\beta}^{\otimes 2}\right]^{-1}D_{i,\beta}\right\|^{q}\right].\]

**Theorem 1**.: _Let \(p\geq 0\)._
If the \((W_{i})_{1\leq i\leq n}\) have finite moment of order \(p+2\) and if there exists \(\beta\) such that \(\mathbb{E}\left[\sum_{i=1}^{n}D_{i,\beta}^{\otimes 2}\right]\) is positive-definite, then for any \(q,r\geq 0\) such that \(\frac{1}{q}+\frac{1}{r}=\frac{1}{p}\), we have_ \[W_{p}(\nu,\gamma) \leq\frac{\|M_{3}H_{2}(Z)\|_{p}}{6}+C\left(\sqrt{dp}\beta\right)\] \[\quad+p^{7/6}\left(L_{4}^{\prime\prime}(\epsilon)+\|M_{4}\|+r\|M_ {3}\|W_{q}(\nu,\gamma)\right)^{2/3}\left(\sqrt{\beta_{2}+d}+\sqrt{p}\left( \beta_{p}+L_{p}\right)^{1/p}\right)^{1/3}\] \[\quad\quad\quad\quad\quad+C\left(p\sqrt{L_{4}(\epsilon)}+p^{1+1/p }L_{p+2}(\epsilon)^{1/p}+\log(\epsilon)p^{3/2}\left(\sqrt{L_{4}^{\prime}}+(p ^{2}L_{p+2}^{\prime})^{1/p}\right)\right),\] _where_ \[\epsilon^{3/2}=p\frac{L_{4}^{\prime\prime}(\epsilon)+\|M_{4}\|+r\|M_{3}\|W_{q }(\nu,\gamma)}{\sqrt{\beta_{2}+d}+\sqrt{p}\left(\beta_{p}+L_{p}\right)^{1/p}}\] In order to prove Corollary 1, we take \[\forall i\in\{1,\ldots,n\},W_{i}=\frac{X_{i}}{\sqrt{n}}.\] We then have \[\sqrt{\beta_{2}+d}+\sqrt{p}\left(\beta_{p}^{1/p}+L_{p}^{1/p}\right)=\mathcal{O}(1)\] and \[L_{4}^{\prime\prime}(\epsilon),\|M_{4}\|=\mathcal{O}\left(\frac{1}{n}\right).\] Furthermore, since \(X_{1}\) has finite moment of order \(p+2\), we can use Theorem 6 from [2] to obtain \[W_{p+1}(\nu,\gamma)=\mathcal{O}\left(\frac{1}{n^{1/2-1/2(p+1)}}\right)= \mathcal{O}\left(\frac{1}{n^{1/3}}\right).\] Since \(\|M_{3}\|=\mathcal{O}\left(\frac{1}{\sqrt{n}}\right)\), we thus have \[\left(L_{4}^{\prime\prime}(\epsilon_{2})+\|M_{4}\|+r\|M_{3}\|W_{p+1}(\nu, \gamma)\right)^{2/3}\left(\sqrt{\beta_{2}+d}+\sqrt{p}\left(\beta_{p}^{1/p}+L_ {p}^{1/p}\right)\right)^{1/3}=\mathcal{O}\left(\frac{1}{n^{5/9}}\right).\] Similarly, \[\epsilon=\mathcal{O}\left(\frac{1}{n^{5/9}}\right).\] Thus \[\lim_{n\to\infty}n\epsilon=+\infty\] and, since \(X_{1}\) has finite moment of order \(p+2\), \[\sqrt{L_{4}(\epsilon)},(L_{p+2}(\epsilon))^{1/p}=o\left(\frac{1}{\sqrt{n}} \right),\] which concludes the proof whenever the measure of \(X_{1}\) does not have a continuous component. If it does, we can consider a sequence \(\beta_{n}\) such that \[\beta^{n}=o(1)\] and \[\sqrt{\beta_{2}^{n}+d}+\sqrt{p}\left((\beta_{p}^{n})^{1/p}+L_{p}^{1/p}\right)= o\left(n^{1/6}\right).\] ## 4 The diffusion interpolation approach Let \(p>0\) and \(W\) be a random variable taking values drawn from a measure \(\nu\) on \(\mathbb{R}^{d}\). In the following, we assume \(\nu\) admits a density \(h\) with respect to the Gaussian measure which is both bounded and with bounded gradient. These additional assumptions can later be lifted to obtain Theorem 1 using approximation arguments similar to those developed in Section 8 [2]. Let \(t>0\) and let us consider the random variable \(F_{t}:=e^{-t}W+\sqrt{1-e^{-2t}}Z\), where \(Z\) is a random variable drawn from the Gaussian measure \(\gamma\) and independent of \(W\). We denote by \(\nu_{t}\) the measure of \(F_{t}\). Due to our assumptions on \(h\), the random variable \(F_{t}\) admits a smooth density \(h_{t}\) with respect to \(\gamma\). We can thus consider the score function of \(F_{t}\) defined by \[\rho_{t}:=\nabla\log h_{t}(F_{t}).\] Then, by Equation (3.8) [7], we have \[W_{p}(\nu,\gamma)\leq\int_{0}^{\infty}\|\rho_{t}\|_{p}\,d_{t}.\] Hence, it is possible to bound \(W_{p}(\nu,\gamma)\) by bounding \(\|\rho_{t}\|_{p}\) for all \(t\geq 0\). One can first remark that this score function verifies the following formula (see e.g. 
Lemma IV.1 of [8]), \[\rho_{t}=e^{-t}\mathbb{E}\left[W-\frac{Z}{\sqrt{\Delta(t)}}\mid F_{t}\right]\ \text{ a.s.}, \tag{4}\] where \(\Delta(t)\coloneqq e^{2t}-1\). A first, somewhat trivial, bound on \(\|\rho_{t}\|_{p}\) can then be obtained by applying Jensen's and the triangle inequalities: \[\|\rho_{t}\|_{p}\leq e^{-t}\left(\|\mathbb{E}[W\mid F_{t}]\|_{p}+\frac{\|\mathbb{E}[Z\mid F_{t}]\|_{p}}{\sqrt{\Delta(t)}}\right)\leq e^{-t}\left(\|W\|_{p}+\frac{\|Z\|_{p}}{\sqrt{\Delta(t)}}\right). \tag{5}\] Note that this bound can still be close to optimal for small values of \(t\). Indeed, if \(\nu\) is a discrete measure and \(t\) is small enough, then, by definition of the Wasserstein distance, \[W_{p}(\nu,\nu_{t})\approx\|F_{t}-W\|_{p}=e^{-t}\left\|-W+\frac{Z}{\sqrt{\Delta(t)}}\right\|_{p}.\] However, for continuous measures \(\nu\) or for higher values of \(t\), it is often possible to obtain better bounds on \(\|\rho_{t}\|_{p}\). For instance, (1) is obtained using a combination of (5) for small values of \(t\) and another bound on \(\|\rho_{t}\|_{p}\) for larger values. In [4], a streamlined version of this approach was used to provide quantitative results in other frameworks. In this work, we refine this approach by using three different bounds: (5) for small values of \(t\), a bound for medium values of \(t\) highlighting the small-scale behaviour of the measure \(\nu\), and a last bound for large values of \(t\) which will involve moments of the random variables.

## 5 Bounding \(\|\rho_{t}\|_{p}\)

### Small time

Suppose \(W=\sum_{i=1}^{n}W_{i}\), where the \((W_{i})_{1\leq i\leq n}\) are independent, with finite moment of order \(p\), and such that \(\sum_{i=1}^{n}\mathbb{E}[W_{i}^{\otimes 2}]=I_{d}\). Then, there exists \(C>0\) such that \[\|\rho_{t}\|_{p}\leq\Psi_{1}(t)\coloneqq C\left(\sqrt{dp}\left(1+\frac{1}{\sqrt{\Delta(t)}}\right)+pL_{p}^{1/p}\right). \tag{6}\] Indeed, since the \((W_{i})_{1\leq i\leq n}\) are independent and centered, we can use Lemma 2 to obtain \[\|W\|_{p}\leq C\left(\sqrt{dp}+pL_{p}^{1/p}\right).\] On the other hand, \[\|Z\|_{p}\leq\sqrt{d(p-1)}.\] Injecting these bounds into (5) then yields (6).

### Medium time

When looking at (5), we can see that, for small values of \(t\), the main term of the bound is \(\frac{\|\mathbb{E}[Z\mid F_{t}]\|_{p}}{\sqrt{\Delta(t)}}\), which we bounded by \(\frac{\|Z\|_{p}}{\sqrt{\Delta(t)}}\) using Jensen's inequality. In this Section, we establish a sharper bound on this quantity by optimizing Proposition 6.1 of [5]. We start by providing a general result in the exchangeable pair framework before tackling our specific Central Limit Theorem case.

#### 5.2.1 A general bound

**Proposition 1**.: _Let \((W,W^{\prime})\) be a couple of random variables such that \((W,W^{\prime})\) and \((W^{\prime},W)\) follow the same law. For any \(t\geq 0\), let \(D_{t}=(W^{\prime}-W)1_{\|W^{\prime}-W\|^{2}\leq\eta_{p}(t)}\). For any \(0<s<t\) such that \(\mathbb{E}[D_{s}^{\otimes 2}]\) is positive-definite, we have_ \[\|\rho_{t}\|_{p}\leq e^{-t}\left(\|\mathbb{E}[\Lambda_{s}D_{s}\mid W]+W\|_{p}+\frac{1}{\sqrt{\eta_{p}(t)}}\|\mathbb{E}[\Gamma_{s}^{\Lambda}\mid W]-\mathbb{E}[\Gamma_{s}^{\Lambda}]\|_{p}+\frac{C\eta_{p}(s)\sqrt{d}}{\eta_{p}(t)^{3/2}}\right),\] _where \(C>0\) is a generic constant, \(\Lambda_{s}=\mathbb{E}[D_{s}^{\otimes 2}]^{-1}\) and \(\Gamma_{s}^{\Lambda}=\frac{1}{2}\Lambda_{s}D_{s}^{\otimes 2}\)._

The proof of this result is mostly the same as the proof of Proposition 6.1 of [5].
Proof.: Let \(0<s<t\) and let \[\tau_{t}=\left(\Lambda_{s}D_{s}+\frac{\Gamma_{s}^{\Lambda}Z}{\sqrt{\Delta(t)}}+\sum_{k=3}^{\infty}a_{k}\frac{(\Gamma_{s}^{\Lambda}\otimes D_{s}^{\otimes(k-1)})H_{k}(Z)}{\Delta(t)^{k/2}}\right),\] with \(a_{k}=\frac{1}{k!}-\frac{1}{4(k-2)!}\). A small modification of Lemma 6.5 [5] gives \[\mathbb{E}[\tau_{t}\mid F_{t}]=0.\] Therefore, \[\rho_{t}=\rho_{t}+e^{-t}\mathbb{E}[\tau_{t}\mid F_{t}]\] and using (4) along with the triangle inequality yields \[e^{t}\|\rho_{t}\|_{p}\leq\|\mathbb{E}[\Lambda_{s}D_{s}+W\mid F_{t}]\|_{p}+\frac{1}{\sqrt{\Delta(t)}}\left\|\mathbb{E}\left[\left(\Gamma_{s}^{\Lambda}-I_{d}\right)Z\mid F_{t}\right]\right\|_{p}\\ +\sum_{k=3}^{\infty}\frac{a_{k}}{\Delta(t)^{k/2}}\left\|\mathbb{E}[(\Gamma_{s}^{\Lambda}\otimes D_{s}^{\otimes(k-1)})H_{k}(Z)\mid F_{t}]\right\|_{p}.\] Then, since \(Z\) and \(W\) are independent, we have, by Jensen's inequality, \[e^{t}\|\rho_{t}\|_{p}\leq\|\mathbb{E}[\Lambda_{s}D_{s}+W\mid W]\|_{p}+\frac{1}{\sqrt{\Delta(t)}}\left\|\left(\mathbb{E}\left[\Gamma_{s}^{\Lambda}\mid W\right]-I_{d}\right)Z\right\|_{p}\\ +\sum_{k=3}^{\infty}\frac{a_{k}}{\Delta(t)^{k/2}}\left\|\mathbb{E}[\Gamma_{s}^{\Lambda}\otimes D_{s}^{\otimes(k-1)}\mid W]H_{k}(Z)\right\|_{p}.\] Applying Lemma 3 thus gives \[e^{t}\|\rho_{t}\|_{p}\leq\|\mathbb{E}[\Lambda_{s}D_{s}+W\mid W]\|_{p}+\frac{1}{\sqrt{\eta_{p}(t)}}\left\|\mathbb{E}\left[\Gamma_{s}^{\Lambda}\mid W\right]-I_{d}\right\|_{p}\\ +\sum_{k=3}^{\infty}\frac{a_{k}\sqrt{k!}}{\eta_{p}(t)^{k/2}}\left\|\mathbb{E}[\Gamma_{s}^{\Lambda}\otimes D_{s}^{\otimes(k-1)}\mid W]\right\|_{p}.\] Now, \[\left\|\mathbb{E}[\Gamma_{s}^{\Lambda}\otimes D_{s}^{\otimes(k-1)}\mid W]\right\|_{p}\leq\left\|\mathbb{E}[\Gamma_{s}^{\Lambda}\|D_{s}\|^{k-1}\mid W]\right\|_{p}\leq\eta_{p}(s)^{(k-1)/2}\left\|\mathbb{E}[\Gamma_{s}^{\Lambda}\mid W]\right\|_{p}\leq\eta_{p}(s)^{(k-1)/2}\left(\left\|\mathbb{E}[\Gamma_{s}^{\Lambda}\mid W]-\mathbb{E}[\Gamma_{s}^{\Lambda}]\right\|_{p}+\left\|\mathbb{E}[\Gamma_{s}^{\Lambda}]\right\|\right),\] so, since \(\sum_{k=3}^{\infty}a_{k}\sqrt{k!}<\infty\) and \(\eta_{p}(s)\leq\eta_{p}(t)\), there exists \(C>0\) such that \[e^{t}\|\rho_{t}\|_{p}\leq\|\mathbb{E}[\Lambda_{s}D_{s}+W\mid W]\|_{p}+\frac{1}{\sqrt{\eta_{p}(t)}}\left\|\mathbb{E}\left[\Gamma_{s}^{\Lambda}\mid W\right]-I_{d}\right\|_{p}+\frac{C\eta_{p}(s)}{\eta_{p}(t)^{3/2}}\|\mathbb{E}[\Gamma_{s}^{\Lambda}]\|.\] Finally, one can remark that, by definition of \(\Lambda_{s}\), \[\mathbb{E}[\Gamma_{s}^{\Lambda}]=I_{d},\] concluding the proof.

#### 5.2.2 Sum of independent variables

**Proposition 2**.: _Suppose \(W=\sum_{i=1}^{n}W_{i}\) where the \((W_{i})_{1\leq i\leq n}\) are independent random variables with finite second moment. For any \(i\in\{1,\ldots,n\}\) and \(\beta>0\), let \(D_{i,\beta}=(W^{\prime}_{i}-W_{i})\mathds{1}_{\|W^{\prime}_{i}-W_{i}\|\leq\beta}\), where \(W^{\prime}_{i}\) is an i.i.d. copy of \(W_{i}\). Suppose there exists \(\beta>0\) such that_ \[\Lambda_{\beta}^{-1}=\sum_{i=1}^{n}\mathbb{E}[D_{i,\beta}^{\otimes 2}]\] _is positive-definite. Then, for any \(t\) such that \(\eta_{p}(t)\geq\beta^{2}\), there exists \(C>0\) such that_ \[\|\rho_{t}\|_{p}\leq\Psi_{2}(t)\coloneqq C\left(\sqrt{p(\beta_{2}+d)}+p(\beta_{p}+L_{p})^{1/p}+\frac{\sqrt{dp}\beta^{2}}{\Delta(t)^{3/2}}\right),\] _where_ \[\forall q\geq 0,\beta_{q}=\sum_{i=1}^{n}\mathbb{E}\left[\|\Lambda_{\beta}D_{i,\beta}\|^{q}\right].\] Proof.: Let \(s\) be such that \(\eta_{p}(s)=\beta^{2}\) and let \(t>s\).
Let \(W^{\prime}=W+(W^{\prime}_{I}-W_{I})\), where \(I\) is a uniform random variable on \(\{1,\ldots,n\}\). Since \((W,W^{\prime})\) and \((W^{\prime},W)\) follow the same law, we can apply Proposition 1 to obtain \[e^{t}\|\rho_{t}\|_{p}\leq\|\mathbb{E}[\Lambda_{s}D_{s}\mid W]\|_{p}+\|W\|_{p}+ \frac{\|\mathbb{E}[\Gamma_{s}^{\Lambda}\mid W]-\mathbb{E}[\Gamma_{s}^{\Lambda} ]\|_{p}}{\sqrt{\eta_{p}(t)}}+C\frac{\sqrt{dp}\beta^{2}}{\Delta(t)^{3/2}},\] with \(\Lambda_{s}=n\Lambda_{\beta}\). First, following the proof of (6), we have \[\|W\|_{p}\leq C\left(\sqrt{dp}+pL_{p}^{1/p}\right).\] Now, by definition of \(D_{s}\) and since \(I\) is independent of \(W\), \[\mathbb{E}[D_{s}\mid W]=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[(W^{\prime}_{i}-W _{i})\mathds{1}_{\|W^{\prime}_{i}-W_{i}\|^{2}\leq\eta_{p}(s)}\mid W].\] Hence, \[\|\mathbb{E}[\Lambda_{s}D_{s}\mid W]\|_{p}=\left\|\Lambda_{\beta}\sum_{i=1}^{ n}\mathbb{E}\left[D_{i,\beta}\mid W\right]\right\|_{p}\] and, by Jensen's inequality, \[\|\mathbb{E}[\Lambda_{s}D_{s}\mid W]\|_{p}\leq\left\|\sum_{i=1}^{n}\Lambda_{ \beta}D_{i,\beta}\right\|_{p}.\] Let \(i\in\{1,\ldots,n\}\). Since \(W^{\prime}_{i}\) and \(W_{i}\) are independent, we have \[\mathbb{E}[D_{i,\beta}]=0.\] We can thus apply Rosenthal's inequality (see Lemma 2) to obtain \[\|\mathbb{E}[\Lambda_{s}D_{s}\mid W]\|_{p}\leq C\sqrt{p}\left(\sum_{i=1}^{n} \|\Lambda_{\beta}D_{i,\beta}\|_{2}^{2}\right)^{1/2}+Cp\left(\sum_{i=1}^{n}\| \Lambda_{\beta}D_{i,\beta}\|_{p}^{p}\right)^{1/p}=C\left(\sqrt{p\beta_{2}}+p \beta_{p}^{1/p}\right).\] Similarly, \[\|\mathbb{E}[\Gamma_{s}^{\Lambda}\mid W]-\mathbb{E}[\Gamma_{s}^{\Lambda}]\|_{p }\leq C\sqrt{p}\left(\sum_{i=1}^{n}\left\|\Lambda_{\beta}D_{i,\beta}^{\otimes 2} \right\|_{2}^{2}\right)^{1/2}+Cp\left(\sum_{i=1}^{n}\left\|\Lambda_{\beta}D_{i, \beta}^{\otimes 2}\right\|_{p}^{p}\right)^{1/p}\] and, since \(\|D_{i,\beta}\|\leq\beta=\sqrt{\eta_{p}(s)}\leq\sqrt{\eta_{p}(t)}\), \[\left\|\Lambda_{\beta}D_{i,\beta}^{\otimes 2}\right\|\leq\|\Lambda_{\beta}D_{i, \beta}\|\,\|D_{i,\beta}\|\leq\sqrt{\eta_{p}(t)}\,\|\Lambda_{\beta}D_{i,\beta}\|\,,\] and \[\|\mathbb{E}[\Gamma_{s}^{\Lambda}\mid W]-\mathbb{E}[\Gamma_{s}^{\Lambda}]\|_{p} \leq C\sqrt{p\eta_{p}(t)}\left(\sum_{i=1}^{n}\|\Lambda_{\beta}D_{i,\beta}\|_{2}^ {2}\right)^{1/2}+Cp\sqrt{\eta_{p}(t)}\left(\sum_{i=1}^{n}\|\Lambda_{\beta}D_{i,\beta}\|_{p}^{p}\right)^{1/p}.\] Therefore \[\frac{\|\mathbb{E}[\Gamma_{s}^{\Lambda}\mid W]-\mathbb{E}[\Gamma_{s}^{\Lambda} ]\|_{p}}{\sqrt{\eta_{p}(t)}}\leq C\left(\sqrt{p\beta_{2}}+p\beta_{p}^{1/p} \right),\] concluding the proof. ### Large time We are then left with bounding \(\|\rho_{t}\|_{p}\) for large values of \(t\). **Proposition 3**.: _Suppose \(W=\sum_{i=1}^{n}W_{i}\) where the \((W_{i})_{1\leq i\leq n}\) are independent random variables with finite moment of order \(p+2\). 
Then, using the notations of Section 3, there exists \(C>0\) such that for any \(p<q\leq p+2\) and \(r\) verifying \(\frac{1}{q}+\frac{1}{r}=\frac{1}{p}\), we have_ \[\|\rho_{t}\|_{p}\leq\Psi_{3}(t) \coloneqq\frac{e^{-3t}}{2}\|M_{3}H_{2}(Z)\|_{p}+C\frac{r\|M_{3}\|W_{q}(\nu,\gamma)}{\eta_{p}(t)^{3/2}}\] \[\quad+C\left(\sqrt{\frac{pL_{4}(\eta_{p}(t))}{\eta_{p}(t)}}+p\left(\frac{L_{p+2}(\eta_{p}(t))}{\eta_{p}(t)}\right)^{1/p}+\frac{\sqrt{pL_{4}^{\prime}}}{\eta_{p}(t)}+\frac{p(L_{p+2}^{\prime})^{1/p}}{\eta_{p}(t)^{1/2+2/p}}\right)\] \[\quad\quad\quad\quad\quad+C\frac{L_{4}^{\prime\prime}(\eta_{p}(t))+\|M_{4}\|}{\eta_{p}(t)^{3/2}}.\] Before starting the proof of this result, we need to introduce some necessary notations. For any \(i\in\{1,\ldots,n\}\), let \(D_{i}=W_{i}^{\prime}-W_{i}\) and, for any \(t>0\), let \(D_{i,t}=D_{i}\mathbb{1}_{\|D_{i}\|^{2}\leq\eta_{p}(t)}\). Furthermore, in the remainder of this proof, we denote by \(C\) a generic constant. As in the proof of Proposition 1, we first rewrite \(\rho_{t}\) with the help of the following result. **Lemma 1**.: _For any \(i\in\{1,\ldots,n\}\), the quantity_ \[\tau_{i,t}=\mathbb{E}[D_{i,t}\mid F_{t}]+\sum_{k=1}^{\infty}\frac{\mathbb{E}\left[(W_{i}^{\prime}\otimes D_{i,t}^{\otimes k})H_{k}(Z)\mid F_{t}\right]}{k!\Delta(t)^{k/2}}\] _verifies_ \[\mathbb{E}[\tau_{i,t}\mid F_{t}]=0.\] Proof.: Let \(i\in\{1,\ldots,n\}\) and let \(\phi\) be a smooth test function. Since \(\Phi:x\to\mathbb{E}[\phi(e^{-t}x+\sqrt{1-e^{-2t}}Z)]\) is real analytic (see e.g. Lemma 1 [2] or Lemma 6.4 [4]), we have \[\mathbb{E}[W_{i}1_{\|D_{i}\|^{2}\leq\eta_{p}(t)}\phi(F_{t})]=\sum_{k=0}^{\infty}\frac{e^{-kt}}{k!}\mathbb{E}\left[W_{i}\left<(-D_{i,t})^{\otimes k},\nabla^{k}\phi(F_{t}+e^{-t}D_{i})\right>\right]=\sum_{k=0}^{\infty}\frac{e^{-kt}}{k!}\mathbb{E}\left[(W_{i}\otimes(-D_{i,t})^{\otimes k})\nabla^{k}\phi(F_{t}+e^{-t}D_{i})\right].\] Thus, by performing repeated integration by parts with respect to the Gaussian measure (see e.g. Equation (16) [2]), we obtain \[\mathbb{E}[W_{i}1_{\|D_{i}\|^{2}\leq\eta_{p}(t)}\phi(F_{t})]=\sum_{k=0}^{\infty}\frac{\mathbb{E}\left[(W_{i}\otimes(-D_{i,t})^{\otimes k})H_{k}(Z)\phi(F_{t}+e^{-t}D_{i})\right]}{k!\Delta(t)^{k/2}}.\] Finally, since \(W_{i}\) and \(W_{i}^{\prime}\) are i.i.d., \[\forall k\geq 0,\mathbb{E}[(W_{i}\otimes(-D_{i,t})^{\otimes k})H_{k}(Z)\phi(F_{t}+e^{-t}D_{i})]=\mathbb{E}[(W_{i}^{\prime}\otimes D_{i,t}^{\otimes k})H_{k}(Z)\phi(F_{t})],\] and therefore \[\mathbb{E}[W_{i}1_{\|D_{i}\|^{2}\leq\eta_{p}(t)}\phi(F_{t})]=\mathbb{E}[W_{i}^{\prime}1_{\|D_{i}\|^{2}\leq\eta_{p}(t)}\phi(F_{t})]+\sum_{k=1}^{\infty}\frac{\mathbb{E}\left[(W_{i}^{\prime}\otimes D_{i,t}^{\otimes k})H_{k}(Z)\phi(F_{t})\right]}{k!\Delta(t)^{k/2}},\] concluding the proof. We are now ready to start the proof of Proposition 3.
Let \(t\geq 0\). Using Lemma 1 yields \[\rho_{t}=\rho_{t}+e^{-t}\sum_{i=1}^{n}\mathbb{E}[\tau_{i,t}\mid F_{t}].\] Therefore, by the triangle inequality, \[e^{t}\|\rho_{t}\|_{p}\leq\left\|\mathbb{E}\left[\sum_{i=1}^{n}D_{i,t}+W_{i}\mid F_{t}\right]\right\|_{p}+\frac{\left\|\mathbb{E}\left[(\sum_{i=1}^{n}W_{i}^{\prime}\otimes D_{i,t}-I_{d})Z\mid F_{t}\right]\right\|_{p}}{\sqrt{\Delta(t)}}\\ +\sum_{k=2}^{\infty}\frac{\left\|\mathbb{E}\left[(\sum_{i=1}^{n}W_{i}^{\prime}\otimes D_{i,t}^{\otimes k})H_{k}(Z)\mid F_{t}\right]\right\|_{p}}{k!\Delta(t)^{k/2}}.\] Let us write * \(M_{3}\coloneqq\sum_{i=1}^{n}\mathbb{E}[W_{i}^{\prime}\otimes D_{i}^{\otimes 2}]=\sum_{i=1}^{n}\mathbb{E}[W_{i}^{\otimes 3}];\) * \(R(t)\coloneqq\sum_{i=1}^{n}\mathbb{E}[D_{i,t}+W_{i}\mid W_{i}];\) * \(S(t)\coloneqq\sum_{i=1}^{n}\mathbb{E}[W_{i}^{\prime}\otimes D_{i,t}\mid W_{i}]-\mathbb{E}[\sum_{i=1}^{n}W_{i}^{\prime}\otimes D_{i,t}];\) * \(\forall k\in\{1,2\},E_{k}(t)\coloneqq\mathbb{E}[\sum_{i=1}^{n}W_{i}^{\prime}\otimes D_{i}^{\otimes k}1_{\|D_{i}\|^{2}\geq\eta_{p}(t)}];\) * \(\forall k\geq 2,G_{k}(t)\coloneqq\mathbb{E}[\sum_{i=1}^{n}W_{i}^{\prime}\otimes D_{i,t}^{\otimes k}];\) * \(\forall k\geq 2,K_{k}(t)\coloneqq\sum_{i=1}^{n}W_{i}^{\prime}\otimes D_{i,t}^{\otimes k}-\mathbb{E}[\sum_{i=1}^{n}W_{i}^{\prime}\otimes D_{i,t}^{\otimes k}].\) Note that, by assumptions, \[\sum_{i=1}^{n}W_{i}^{\prime}\otimes D_{i,t}-I_{d}=S(t)+E_{1}(t)\] and \[\sum_{i=1}^{n}W_{i}^{\prime}\otimes D_{i,t}^{\otimes 2}=M_{3}+E_{2}(t).\] Hence, applying Jensen's inequality along with the triangle inequality yields \[e^{t}\|\rho_{t}\|_{p}\leq\|R(t)\|_{p}+\frac{\|E_{1}(t)Z\|_{p}+\|S(t)Z\|_{p}}{\sqrt{\Delta(t)}}+\frac{\left\|E_{2}(t)H_{2}(Z)\right\|_{p}+\left\|M_{3}\mathbb{E}[H_{2}(Z)\mid F_{t}]\right\|_{p}}{2\Delta(t)}\\ +\sum_{k=3}^{\infty}\frac{\left\|G_{k}(t)H_{k}(Z)\right\|_{p}+\left\|K_{k}(t)H_{k}(Z)\right\|_{p}}{k!\Delta(t)^{k/2}}.\] Let \(p<q\leq p+2\) and \(r>0\) be such that \(\frac{1}{q}+\frac{1}{r}=\frac{1}{p}\). By Lemmas 4 and 5, we have \[\left\|M_{3}\mathbb{E}[H_{2}(Z)\mid F_{t}]\right\|_{p}=\left\|M_{3}\mathbb{E}[H_{2}(Z)\mid Z+\Delta(t)^{-1/2}W]\right\|_{p}\leq e^{-2t}\Delta(t)\|M_{3}H_{2}(Z)\|_{p}+\frac{Cr\|M_{3}\|W_{q}(\nu,\gamma)}{\sqrt{\Delta(t)}}.\] Therefore, \[e^{t}\|\rho_{t}\|_{p}\leq\|R(t)\|_{p}+\frac{\left\|E_{1}(t)Z\right\|_{p}+\left\|S(t)Z\right\|_{p}}{\sqrt{\Delta(t)}}+\frac{e^{-2t}}{2}\left\|M_{3}H_{2}(Z)\right\|_{p}+\frac{Cr\left\|M_{3}\right\|W_{q}(\nu,\gamma)}{\eta_{p}(t)^{3/2}}\\ +\frac{\left\|E_{2}(t)H_{2}(Z)\right\|_{p}}{2\Delta(t)}+\sum_{k=3}^{\infty}\frac{\left\|G_{k}(t)H_{k}(Z)\right\|_{p}+\left\|K_{k}(t)H_{k}(Z)\right\|_{p}}{k!\Delta(t)^{k/2}}.\] And finally, by Lemma 3, \[e^{t}\|\rho_{t}\|_{p}\leq\|R(t)\|_{p}+\frac{\left\|E_{1}(t)\right\|_{p}+\left\|S(t)\right\|_{p}}{\sqrt{\eta_{p}(t)}}+\frac{e^{-2t}}{2}\left\|M_{3}H_{2}(Z)\right\|_{p}+\frac{Cr\left\|M_{3}\right\|W_{q}(\nu,\gamma)}{\eta_{p}(t)^{3/2}}\\ +\frac{\left\|E_{2}(t)\right\|}{\sqrt{2}\eta_{p}(t)}+\sum_{k=3}^{\infty}\frac{\left\|G_{k}(t)\right\|+\left\|K_{k}(t)\right\|_{p}}{\sqrt{k!}\eta_{p}(t)^{k/2}}.\] We are thus left with bounding these various quantities.

#### 5.3.1 Bounding \(R(t)\)

Let \(i\in\{1,\ldots,n\}\).
Since \(W^{\prime}_{i}\) and \(W_{i}\) are independent and since \(\mathbb{E}[W^{\prime}_{i}]=0\), we have \[\mathbb{E}[D_{i,t}+W_{i}\mid F_{t}]=\mathbb{E}[D_{i}1_{\|D_{i}\|^{2}\geq\eta_ {p}(t)}\mid F_{t}].\] Thus, by Jensen's inequality, \[\left\|R(t)\right\|_{p}\leq\left\|\sum_{i=1}^{n}D_{i}1_{\|D_{i}\|^{2}\geq\eta_ {p}(t)}\right\|_{p}.\] Since \(W_{i}\) and \(W^{\prime}_{i}\) are independent, \[\mathbb{E}[D_{i}1_{\|D_{i}\|^{2}\geq\eta_{p}(t)}]=0.\] As the \((D_{i})_{1\leq i\leq n}\) are independent, we can apply Rosenthal's inequality (see Lemma 2) to obtain \[\|R(t)\|_{p}\leq C\sqrt{p}\left(\sum_{i=1}^{n}\|D_{i}1_{\|D_{i}\|^{2}\geq\eta_ {p}(t)}\|_{2}^{2}\right)^{1/2}+Cp\left(\sum_{i=1}^{n}\|D_{i}1_{\|D_{i}\|^{2} \geq\eta_{p}(t)}\|_{p}^{p}\right)^{1/p}.\] Now, for any \(q\leq p\), \[\|D_{i}1_{\|D_{i}\|^{2}\geq\eta_{p}(t)}\|_{q}^{q}\leq\frac{\mathbb{E}[\|D_{i} \|^{q+2}1_{\|D_{i}\|^{2}\geq\eta_{p}(t)}]}{\eta_{p}(t)}.\] Therefore, \[\|R(t)\|_{p}\leq\frac{C\sqrt{p}}{\sqrt{\eta_{p}(t)}}\left(\sum_{i=1}^{n} \mathbb{E}[\|D_{i}\|^{4}1_{\|D_{i}\|^{2}\geq\eta_{p}(t)}]\right)^{1/2}+\frac{ Cp}{\eta_{p}(t)^{1/p}}\left(\sum_{i=1}^{n}\mathbb{E}[\|D_{i}\|^{p+2}1_{\|D_{i}\|^{2} \geq\eta_{p}(t)}]\right)^{1/p}\] or \[\|R(t)\|_{p}\leq C\left(\sqrt{\frac{pL_{4}(\eta_{p}(t))}{\eta_{p}(t)}}+p \left(\frac{L_{p+2}(\eta_{p}(t))}{\eta_{p}(t)}\right)^{1/p}\right).\] #### 5.3.2 Bounding \(S(t)\) Let \(i\in\{1,\ldots,n\}\). Since \(W^{\prime}_{i}\) is independent of \(F_{t}\) and since \(\mathbb{E}[W_{i}]=0\), we have \[\mathbb{E}[W^{\prime}_{i}\otimes D_{i,t}\mid F_{t}] =\mathbb{E}[W^{\prime}_{i}\otimes D_{i}\mid F_{t}]-\mathbb{E}[W^{ \prime}_{i}\otimes D_{i}1_{\|D\|^{2}\geq\eta_{p}(t)}\mid F_{t}]\] \[=\mathbb{E}[W^{\prime}_{i}\otimes D_{i}]-\mathbb{E}[W^{\prime}_{i }\otimes D_{i}1_{\|D\|^{2}\geq\eta_{p}(t)}\mid F_{t}].\] Therefore, \[\mathbb{E}[W^{\prime}_{i}\otimes D_{i,t}\mid F_{t}]-\mathbb{E}[W^{\prime}_{i }\otimes D_{i,t}]=\mathbb{E}[W^{\prime}_{i}\otimes D_{i}1_{\|D\|^{2}\geq\eta_{ p}(t)}]-\mathbb{E}[W^{\prime}_{i}\otimes D_{i}1_{\|D\|^{2}\geq\eta_{p}(t)}\mid F_{t}].\] Hence, we can apply Rosenthal's inequality (see Lemma 2) to obtain \[\|S(t)\|_{p}\leq C\sqrt{p}\left(\sum_{i=1}^{n}\left\|\mathbb{E }[W^{\prime}_{i}\otimes D_{i}1_{\|D\|^{2}\geq\eta_{p}(t)}\mid F_{t}]\right\|_{ 2}^{2}\right)^{1/2}\\ +Cp\left(\sum_{i=1}^{n}\left\|\mathbb{E}[W^{\prime}_{i}\otimes D _{i}1_{\|D\|^{2}\geq\eta_{p}(t)}\mid F_{t}]\right\|_{p}^{p}\right)^{1/p}.\] Then, applying Cauchy-Schwarz's inequality yields \[\|\mathbb{E}[W^{\prime}_{i}\otimes D_{i,t}\mid F_{t}]\|^{2}\leq\|\mathbb{E}[W^{ \prime\otimes 2}_{i}\mid F_{t}]\|\|\mathbb{E}[D^{\otimes 2}_{i,t}\mid F_{t}]\|\] and, since \(W^{\prime}_{i}\) is independent of \(F_{t}\) and using Jensen's inequality, \[\|\mathbb{E}[W^{\prime}_{i}\otimes D_{i,t}\mid F_{t}]\|^{2}\leq\|\mathbb{E}[W^ {\otimes 2}_{i}]\|\|D^{\otimes 2}_{i,t}\|.\] Thus, for any \(2\leq q\leq p\), \[\|\mathbb{E}[W^{\prime}_{i}\otimes D_{i}1_{\|D\|^{2}\geq\eta_{p}(t)}\mid F_{t }]\|_{q}^{q}\leq\eta_{p}(t)\|\mathbb{E}[W^{\otimes 2}_{i}]\|^{q/2}\mathbb{E}[\|D_{i} \|^{q+2}].\] Therefore, \[\frac{\|S(t)\|_{p}}{\sqrt{\eta_{p}(t)}}\leq\frac{C\sqrt{p}}{\eta _{p}(t)}\left(\sum_{i=1}^{n}\left\|\mathbb{E}\left[W^{\otimes 2}_{i} \right]\right\|\mathbb{E}\left[\|D_{i}\|^{4}\right]\right)^{1/2}\\ +\frac{Cp}{\eta_{p}(t)^{1/2+2/p}}\left(\sum_{i=1}^{n}\left\| \mathbb{E}\left[W^{\otimes 2}_{i}\right]\right\|^{p/2}\mathbb{E}\left[\|D_{i}\|^{p+2} \right]\right)^{1/p}\] or \[\frac{\|S(t)\|_{p}}{\sqrt{\eta_{p}(t)}}\leq 
C\left(\frac{\sqrt{pL^{\prime}_{4 }}}{\eta_{p}(t)}+\frac{p(L^{\prime}_{p+2})^{1/p}}{\eta_{p}(t)^{1/2+2/p}}\right).\] #### 5.3.3 Bounding \(K_{k}(t)\) Let \(k\geq 2\). By Rosenthal's inequality (see Lemma 2), we have \[\|K_{k}(t)\|_{p}\leq C\sqrt{p}\left(\sum_{i=1}^{n}\|\mathbb{E}[W^{\prime}_{i} \otimes D^{\otimes k}_{i,t}\mid F_{t}]\|_{2}^{2}\right)^{1/2}+Cp\left(\sum_{i= 1}^{n}\|\mathbb{E}[W^{\prime}_{i}\otimes D^{\otimes k}_{i,t}\mid F_{t}]\|_{p} ^{p}\right)^{1/p}.\] Then, applying Cauchy-Schwarz's inequality yields \[\|\mathbb{E}[W^{\prime}_{i}\otimes D^{\otimes k}_{i,t}\mid F_{t}]\|^{2}\leq \|\mathbb{E}[W^{\prime\otimes 2}_{i}\mid F_{t}]\|\|\mathbb{E}[D^{\otimes 2k}_{i,t} \mid F_{t}]\|\] and, since \(W^{\prime}_{i}\) is independent of \(F_{t}\) and using Jensen's inequality, \[\|\mathbb{E}[W^{\prime}_{i}\otimes D^{\otimes k}_{i,t}\mid F_{t}]\|^{2}\leq \|\mathbb{E}[W^{\otimes 2}_{i}]\|\|D^{\otimes 2k}_{i,t}\|\] Thus, for any \(2\leq q\leq p\), \[\|\mathbb{E}[W^{\prime}_{i}\otimes D^{\otimes k}_{i,t}\mid W_{i}]\|_{q}^{q} \leq\eta_{p}(t)^{(k-2)q/2}\|\mathbb{E}[W^{\otimes 2}_{i}]\|^{q/2}\mathbb{E}[\|D_{i,t} \|^{q+2}]\] Therefore, \[\frac{\|K_{k}(t)\|_{p}}{\eta_{p}(t)^{k/2}}\leq\frac{C\sqrt{p}}{ \eta_{p}(t)}\left(\sum_{i=1}^{n}\left\|\mathbb{E}\left[W^{\otimes 2}_{i} \right]\right\|\mathbb{E}\left[\|D_{i}\|^{4}\right]\right)^{1/2}\\ +\frac{Cp}{\eta_{p}(t)^{1/2+2/p}}\left(\sum_{i=1}^{n}\left\| \mathbb{E}\left[W^{\otimes 2}_{i}\right]\right\|^{p/2}\mathbb{E}\left[\|D_{i}\|^{p+2} \right]\right)^{1/p}\] or \[\frac{\|K_{k}(t)\|_{p}}{\eta_{p}(t)^{k/2}}\leq C\left(\frac{\sqrt{pL^{\prime}_ {4}}}{\eta_{p}(t)}+\frac{p(L^{\prime}_{p+2})^{1/p}}{\eta_{p}(t)^{1/2+2/p}} \right).\] #### 5.3.4 Bounding \(E_{k}(t)\) Let us first consider the case \(k=1\). Let \(1\leq i\leq n\). Since \(W_{i}\) and \(W^{\prime}_{i}\) are independent, we have \[\mathbb{E}[W^{\prime}_{i}\otimes D_{i,t}]=-\mathbb{E}[W_{i}\otimes D_{i,t}].\] Therefore, \[E_{1}(t)=\frac{\mathbb{E}[D_{i}^{\otimes 2}1_{\|D_{i}\|^{2}\geq\eta_{p}(t)}]}{2}\] and, since \(D_{i}^{\otimes 2}\) is positive-definite, we have, by Lemma A.1[5], \[\|E_{1}(t)\|\leq\frac{\sum_{i=1}^{n}\left\|\mathbb{E}[D_{i}^{\otimes 2}1_{\|D_{i }\|^{2}\geq\eta_{p}(t)}]\right\|}{2}\leq\frac{\sum_{i=1}^{n}\left\|\mathbb{E}[ D_{i}^{\otimes 2}\|D_{i}\|^{2}1_{\|D_{i}\|^{2}\geq\eta_{p}(t)}]\right\|}{2\eta_{p}(t)}\] and \[\frac{\|E_{1}(t)\|}{\sqrt{\eta_{p}(t)}}\leq\frac{\sum_{i=1}^{n}\left\|\mathbb{ E}[D_{i}^{\otimes 2}\|D_{i}\|^{2}1_{\|D_{i}\|^{2}\geq\eta_{p}(t)}]\right\|}{2\eta_{p} (t)^{3/2}}.\] Now suppose \(k=2\). This time, \[E_{2}(t) =\frac{\sum_{i=1}^{n}\mathbb{E}[(W^{\prime}_{i}+W_{i})\otimes D_{ i}^{\otimes 2}1_{\|D_{i}\|^{2}\geq\eta_{p}(t)}]}{2}\] \[=\frac{\sum_{i=1}^{n}\mathbb{E}[(W^{{}^{\prime}\otimes 2}_{i}-W_{i}^{ \otimes 2})\otimes D_{i}1_{\|D_{i}\|^{2}\geq\eta_{p}(t)}]}{2}\] \[=\sum_{i=1}^{n}\mathbb{E}[W_{i}^{\otimes 2}\otimes D_{i}1_{\|D_{i }\|^{2}\geq\eta_{p}(t)}].\] Since \(W_{i}^{\otimes 2}\) is positive-definite, Lemma A.1 and A.2 from [5] give \[\frac{\|E_{2}(t)\|}{\eta_{p}(t)}\leq\frac{C\sum_{i=1}^{n}\|\mathbb{E}[W_{i}^{ \otimes 2}\|D_{i}\|^{2}1_{\|D_{i}\|^{2}\geq\eta_{p}(t)}]\|}{\eta_{p}(t)^{3/2}}.\] Overall, we obtained \[\forall k\in\{1,2\},E_{k}(t)\leq C\frac{L_{4}^{\prime\prime}(\eta_{p}(t))}{ \eta_{p}(t)^{3/2}}.\] #### 5.3.5 Bounding \(G_{k}(t)\) Let \(k\geq 3\) and suppose \(k\) is odd. 
Since \(W_{i}\) and \(W^{\prime}_{i}\) are independent, we have \[\mathbb{E}[W^{\prime}_{i}\otimes D_{i,t}^{\otimes k}]=-\mathbb{E}[W_{i}\otimes D_{i,t}^{\otimes k}].\] Therefore, \[\mathbb{E}[W^{\prime}_{i}\otimes D_{i,t}^{\otimes k}]=\frac{\mathbb{E}[(W^{\prime}_{i}-W_{i})\otimes D_{i,t}^{\otimes k}]}{2}=\frac{\mathbb{E}[D_{i,t}^{\otimes(k+1)}]}{2}.\] Now, since \(k+1\) is even, we have \[\|\mathbb{E}[W^{\prime}_{i}\otimes D_{i,t}^{\otimes k}]\|\leq\frac{\eta_{p}(t)^{(k-3)/2}\|\mathbb{E}[D_{i,t}^{\otimes 4}]\|}{2}.\] Let us now consider an even \(k\). We have \[\mathbb{E}[W^{\prime}_{i}\otimes D_{i,t}^{\otimes k}]=\mathbb{E}[W_{i}\otimes D_{i,t}^{\otimes k}].\] Therefore, \[\mathbb{E}[W_{i}^{\prime}\otimes D_{i,t}^{\otimes k}]=\frac{\mathbb{E}[(W_{i}^{\prime}+W_{i})\otimes D_{i,t}^{\otimes k}]}{2}=\frac{\mathbb{E}[(W_{i}^{\prime\otimes 2}-W_{i}^{\otimes 2})\otimes D_{i,t}^{\otimes(k-1)}]}{2}=\mathbb{E}[W_{i}^{\otimes 2}\otimes D_{i,t}^{\otimes(k-1)}].\] Finally, by Lemmas A.1 and A.2 [5], \[\|\mathbb{E}[W_{i}^{\otimes 2}\otimes D_{i,t}^{\otimes(k-1)}]\|\leq\eta_{p}(t)^{(k-3)/2}\|\mathbb{E}[W_{i}^{\otimes 2}\otimes D_{i,t}^{\otimes 2}]\|.\] In particular, \[\frac{\|G_{k}(t)\|}{\eta_{p}(t)^{k/2}}\leq\frac{C\|M_{4}\|}{\eta_{p}(t)^{3/2}}.\]

### Combining times

We are now ready to conclude the proof of Theorem 1. Let \(0<\epsilon_{1}<\epsilon_{2}\) be such that \(\eta_{p}(\epsilon_{2})\leq 1\). By (6) and Propositions 2 and 3, we have \[\int_{0}^{\infty}\|\rho_{t}\|_{p}\,dt\leq\int_{0}^{\epsilon_{1}}\Psi_{1}(t)\,dt+\int_{\epsilon_{1}}^{\epsilon_{2}}\Psi_{2}(t)\,dt+\int_{\epsilon_{2}}^{\infty}\Psi_{3}(t)\,dt.\] Since \[\int_{0}^{\epsilon_{1}}\Psi_{1}(t)\,dt\leq C\left(\sqrt{dp\epsilon_{1}}+\epsilon_{1}\left(\sqrt{pd}+pL_{p}^{1/p}\right)\right)\] and \[\int_{\epsilon_{1}}^{\epsilon_{2}}\Psi_{2}(t)\,dt\leq C\epsilon_{2}\left(\sqrt{p(\beta_{2}+d)}+p\left(\beta_{p}+L_{p}\right)^{1/p}\right)+\frac{C\sqrt{dp}\beta^{2}}{\sqrt{\epsilon_{1}}}\] and \[\int_{\epsilon_{2}}^{\infty}\Psi_{3}(t)\,dt\leq\frac{\|M_{3}H_{2}(Z)\|_{p}}{6}+Cp^{3/2}\frac{L_{4}^{\prime\prime}(\epsilon_{2})+\|M_{4}\|+r\|M_{3}\|W_{q}(\nu,\gamma)}{\sqrt{\epsilon_{2}}}\\ +C\left(p\sqrt{L_{4}(\epsilon_{2})}+p^{1+1/p}L_{p+2}(\epsilon_{2})^{1/p}+p^{3/2}\left|\log(\epsilon_{2})\right|\left(\sqrt{L_{4}^{\prime}}+(p^{2}L_{p+2}^{\prime})^{1/p}\right)\right),\] we can thus take \[\epsilon_{1}=\beta^{2}\] and \[\epsilon_{2}^{3/2}=p\frac{L_{4}^{\prime\prime}(\epsilon_{2})+\|M_{4}\|+r\|M_{3}\|W_{q}(\nu,\gamma)}{\sqrt{\beta_{2}+d}+\sqrt{p}\left(L_{p}^{1/p}+\beta_{p}^{1/p}\right)}\] to complete the proof of Theorem 1.

## 6 Technical lemmas

**Lemma 2**.: _There exists \(C>0\) such that, for any \(p\geq 2\) and any independent random variables \((U_{i})_{1\leq i\leq n}\) with finite moment of order \(p\), we have_ \[\left\|\sum_{i=1}^{n}(U_{i}-\mathbb{E}[U_{i}])\right\|_{p}\leq C\sqrt{p}\left(\sum_{i=1}^{n}\|U_{i}\|_{2}^{2}\right)^{1/2}+Cp\left(\sum_{i=1}^{n}\|U_{i}\|_{p}^{p}\right)^{1/p}.\] Proof.: By Rosenthal's inequality, \[\left\|\sum_{i=1}^{n}(U_{i}-\mathbb{E}[U_{i}])\right\|_{p}\leq C\sqrt{p}\left(\sum_{i=1}^{n}\|U_{i}-\mathbb{E}[U_{i}]\|_{2}^{2}\right)^{1/2}+Cp\left(\sum_{i=1}^{n}\|U_{i}-\mathbb{E}[U_{i}]\|_{p}^{p}\right)^{1/p}.\] First, we have \(\|U_{i}-\mathbb{E}[U_{i}]\|_{2}\leq\|U_{i}\|_{2}\). On the other hand, by the triangle and Jensen's inequalities, \[\|U_{i}-\mathbb{E}[U_{i}]\|_{p}\leq\|U_{i}\|_{p}+\|\mathbb{E}[U_{i}]\|\leq 2\|U_{i}\|_{p},\] concluding the proof.
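As a quick sanity check of Lemma 2, the following Monte Carlo sketch (an illustration added here, not part of the proof) estimates the ratio between \(\|\sum_{i}(U_{i}-\mathbb{E}[U_{i}])\|_{p}\) and the Rosenthal-type right-hand side; the distribution (centred exponentials) is an arbitrary heavy-ish-tailed choice, and the ratio staying bounded as \(n\) and \(p\) vary is consistent with a universal constant \(C\).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # Monte Carlo sample size

def lp(v, p):
    return (np.abs(v) ** p).mean() ** (1 / p)

for n in [10, 100, 1000]:
    for p in [2, 4, 6]:
        U = rng.exponential(1.0, size=(N, n)) - 1.0    # centred: E[U_i] = 0
        lhs = lp(U.sum(axis=1), p)                     # ||sum_i (U_i - E[U_i])||_p
        m2 = (U ** 2).mean(axis=0)                     # ||U_i||_2^2 for each i
        mp = (np.abs(U) ** p).mean(axis=0)             # ||U_i||_p^p for each i
        rhs = np.sqrt(p) * np.sqrt(m2.sum()) + p * mp.sum() ** (1 / p)
        print(f"n={n:5d} p={p}  ratio = {lhs / rhs:.3f}")
```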
**Lemma 3** (Lemma 3 [2]).: _Let \(Z\) be a normal random variable, \(p\geq 2\), \(k\in\mathbb{N}\) and \(M\in(\mathbb{R}^{d})^{\otimes k+1}\). Then,_ \[\|MH_{k}(Z)\|_{p}^{2}\leq(p-1)^{k}k!\|M\|^{2}.\] **Lemma 4**.: _Let \(X,Y\) and \(Z\) be three random variables such that \(Z\) is drawn from the Gaussian measure \(\gamma\) and independent from \((X,Y)\). Let \(q>p\geq 2\) and suppose that \(X\) and \(Y\) have finite moment of order \(q\). Then, for any \(k\geq 0\) and any \(i\in\{1,\ldots,d\}^{k}\),_ \[\|\mathbb{E}[H_{i}(Z)\mid X+Z]-\mathbb{E}[H_{i}(Z)\mid Y+Z]\|_{p}\leq C\sqrt{r^{k}k!}\,\mathbb{E}[\|Y-X\|^{q}]^{1/q},\] _where \(C>0\) is a generic constant, \(H_{i}=(H_{k})_{i}\) and \(r\) is such that \(\frac{1}{r}+\frac{1}{q}=\frac{1}{p}\)._ Proof.: Let \(\epsilon\) be the random variable defined by \[X-Y=\epsilon.\] Therefore, \[\mathbb{E}[H_{i}(Z)\mid X+Z]=\mathbb{E}[H_{i}(Z)\mid Y+Z+\epsilon]\] and there exists \(t\in[0,1]\) such that \[\mathbb{E}[H_{i}(Z)\mid X+Z]-\mathbb{E}[H_{i}(Z)\mid Y+Z]=\frac{d}{dt}\mathbb{E}[H_{i}(Z)\mid Y+Z+t\epsilon].\] Let \(\mu\) be the measure of \((Y,\epsilon)\) and \[f(t)=\int\left(\nabla^{k}\gamma\left(Z+Y-y^{\prime}+t(\epsilon-\epsilon^{\prime})\right)\right)_{i}d\mu(y^{\prime},\epsilon^{\prime}).\] We then have \[\frac{df}{dt}(t)=\int\left\langle\epsilon-\epsilon^{\prime},\nabla\left(\nabla^{k}\gamma\left(Z+Y-y^{\prime}+t(\epsilon-\epsilon^{\prime})\right)\right)_{i}\right\rangle d\mu(y^{\prime},\epsilon^{\prime}).\] Similarly, letting \[g(t)=\int\gamma\left(Z+Y-y^{\prime}+t(\epsilon-\epsilon^{\prime})\right)d\mu(y^{\prime},\epsilon^{\prime}),\] we have \[\frac{dg}{dt}(t)=\int\left\langle\epsilon-\epsilon^{\prime},\nabla\gamma\left(Z+Y-y^{\prime}+t(\epsilon-\epsilon^{\prime})\right)\right\rangle d\mu(y^{\prime},\epsilon^{\prime}).\] By definition of the conditional expectation, \[\mathbb{E}[H_{i}(Z)\mid Y+Z+t\epsilon]=\frac{f(t)}{g(t)}.\] Therefore, denoting by \(\epsilon^{\prime}\) an independent copy of \(\epsilon\), \[\frac{d}{dt}\mathbb{E}[H_{i}(Z)\mid Y+Z+t\epsilon]=\frac{d\frac{f}{g}}{dt}(t)=\mathbb{E}[\langle\epsilon^{\prime}-\epsilon,\nabla H_{i}(Z)\rangle\mid Y+Z+t\epsilon]-\mathbb{E}[\langle\epsilon^{\prime}-\epsilon,Z\rangle\mid Y+Z+t\epsilon]\,\mathbb{E}[H_{i}(Z)\mid Y+Z+t\epsilon].\] Applying the triangle inequality along with Hölder's and Jensen's inequalities then yields \[\left\|\frac{d}{dt}\mathbb{E}[H_{i}(Z)\mid Y+Z+t\epsilon]\right\|_{p}\leq\|\left\langle\epsilon^{\prime}-\epsilon,\nabla H_{i}(Z)\right\rangle\|_{p}+\|\left\langle\epsilon^{\prime}-\epsilon,Z\right\rangle\|_{q}\|H_{i}(Z)\|_{r},\] where \[\frac{1}{q}+\frac{1}{r}=\frac{1}{p}.\] Finally, since \(Z,\epsilon\) and \(\epsilon^{\prime}\) are independent, applying Lemma 3 yields the desired result. **Lemma 5**.: _Let \(Y\) and \(Z\) be two independent standard Gaussian random variables. Then, for any \(k\geq 1\) and any \(\alpha\in(0,1)\), we have_ \[\mathbb{E}[H_{k}(Z)\mid\alpha Y+\sqrt{1-\alpha^{2}}Z]=(1-\alpha^{2})^{k/2}H_{k}(\alpha Y+\sqrt{1-\alpha^{2}}Z).\] Proof.: Let \(\phi\) be a smooth function with compact support. By integration by parts with respect to the Gaussian measure (see e.g. Equation (16) [2]), we have \[\mathbb{E}\left[\mathbb{E}[H_{k}(Z)\mid\alpha Y+\sqrt{1-\alpha^{2}}Z]\,\phi(\alpha Y+\sqrt{1-\alpha^{2}}Z)\right]=\mathbb{E}[H_{k}(Z)\phi(\alpha Y+\sqrt{1-\alpha^{2}}Z)]=(1-\alpha^{2})^{k/2}\mathbb{E}[\nabla^{k}\phi(\alpha Y+\sqrt{1-\alpha^{2}}Z)]=(1-\alpha^{2})^{k/2}\mathbb{E}[H_{k}(\alpha Y+\sqrt{1-\alpha^{2}}Z)\phi(\alpha Y+\sqrt{1-\alpha^{2}}Z)],\] concluding the proof.
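A weak-form numerical check of Lemma 5 (an added illustration, not part of the source): for \(k=2\) and \(H_{2}(z)=z^{2}-1\), the identity implies \(\mathbb{E}[H_{2}(Z)\phi(S)]=(1-\alpha^{2})\mathbb{E}[H_{2}(S)\phi(S)]\) for \(S=\alpha Y+\sqrt{1-\alpha^{2}}Z\) and any bounded test function \(\phi\); the choices \(\phi=\cos\) and \(\alpha=0.6\) below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, alpha = 2_000_000, 0.6
Y, Z = rng.standard_normal(N), rng.standard_normal(N)
S = alpha * Y + np.sqrt(1 - alpha ** 2) * Z   # the conditioning variable of Lemma 5

H2 = lambda x: x ** 2 - 1                     # second Hermite polynomial
phi = lambda x: np.cos(x)                     # arbitrary bounded test function

lhs = (H2(Z) * phi(S)).mean()                 # E[ H_2(Z) phi(S) ]
rhs = (1 - alpha ** 2) * (H2(S) * phi(S)).mean()
print(f"lhs={lhs:.4f}  rhs={rhs:.4f}")        # agree up to Monte Carlo error
```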
2308.03910
Exponential volume limits
Let $M$ be a $d$-dimensional closed Riemannian manifold, let $f\in\mathrm{Diff}^{1+\beta}(M)$, and denote by $m$ the Riemannian volume form of $M$. We prove that if $m\circ f^{-n}\xrightarrow[n\to\infty]{}\mu$ exponentially fast, then $\mu$ is an SRB measure.
Snir Ben Ovadia, Federico Rodriguez-Hertz
2023-08-07T20:59:00Z
http://arxiv.org/abs/2308.03910v2
# Exponentially fast volume limits

###### Abstract.

We show that every exponentially fast limit of the pushed volume is an SRB measure. More precisely: Let \(M\) be a \(d\)-dimensional closed Riemannian manifold, and let \(f\in\mathrm{Diff}^{1+\beta}(M)\), \(\beta>0\). Let \(\mu\) be an \(f\)-invariant probability measure on \(M\). Assume that the pushed volume converges to \(\mu\) exponentially fast in the sense that \[\exists C>0,\alpha\in(0,1],\gamma>0:\forall g\in\mathrm{H\ddot{o}l}_{\alpha}(M),\left|\frac{1}{N}\sum_{k=n}^{n+N-1}m(g\circ f^{k})-\mu(g)\right|\leq C\cdot\|g\|_{\alpha}\cdot e^{-\gamma\cdot(n\wedge N)}, \tag{1}\] where \(n\wedge N:=\min\{n,N\}\). Then we show that \(\mu\) is an SRB measure. Moreover, under additional assumptions (see (5) in §4), where the volume "almost" exponentially mixes but is not necessarily invariant, we show that unless \(\mu\) is a Dirac mass (a necessary condition), \(\mu\) admits a positive exponent almost everywhere. In that case, we show that \(\mu\) must be ergodic, and that it is the unique SRB measure of the system. Moreover, we show that every ergodic invariant measure \(\nu\) satisfies \(\max_{i}\chi_{i}^{+}(\nu)\geq\frac{\gamma}{2d}>0\). \({}^{*}\) Department of Mathematics, Eberly College of Science, Pennsylvania State University, [email protected] \({}^{\dagger}\) Department of Mathematics, Eberly College of Science, Pennsylvania State University, [email protected]

## 1. Introduction

### Motivation

An important class of objects in smooth ergodic theory is that of SRB measures, named after Sinai, Ruelle, and Bowen. SRB measures are invariant measures whose conditional measures on unstable leaves are absolutely continuous w.r.t the induced Riemannian volume on unstable leaves (see [20] for more details and properties of SRB measures). Aside from their potential physicality and compatibility with the Riemannian volume in dissipative systems, SRB measures are important as possible limit points of the Riemannian volume under the dynamics. In [1] Bowen shows that for Axiom A attractors which support an SRB measure, the volume measure of the saturation of the attractor by stable leaves converges exponentially fast under the dynamics to the unique SRB measure supported on the attractor (the notion of rate of convergence relates to a fixed space of test functions). However, in the general case it is not clear if one can expect to always achieve an SRB measure as a limit point of the pushed Riemannian volume. In particular, some "nice" systems do not admit an SRB measure (see [10]). This gives rise to the natural question: When can we achieve an SRB measure by pushing forward the Riemannian volume of a smooth dynamical system? Before we describe the results of this paper and how they relate to this question, we wish to mention another fundamental field of study in smooth ergodic theory, and how it relates to this question. The _smooth realization problem_ posed by von Neumann is the question of which dynamical systems \((X,T,\nu)\) (not necessarily smooth) can be realized through a measure theoretic isomorphism as a smooth system \((M,f,m)\), where \(M\) is a closed Riemannian manifold, \(f\) is a smooth diffeomorphism of \(M\), and \(m=m\circ f^{-1}\) is the Riemannian volume of \(M\). Notice that an immediate restriction on the smoothly-realizable dynamical systems is having finite metric entropy.
A recent advancement in this direction is due to Dolgopyat, Kanigowski, and Rodriguez-Hertz, who prove that for smooth systems which preserve volume, exponential mixing implies Bernoulli ([DKRH]). Exponential mixing is a property of the smooth structure, as it requires specifying a space of regular test functions on which the mixing estimates hold; however, their result nonetheless explores a restriction on the ergodic properties of smooth systems. A natural extension of the smooth realization problem can then be: which dynamical systems \((X,T,\nu)\) can be realized through a measure theoretic isomorphism as a smooth system \((M,f,\mu)\), where \(M\) is a closed Riemannian manifold, \(f\) is a smooth diffeomorphism of \(M\), and \(\mu=\lim_{n}m\circ f^{-n}\), where \(m\) is the Riemannian volume of \(M\)? Similarly, \(\int g\circ f^{n}hdm\xrightarrow{\exp}\int gdm\int hdm\) when \(m=m\circ f^{-1}\) can be naturally extended to \(\int g\circ f^{n}hdm\xrightarrow{\exp}\int gd\mu\int hdm\), where \(m\) is not necessarily invariant, but \(\mu\) is. Can we say that \(\mu\) is Bernoulli in that case? We believe that the answer is positive based on a consequence of this work, as we explain in §1.2. The problem of finding a Banach space of test functions which admits certain properties is not a trivial issue. Another instance of that same challenge is proving the spectral gap property, which requires defining a suitable Banach space of test functions on which the dynamics act as a linear operator with a spectral gap. Often the space of such test functions is non-trivial in the sense that one studies functions which are regular on stable leaves, but may have merely measurable behavior w.r.t the topology of the ambient manifold. Moreover, the relationship between properties such as a spectral gap (on some "reasonable" Banach space) and exponential mixing is still an open mystery. In what cases can one have exponential mixing without a spectral gap? These types of questions are generally still open, while being fundamental. Finally, an additional natural property in this family would be the exponential convergence of the volume to an invariant measure, as in (1). This property on its own is not enough to conclude any stronger ergodic properties (e.g. \(f=\operatorname{Id}_{M}\), or even \(f=A\times\operatorname{Id}_{\mathbb{S}^{1}}\) where \(A\) is a volume-preserving linear Anosov map of the torus). However, we show that it is indeed enough to conclude that the limiting measure is an SRB measure, possibly in the degenerate sense that \(h_{\mu}(f)=\int\sum\chi^{+}d\mu=0\). To sum up the two independent directions of study we mentioned: we wish to understand when limits of the pushed volume can be SRB measures for thermodynamic purposes, and we also wish to understand what properties restrict smooth systems in terms of ergodic properties and the extended smooth realization problem. Possible future lines of study include exploring the relationship between different smooth ergodic properties, such as exponential mixing and a spectral gap.

### Main results

The main results of the manuscript are structured in the following way: 1. In §2 we show that exponential convergence in the sense of (1) to an ergodic limit point implies that the limit point is an SRB measure (not necessarily with positive entropy). The purpose of this section is didactic.
2. In §3 we prove that exponential convergence in the sense of (1) to a limit point (not necessarily ergodic) implies that the limit point is an SRB measure (still not necessarily with positive entropy). Note that one cannot expect more ergodic properties without additional assumptions. 3. In §4 we strengthen the assumptions to "almost" exponential mixing of the volume, but without assuming that the volume is invariant (see (5)). We then prove that the limit point must either be the unique SRB measure of the system with positive entropy, in which case it also admits certain mixing properties on unstable leaves (see Proposition 4.2), or the limit point is a Dirac mass at a fixed point which is an SRB measure in the degenerate sense that \(h_{\mu}(f)=\int\sum\chi^{+}d\mu=0\). Note that the degenerate case cannot be ruled out, as illustrated in the remark after Theorem 4.1. Moreover, in Theorem 4.1 we show a uniform bound from below on the positive exponents of all ergodic invariant measures of the system, aside possibly from the limit point \(\mu\). In the case of §4, when \(h_{\mu}(f)>0\), the authors believe that the methods of [DKRH] can be extended to show that \(\mu\) is Bernoulli. This is a consequence of the observation that the proof of [DKRH] only truly requires the conditional measures on unstable leaves to be smooth, and Proposition 4.2 gives the right notion of exponential mixing on unstable leaves for their methods to be extended. In addition, notice that the assumption of (1) is formally weaker than requiring \(|m(g\circ f^{n})-\mu(g)|\leq C\cdot\|g\|_{\alpha}\cdot e^{-\gamma n}\). The weaker assumption allows one to rely on some averaging in order to gain exponential mixing, rather than just pushing forwards. Our proof relies on the following tools: We use coverings by exponential Bowen balls of the form \(B(\cdot,n,e^{-n\delta})\) which have the following three properties: 1. \(\lim_{\delta\to 0}\limsup_{n\to\infty}\frac{-1}{n}\log\mu(B(\cdot,n,e^{-n\delta}))=h_{\mu_{x}}(f)\) \(\mu\)-a.e, where \(\mu=\int\mu_{x}d\mu(x)\) is its ergodic decomposition (see [BORH]), 2. If \(x\) is a Pesin regular point, then for all \(n\) large enough, \(\forall k\leq n\), \(f^{k}[B(x,n,e^{-n\delta})]\) is contained in the Pesin chart of \(f^{k}(x)\), 3. Subsets of Pesin blocks can be covered by exponential Bowen balls with exponentially low multiplicity, for all \(n\) large enough (see [BORH, Lemma 2.2]). Furthermore, our proof of the results of §4 relies on the construction of fake \(cs\)-foliations which are absolutely continuous in small exponential neighborhoods of Pesin regular points. These fake foliations were constructed by Dolgopyat, Kanigowski, and Rodriguez-Hertz in [DKRH]. The key idea of the proof of §2 and §3 is a type of shadowing argument: since we cannot mix on exponential Bowen balls of \(n\) steps, we break down the orbit segment of \(n\) steps into \(\frac{1}{\epsilon}\)-many orbit segments of \(n\epsilon\)-many steps. Thus we can study points which remain close to a large measure set of "good points", but do not necessarily lie in the Bowen ball of any single "good point" for the whole \(n\) steps.

## 2. The ergodic case

### Preliminary parameter choices

For didactic purposes, we treat the ergodic case first, as the argument is much clearer in that case. Assume that \(\mu\) is ergodic, and that \(\chi^{u}(\mu)>h_{\mu}(f)\), otherwise we are done. Let \(\epsilon>0\).
1. Let \(\mathcal{K}_{\epsilon}\) be a set s.t \(\mu(\mathcal{K}_{\epsilon})\geq e^{-\epsilon^{4}}\) and \(\mu(B(x,-n\epsilon,e^{-2\delta n})),\mu(B(x,-n\epsilon,e^{-\delta n}))=e^{-n\epsilon(h_{\mu}(f)\pm\epsilon^{2})}\) for all \(n\geq n_{\epsilon}\), for some \(\delta\in(0,\epsilon^{2})\) (see [BORH]). 2. Let \(\ell=\ell(\epsilon)\in\mathbb{N}\) s.t \(\mu(\Lambda_{\ell}^{(\underline{\chi},\tau)})\geq e^{-\epsilon^{4}}\), with \(0<\tau<\min\{\tau_{\underline{\chi}},\frac{1}{3d}\delta^{3}\}\). 3. Set \(E_{\epsilon}:=\Lambda_{\ell}\cap\mathcal{K}_{\epsilon}\). Then \(\mu(E_{\epsilon})\geq e^{-\epsilon^{3}}\) for all sufficiently small \(\epsilon>0\). W.l.o.g assume that \(\epsilon=\frac{1}{p^{2}}\), and that \(p^{2}|n\) when we choose some large \(n\) s.t \(e^{-\delta n}\ll\frac{1}{\ell}\), so the ceiling values can be omitted.1 Footnote 1: That is, \(n\) takes values in \(\{m\cdot p^{2}\}_{m\geq 1}\) for some \(p\in\mathbb{N}\). 4. Let \(n\geq n_{\epsilon}\), and let \(\widetilde{\mathcal{A}}_{\epsilon}^{(n)}\) be a cover of \(E_{\epsilon}\) by Bowen balls \(B(\cdot,-n\epsilon,e^{-2\delta n})\) with multiplicity bounded by \(e^{3d\tau n}\leq e^{\delta^{3}n}\leq e^{\epsilon^{6}n}\), and in particular with cardinality bounded by \(\#\mathcal{A}_{\epsilon}^{(n)}\leq e^{n\epsilon(h_{\mu}(f)+\epsilon^{2})}e^{\epsilon^{6}n}\), as in [BORH, Lemma 2.2]. Set \(\mathcal{A}_{\epsilon}^{(n)}:=\{B(x,n\epsilon,e^{-\delta n}):B(x,n\epsilon,e^{-2\delta n})\in\widetilde{\mathcal{A}}_{\epsilon}^{(n)}\}\).

### Large asymptotic-volume set of points shadowed by a set of \(\mu\)-good points

**Lemma 2.1**.: _Let \(0\leq i\leq(1-2\sqrt{\epsilon})\frac{1}{\epsilon}\). Then for all \(\epsilon>0\) sufficiently small, \(\exists n_{\epsilon}^{\prime}\geq n_{\epsilon}\) s.t for all \(n\geq n_{\epsilon}^{\prime}\),_ \[\frac{1}{\sqrt{\epsilon}n}\sum_{k=-i\epsilon n+n(1-\sqrt{\epsilon})}^{-i\epsilon n+n-1}m\circ f^{-k}(M\setminus\bigcup\mathcal{A}_{\epsilon}^{(n)})\leq\epsilon^{2}.\] Proof.: \(\frac{1}{\sqrt{\epsilon}n}\sum_{k=-i\epsilon n+n(1-\sqrt{\epsilon})}^{-i\epsilon n+n-1}m\circ f^{-k}(M\setminus\bigcup\mathcal{A}_{\epsilon}^{(n)})=1-\frac{1}{\sqrt{\epsilon}n}\sum_{k=-i\epsilon n+n(1-\sqrt{\epsilon})}^{-i\epsilon n+n-1}m\circ f^{-k}(\bigcup\mathcal{A}_{\epsilon}^{(n)})=1-\frac{1}{\sqrt{\epsilon}n}\sum_{k=-i\epsilon n+n(1-\sqrt{\epsilon})}^{-i\epsilon n+n-1}m\circ f^{-k}(\mathbb{1}_{\bigcup\mathcal{A}_{\epsilon}^{(n)}})\). For every \(B\in\mathcal{A}_{\epsilon}^{(n)}\) with center \(x_{B}\), define \(g_{B}^{(n)}\) to be a Lipschitz function s.t \(g_{B}^{(n)}|_{B(x_{B},n\epsilon,e^{-2\delta n})}=1\), \(g_{B}^{(n)}|_{B(x_{B},n\epsilon,e^{-\delta n})^{c}}=0\), and \(\mathrm{Lip}(g_{B}^{(n)})\leq e^{2n\epsilon\log M_{f}}\), where \(M_{f}:=\max_{M}\{\|d.f\|,\|d.f^{-1}\|\}\). Notice: \(\mathbb{1}_{\bigcup\mathcal{A}_{\epsilon}^{(n)}}=\max_{B\in\mathcal{A}_{\epsilon}^{(n)}}\mathbb{1}_{B}\geq\max_{B\in\mathcal{A}_{\epsilon}^{(n)}}g_{B}^{(n)}=:g^{(n)}\). _Claim:_ \(\mathrm{Lip}(g^{(n)})\leq e^{2n\epsilon\log M_{f}}\). _Proof:_ We prove that if \(g_{1}\) and \(g_{2}\) are \(L\)-Lipschitz, then \(g_{1}\lor g_{2}:=\max\{g_{1},g_{2}\}\) is \(L\)-Lipschitz. The claim for \(g^{(n)}\) follows by induction. Let \(x,y\in M\). If \(g_{1}(x)\geq g_{2}(x)\) and \(g_{1}(y)\geq g_{2}(y)\), or if \(g_{2}(x)\geq g_{1}(x)\) and \(g_{2}(y)\geq g_{1}(y)\), then \[\frac{|(g_{1}\lor g_{2})(x)-(g_{1}\lor g_{2})(y)|}{|x-y|}\leq L\] by the Lipschitz properties of \(g_{1}\) and \(g_{2}\).
We may therefore assume w.l.o.g that \(g_{1}(x)\geq g_{2}(x)\) and \(g_{1}(y)\leq g_{2}(y)\) (otherwise switch the roles of \(g_{1}\) and \(g_{2}\)). Then, \[g_{1}(x)\leq L\cdot|x-y|+g_{1}(y)\leq L\cdot|x-y|+g_{2}(y),\] and so \[g_{1}(x)-g_{2}(y)\leq L\cdot|x-y|.\] Similarly, \(g_{2}(y)\leq L\cdot|x-y|+g_{2}(x)\leq L\cdot|x-y|+g_{1}(x)\), and so \[g_{2}(y)-g_{1}(x)\leq L\cdot|x-y|.\] Therefore, \(|g_{1}(x)-g_{2}(y)|\leq L\cdot|x-y|\), and so \[|(g_{1}\lor g_{2})(x)-(g_{1}\lor g_{2})(y)|=|g_{1}(x)-g_{2}(y)|\leq L\cdot|x-y|.\] This concludes the proof of the claim. By the exponential convergence of the volume averages given by (1), \[\frac{1}{\sqrt{\epsilon}n}\sum_{k=-i\epsilon n+n(1-\sqrt{\epsilon})}^{-i\epsilon n+n-1}m\circ f^{-k}(\mathbb{1}_{\bigcup\mathcal{A}_{\epsilon}^{(n)}})\geq\frac{1}{\sqrt{\epsilon}n}\sum_{k=-i\epsilon n+n(1-\sqrt{\epsilon})}^{-i\epsilon n+n-1}m\circ f^{-k}(g^{(n)})=\mu(g^{(n)})\pm Ce^{-\gamma\sqrt{\epsilon}n}e^{2\epsilon\log M_{f}n}\geq\mu(\max_{B\in\widetilde{\mathcal{A}}_{\epsilon}^{(n)}}\mathbb{1}_{B})-Ce^{-\gamma\sqrt{\epsilon}n}e^{2\epsilon\log M_{f}n}=\mu(\bigcup\widetilde{\mathcal{A}}_{\epsilon}^{(n)})-Ce^{-\gamma\sqrt{\epsilon}n}e^{2\epsilon\log M_{f}n}\geq\mu(E_{\epsilon})-Ce^{-\gamma\sqrt{\epsilon}n}e^{2\epsilon\log M_{f}n}\geq e^{-\epsilon^{2}}-Ce^{-\gamma\sqrt{\epsilon}n}e^{2\epsilon\log M_{f}n}.\] Then for all \(\epsilon>0\) s.t \(\gamma\sqrt{\epsilon}>2\epsilon\log M_{f}\) and \(\frac{\epsilon^{4}}{2}\geq 2\frac{\epsilon^{6}}{6}\), and for all \(n\) large enough so that \(Ce^{-\gamma\sqrt{\epsilon}n}e^{2\epsilon\log M_{f}n}\leq\frac{\epsilon^{6}}{6}\), we have \[\frac{1}{\sqrt{\epsilon}n}\sum_{k=-i\epsilon n+n(1-\sqrt{\epsilon})}^{-i\epsilon n+n-1}m\circ f^{-k}(M\setminus\bigcup\mathcal{A}_{\epsilon}^{(n)})\leq 1-e^{-\epsilon^{2}}+Ce^{-\gamma\sqrt{\epsilon}n}e^{2\epsilon\log M_{f}n}\leq\epsilon^{2}-\frac{\epsilon^{4}}{2}+\frac{\epsilon^{6}}{6}+\frac{\epsilon^{6}}{6}\leq\epsilon^{2}.\] **Definition 2.2**.: \[\mathcal{S}_{n}:=\{x\in\bigcup\mathcal{A}_{\epsilon}^{(n)}:\text{for all }0\leq i\leq(1-2\sqrt{\epsilon})\frac{1}{\epsilon},f^{-in\epsilon}(x)\in\bigcup\mathcal{A}_{\epsilon}^{(n)}\}\] is the set of points in \(\bigcup\mathcal{A}_{\epsilon}^{(n)}\) which are shadowed by \(E_{\epsilon}\) for at least \((1-2\sqrt{\epsilon})n\)-many steps backwards. **Theorem 2.3**.: _Let \(n^{\prime}_{\epsilon}\geq n_{\epsilon}\) be as in Lemma 2.1. Then for all \(\epsilon>0\) sufficiently small and for all \(n\geq n^{\prime}_{\epsilon}\),_ \[\frac{1}{\sqrt{\epsilon}n}\sum_{k=n(1-\sqrt{\epsilon})}^{n-1}m\circ f^{-k}(\mathcal{S}_{n})\geq e^{-\epsilon^{\frac{3}{4}}}.\] Proof.: Let \(n\geq n^{\prime}_{\epsilon}\) and let \(B\in\mathcal{C}_{\epsilon}^{(n)}\). We break down the pull-back of \(B\) as follows: \(f^{-n}[B]=f^{-n\epsilon}\circ\cdots\circ f^{-n\epsilon}[B]\), where the composition chain has \(\frac{1}{\epsilon}\) many steps.
\[\frac{1}{\sqrt{\epsilon}n}\sum_{k=n(1-\sqrt{\epsilon})}^{n-1}m\circ f^{-k}(\mathcal{S}_{n})=\frac{1}{\sqrt{\epsilon}n}\sum_{k=n(1-\sqrt{\epsilon})}^{n-1}m\circ f^{-k}(\{x\in\bigcup\mathcal{A}_{\epsilon}^{(n)}:\text{for all }0\leq i\leq(1-2\sqrt{\epsilon})\frac{1}{\epsilon},f^{-in\epsilon}(x)\in\bigcup\mathcal{A}_{\epsilon}^{(n)}\})\] \[=\frac{1}{\sqrt{\epsilon}n}\sum_{k=n(1-\sqrt{\epsilon})}^{n-1}m\circ f^{-k}\left(\bigcap_{i=0}^{(1-2\sqrt{\epsilon})\frac{1}{\epsilon}}f^{in\epsilon}[\bigcup\mathcal{A}_{\epsilon}^{(n)}]\right)\] \[\geq 1-\sum_{i=0}^{(1-2\sqrt{\epsilon})\frac{1}{\epsilon}}\frac{1}{\sqrt{\epsilon}n}\sum_{k=n(1-\sqrt{\epsilon})}^{n-1}m\circ f^{-k}(M\setminus f^{in\epsilon}[\bigcup\mathcal{A}_{\epsilon}^{(n)}])\] \[=1-\sum_{i=0}^{(1-2\sqrt{\epsilon})\frac{1}{\epsilon}}\frac{1}{\sqrt{\epsilon}n}\sum_{k=-i\epsilon n+n(1-\sqrt{\epsilon})}^{-i\epsilon n+n-1}m\circ f^{-k}(M\setminus\bigcup\mathcal{A}_{\epsilon}^{(n)})\] \[\geq 1-\left(\frac{1-2\sqrt{\epsilon}}{\epsilon}+1\right)\cdot\epsilon^{2}\ (\because\text{Lemma 2.1})\geq 1-2\epsilon\geq e^{-\epsilon^{\frac{3}{4}}},\] for all \(\epsilon>0\) sufficiently small, concluding the proof.

### A cover by exponential Bowen balls via concatenation

**Definition 2.4**.: _Let \(\mathcal{C}_{\epsilon}^{(n)}:=\bigvee_{i=0}^{\frac{1-2\sqrt{\epsilon}}{\epsilon}}f^{in\epsilon}[\mathcal{A}_{\epsilon}^{(n)}]\)._ **Remark:** Notice that \(\mathcal{C}_{\epsilon}^{(n)}\) covers \(\mathcal{S}_{n}\), and that \(\#\mathcal{C}_{\epsilon}^{(n)}\leq(\#\mathcal{A}_{\epsilon}^{(n)})^{\frac{1}{\epsilon}}\leq e^{\frac{1}{\epsilon}(n\epsilon h_{\mu}(f)+\epsilon^{2.5}n)}\leq e^{nh_{\mu}(f)+\epsilon^{\frac{3}{2}}n}\). **Lemma 2.5**.: _Let \(B\in\mathcal{C}_{\epsilon}^{(n)}\). Then for all \(n\) large enough and \(\epsilon>0\) small enough (independently of \(B\)), \(\frac{1}{\sqrt{\epsilon}n}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k}(B)\leq e^{-\chi^{u}n+2\epsilon^{\frac{1}{3}}n}\), where \(e^{-\chi^{u}n}:=\prod_{\chi_{i}>0}e^{-\chi_{i}n}\)._ Proof.: Let \(y\in B\), then \(B\subseteq B(y,-n(1-2\sqrt{\epsilon}),2e^{-\delta n})\). For every \(k\in[(1-\sqrt{\epsilon})n,n-1]\), \(m\circ f^{-k}[B]\leq e^{2d\sqrt{\epsilon}n\log M_{f}}m(B(f^{-n(1-2\sqrt{\epsilon})}(y),n(1-2\sqrt{\epsilon}),2e^{-\delta n}))\). We show that \(m(B(f^{-n(1-2\sqrt{\epsilon})}(y),n(1-2\sqrt{\epsilon}),2e^{-\delta n}))\leq e^{-(\chi^{u}-\epsilon)n+2\epsilon^{\frac{1}{3}}n}\). Write \(x:=f^{-n(1-2\sqrt{\epsilon})}(y)\). Let \(x_{i}\) s.t \(f^{n\epsilon i}(x)\in B(x_{i},n\epsilon,2e^{-\delta n})\), \(x_{i}\in f^{-n\epsilon}[E_{\epsilon}]\), \(0\leq i\leq\frac{1-2\sqrt{\epsilon}}{\epsilon}\). Assume for contradiction that there exists a volume form \(\omega_{0}^{u}\in\wedge^{\dim H^{u}(\mu)}T_{x}M\) s.t \(|\omega_{0}^{u}|\geq e^{-(\chi^{u}-\epsilon)n+\epsilon^{\frac{1}{3}}n}\), and s.t \(\sphericalangle(d_{x}\psi_{x_{0}}^{-1}\omega_{0}^{u},E^{u})\leq\epsilon^{2}\), where \(E^{u}\) is the unstable direction of \(x_{0}\) in its Pesin chart \(\psi_{x_{0}}\); and finally assume that \(\exp_{x}\omega_{0}^{u}\subseteq f^{-n(1-2\sqrt{\epsilon})}[B]\) (when we think of \(\omega_{0}^{u}\) as the parallelogram it defines in \(T_{x}M\)). We will show a contradiction by showing that \(f^{n(1-2\sqrt{\epsilon})}[\exp_{x}\omega_{0}^{u}]\) contains a geodesic of length greater than \(2e^{-\delta n}\), which contradicts \(B(f^{n(1-2\sqrt{\epsilon})}(x),n(1-2\sqrt{\epsilon}),2e^{-\delta n})\supseteq B\supseteq f^{n(1-2\sqrt{\epsilon})}[\exp_{x}\omega_{0}^{u}]\). The choice of \(\omega_{0}^{u}\) implies that \(|d_{x}f^{n\epsilon}\omega_{0}^{u}|\geq e^{-(\chi^{u}-\epsilon)n(1-\epsilon)+\epsilon^{\frac{1}{3}}n}(1-\epsilon)\). Note, since \(f^{n\epsilon}(x_{0}),f^{n\epsilon}(x_{1})\in\Lambda_{\ell}^{(\underline{\chi},\tau)}\), we have \(f^{n\epsilon}(x_{0}),x_{1}\in\Lambda_{\ell e^{\tau n\epsilon}}^{(\underline{\chi},\tau)}\), while also having \(2e^{-\delta n}\ll e^{-\tau n\epsilon}\frac{1}{\ell}\) for all \(n\) large enough.
By the Hölder continuity of the unstable spaces of points in \(\Lambda_{\ell e^{\tau n\epsilon}}^{(\underline{\chi},\tau)}\) ([11, Appendix A]), \(d_{x}f^{n\epsilon}\omega_{0}^{u}\) projects to \(\omega_{1}^{u}\in\wedge^{\dim H^{u}(\mu)}T_{f^{n\epsilon}(x)}M\) s.t \(\sphericalangle(d_{x}\psi_{x_{1}}^{-1}\omega_{1}^{u},E^{u})\leq\epsilon^{2}\) and s.t \(|\omega_{1}^{u}|\geq(1-\epsilon)\cdot e^{-(\chi^{u}-\epsilon)n(1-\epsilon)+\epsilon^{\frac{1}{3}}n}(1-\epsilon)\). Continuing by induction, let \(\omega_{\frac{1-2\sqrt{\epsilon}}{\epsilon}}^{u}\) be a component of \(d_{x}f^{(1-2\sqrt{\epsilon})n}\omega_{0}^{u}\) s.t \(|\omega_{\frac{1-2\sqrt{\epsilon}}{\epsilon}}^{u}|\geq e^{\epsilon^{\frac{1}{3}}n-2\sqrt{\epsilon}(\chi^{u}-\epsilon)n}(1-\epsilon)^{\frac{1-2\sqrt{\epsilon}}{\epsilon}}\gg 1\). Now, for all \(\epsilon>0\) small enough so that \(2d\sqrt{\epsilon}\log M_{f}+\epsilon\leq\epsilon^{\frac{1}{3}}\), we are done. **Corollary 2.6**.: \(h_{\mu}(f)=\chi^{u}=:\sum_{\chi_{i}(\mu)>0}\chi_{i}(\mu)\)_._ Proof.: Let \(\epsilon>0\) be small enough and \(n\) be large enough for Theorem 2.3 and Lemma 2.5. Then, \[e^{-\epsilon^{\frac{3}{4}}}\leq\frac{1}{\sqrt{\epsilon}n}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k}(\mathcal{S}_{n})\leq\frac{1}{\sqrt{\epsilon}n}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k}(\bigcup\mathcal{C}_{\epsilon}^{(n)})\leq\#\mathcal{C}_{\epsilon}^{(n)}\cdot\max_{B\in\mathcal{C}_{\epsilon}^{(n)}}\frac{1}{\sqrt{\epsilon}n}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k}(B)\leq e^{nh_{\mu}(f)+\epsilon^{\frac{3}{2}}n}\cdot e^{-\chi^{u}n+2\epsilon^{\frac{1}{3}}n}.\] Whence, since this holds for arbitrarily large \(n\), \(h_{\mu}(f)\geq\chi^{u}-3\epsilon^{\frac{1}{3}}\). Since \(\epsilon>0\) is arbitrarily small, and by the Ruelle inequality, \(h_{\mu}(f)=\chi^{u}\).

## 3. The non-ergodic case

Up until now we treated the case where \(\mu\) is ergodic. This simplification serves as a didactic tool to make the proof more intuitive and easy to follow; the proof in the non-ergodic case is more complicated, since we wish to eventually prove that \(\int h_{\mu_{x}}(f)d\mu(x)=\int\chi^{u}(x)d\mu(x)\) (where \(\mu=\int\mu_{x}d\mu(x)\) is the ergodic decomposition of \(\mu\)), however it may be that neither of these functions is constant \(\mu\)-a.e; and so in particular we can neither control \(\#\vee_{i=0}^{\frac{1}{\epsilon}}f^{in\epsilon}[\mathcal{A}_{\epsilon}^{(n)}]\) by \(e^{nh_{\mu}(f)}=e^{n\int h_{\mu_{x}}(f)d\mu(x)}\), nor is the volume of each element in \(\vee_{i=0}^{\frac{1}{\epsilon}}f^{in\epsilon}[\mathcal{A}_{\epsilon}^{(n)}]\) controlled by \(e^{-n\int\sum\chi^{+}d\mu}\). While \(\chi^{u}(\cdot)\) is continuous on Pesin blocks, \(h_{\mu_{x}}\) is merely measurable. We treat this added difficulty by restricting to Lusin sets, and use a sort of "entropy shadowing" property: if a trajectory remains close to different points with good local entropy estimates, then all said shadowed points must have similar local entropy. **Theorem 3.1**.: \(h_{\mu}(f)=\int\chi^{u}(x)d\mu(x)\)_._ Proof.: Let \(\mu=\int\mu_{x}d\mu(x)\) be the ergodic decomposition of \(\mu\). Let \(\epsilon>0\), and let \(E_{j_{h},j_{\chi_{1}},\ldots,j_{\chi_{d}}}:=\{x:h_{\mu_{x}}(f)\in[j_{h}\cdot\epsilon^{5}-\frac{\epsilon^{5}}{2},j_{h}\cdot\epsilon^{5}+\frac{\epsilon^{5}}{2}),\chi_{i}(x)\in(j_{\chi_{i}}\cdot\epsilon^{5}-\frac{\epsilon^{5}}{2},j_{\chi_{i}}\cdot\epsilon^{5}+\frac{\epsilon^{5}}{2}],i\leq d\}\), \(\underline{j}\in\{0,\ldots,\frac{2d\log M_{f}}{\epsilon^{5}}\}^{d+1}\).
Assume w.l.o.g that \(\mu(E_{\underline{j}})>0\) for all \(\underline{j}\), and let \(\rho_{\epsilon}:=\epsilon\cdot(\min_{\underline{j}}\{\mu(E_{\underline{j}})\})^{4}>0\). For each \(\underline{j}\), we define the set \(E_{\rho_{\epsilon}}^{\underline{j}}\) as in §2.1, for the measure \(\mu_{\underline{j}}:=\frac{1}{\mu(E_{\underline{j}})}\int_{E_{\underline{j}}}\mu_{x}d\mu(x)\). Then it follows that \(\mu\big{(}\bigcup_{\underline{j}}E_{\rho_{\epsilon}}^{\underline{j}}\big{)}\geq e^{-\rho_{\epsilon}^{3}}\). Let \(L_{\epsilon}\) be a Lusin set for the function \(x\mapsto h_{\mu_{x}}(f)\) s.t \(\forall\underline{j}\), \(\mu_{\underline{j}}(L_{\epsilon}^{\underline{j}})\geq e^{-2\rho_{\epsilon}^{3}}\), where \(L_{\epsilon}^{\underline{j}}:=L_{\epsilon}\cap E_{\underline{j}}\). Since the Lusin theorem tells us that \(L_{\epsilon}\) can be chosen to be closed, there exists \(0<r_{\epsilon}:=\frac{1}{2}\sup\{r>0:\forall x,y\in L_{\epsilon},d(x,y)\leq r\Rightarrow|h_{\mu_{x}}(f)-h_{\mu_{y}}(f)|\leq\epsilon^{5}\}\). Similarly, assume that \(L_{\epsilon}\) is a Lusin set for the function \(x\mapsto\underline{\chi}(x)\), with the same estimates w.r.t the \(|\cdot|_{\infty}\)-norm. Finally, given \(n\in\mathbb{N}\), set \(G_{\epsilon}^{\underline{j},n}:=L_{\epsilon}^{\underline{j}}\cap f^{n\epsilon}[L_{\epsilon}^{\underline{j}}]\), and so \(\mu(\bigcup_{\underline{j}}G_{\epsilon}^{\underline{j},n})\geq e^{-\rho_{\epsilon}^{2}}\). Cover each \(G_{\epsilon}^{\underline{j},n}\) with \(\bar{\mathcal{A}}_{\epsilon}^{\underline{j},n}\), a cover by exponential Bowen balls \(B(\cdot,-n\epsilon,e^{-2\delta n})\), as in §2.1. Hence \(\#\mathcal{A}_{\epsilon}^{\underline{j},n}=e^{n\epsilon(h_{\mu_{\underline{j}}}(f)\pm 2\epsilon^{2})}\), where \(\mathcal{A}_{\epsilon}^{\underline{j},n}:=\{B(x,n\epsilon,e^{-\delta n}):B(x,n\epsilon,e^{-2\delta n})\in\bar{\mathcal{A}}_{\epsilon}^{\underline{j},n}\}\), similarly to §2.1. Set \(\mathcal{S}_{n}:=\bigcap_{i=0}^{\frac{1-2\sqrt{\epsilon}}{\epsilon}}f^{in\epsilon}[\bigcup_{\underline{j}}\bigcup\mathcal{A}_{\epsilon}^{\underline{j},n}]\). As in Theorem 2.3, \(\frac{1}{\sqrt{\epsilon}n}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k}(\mathcal{S}_{n})\geq e^{-\rho_{\epsilon}^{\frac{3}{4}}}\) (for all \(\epsilon>0\) sufficiently small). Then, for any \(\underline{j}\), as in Lemma 2.1, for all \(n\) large enough (s.t \(\epsilon=\frac{1}{N^{6}}\) and \(N^{6}|n\)), \(\frac{1}{\sqrt{\epsilon}n}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k}(\bigcup\mathcal{A}_{\epsilon}^{\underline{j},n})\geq\frac{1}{2}\mu(G_{\epsilon}^{\underline{j},n})\geq\frac{1}{2}\mu(E_{\underline{j}})e^{-\rho_{\epsilon}^{2}}\gg 2\rho_{\epsilon}^{\frac{3}{4}}\), whence \[\frac{1}{\sqrt{\epsilon}n}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k}(\bigcup\mathcal{A}_{\epsilon}^{\underline{j},n}\cap\mathcal{S}_{n})\geq\frac{1}{5}\mu(E_{\underline{j}})\geq\rho_{\epsilon}. \tag{2}\]
Notice, given \(B\in\vee_{i=0}^{\frac{1-2\sqrt{\epsilon}}{\epsilon}}f^{in\epsilon}[\bigcup_{\underline{j}}\mathcal{A}_{\epsilon}^{\underline{j},n}]\) s.t \(B=\bigcap_{i=0}^{\frac{1-2\sqrt{\epsilon}}{\epsilon}}f^{in\epsilon}[B_{i}]\) with \(B_{i}\in\mathcal{A}_{\epsilon}^{\underline{j}^{i},n}\), and given \(D\in\mathcal{A}_{\epsilon}^{\underline{j},n}\) s.t \(D\cap B\neq\varnothing\), we have \[|h_{\mu_{\underline{j}}}(f)-h_{\mu_{\underline{j}^{i}}}(f)|\leq\epsilon^{3}\text{ and }|\int\chi^{u}d\mu_{\underline{j}}-\int\chi^{u}d\mu_{\underline{j}^{i}}|\leq\epsilon^{3}\text{ for all }i\leq\frac{1-2\sqrt{\epsilon}}{\epsilon} \tag{3}\] (as long as \(n\) is large enough so that \(2e^{-\delta n}\leq r_{\epsilon}\), since \(h_{\mu_{\cdot}}(f)=h_{\mu_{f^{n\epsilon}(\cdot)}}(f)\) and \(\underline{\chi}(\cdot)=\underline{\chi}(f^{n\epsilon}(\cdot))\)). Write \(\widehat{\mathcal{A}}_{\epsilon}^{\underline{j},n}:=\{D\in\mathcal{A}_{\epsilon}^{\underline{j},n}:\frac{1}{\sqrt{\epsilon}n}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k}(D)\leq e^{\epsilon^{\frac{3}{2}}n}\frac{1}{\sqrt{\epsilon}n}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k}(D\cap\mathcal{S}_{n})\}\) and \(\check{\mathcal{A}}_{\epsilon}^{\underline{j},n}:=\mathcal{A}_{\epsilon}^{\underline{j},n}\setminus\widehat{\mathcal{A}}_{\epsilon}^{\underline{j},n}\). Then \(\#\widehat{\mathcal{A}}_{\epsilon}^{\underline{j},n}\geq e^{n\epsilon(h_{\mu_{\underline{j}}}(f)-2\epsilon^{2})}e^{-\epsilon^{\frac{3}{2}}n}\); otherwise \[0<\rho_{\epsilon}\leq\frac{1}{\sqrt{\epsilon}n}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k}(\bigcup\mathcal{A}_{\epsilon}^{\underline{j},n}\cap\mathcal{S}_{n})\leq\#\check{\mathcal{A}}_{\epsilon}^{\underline{j},n}\cdot e^{-n\epsilon(h_{\mu_{\underline{j}}}(f)-\epsilon^{2})}e^{-n\epsilon^{\frac{3}{2}}}+e^{n\epsilon(h_{\mu_{\underline{j}}}(f)-\epsilon^{2})-n\epsilon^{\frac{1}{3}}}\cdot e^{-n\epsilon^{\frac{1}{3}}(h_{\mu_{\underline{j}}}(f)-\epsilon^{2})}\leq 2e^{-\frac{1}{2}n\epsilon^{\frac{3}{2}}}\xrightarrow[n\to\infty]{}0,\] a contradiction! Now, recall that \(\mathcal{A}_{\epsilon}^{\underline{j},n}\) is a cover of multiplicity bounded by \(e^{2\epsilon^{2}n}\), and hence so is \(\widehat{\mathcal{A}}_{\epsilon}^{\underline{j},n}\). As in [BORH, Lemma 2.2], there exists a pair-wise disjoint sub-family \(\overline{\mathcal{A}}_{\epsilon}^{\underline{j},n}\subseteq\widehat{\mathcal{A}}_{\epsilon}^{\underline{j},n}\) s.t \(\#\overline{\mathcal{A}}_{\epsilon}^{\underline{j},n}\geq\#\widehat{\mathcal{A}}_{\epsilon}^{\underline{j},n}e^{-2\epsilon^{2}n}\). Finally, notice that \(\vee_{i=0}^{\frac{1-2\sqrt{\epsilon}}{\epsilon}}f^{in\epsilon}[\vee_{\underline{j}^{\prime}}\mathcal{A}_{\epsilon}^{\underline{j}^{\prime},n}]\) refines \(\overline{\mathcal{A}}_{\epsilon}^{\underline{j},n}\). Therefore it follows that for any \(\underline{j}\), there exists \(D_{\underline{j}}\in\overline{\mathcal{A}}_{\epsilon}^{\underline{j},n}\) s.t \[\#\{B\in\vee_{i=0}^{\frac{1-2\sqrt{\epsilon}}{\epsilon}}f^{in\epsilon}[\vee_{\underline{j}^{\prime}}\mathcal{A}_{\epsilon}^{\underline{j}^{\prime},n}]:B\cap D_{\underline{j}}\neq\varnothing\}\leq C_{\epsilon}\cdot e^{nh_{\mu_{\underline{j}}}(f)+\epsilon^{\frac{3}{2}}n},\]
\tag{4}\] Therefore, in total, \[e^{-n\epsilon(h_{\underline{j}^{\prime}}(f)-\epsilon^{2})}\leq \mu(D_{\underline{j}})\leq 2\frac{1}{\sqrt{\epsilon n}}\sum_{k=(1-\sqrt{ \epsilon})n}^{n-1}m\circ f^{-k}(D_{\underline{j}})\leq 2e^{n\epsilon^{\frac{3}{2} }}\frac{1}{\sqrt{\epsilon n}}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k} (D_{\underline{j}}\cap\mathcal{S}_{n})\] \[\leq 2e^{n\epsilon^{\frac{3}{2}}}\cdot e^{nh_{\underline{j}^{\prime} }(f)+2\epsilon^{\frac{3}{2}}n}\cdot\max_{B\cap D_{\underline{j}}\neq\varnothing }\frac{1}{\sqrt{\epsilon n}}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-k} (B)\] \[\leq 2e^{n\epsilon^{\frac{3}{2}}}\cdot e^{nh_{\underline{j}^{\prime} }(f)+2\epsilon^{\frac{3}{2}}n}\cdot\max_{B\cap D_{\underline{j}}\neq\varnothing }\frac{1}{\sqrt{\epsilon n}}\sum_{k=(1-\sqrt{\epsilon})n}^{n-1}m\circ f^{-(k-(1 -2\sqrt{\epsilon})n)}(f^{-(1-2\sqrt{\epsilon})n}[B])\] \[\leq 2e^{n\epsilon^{\frac{3}{2}}}\cdot e^{nh_{\underline{j}^{\prime} }(f)+2\epsilon^{\frac{3}{2}}n}\cdot\max_{B\cap D_{\underline{j}}\neq\varnothing }e^{2\sqrt{\epsilon}d\log M_{f}n}m(f^{-(1-2\sqrt{\epsilon})n}[B])\] \[\leq 2e^{n\epsilon^{\frac{3}{2}}}\cdot e^{nh_{\underline{j}^{\prime} }(f)+2\epsilon^{\frac{3}{2}}n}\cdot e^{2\sqrt{\epsilon}d\log M_{f}n}e^{-n(1-2 \sqrt{\epsilon})(\chi^{u}(\mu_{\underline{j}})-\epsilon^{2})}e^{2\epsilon^{ \frac{1}{3}}n}.\] Where the last inequality is by (3) similarly to Lemma 2.5. Then, \[e^{-nh_{\underline{\mu}^{\prime}}(f)}\leq e^{-n\chi^{u}(\mu_{\underline{j}})}e^{10d \log M_{f}\epsilon^{\frac{1}{3}}n},\] and since \(\epsilon>0\) is allowed to be arbitrarily small, for \(\mu\)-a.e \(x\)\(h_{\mu_{x}}(f)\geq\chi^{u}(x)\), and we are done. ## 4. Volume is "almost" exponentially mixing In this section we assume further a condition stronger than (1), yet weaker than the exponential mixing of the volume: \[\exists C>0,\alpha\in(0,1],\gamma>0:\forall g,h\in\mathrm{H\ddot{o}l}_{\alpha }(M),\left|\int g\circ f^{n}\cdot hdm-\mu(g)\cdot m(h)\right|\leq C\cdot\|g \|_{\alpha}\cdot\|h\|_{\alpha}\cdot e^{-\gamma n}. \tag{5}\] This condition is inspired by the setup studied by Dolgopyat, Kangowski, and Rodriguez-Hertz in [DKRH], however notice that the volume need not be invariant in this case. We continue to show that in this case, unless \(\mu\) is a Dirac delta measures at a fixed point (a necessary condition, see the remark following Theorem 4.1), indeed \(\mu\) must be an ergodic SRB measure with a positive exponent almost everywhere, and it is the unique SRB measure of \((M,f)\). A nice corollary of our proof is that every \(f\)-invariant Borel probability measure on \(M\) has a uniform bound form below on its maximal Lyapunov exponent in terms of the rate of mixing, aside at most for \(\mu\) in the case where it is a Dirac delta measure. **Theorem 4.1**.: _The following dichotomy holds:_ 1. _for every ergodic_ \(f\)_-invariant Borel probability_ \(\nu\)_,_ \(\max_{i}\chi^{+}_{i}(x)>\frac{\gamma}{2d}\)__\(\nu\)_-a.e,_ 2. \(\mu\) _is a Dirac delta measure at a fixed point with_ \(\chi^{u}(\mu)=0\)_, and every other ergodic_ \(f\)_-invariant Borel probability_ \(\nu\) _has_ \(\max_{i}\chi^{+}_{i}(x)>\frac{\gamma}{2d}\)__\(\nu\)_-a.e._ In particular, if \(\mu\) is not a Dirac delta measure at a fixed point, then \(\max_{i}\chi^{+}_{i}(x)\geq\frac{\gamma}{2d}\)__\(\mu\)-a.e. Proof.: Let \(\nu\) be an ergodic \(f\)-invariant Borel probability s.t \(\max_{i}\chi^{+}_{i}(x)\leq\frac{\gamma}{2d}-8(d+1)\epsilon\)\(\nu\)-a.e, where w.l.o.g \(0<\epsilon\ll\frac{\gamma}{2d}\). 
Let \(x\) be a \(\nu\)-typical point. Let \(n\) be large enough s.t \(f^{i}[B(x,n,e^{-\epsilon n})]\) is contained in the Pesin chart of \(f^{i}(x)\) for all \(0\leq i\leq n\). Let \(g_{x}\) be a Lipschitz function s.t \(g_{x}|_{B(x,n,e^{-2\epsilon n})}=1\), \(g_{x}|_{B(x,n,e^{-\epsilon n})^{c}}=0\), and \(\operatorname{Lip}(g_{x})\leq e^{(\frac{\gamma}{2d}-3(d+1)\epsilon)n}\). Let \(p\) and \(q\) be points s.t \(\mu(B(p,e^{-\epsilon n})),\mu(B(q,e^{-\epsilon n}))\geq e^{-n(d+1)\epsilon}\) for all \(n\) large enough, and let \(g_{t}|_{B(t,e^{-2\epsilon n})}=1\), \(g_{t}|_{B(t,e^{-\epsilon n})^{c}}=0\), and \(\operatorname{Lip}(g_{t})\leq e^{2\epsilon n}\), \(t\in\{p,q\}\). Then by (5), for \(t\in\{p,q\}\) and all \(n\) large enough, \[\int g_{t}\circ f^{n}g_{x}dm= \mu(g_{t})m(g_{x})\pm 4Ce^{-\gamma n}e^{2\epsilon n}e^{(\frac{\gamma}{2d}-3(d+1)\epsilon)n}\] \[= e^{\pm\epsilon}\mu(g_{t})m(g_{x})>0\ (\because\mu(g_{t})m(g_{x})\geq e^{-n(d+1)2\epsilon-(\frac{\gamma}{2d}+\epsilon)dn}).\] Thus in particular \(B(p,e^{-\epsilon n})\cap B(f^{n}(x),e^{-\epsilon n})\neq\varnothing\) and \(B(q,e^{-\epsilon n})\cap B(f^{n}(x),e^{-\epsilon n})\neq\varnothing\), and so \(d(p,q)\leq 4e^{-\epsilon n}\xrightarrow[n\to\infty]{}0\). Therefore \(\mu=\delta_{p}=\delta_{q}\). Moreover, if we assume further that \(x\) is \(\nu\)-generic, for any \(h\in\operatorname{Lip}_{+}(M)\), \[m(g_{x})(h\circ f^{n}(x)\pm\operatorname{Lip}(h)e^{-\epsilon n})= \int h\circ f^{n}g_{x}dm=\mu(h)m(g_{x})\pm 4Ce^{-\gamma n}e^{\epsilon n}e^{(\frac{\gamma}{2d}-3(d+1)\epsilon)n}\] \[= m(g_{x})(\mu(h)\pm 4Ce^{-\frac{\gamma}{2}n}e^{\epsilon n}e^{(\frac{\gamma}{2d}-3(d+1)\epsilon)n}e^{\epsilon dn}).\] Averaging over \(n=N,\dots,2N\), for \(N\in\mathbb{N}\) large, \[\nu(h)\xleftarrow[N\to\infty]{}\ \frac{1}{N}\sum_{n=N}^{2N-1}h\circ f^{n}(x)\pm\operatorname{Lip}(h)e^{-\epsilon N}=\mu(h)\pm 4Ce^{-\frac{\gamma}{2}N}e^{\epsilon N}e^{(\frac{\gamma}{2d}-3(d+1)\epsilon)N}e^{\epsilon dN}\xrightarrow[N\to\infty]{}\mu(h).\] Therefore \(\mu(h)=\nu(h)\), and so by the Riesz representation theorem, \(\nu=\mu\) (since \(\overline{\operatorname{Lip}_{+}(M)}^{C(M)}=C_{+}(M)\)). **Remark:** The assumption that \(\mu\) is not a Dirac delta measure is necessary, as can be seen by the north-pole south-pole example: Let \(\mathbb{S}^{1}\) be the unit circle, let \(f\in\operatorname{Diff}^{1+\beta}(\mathbb{S}^{1})\), and let \(f\) have two fixed points, \(N\in\mathbb{S}^{1}\) with a derivative bigger than \(1\), and \(S\in\mathbb{S}^{1}\) with a derivative smaller than \(1\). One can check that in this case \(\big{|}\int_{\mathbb{S}^{1}}g\circ f^{n}\cdot h\,dm-g(S)m(h)\big{|}\), \(g,h\in\operatorname{Lip}(\mathbb{S}^{1})\), is exponentially small as in (5), by using a partition of unity which separates \(N\) and \(S\). This example extends to a closed surface using a D-A system with two repellers, and a fixed attracting point. By Theorem 3.1 and Theorem 4.1, it follows from [10] that \(\mu\) has absolutely continuous conditionals on unstable leaves a.e. The following proposition is a corollary of this fact together with (5). The proof uses absolutely continuous fake \(cs\)-foliations in exponentially small charts, constructed in [1]. These foliations are used to treat the trajectory of an exponentially small ball as the trajectory of a single unstable leaf, where the conditional measure of \(\mu\) is equivalent to the induced Riemannian volume, which lets us compare the two measures. 
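As a quick numerical illustration of the north-pole south-pole remark above (the map and the observables below are our own toy choices, not part of the argument), one can check that \(\int g\circ f^{n}\cdot h\,dm\) approaches \(g(S)\cdot m(h)\):

```
# Toy check of the north-pole south-pole remark: a circle map with an
# attracting fixed point S = 0 and a repelling fixed point N = 1/2.
import numpy as np

def f(x):
    eps = 0.1
    # f'(0) = 1 - 2*pi*eps < 1 (attracting), f'(1/2) = 1 + 2*pi*eps > 1 (repelling)
    return (x - eps * np.sin(2 * np.pi * x)) % 1.0

g = lambda x: np.cos(2 * np.pi * x)      # Lipschitz observable, g(S) = g(0) = 1
h = lambda x: np.sin(np.pi * x) ** 2     # Lipschitz test function

x = np.linspace(0.0, 1.0, 100001)[:-1]   # Lebesgue quadrature grid on the circle
m_h = np.mean(h(x))
y = x.copy()
for n in range(1, 21):
    y = f(y)
    if n % 5 == 0:
        print(n, abs(np.mean(g(y) * h(x)) - 1.0 * m_h))  # decays toward 0
```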
**Proposition 4.2**.: _Assume that there exists an ergodic SRB measure \(\nu\) with \(h_{\nu}(f)>0\). For every \(\epsilon\in(0,\frac{\gamma}{4\log M_{f}})\) there is a set \(G_{\epsilon}\) with \(\nu(G_{\epsilon})\geq e^{-\epsilon}\) s.t \(\forall x\in G_{\epsilon}\), \(\forall\delta\in(0,\epsilon)\), \(\forall n\geq n_{\epsilon,\delta}\), \(\forall g\in\mathrm{H\ddot{o}l}_{\alpha}^{+}(M)\),_ \[\mu(g)\geq\frac{e^{-7\delta^{2}d}}{\nu_{\xi^{u}(x)}(B^{u}(x,e^{-\delta n})\cap K_{\epsilon})}\int\limits_{B^{u}(x,e^{-\delta n})\cap K_{\epsilon}}(g\circ f^{n}-\|g\|_{\alpha-\mathrm{H\ddot{o}l}}e^{-\frac{\delta}{2}n\alpha})d\nu_{\xi^{u}(x)}-C\|g\|_{\alpha-\mathrm{H\ddot{o}l}}e^{-(\gamma-2\delta)n} \tag{6}\] _where \(\xi^{u}\) is a measurable partition subordinated to the unstable foliation of \(\nu\), \(\nu_{\xi^{u}(\cdot)}\) are the respective conditional measures, and \(K_{\epsilon}\) is a Pesin block with \(\nu(K_{\epsilon})\geq e^{-\epsilon^{2}}\)._ For the definition of a measurable partition subordinated to the unstable foliation of \(\nu\), see [10]; the respective conditional measures exist \(\nu\)-a.e by the Rokhlin disintegration theorem. Proof.: **Step 1:** \(\nu_{\xi^{u}(x)}=C^{\pm 1}\frac{1}{m_{\xi^{u}(x)}(1)}m_{\xi^{u}(x)}\) on a large set, where \(m_{\xi^{u}(x)}\) is the induced Riemannian volume on \(\xi^{u}(x)\) and \(C>0\) is a constant close to \(1\). **Proof:** Let \(\xi^{u}\) be a measurable partition subordinate to the unstable foliation of \(\nu\) s.t \(\xi^{u}(x)\supseteq B^{u}(x,r_{x})\) for \(\nu\)-a.e \(x\) (see [10]). Let \(\nu=\int\nu_{\xi^{u}(x)}d\nu(x)\) be the corresponding disintegration of \(\nu\) given by the Rokhlin disintegration theorem. By Theorem 4.1 and [10], for \(\nu\)-a.e \(x\), \(\nu_{\xi^{u}(x)}\sim m_{\xi^{u}(x)}\). Denote by \(\rho_{x}\) the Radon-Nikodym derivative \(\frac{d\nu_{\xi^{u}(x)}}{dm_{\xi^{u}(x)}}\). By the construction of \(\xi^{u}\), \(\rho_{x}(x)\) is uniformly bounded a.e, since the elements of \(\xi^{u}\) are contained in local unstable leaves, and moreover \(\log\rho_{x}\) is \(\frac{\beta}{3}\)-Hölder continuous with a uniform Hölder constant (see [10] for more details of this classical result). Therefore, given \(\epsilon>0\) and a small \(\delta\in(0,\epsilon)\), there exists \(\ell_{\epsilon}=\ell_{\epsilon}(\delta)\) s.t \(\nu\left(\Lambda_{\ell_{\epsilon}}^{(\chi(\nu),\delta^{3}\tau_{\chi(\nu)})}\right)\geq e^{-\epsilon^{2}}\). Let \(0<\chi\leq\min\{\chi_{i}(\nu):\chi_{i}(\nu)\neq 0,i\leq d\}\) (w.l.o.g \(\delta\leq\frac{\chi}{2}\)), and set \(K_{\epsilon}:=\Lambda_{\ell_{\epsilon}}^{(\chi(\nu),\delta^{3}\tau_{\chi(\nu)})}\). For all \(x\in K_{\epsilon}\), the local unstable leaf of \(x\) contains a relative open ball of radius at least \(\frac{1}{2\ell_{\epsilon}}\). **Step 2:** Absolutely continuous fake \(cs\)-foliation in \(B(x,n\epsilon,e^{-\delta n})\), for \(x\in K_{\epsilon}\), by [1]. **Proof:** Given \(x\in K_{\epsilon}\), and \(n\) large enough so \(e^{-\frac{\delta}{3}n}\ll\frac{1}{\ell_{\epsilon}}\), let \(V_{n}^{\rm cs}(x)\) be a "fake central-stable leaf" at \(x\), constructed in [11]. That is, for every \(x\in K_{\epsilon}\) there exists a local submanifold of \(x\) transversal to \(\xi^{u}(x)\), \(V_{n}^{\rm cs}(x)\), s.t \(\forall 0\leq i\leq n\), 1. \(f^{i}[B_{V_{n}^{\rm cs}(x)}(x,e^{-\delta n})]\subseteq B(f^{i}(x),e^{-\frac{\delta}{2}n})\), 2. 
\(f^{i}[V_{n}^{\rm cs}(x)]\) is a graph of a function with a Lipschitz constant smaller or equal to \(\frac{2\tau}{\chi}\leq\delta^{2}\) over \(\psi_{f^{i}(x)}[\mathbb{R}^{\rm cs}\cap B(0,e^{-\frac{\delta}{2}n})]\) (where \(\psi_{y}\) is the Pesin chart of \(y\)), 3. \(\{V^{\rm cs}(y):y\in B(x,e^{-\delta n})\cap K_{\epsilon}\}\) is a foliation of \(B(x,e^{-\delta n})\cap K_{\epsilon}\) ([11, §5]). Moreover, by [11, Proposition 6.4], 1. For all \(n\) large enough (when \(\delta>0\) is small enough), the holonomy map \(\pi\) along \(\{V^{\rm cs}(x^{\prime})\}_{x^{\prime}\in K_{\epsilon}\cap\xi^{u}(x)}\) from \(\xi^{u}(x)\cap K_{\epsilon}\) to \(\xi^{u}(y)\), \(y\in K_{\epsilon}\cap B(x,e^{-\delta n})\), has a Jacobian \({\rm Jac}(\pi)=e^{\pm\delta^{2}}\). In fact it follows that \({\rm Jac}(\pi|_{B^{\rm cs}(x,e^{-\delta n})})=e^{o(1)}\). **Step 3:** For every \(x\in K_{\epsilon}\) and \(n\) large enough, and for every \(g\in\mathrm{H\ddot{o}l}_{\alpha}^{+}(M)\), \(\frac{1}{\nu_{\xi^{u}(x)}(\mathcal{W}_{n}^{\rm cs}(x))}\int_{\mathcal{W}_{n}^{\rm cs}(x)}g\circ f^{n}d\nu_{\xi^{u}(x)}=(\mu(g)\pm e^{-\frac{\gamma}{2}n}\|g\|_{\alpha-\mathrm{H\ddot{o}l}})e^{\pm\delta}\), where \(\mathcal{W}_{n}^{\rm cs}(x)\) is a foliation box in the chart of \(x\). **Proof:** We start by defining \(\mathcal{W}_{n}^{\rm cs}(x)\) for \(x\in K_{\epsilon}\). Let \(R(x,e^{-\delta n}e^{2\delta^{2}}):=\psi_{x}(R(0,e^{-\delta n}e^{2\delta^{2}}))\), where \(\psi_{x}\) is the Pesin chart of \(x\), and \(R(\cdot,r)\) is a ball of radius \(r\) w.r.t the metric \(|\cdot|^{\prime}:=\max\{|\cdot_{\rm u}|_{2},|\cdot_{\rm cs}|_{2}\}\), where \({\rm u},{\rm cs}\) are the corresponding components in the chart of \(x\). In particular, given \(y\in K_{\epsilon}\cap B^{u}(x,e^{-\delta n})\), \(B_{V_{n}^{\rm cs}(y)}(y,e^{-\delta n})\subseteq R(x,e^{-\delta n}e^{2\delta^{2}})\), since \(V_{n}^{\rm cs}(y)\) is the graph of a \(\delta^{2}\)-Lipschitz function in the chart of \(x\). We define \(\mathcal{W}_{n}^{\rm cs}(x):=\bigcup_{y\in K_{\epsilon}\cap B^{u}(x,e^{-\delta n})}B_{V_{n}^{\rm cs}(y)}(y,e^{-\delta n})\). Let \(x\) be a \(\nu_{\xi^{u}(x)}\)-density point of \(K_{\epsilon}\) s.t \(\frac{\nu_{\xi^{u}(x)}(K_{\epsilon}\cap B^{u}(x,r))}{\nu_{\xi^{u}(x)}(B^{u}(x,r))}\geq e^{-\delta^{2}}\), \(\forall r\in(0,2e^{-\delta n})\). By the Hölder continuity of \(\log\rho_{x}\), \(m_{\xi^{u}(x)}=(\rho_{x}(x))^{-1}e^{\pm 2e^{-\frac{\delta}{2}n}}\nu_{\xi^{u}(x)}\) on \(B^{u}(x,2e^{-\delta n})\). Therefore, \(\frac{m_{\xi^{u}(x)}(K_{\epsilon}\cap B(x,e^{-\delta n}))}{m_{\xi^{u}(x)}(B(x,e^{-\delta n}))}\geq e^{-2\delta^{2}}\) for all \(n\) large enough. Finally, let \(h\) be a Lipschitz function s.t \(h|_{R(x,e^{-\delta n}e^{2\delta^{2}})}=1\), \(h|_{R(x,e^{-\delta n}e^{2\delta})^{c}}=0\), \({\rm Lip}(h)\leq 2e^{2\delta n}\). In particular, \(m(\mathcal{W}_{n}^{\rm cs}(x))\geq e^{-2\delta^{2}d}m(h)\). In addition, by the absolute continuity of the foliation \(\mathcal{W}_{n}^{\rm cs}(x)\) and since the induced leaf volume of each lamina in \(\mathcal{W}_{n}^{\rm cs}(x)\) is comparable up to a \(e^{\pm 2\delta^{2}d}\) factor, \(\frac{1}{m(\mathcal{W}_{n}^{\rm cs}(x))}m|_{\mathcal{W}_{n}^{\rm cs}(x)}=e^{\pm 3\delta^{2}d}\frac{1}{m_{\xi^{u}(x)}(\mathcal{W}_{n}^{\rm cs}(x))}m_{\xi^{u}(x)}|_{\mathcal{W}_{n}^{\rm cs}(x)}\) for sets saturated by \(W^{\rm cs}(x)\) for all \(n\) large enough. 
Thus by Step 2, \[\frac{1}{m(h)}\int h\cdot g\circ f^{n}dm= \frac{m(\mathcal{W}_{n}^{\rm cs}(x))}{m(h)}\cdot\frac{1}{m(\mathcal{W}_{n}^{\rm cs}(x))}\int h\cdot g\circ f^{n}dm\] \[\geq \frac{m(\mathcal{W}_{n}^{\rm cs}(x))}{m(B(x,e^{-\delta n}))}\cdot\frac{m(B(x,e^{-\delta n}))}{m(h)}\cdot\frac{1}{m(\mathcal{W}_{n}^{\rm cs}(x))}\int\mathbb{1}_{\mathcal{W}_{n}^{\rm cs}(x)}\cdot g\circ f^{n}dm\] \[\geq e^{-2\delta^{2}d}\cdot e^{-2\delta^{2}d}\cdot e^{-3\delta^{2}d}\frac{1}{m_{\xi^{u}(x)}(\mathcal{W}_{n}^{\rm cs}(x))}\int\limits_{\mathcal{W}_{n}^{\rm cs}(x)}(g\circ f^{n}-\|g\|_{\alpha-\mathrm{H\ddot{o}l}}e^{-\frac{\delta}{2}n\alpha})dm_{\xi^{u}(x)}\] \[\geq e^{-7\delta^{2}d}\frac{1}{\nu_{\xi^{u}(x)}(\mathcal{W}_{n}^{\rm cs}(x))}\int\limits_{\mathcal{W}_{n}^{\rm cs}(x)}(g\circ f^{n}-\|g\|_{\alpha-\mathrm{H\ddot{o}l}}e^{-\frac{\delta}{2}n\alpha})d\nu_{\xi^{u}(x)}. \tag{7}\] By (5), the l.h.s equals \(\mu(g)\pm C\cdot\|g\|_{\alpha-\mathrm{H\ddot{o}l}}e^{-(\gamma-2\delta)n}\), and (6) follows. **Remark:** An upper bound for (6) can be achieved similarly through (7), although the error term will not be exponentially small in \(n\); however, the error term applies to the already averaged quantity. **Corollary 4.3**.: _The system \((M,f)\) admits at most one ergodic SRB measure with positive entropy, and if it exists it is the measure \(\mu\). In particular, \(\mu\) is ergodic._ Proof.: Let \(\nu\) be an ergodic SRB measure with \(h_{\nu}(f)>0\). Let \(g\in\operatorname{Lip}(M)\) s.t \(0\leq g\leq 1\). Let \(\epsilon>0\), and \(x\in G_{\epsilon}\) which is \(\nu\)-generic for \(g\). Assume further that \(x\) is a \(\nu_{\xi^{u}(x)}\)-density point of \(E_{n^{\prime}}:=\{x^{\prime}\in K_{\epsilon}:\forall n\geq n^{\prime},\frac{1}{n}\sum_{k=n}^{2n-1}g\circ f^{k}=e^{\pm\delta}\nu(g)\}\) s.t \(\frac{\nu_{\xi^{u}(x)}(B^{u}(x,e^{-\delta n})\cap E_{n^{\prime}})}{\nu_{\xi^{u}(x)}(B^{u}(x,e^{-\delta n}))}\geq e^{-\delta}\), for some large \(n^{\prime}\). Then, for all \(n\) large enough, \(\mu(g)\geq e^{-\delta}(e^{-7d\delta^{2}}\nu(g)-\|g\|_{\operatorname{Lip}}e^{-\frac{n\alpha\delta}{3}})\). If \(\nu(g)=0\), then \(\mu(g)=0\); otherwise, for all \(n\) large enough w.r.t \(g\), \(\mu(g)\geq e^{-8d\delta^{2}}\nu(g)\). In particular, \(\mu\geq e^{-8d\delta^{2}}\nu\) (by the Riesz representation theorem, and since \(\overline{\operatorname{Lip}_{+}(M)}^{C(M)}=C_{+}(M)\)). Since \(\delta>0\) is arbitrary, \(\mu\geq\nu\) for every ergodic SRB measure \(\nu\) s.t \(h_{\nu}(f)>0\), and since both are probability measures, \(\nu=\mu\). Assume that \(\mu\) can be written as \(\mu=a\mu_{1}+(1-a)\frac{\mu-a\mu_{1}}{1-a}\) with \(a\in(0,1)\) and \(\mu_{1}\perp(\mu-a\mu_{1})\). If \(\mu\) admitted an ergodic component with no positive Lyapunov exponents, then by Theorem 4.1 \(\mu\) is a Dirac mass at a fixed point, which contradicts the fact that \(\mu\geq\nu\) with \(h_{\nu}(f)>0\). Therefore \(\mu_{1}\) admits a positive Lyapunov exponent a.e, and therefore \(\mu\geq\mu_{1}\). Let \(G\) be a set s.t \(\mu_{1}(G)=1\) and \((\mu-a\mu_{1})(G)=0\). Then \(1>a=a\mu_{1}(G)+(\mu-a\mu_{1})(G)=\mu(G)\geq\mu_{1}(G)=1\), a contradiction! Thus \(\mu\) is ergodic and has positive entropy.
2303.16909
RetClean: Retrieval-Based Data Cleaning Using Foundation Models and Data Lakes
Can foundation models (such as ChatGPT) clean your data? In this proposal, we demonstrate that indeed ChatGPT can assist in data cleaning by suggesting corrections for specific cells in a data table (scenario 1). However, ChatGPT may struggle with datasets it has never encountered before (e.g., local enterprise data) or when the user requires an explanation of the source of the suggested clean values. To address these issues, we developed a retrieval-based method that complements ChatGPT's power with a user-provided data lake. The data lake is first indexed, we then retrieve the top-k relevant tuples to the user's query tuple and finally leverage ChatGPT to infer the correct value (scenario 2). Nevertheless, sharing enterprise data with ChatGPT, an externally hosted model, might not be feasible for privacy reasons. To assist with this scenario, we developed a custom RoBERTa-based foundation model that can be locally deployed. By fine-tuning it on a small number of examples, it can effectively make value inferences based on the retrieved tuples (scenario 3). Our proposed system, RetClean, seamlessly supports all three scenarios and provides a user-friendly GUI that enables the VLDB audience to explore and experiment with the system.
Mohammad Shahmeer Ahmad, Zan Ahmad Naeem, Mohamed Eltabakh, Mourad Ouzzani, Nan Tang
2023-03-29T08:06:22Z
http://arxiv.org/abs/2303.16909v1
# RetClean: Retrieval-Based Data Cleaning Using Foundation Models and Data Lakes ###### Abstract. Can foundation models (such as ChatGPT) clean your data? In this proposal, we demonstrate that indeed ChatGPT can assist in data cleaning by suggesting corrections for specific cells in a data table (scenario 1). However, ChatGPT may struggle with datasets it has never encountered before (e.g., local enterprise data) or when the user requires an explanation of the source of the suggested clean values. To address these issues, we developed a retrieval-based method that complements ChatGPT's power with a user-provided data lake. The data lake is first indexed, we then retrieve the top-\(k\) relevant tuples to the user's query tuple and finally leverage ChatGPT to infer the correct value (scenario 2). Nevertheless, sharing enterprise data with ChatGPT, an externally hosted model, might not be feasible for privacy reasons. To assist with this scenario, we developed a custom RoBERTa-based foundation model that can be locally deployed. By fine-tuning it on a small number of examples, it can effectively make value inferences based on the retrieved tuples (scenario 3). Our proposed system, **RetClean**, seamlessly supports all three scenarios and provides a user-friendly GUI that enables the VLDB audience to explore and experiment with the system. + Footnote †: *Equal contributions
…and foundation models for data cleaning. Our demonstration of **RetClean** marks the first instance of showcasing its capabilities. The code is available at GitHub: [https://github.com/qcri/RetClean](https://github.com/qcri/RetClean). ## 2. System Architecture Figure 1 shows the architecture of **RetClean**. **User Input.** The user uploads a relational table and indicates which column(s) contain the missing values to be fixed. The user can optionally specify a subset of non-dirty pivot columns as relevant to the cleaning task (i.e., these columns functionally determine the values in the dirty column). Take the following configuration as an example (refer to the 3\({}^{rd}\) column in "health.csv" table in Figure 1):
```
1: table = "health.csv"
2: dirty_column = "Gender"
3: relevant_columns = ['Name', 'Age']
4: value = 'NULL'
5: is_local_model = False  # use ChatGPT
```
**Listing 1**: Non-retrieval-based configuration

Here, the user wants to impute the missing values (indicated by value = NULL) in the Gender column. The Name and Age columns are identified as the pivot columns. Assuming that these columns are not highly sensitive, the user asks **RetClean** to use ChatGPT to perform the missing value imputation task. Another example of a configuration is (refer to the 4\({}^{th}\) column in "health.csv" table):
```
1: table = "health.csv"
2: dirty_column = "BT"  # BT is blood type
3: relevant_columns = ALL
4: value = 'NULL'
5: database = "/Users/hosp_tables/"  # A folder of CSV files
6: is_local_model = True
```
**Listing 2**: Retrieval-based configuration

Here, the user wants to impute the missing values in the Blood Type (BT) column. Such details are most probably not available as world knowledge, but could be available in a local data lake, e.g., a hospital database. Therefore, the user opts for and specifies the use of a data lake. The local model flag is set to True, which indicates the use of the custom local foundation model, possibly for privacy concerns. We offer a user-friendly Graphical User Interface (GUI) (see Section 3) to simplify the process of specifying the user's requirements. **Non-retrieval based data cleaning.** **RetClean** employs a tuple-by-tuple cleaning approach. In case the user opts for non-retrieval-based methods (e.g., cleaning \(t_{1}\) and \(t_{4}\) in column Gender), **RetClean** reads one tuple at a time and passes it to ChatGPT in the **Reasoner** module. In this case, retrieval-related modules (i.e., the Indexer and Reranker) are bypassed. 
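To make the non-retrieval flow concrete, here is a minimal sketch of how a dirty tuple might be serialized into a prompt, mirroring the template described later in Section 4.1; the function and variable names are hypothetical, not **RetClean**'s actual API.
```
# Minimal sketch (hypothetical names): serialize one dirty tuple into a
# scenario-1 prompt, following the template described in Section 4.1.
def build_prompt(row, dirty_column, pivot_columns=None):
    """Serialize a tuple as '[attr1 : val1 ; ... ; dirty_attr : ]' plus a question."""
    columns = pivot_columns or [c for c in row if c != dirty_column]
    body = " ; ".join(f"{c} : {row[c]}" for c in columns)
    return f"[{body} ; {dirty_column} : ]\nWhat is the value of {dirty_column}?"

row = {"Name": "John Davis", "Age": 32, "Gender": None}
print(build_prompt(row, "Gender", pivot_columns=["Name", "Age"]))
# [Name : John Davis ; Age : 32 ; Gender : ]
# What is the value of Gender?
```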
This direct flow allows ChatGPT to impute values for \(t_{1}\) and \(t_{4}\), as depicted in the output of Figure 1. **Retrieval-based data cleaning.** If the user opts for a retrieval-based method, **RetClean** will index all tuples in the specified data lake. The **Tuple-Based Indexer** module supports both a syntactic index using Elasticsearch and a semantic index using Meta Faiss (Bradner et al., 2017). The indexes are typically constructed offline, but for the demo, we allow uploading small data lakes for on-the-fly indexing. Then, given a dirty tuple (e.g., \(t_{2}\) in the BT column), **RetClean** first retrieves the top-\(n\) relevant tuples, where \(n\) is large enough (e.g., \(n=100\)) to ensure high recall. Then, the tuples are passed to the **Reranker** module. The main role of the **Reranker** is to refine and reorder the retrieved tuples using more advanced methods, e.g., ColBERT (Bradner et al., 2017) or CrossBERT (Bradner et al., 2017), and to finally produce the top-\(k\) candidates, where \(k\ll n\) (e.g., \(k=5\)), to ensure high precision. The **Reranker** and **Reasoner** are designed as separate modules to provide flexibility so we can easily use ChatGPT as the **Reasoner**. Given a dirty tuple (e.g., \(t_{2}\)) and the top-\(k\) retrieved tuples, the **Reasoner** module employs either ChatGPT or the local model to determine the appropriate retrieved tuple to use (matching step) and the value to extract for cleaning the dirty value (extraction). This step is performed in a pair-wise fashion, i.e., the query tuple and each individual retrieved tuple form a candidate pair that is passed to the **Reasoner** module. The module not only infers the imputed value (e.g., blood type B for \(t_{2}[BT]\)), but also identifies the tuple and attribute from which the value is obtained (i.e., the lineage information), as demonstrated in the output of Figure 1 (the dotted lines in the output box). It is important to note that the matching step here goes beyond simple entity matching; our aim is to find any tuple in the data lake that semantically overlaps with the input tuple to be cleaned. **Remarks.** **RetClean** can also be used to detect and repair data errors. For instance, if the user wishes to validate all Gender values, then Line 4 in the example configurations (Listings 1 and 2) is omitted; **RetClean** will then infer all values and compare the inferred values with the existing values (if present) to identify and rectify data errors. ## 3. Demonstration Scenarios For the demonstration, we have prepared 13 datasets from 9 different domains (e.g., football matches, movies, and smartphone products), with 40 tables as the data lake. Attendees can also upload other datasets. **Initial Setup.** To use **RetClean** for data cleaning, the user needs to upload a table and indicate the dirty column requiring cleaning, as shown in the top left of Figure 2. **Scenario 1:**_Data cleaning with ChatGPT._ By default, users can simply click the "Run" button located on the bottom left-hand side of Figure 2. Optionally, the user can select certain columns as the pivot column(s) from the drop-down list. As we will demonstrate, such selection is important as it eliminates noisy (irrelevant) columns before passing the content to the backend of **RetClean**. As such, the backend modules (**Indexer**, **Reranker**, **Reasoner**) focus only on the data that matters. 
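Before walking through the retrieval-based scenarios, here is a minimal sketch of the semantic side of the **Tuple-Based Indexer** described above (embed serialized tuples, then retrieve the top-\(n\) most similar ones); the encoder choice and tuple serialization are illustrative assumptions, not **RetClean**'s actual configuration.
```
# Sketch of semantic tuple retrieval with Faiss (illustrative assumptions).
import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def serialize(row):
    return " ; ".join(f"{k} : {v}" for k, v in row.items())

lake = [
    {"Name": "John Davis", "Age": 32, "BT": "B"},
    {"Name": "Mary Poe", "Age": 41, "BT": "O+"},
]
vectors = encoder.encode([serialize(r) for r in lake]).astype("float32")
faiss.normalize_L2(vectors)                # cosine similarity via inner product
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(vectors)

query = encoder.encode([serialize({"Name": "John Davis", "Age": 32})]).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 2)       # top-n candidates for the Reranker
print([lake[i] for i in ids[0]])
```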
Scenario 1 exclusively relies on ChatGPT's knowledge to generate the cleaned data values (the right-most green-marked column in Figure 2). **Scenario 2:**_Retrieval-based data cleaning with ChatGPT._ Consider the case that the user's data is not part of ChatGPT's world knowledge, e.g., very recent movie data. In this case, the user can connect their local data lake (e.g., a database or CSV files) and request that **RetClean** perform retrieval-based data cleaning. This approach utilizes different indexers, either syntactic or semantic, to retrieve relevant tuples from the data lake (see the "Indexer" option in Figure 2). These tuples serve as the context that contains the relevant information, and **RetClean** leverages ChatGPT's powerful language reasoning to extract the relevant information from these retrieved tuples. The cleaning results are presented in the same manner as in Scenario 1. However, there is one main difference: when the user clicks the "I" button, the system displays the source tuple from the data lake that the value was extracted from (the bottom box in Figure 2). **Scenario 3:**_Retrieval-based data cleaning with local models._ In this scenario, we use our custom-built local model for reasoning and extracting the correct value from the retrieved tuples. The model is much smaller than ChatGPT, but it can still perform the reasoning and extraction aspects reasonably well. Importantly, it never sends the data anywhere, making it suitable for sensitive data. In the retrieval-based scenarios (Scenarios 2 and 3), users can optionally select a **Reranker** to reorder the retrieved tuples to potentially obtain higher-quality cleaning results (more details are presented in Section 4). Figure 1. An Overview of RetClean. Figure 2. Demonstration Scenarios. ## 4. Implementation & Early Results ### Prompts to Stand-Alone ChatGPT When ChatGPT is used alone for data cleaning (i.e., Scenario 1), each dirty tuple is converted into a prompt and sent to ChatGPT. By default, we use the following template: "[attribute1 : value1 ; attribute2 : value2 ; ... attribute n : ]", followed by a question statement "_what is the value of attribute n_". We also allow users to provide a customized prompt template, as shown in Figure 2. Note that ChatGPT, as a generative model, may not always give concise answers, so we perform some post-processing on its output to extract the relevant value to be provided to the user. ### Retrieval-Based Indexer **RetClean** supports two types of indexes for indexing the tuples in a user-provided data lake, namely Elasticsearch and embeddings-based Faiss (embeddings are generated by BERT). Elasticsearch works well for retrieving tuples similar to the query tuple based on syntactic q-gram terms, while Faiss retrieves relevant tuples based on their semantic similarity. In the demonstration, we plan to use both and show their trade-offs. Indexes are typically constructed offline and available for use at cleaning time. ### Retrieval-Based Reranker The top-\(n\) retrieved tuples from the index are based on a coarse-grained similarity. The **Reranker** module is designed to rerank these top-\(n\) results using a more fine-grained comparison mechanism, by holistically comparing each token of the query and each token of the retrieved tuple, in order to compute a better score for the retrieved tuple. We adopt a ColBERT-like strategy (Bordes and McAllester, 2015) as the default method. 
To achieve this, we split the tuples into attribute-value pairs, which we treat as individual "chunks" for processing. Each of these chunks is independently encoded using a Sentence Transformer encoder, and a maximum similarity (**maxsim**) operation is performed on all chunks of the query against all chunks of the retrieved tuples. We utilize cosine similarity for the **maxsim** operation. This process is repeated for all retrieved tuples corresponding to each query tuple. The summed **maxsim** score for a retrieved tuple is used to determine its ranking score, with higher scores indicating a better match. We also support a Sentence Transformer cross-encoding approach. We use a cross-encoder that takes as input the pair of the serialized query tuple and a serialized retrieved tuple, and outputs a score for the similarity between the input pair. Again, we do this across all retrieved tuples per query tuple. The total score for each pair determines the ranking score (higher is better) of the retrieved tuple in the pair. ### Retrieval-Based Reasoner The reasoning module can be set to use either ChatGPT or our custom local model. In the case of the former, a prompt is created with the serialized query tuple, the serialized retrieved tuple, and a question statement corresponding to the specific data cleaning task. In this scenario, ChatGPT may select the value from the retrieved tuple provided in the prompt, generate a value of its own, or state that no such value can be found. This process is repeated for each retrieved tuple. Thus, if we have \(m\) query tuples and each gets \(k\) relevant tuples from the **Reranker**, then \(m\times k\) prompts are sent to ChatGPT. The value is then extracted from ChatGPT's response and presented to the user with the source information (from the data lake). Each prompt sent to ChatGPT consists of two cascaded questions. The first question is "_Do these two tuples relate to the same exact entity?_" with potential answers of Yes or No. Only if the answer is Yes does the second question, within the same prompt, become active in the form of "_what is the value for..._". For the local model case, we developed two custom RoBERTa-based (Bordes and McAllester, 2015) models: the _Matcher_ and the _Extractor_, both encapsulated inside the **Reasoner** module. The _Matcher_ model is trained to take two serialized tuples (query & retrieved) and output a Boolean value indicating whether or not they _match_. Here, matching implies two conditions: the two tuples are about the same entity (e.g., the same movie, player, or book), and the target dirty attribute is present in the retrieved tuple, even if it is not an exact syntactic match. Notice that the pivot columns significantly help in this task because the _Matcher_ focuses only on the columns that matter. In our experiments, we also found that for unseen datasets and schemas, the performance of the _Matcher_ model significantly improves if it is fine-tuned on a small number of examples, e.g., 10 or 20 samples from the new dataset. Thus, **RetClean** allows the user to provide a sample dataset for fine-tuning the model. A pair of tuples that passes the _Matcher_ feeds the _Extractor_ model, which is trained to identify and extract the desired value from the retrieved tuple. The model compares the Sentence Transformer embedding of the dirty attribute name with the embedding of each attribute name in the matched retrieved tuple to identify the most similar attribute using cosine similarity. 
That attribute is used for extracting the value missing in the query tuple. ### Preliminary Results In Table 1, we present the results of our experiments on four datasets using the three cleaning approaches. The dirty columns with missing values for each dataset are: "Country" for Cricket Players (CP), "Genetic Order" for Animals (AN), "Age Rating" for Shows-Movies (SM), and "Department" for QCRI Personnel (QP). For the retrieval-based techniques, we manually constructed a data lake of 12 tables covering the four domains. For the CP dataset, which is mostly part of the world knowledge, we observe that the stand-alone ChatGPT as well as the retrieval-based techniques all perform well. However, for the AN and SM datasets, where the missing information is harder to find, e.g., genetic information for different animals or show ratings for unpopular shows, the retrieval-based techniques are superior. For a very domain-specific dataset (e.g., the QP dataset), it is clear that the stand-alone ChatGPT is useless, and the cleaning process has to rely on internal data lakes. It is worth highlighting that our developed custom model (Scenario 3) is competitive with ChatGPT in its inference power. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Dataset & \#Tuples & Scenario 1 & Scenario 2 & Scenario 3 \\ \hline \hline Cricket Players (CP) & 100 & 97\% & 96\% & 97\% \\ \hline Animals (AN) & 100 & 79\% & 96\% & 96\% \\ \hline Shows-Movies (SM) & 100 & 27\% & 57\% & 74\% \\ \hline QCRI Personnel (QP) & 18 & 5\% & 94\% & 88\% \\ \hline \end{tabular} \end{table} Table 1. Accuracy Scores for RetClean Experiments.
2306.05615
Mixed-Integer Programming for a Class of Robust Submodular Maximization Problems
We consider robust submodular maximization problems (RSMs), where given a set of $m$ monotone submodular objective functions, the robustness is with respect to the worst-case (scaled) objective function. The model we consider generalizes two variants of robust submodular maximization problems in the literature, depending on the choice of the scaling vector. On one hand, by using unit scaling, we obtain a usual robust submodular maximization problem. On the other hand, by letting the scaling vector be the optimal objective function of each individual (NP-hard) submodular maximization problem, we obtain a second variant. While the robust version of the objective is no longer submodular, we reformulate the problem by exploiting the submodularity of each function. We conduct a polyhedral study of the resulting formulation and provide conditions under which the submodular inequalities are facet-defining for a key mixed-integer set. We investigate several strategies for incorporating these inequalities within a delayed cut generation framework to solve the problem exactly. For the second variant, we provide an algorithm to obtain a feasible solution along with its optimality gap. We apply the proposed methods to a sensor placement optimization problem in water distribution networks using real-world datasets to demonstrate the effectiveness of the methods.
Hsin-Yi Huang, Hao-Hsiang Wu, Simge Kucukyavuz
2023-06-09T01:38:09Z
http://arxiv.org/abs/2306.05615v1
# Mixed-Integer Programming for a Class of Robust Submodular Maximization Problems ###### Abstract We consider robust submodular maximization problems (RSMs), where given a set of \(m\) monotone submodular objective functions, the robustness is with respect to the worst-case (scaled) objective function. The model we consider generalizes two variants of robust submodular maximization problems in the literature, depending on the choice of the scaling vector. On one hand, by using unit scaling, we obtain a usual robust submodular maximization problem. On the other hand, by letting the scaling vector be the optimal objective function of each individual (NP-hard) submodular maximization problem, we obtain a second variant. While the robust version of the objective is no longer submodular, we reformulate the problem by exploiting the submodularity of each function. We conduct a polyhedral study of the resulting formulation and provide conditions under which the submodular inequalities are facet-defining for a key mixed-integer set. We investigate several strategies for incorporating these inequalities within a delayed cut generation framework to solve the problem exactly. For the second variant, we provide an algorithm to obtain a feasible solution along with its optimality gap. We apply the proposed methods to a sensor placement optimization problem in water distribution networks using real-world datasets to demonstrate the effectiveness of the methods. _Keywords_: robust optimization; integer programming; facet; submodularity; sensor networks \({}^{1}\)Co-first authors ordered alphabetically. \({}^{2}\)Corresponding author. ## 1 Introduction We study two variants of robust submodular maximization problems (RSMs) considered in Krause et al. (2008b) and He and Kempe (2016), where the robustness is with respect to the worst case of a finite number of (scaled) submodular functions. Specifically, let \(V=\{1,\ldots,n\}\) be a finite non-empty ground set, where \(n\in\mathbb{N}\). Let \([m]=\{1,\ldots,m\}\) be the set of the first \(m\in\mathbb{N}\) positive integers. For all \(i\in[m]\), a function \(f_{i}:2^{V}\rightarrow\mathbb{R}\) is submodular if \[f_{i}(X\cup\{j\})-f_{i}(X)\geq f_{i}(X^{\prime}\cup\{j\})-f_{i}(X^{\prime}) \text{ for }X^{\prime}\subseteq X\subseteq V\text{ and }j\in V\setminus X.\] This definition of submodularity uses the concept of a marginal contribution. In particular, the term \(f_{i}(X\cup\{j\})-f_{i}(X)\) denotes the marginal contribution of the element \(j\) when added to the set \(X\) in function \(f_{i}\), and the marginal contribution of \(j\) decreases if the set \(X\) includes more elements from the set \(V\setminus X\). Given monotonically non-decreasing submodular functions, \(f_{i}\), we assume, without loss of generality, that \(f_{i}(\emptyset)=0,i\in[m]\). Note that, throughout the paper, we use the notation \(\bar{\mathbf{x}}\in\mathbb{B}^{n}\) and its support \(\bar{X}=\{i\in V:\bar{x}_{i}=1\}\), and refer to the corresponding function evaluations \(f_{i}(\bar{\mathbf{x}})\) for \(\bar{\mathbf{x}}\in\mathbb{B}^{n}\) and \(f_{i}(\bar{X})\) for the corresponding support \(\bar{X}\subseteq V\), interchangeably. Let \(\mathcal{X}\) be a set of constraints on the binary variables \(\mathbf{x}\in\mathbb{B}^{n}\). Given a single monotone submodular set function \(f_{i}(\cdot)\), the traditional submodular maximization problem is defined as \[\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}f_{i}(\mathbf{x}). 
\tag{1}\] It is well-known that submodular maximization is NP-hard. Krause et al. (2008b) study a robust variant of Problem (1), where given \(m\) submodular functions \(f_{i}:2^{V}\rightarrow\mathbb{R},i\in[m]\), the objective is to maximize the worst case (minimum) of these \(m\) submodular functions, i.e., \[\max_{\mathbf{x}\in\mathcal{X}}\min_{i\in[m]}f_{i}(\mathbf{x}). \tag{2}\] In other words, Problem (2) aims to find a solution \(\mathbf{x}\in\mathcal{X}\) that is robust against the minimum possible value given by \(\min_{i\in[m]}f_{i}(\mathbf{x})\). That is, an optimal solution \(\mathbf{x}^{*}\in\mathcal{X}\cap\mathbb{B}^{n}\) satisfies \(\min_{i\in[m]}f_{i}(\mathbf{x}^{*})\geq\min_{i\in[m]}f_{i}(\mathbf{\bar{x}})\) for all \(\mathbf{\bar{x}}\in\mathcal{X}\cap\mathbb{B}^{n}\). Problem (2), introduced by Krause et al. (2008b), is the first robust extension of submodular maximization, and it inspired various extensions of robustness such as He and Kempe (2016); Bogunovic et al. (2017); Orlin et al. (2018); Staib et al. (2019); Adibi et al. (2022). In this paper, in addition to the basic RSM Problem (2), we also consider the formulation of He and Kempe (2016), which extends the robustness of Problem (2) to consider the performance of the robust solution in proportion to the performance of the optimal solution for each submodular function. More precisely, let \(\mathbf{x}_{i}^{*}\) be an optimal solution of the \(i\)-th traditional submodular maximization problem (1). The RSM of He and Kempe (2016) is defined as \[\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}. \tag{3}\] For \(\mathbf{x}\in\mathcal{X}\), the authors consider the proportion of the function value \(f_{i}(\mathbf{x})\) to the largest possible function value \(f_{i}(\mathbf{x}_{i}^{*})\) for each \(i\in[m]\). Problem (3) aims to find a solution \(\mathbf{x}\in\mathcal{X}\) that maximizes the worst (smallest) value of these \(m\) proportions. In other words, the optimal solution \(\mathbf{x}^{*}\) of Problem (3) satisfies \(\min_{i\in[m]}\frac{f_{i}(\mathbf{x}^{*})}{f_{i}(\mathbf{x}_{i}^{*})}\geq\min_{i\in[m]}\frac{f_{i}(\mathbf{\bar{x}})}{f_{i}(\mathbf{x}_{i}^{*})}\) for all \(\mathbf{\bar{x}}\in\mathcal{X}\). In fact, we observe that Problems (2) and (3) can be generalized as the problem \[\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{\alpha_{i}}, \tag{4}\] where \(\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{m})\in\mathbb{R}_{+}^{m}\) is a given vector of nonnegative scalars. Problem (4) is equivalent to Problem (2) under the case \(\boldsymbol{\alpha}=\mathbf{1}\). Furthermore, if we solve \(m\) submodular maximization problems and let \(\boldsymbol{\alpha}=(f_{1}(\mathbf{x}_{1}^{*}),f_{2}(\mathbf{x}_{2}^{*}),\ldots,f_{m}(\mathbf{x}_{m}^{*}))\), then Problem (3) is the same as Problem (4). Krause et al. (2008b) review a wide range of applications of RSMs. For example, sensor placement optimization for detecting the contamination of water networks (Krause et al., 2008a; Leskovec et al., 2007; Krause et al., 2008b) can be modeled in the form of Problem (4). Note that for this application in critical infrastructure we must take into account the issues of public health and security (Ostfeld et al., 2008), and rather than a placement that optimizes an expected performance measure, we are interested in optimizing the worst-case performance. 
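To make Problem (4) concrete, the following small sketch evaluates the robust objective \(\min_{i\in[m]}f_{i}(\mathbf{x})/\alpha_{i}\) by brute force on a toy instance with two monotone coverage functions; the instance is our own illustration, not one from the computational study.
```
# Toy illustration of Problem (4): two monotone coverage functions
# f_i(X) = |union of sets covered by X|, robust objective min_i f_i(X)/alpha_i.
from itertools import combinations

cover1 = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}       # defines f_1
cover2 = {1: {"a"}, 2: {"a", "b", "c", "d"}, 3: {"c"}}  # defines f_2
alpha = [1.0, 1.0]                                      # alpha = 1 recovers Problem (2)

def f(cover, X):
    return len(set().union(*(cover[j] for j in X))) if X else 0

def robust_obj(X):
    return min(f(cover1, X) / alpha[0], f(cover2, X) / alpha[1])

# Brute force over the cardinality constraint |X| <= 2.
best = max((X for k in range(3) for X in combinations([1, 2, 3], k)), key=robust_obj)
print(best, robust_obj(best))   # e.g., (1, 2) with robust value 3.0
```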
Under public health considerations, a relevant objective concerns the population affected by the pollutant, where either the exact number or the proportion of people protected from the pollutant is relevant. For example, functions \(f_{1}(\mathbf{\bar{x}})=2\) and \(f_{2}(\mathbf{\bar{x}})=10\) capture the exact number of individuals protected from the outbreak by decision \(\mathbf{\bar{x}}\) under \(m=2\) scenarios. Using the first performance measure, \(f_{1}(\mathbf{\bar{x}})=2\) is the worst case of the two scenarios. However, if a decision maker initially assesses an ability to protect \(\alpha_{1}=2\) and \(\alpha_{2}=20\) individuals for the first and second scenarios, respectively, then the second scenario with \(\frac{f_{2}(\mathbf{\bar{x}})}{\alpha_{2}}=0.5\) has the worst proportion compared to \(\frac{f_{1}(\mathbf{\bar{x}})}{\alpha_{1}}=1\). A higher value of \(\alpha_{i}\) for all \(i\in[m]\) indicates an ambition to protect more individuals in a given scenario; however, because of the limitation of resources, the largest number of saved individuals cannot be greater than \(f_{i}(\mathbf{x}_{i}^{*})\) for all scenarios \(i\in[m]\). Therefore, it is reasonable to assume that \(1\leq\alpha_{i}\leq f_{i}(\mathbf{x}_{i}^{*})\) for all \(i\in[m]\). In our computational study, we demonstrate the effectiveness of our proposed methods on this sensor placement optimization problem. The detailed model of Krause et al. (2008a); Leskovec et al. (2007) for Problem (4) will be given in Section 3. Previous literature on RSM focuses on a bicriteria approximation of the relaxation of Problem (2) under certain constraints. Krause et al. (2008b) show that under the cardinality constraint \(\mathcal{X}_{c}=\{x:\sum_{i\in V}x_{i}\leq b\}\) and \(b\in\mathbb{N}\), there is no constant-ratio approximation algorithm for solving Problem (2) unless \(\mathcal{NP}=\mathcal{P}\). Krause et al. (2008b) propose the SATURATE algorithm that provides a solution \(\mathbf{\bar{x}}_{s}\) such that the objective value \(\min_{i\in[m]}f_{i}(\mathbf{\bar{x}}_{s})\geq\max_{\mathbf{x}\in\mathcal{X}_{c}\cap\mathbb{B}^{n}}\min_{i\in[m]}f_{i}(\mathbf{x})\), where \(||\mathbf{\bar{x}}_{s}||_{0}\leq\lambda_{s}b\) and \(\lambda_{s}=1+\log(\max_{j\in V}\sum_{i\in[m]}f_{i}(\{j\}))\). Powers et al. (2016) subsequently propose the GENSAT algorithm under an assumption that the submodular maximization problem with a matroid constraint has an approximation guarantee, \(\lambda_{g}\). For a fixed \(\tau\in\mathbb{R}\), given a \(\beta\in\mathbb{R}\), GENSAT provides a lower bound \(\beta\tau\) for the minimal value of every fraction \(\gamma\) of the \(m\) submodular functions, where \(\gamma\geq\frac{\lambda_{g}-\beta}{1-\beta}\) and \(\lambda_{g}\in\mathbb{R}\) is an approximation guarantee based on the assumption shown in Theorem 1 of Powers et al. (2016). For the case \(\boldsymbol{\alpha}=(f_{1}(\mathbf{x}_{1}^{*}),f_{2}(\mathbf{x}_{2}^{*}),\ldots,f_{m}(\mathbf{x}_{m}^{*}))\) in Problem (4) under the cardinality constraint \(\mathcal{X}_{c}\), He and Kempe (2016) show a strong approximation hardness result: the bicriteria approximation has to select at least \(\lceil b\log m\rceil\) elements from \(V\). Despite the hardness of solving the RSMs shown in Krause et al. (2008b); He and Kempe (2016), our research interest is to study the mathematical structure of Problem (4). 
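For intuition on SATURATE, here is a compact sketch of its core idea (binary search on a target value \(c\), greedy covering of the truncated average \(\bar{F}_{c}(X)=\frac{1}{m}\sum_{i}\min(f_{i}(X),c)\)); this is a simplified paraphrase under our own parameter choices, not the authors' exact pseudocode.
```
# Sketch of the SATURATE idea (Krause et al., 2008b): binary-search a
# target c and greedily cover F_c(X) = (1/m) * sum_i min(f_i(X), c),
# allowing the solution to exceed the budget b by a factor `blowup`
# (SATURATE uses lambda_s). Simplified paraphrase, not exact pseudocode.
def saturate(fs, V, b, c_max, blowup, tol=1e-3):
    def F(X, c):
        return sum(min(fi(X), c) for fi in fs) / len(fs)
    lo, hi, best = 0.0, c_max, set()
    while hi - lo > tol:
        c = (lo + hi) / 2
        X = set()
        while F(X, c) < c and (V - X) and len(X) < blowup * b:
            X |= {max(V - X, key=lambda e: F(X | {e}, c) - F(X, c))}
        if F(X, c) >= c:
            lo, best = c, X   # target c achievable within the inflated budget
        else:
            hi = c            # target c too ambitious
    return best, lo

f1 = lambda X: len({e for j in X for e in {1: "ab", 2: "bc", 3: "d"}[j]})
f2 = lambda X: len({e for j in X for e in {1: "a", 2: "abcd", 3: "c"}[j]})
print(saturate([f1, f2], {1, 2, 3}, b=2, c_max=4.0, blowup=2.0))
```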
Instead of the approximation methods, the main goal of this paper is to provide exact methods based on mixed-integer programming and polyhedral theory to solve Problem (4), leveraging the tremendous power of mixed-integer programming solvers in obtaining solutions to many NP-hard problems. Numerous optimization problems involving submodularity have been investigated via a mixed-integer programming lens, including but not limited to, submodular maximization (Nemhauser and Wolsey, 1981; Ahmed and Atamturk, 2011; Wu and Kucukyavuz, 2018; Yu and Ahmed, 2017; Shi et al., 2022; Coniglio et al., 2022), submodular minimization (Yu and Kucukyavuz, 2022; 2023), conic quadratic optimization (Gomez, 2018; Atamturk and Gomez, 2020; 2022; Kilinc-Karzan et al., 2020), \(k\)-submodular optimization (Yu and Kucukyavuz, 2021a;b), and chance-constrained optimization (Wu and Kucukyavuz, 2019; Kilinc-Karzan et al., 2022; Shen and Jiang, 2023). We refer the reader to a recent tutorial (Kucukyavuz and Yu, 2023) for an overview of these approaches. Motivated by the success of these approaches in finding exact solutions to challenging submodular optimization problems, in this paper, we also undertake a polyhedral approach for Problem (4), which is a robust version of the submodular maximization problem (1). One immediate challenge we face, as we will see later, is that the robust objective is no longer submodular even if each individual function is submodular. Robust optimization aims to deal with the worst-case over uncertain data with a broad array of applications such as finance (Ghaoui et al., 2003; Goldfarb and Iyengar, 2003; Tutuncu and Koenig, 2004), supply chain management (Ben-Tal et al., 2005; Bertsimas and Thiele, 2006), social networks (He and Kempe, 2016; Nannicini et al., 2019), and energy systems (Mulvey et al., 1995; Zhao and Zeng, 2012; Bertsimas et al., 2013). We refer the reader to the survey of Kouvelis and Yu (1997); Bertsimas et al. (2011) for an overview of various domains. There are scalable algorithms for robust convex optimization (Ben-Tal and Nemirovski, 1998; 1999; 2000), robust discrete optimization under certain uncertainty sets (Bertsimas and Sim, 2003; 2004; Atamturk, 2006), and two-stage robust linear programming (Zhao and Zeng, 2012; Jiang et al., 2012; Bertsimas et al., 2013; Zeng and Zhao, 2013), mainly relying on duality results of convex (or linear) programs. However, submodular functions are neither convex nor concave, in general. Therefore these approaches are not directly applicable for the robust submodular optimization problem we consider. Recall that Problem (4) is a robust version of the submodular maximization problem (1). Given \(i\in[m]\), Problem (1) is a class of \(\mathcal{NP}\)-hard problems (see, e.g., Feige, 1998; Feige et al., 2011). In addition to network optimization (Church and Velle, 1974; Kempe et al., 2003; Wu and Kucukyavuz, 2018; Fischetti et al., 2018; Cordeau et al., 2019; Gunnec et al., 2019), submodular maximization appears in other modern applications including but not limited to public security and health (Leskovec et al., 2007; Krause et al., 2008a; Zheng et al., 2019), computer vision (Boykov and Jolly, 2001; Jegelka and Bilmes, 2011), computational linguistics (Lin and Bilmes, 2011), and artificial intelligence (Krause et al., 2008c; Golovin and Krause, 2011). We refer the reader to the survey of Krause and Golovin (2012) for an overview of various application domains of submodular optimization. 
There are two well-known approaches for solving Problem (1): exactly, using delayed constraint generation, or approximately, using the greedy method, based on the seminal results of Nemhauser and Wolsey (1981) and Nemhauser et al. (1978), respectively. The greedy method has a \((1-1/e)\) optimality guarantee for monotone submodular maximization under a cardinality constraint \(\mathcal{X}_{c}\). For a stochastic (expected value) version of Problem (1) with a finite number of scenarios, Wu and Kucukyavuz (2018) introduce a two-stage stochastic submodular optimization model assuming that the second-stage objective function is submodular, where a corresponding delayed constraint generation algorithm with the submodular inequality of Nemhauser and Wolsey (1981) can be used for solving the problem. The expectation of stochastic submodular functions preserves submodularity, thereby enabling the adaptation of methods that exploit submodularity to the stochastic case. In contrast, in this paper, we consider a robust variant of monotone submodular function maximization (Problem (4)). There are three difficulties with solving Problem (4). First, for a given \(\mathbf{x}\in\mathcal{X}\), the objective \(\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{\alpha_{i}}\) loses the submodularity property, and one cannot use the method of Nemhauser and Wolsey (1981) directly. Second, we do not restrict ourselves to a particular type of constraint set (such as cardinality) in \(\mathcal{X}\); therefore, any algorithm that assumes a particular constraint structure cannot be immediately applied. Finally, under the special case \(\boldsymbol{\alpha}=(f_{1}(\mathbf{x}_{1}^{*}),f_{2}(\mathbf{x}_{2}^{*}),\ldots,f_{m}(\mathbf{x}_{m}^{*}))\), it is very hard to solve \(m\) \(\mathcal{NP}\)-hard problems within a reasonable execution time limit merely to define Problem (3). To overcome these difficulties, we provide an alternative formulation of Problem (4) that allows us to leverage the known submodular inequalities. We then conduct a polyhedral study of the associated mixed-integer set. Finally, for the hard special case with \(\boldsymbol{\alpha}=(f_{1}(\mathbf{x}_{1}^{*}),f_{2}(\mathbf{x}_{2}^{*}),\ldots,f_{m}(\mathbf{x}_{m}^{*}))\), we provide an algorithm that obtains a near-optimal solution equipped with an optimality gap. The contributions and the outline of this paper are summarized as follows. In Section 2, we review an alternative piecewise-linear reformulation of Problem (4), which enables the use of the submodular inequalities of Nemhauser and Wolsey (1981). We conduct a polyhedral analysis of the associated mixed-integer set given by the alternative formulation and propose a facet-defining condition for the submodular inequalities. For the special case of Problem (3), we propose a method to estimate the optimality gap of the problem if it is too time-consuming to obtain the optimal value of \(\alpha_{i}=f_{i}(\mathbf{x}_{i}^{*})\) for all \(i\in[m]\). Based on these analyses, we investigate several computational strategies and propose a delayed constraint generation algorithm for Problem (4). Finally, in Section 3, we demonstrate the proposed methods on a sensor placement optimization problem in water networks using real-world datasets. We conclude in Section 4. ## 2 Models and Methods In this section, we investigate models and methods for Problem (4). Krause et al. 
(2008b) observe that the objective \(\min_{i\in[m]}f_{i}(\mathbf{x})\) of Problem (2) is no longer submodular, even though each individual function \(f_{i}\) is submodular. Therefore, Problem (4) also loses the submodularity property in the associated objective \(\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{\alpha_{i}}\), even for the case \(\boldsymbol{\alpha}=\mathbf{1}\). However, we propose an alternative formulation that exploits the submodularity of each individual function. This alternative formulation is crucial to derive several approaches to solve Problem (4). ### An Alternative Formulation We first consider the alternative formulation of Problem (4). Given constants \(\alpha_{i},i\in[m]\), the formulation is defined as \[\max \eta \tag{5a}\] \[\text{s.t.} \eta\leq\frac{\theta_{i}}{\alpha_{i}} \forall i\in[m]\] (5b) \[\theta_{i}\leq f_{i}(\mathbf{x}) \forall i\in[m]\] (5c) \[\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R},\boldsymbol{\theta}\in\mathbb{R}^{m}, \tag{5d}\] where \(\eta\in\mathbb{R}\) is a variable that captures the value of \(\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{\alpha_{i}}\), and \(\boldsymbol{\theta}\) is an \(m\)-dimensional vector of variables \(\theta_{i}\) lower bounding the value of \(f_{i}(\mathbf{x})\) for each \(i\in[m]\). Note that in Formulation (5), constraints (5c) describe the hypographs of the \(m\) submodular functions. Since the function \(f_{i}(\mathbf{x})\) of Formulation (5) is submodular over the domain \(\mathbb{B}^{n}\) for all \(i\in[m]\), its hypograph is defined by the submodular inequalities of Nemhauser and Wolsey (1981), given by \[\theta_{i}\leq f_{i}(S)-\sum_{j\in S}\rho_{j}^{i}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{i}(S)x_{j},\forall S\subseteq V, \tag{6}\] where \(\rho_{j}^{i}(S)=f_{i}(S\cup\{j\})-f_{i}(S)\) captures the marginal contribution of including \(j\in V\setminus S\) to a subset \(S\). Using this observation, we derive a mixed-integer linear programming reformulation, where constraint (5c) is replaced by inequalities (6) for all \(i\in[m]\). Furthermore, the variables \(\theta_{i},i\in[m]\) can be projected out to arrive at the formulation \[\max \eta \tag{7a}\] \[\mathrm{s.t.} \eta\leq\frac{1}{\alpha_{i}}(f_{i}(S)-\sum_{j\in S}\rho_{j}^{i}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{i}(S)x_{j}),\forall S\subseteq V,i\in[m]\] (7b) \[\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}. \tag{7c}\] The resulting formulation (7) has exponentially many constraints. Hence, we propose a delayed constraint generation (DCG) method to solve Formulation (5). In the proposed method, a relaxed master problem (RMP) at any iteration is formulated as \[\max \eta \tag{8a}\] \[\mathrm{s.t.} (\eta,\mathbf{x})\in\mathcal{C}\] (8b) \[\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}, \tag{8c}\] where \(\mathcal{C}\) is a mixed-integer set defined by the subset of the constraints (7b) generated up to the current iteration. In the next subsection, we consider how to choose the inequalities to include in the set \(\mathcal{C}\). ### Analyses of the Submodular Inequality for RSM First, we observe that for large \(m\), adding a submodular inequality (7b) for each \(i\) in a DCG algorithm may be inefficient. Motivated by this, we make a key observation that a mixed-integer set that includes fewer submodular inequalities (one per subset \(S\), rather than one per pair \((S,i)\)) is sufficient to define \(\mathcal{C}\) and find an optimal solution of Problem (4). 
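As a concrete illustration, the following sketch computes the data of a single submodular inequality (6) for a given pair \((S,i)\); hooking the cut into a MIP solver (e.g., as a lazy constraint in the DCG loop) is omitted.
```
# Build the submodular cut (6) for a set function f and subset S of V:
#   theta <= const + sum_{j in V} coef[j] * x_j ,
# where const = f(S) - sum_{j in S} rho_j(V \ {j}) and coef[j] equals
# rho_j(V \ {j}) for j in S and rho_j(S) for j outside S.
def submodular_cut(f, V, S):
    rho = lambda j, X: f(X | {j}) - f(X)   # marginal contribution of j to X
    const = f(S) - sum(rho(j, V - {j}) for j in S)
    coef = {j: (rho(j, V - {j}) if j in S else rho(j, S)) for j in V}
    return const, coef

f1 = lambda X: len({e for j in X for e in {1: "ab", 2: "bc", 3: "d"}[j]})
print(submodular_cut(f1, {1, 2, 3}, {1}))   # (1, {1: 1, 2: 1, 3: 1})
```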
Before giving our analysis, we provide a useful definition that identifies an important index: the one determining the minimum of the \(m\) submodular functions for a given set.

**Definition 2.1**: _Given a subset \(S\subseteq V\), we define a function_

\[\mathbf{i}(S)=\operatorname*{arg\,min}_{i\in[m]}\frac{f_{i}(S)}{\alpha_{i}},\]

_where the function \(\mathbf{i}:2^{V}\to\mathbb{N}\) returns the value of \(i\) for which \(\frac{f_{i}(S)}{\alpha_{i}}\) is the smallest. In other words, given a subset \(S\subseteq V\), the corresponding value \(\mathbf{i}(S)\) denotes an index such that \(\frac{f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}}\leq\frac{f_{i}(S)}{\alpha_{i}}\) for all \(i\in[m]\)._

Throughout this paper, the function \(\mathbf{i}\) plays a key role in providing an upper bound for Problem (4). Based on this index function, we define a mixed-integer set \(\mathcal{F}\) as

\[\mathcal{F}=\{(\eta,\mathbf{x})\in\mathbb{R}\times\mathbb{B}^{n}:\eta\leq\frac{1}{\alpha_{\mathbf{i}(S)}}(f_{\mathbf{i}(S)}(S)-\sum_{j\in S}\rho_{j}^{\mathbf{i}(S)}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{\mathbf{i}(S)}(S)x_{j}),\forall S\subseteq V\}. \tag{9}\]

In what follows, we prove that we can let \(\mathcal{C}=\mathcal{F}\) in constraint (8b) of RMP (8).

**Proposition 2.1**: _The mixed-integer set \(\mathcal{F}\) is sufficient for defining \(\mathcal{C}\) in (8b) of RMP (8) to find an optimal solution of Problem (4)._

Proof. Nemhauser and Wolsey (1981) show the validity of the submodular inequality (6). Thus, we have

\[\eta\leq\frac{\theta_{\mathbf{i}(S)}}{\alpha_{\mathbf{i}(S)}}\leq\frac{f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}}\leq\frac{f_{\mathbf{i}(S)}(S)-\sum_{j\in S}\rho_{j}^{\mathbf{i}(S)}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{\mathbf{i}(S)}(S)x_{j}}{\alpha_{\mathbf{i}(S)}}.\]

Therefore, Problem (4) is equivalent to the mixed-integer linear program

\[\max\ \eta\]
\[\text{s.t.}\quad\eta\leq\frac{\theta_{i}}{\alpha_{i}}\quad\forall i\in[m]\]
\[\theta_{\mathbf{i}(S)}\leq f_{\mathbf{i}(S)}(S)-\sum_{j\in S}\rho_{j}^{\mathbf{i}(S)}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{\mathbf{i}(S)}(S)x_{j}\quad\forall S\subseteq V\]
\[\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\ \eta\in\mathbb{R},\ \boldsymbol{\theta}\in\mathbb{R}^{m}.\]

Projecting out the \(\theta\) variables, we obtain the desired result. \(\Box\)

Proposition 2.1 shows that, given \(S\subseteq V\), it is sufficient to add a single submodular inequality (7b) with \(i=\mathbf{i}(S)\) to define the set \(\mathcal{C}\) in RMP (8) for solving the problem. Note that considering the submodular inequalities for all \(i\in[m]\) may give a stronger formulation than considering \(\mathcal{F}\). In our computational experiments, we observe that obtaining a violated submodular inequality is time-consuming, and as such, Proposition 2.1 plays an important role in reducing the total number of inequalities.
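Definition 2.1 translates directly into code. A minimal sketch of the index function \(\mathbf{i}(S)\), assuming a hypothetical list of set functions `funcs` and weights `alpha` in the style of the previous snippet, could read:

```python
def index_i(S, funcs, alpha):
    # i(S) of Definition 2.1: the index attaining min_i f_i(S) / alpha_i.
    return min(range(len(funcs)), key=lambda i: funcs[i](S) / alpha[i])
```

In a delayed constraint generation loop, Proposition 2.1 then says that, per incumbent set \(S\), it suffices to separate the single inequality for `index_i(S, funcs, alpha)` rather than all \(m\) of them.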
Below, we provide further analysis of \(\mathcal{F}\) to improve algorithmic efficiency. We start with a proposition that gives sufficient conditions under which the submodular inequality (7b) is facet-defining for \(\mathrm{conv}(\mathcal{F})\). Let \(\mathbf{e}_{j}\) be the \(j\)th unit vector of appropriate dimension.

**Proposition 2.2**: _Given \(S\subseteq V\) and \(\bar{i}\in[m]\), the submodular inequality_

\[\eta\leq\frac{1}{\alpha_{\bar{i}}}(f_{\bar{i}}(S)-\sum_{j\in S}\rho_{j}^{\bar{i}}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{\bar{i}}(S)x_{j})\]

_is facet-defining for \(\mathrm{conv}(\mathcal{F})\) if the following conditions hold:_

1. _for any \(j\in S\), there exists at least one element \(k_{j}\in V\setminus S\) such that \(\rho_{j}^{\bar{i}}(\{k_{j}\})=0\) and \(\frac{f_{\bar{i}}(S)}{\alpha_{\bar{i}}}=\frac{f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}}=\frac{f_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}(S\setminus\{j\}\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}}=\frac{f_{\mathbf{i}(S\cup\{k_{j}\})}(S\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\cup\{k_{j}\})}}\),_
2. _for any \(j\in V\setminus S\), we have \(\frac{f_{\bar{i}}(S)+\rho_{j}^{\bar{i}}(S)}{\alpha_{\bar{i}}}=\frac{f_{\mathbf{i}(S\cup\{j\})}(S\cup\{j\})}{\alpha_{\mathbf{i}(S\cup\{j\})}}\), where \(\bar{i}=\mathbf{i}(S)\)._

Proof. Since \(\dim(\mathcal{F})=n+1\), we enumerate \(n+1\) affinely independent points on the face defined by the submodular inequality (7b) under conditions (i) and (ii).

(a) Given \(S\subseteq V\), consider the point \((\eta,\mathbf{x})=(\frac{f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}},\sum_{i\in S}\mathbf{e}_{i})\) on the face defined by inequality (7b).

(b) Building on the point given in (a), we consider a set of points \(P\) with \(|P|=|V\setminus S|\), where each point \((\eta,\mathbf{x})\in P\) is given by \((\frac{f_{\mathbf{i}(S\cup\{j\})}(S\cup\{j\})}{\alpha_{\mathbf{i}(S\cup\{j\})}},\sum_{i\in S}\mathbf{e}_{i}+\mathbf{e}_{j})\) for some \(j\in V\setminus S\); each such point is on the face defined by inequality (7b) under condition (ii).

(c) From conditions (i) and (ii), for any \(j\in S\), there exists \(k_{j}\in V\setminus S\) such that \(\rho_{j}^{\bar{i}}(\{k_{j}\})=0\) and \(\bar{i}=\mathbf{i}(S)\). We conclude that \(\rho_{j}^{\mathbf{i}(S)}(V\setminus\{j\})=0\) for any \(j\in S\). Note that \(\frac{f_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}(S\setminus\{j\}\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}}=\frac{f_{\mathbf{i}(S\cup\{k_{j}\})}(S\cup\{k_{j}\})-\rho_{j}^{\mathbf{i}(S\cup\{k_{j}\})}(S\setminus\{j\}\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\cup\{k_{j}\})}}=\frac{f_{\mathbf{i}(S\cup\{k_{j}\})}(S\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\cup\{k_{j}\})}}\), since \(\rho_{j}^{\mathbf{i}(S\cup\{k_{j}\})}(S\setminus\{j\}\cup\{k_{j}\})=\rho_{j}^{\bar{i}}(\{k_{j}\})=0\) and \(\frac{f_{\bar{i}}(S)}{\alpha_{\bar{i}}}=\frac{f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}}=\frac{f_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}(S\setminus\{j\}\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}}=\frac{f_{\mathbf{i}(S\cup\{k_{j}\})}(S\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\cup\{k_{j}\})}}\). Therefore, we obtain a set of points \(\bar{P}\) on the face defined by inequality (7b), where each point is \((\eta,\mathbf{x})=(\frac{f_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}(S\setminus\{j\}\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}},\sum_{i\in S\setminus\{j\}}\mathbf{e}_{i}+\mathbf{e}_{k_{j}})\in\bar{P}\) for some \(j\in S\), and \(|\bar{P}|=|S|\).
Note that these \(n+1\) points can be represented as an \((n+1)\times(n+1)\) matrix, where the first \(|V\setminus S|\) rows are the points \(P\) described in (b), the rows from the \((|V\setminus S|+1)\)-th to the \(|V|\)-th are the points \(\bar{P}\) described in (c), and the \((|V|+1)\)-th row is the point \((\eta,\mathbf{x})=(\frac{f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}},\sum_{i\in S}\mathbf{e}_{i})\) given in (a). Consider the following row operations.

1. We multiply the \((|V|+1)\)-th row by \(-1\) to get the row \((\frac{-f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}},\sum_{i\in S}-\mathbf{e}_{i})\).
2. We add the new row \((\frac{-f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}},\sum_{i\in S}-\mathbf{e}_{i})\) to each of the first \(|V\setminus S|\) rows. Then, we get \(|V\setminus S|\) linearly independent rows, \((\frac{f_{\mathbf{i}(S\cup\{j\})}(S\cup\{j\})}{\alpha_{\mathbf{i}(S\cup\{j\})}}-\frac{f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}},\mathbf{e}_{j})\) for all \(j\in V\setminus S\).
3. We multiply each of the \(|V\setminus S|\) linearly independent rows of Step 2 by \(-1\). We get \((\frac{f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}}-\frac{f_{\mathbf{i}(S\cup\{j\})}(S\cup\{j\})}{\alpha_{\mathbf{i}(S\cup\{j\})}},-\mathbf{e}_{j})\) for all \(j\in V\setminus S\).
4. Recall that the rows from the \((|V\setminus S|+1)\)-th to the \(|V|\)-th are of the form \((\frac{f_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}(S\setminus\{j\}\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}},\sum_{i\in S\setminus\{j\}}\mathbf{e}_{i}+\mathbf{e}_{k_{j}})\) for a given \(j\in S\). For each such \(j\in S\), Step 3 provides the row \((\frac{f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}}-\frac{f_{\mathbf{i}(S\cup\{k_{j}\})}(S\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\cup\{k_{j}\})}},-\mathbf{e}_{k_{j}})\). Now, for each \(j\in S\), we add the rows \((\frac{-f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}},\sum_{i\in S}-\mathbf{e}_{i})\) and \((\frac{f_{\mathbf{i}(S)}(S)}{\alpha_{\mathbf{i}(S)}}-\frac{f_{\mathbf{i}(S\cup\{k_{j}\})}(S\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\cup\{k_{j}\})}},-\mathbf{e}_{k_{j}})\) to the row \((\frac{f_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}(S\setminus\{j\}\cup\{k_{j}\})}{\alpha_{\mathbf{i}(S\setminus\{j\}\cup\{k_{j}\})}},\sum_{i\in S\setminus\{j\}}\mathbf{e}_{i}+\mathbf{e}_{k_{j}})\). Then we get \(|S|\) linearly independent rows, \((0,-\mathbf{e}_{j})\) for all \(j\in S\).

Steps 1 to 4 show that the \(n+1\) points described in (a)-(c) are affinely independent. \(\Box\)

We provide Example 2.1 to demonstrate Proposition 2.2.

**Example 2.1**: _Suppose that we have \(m=2\) submodular functions with \(\alpha_{1}=\alpha_{2}=1\) and \(n=4\) elements \(V=\{1,2,3,4\}\). For the case \(S=\{1,2\}\), we have two associated submodular inequalities_

\[\theta_{1}\leq 2+2x_{3}+3x_{4},\text{ and}\]
\[\theta_{2}\leq 5-2(1-x_{1})-3(1-x_{2})+x_{3}+4x_{4},\]

_where \(f_{1}(S)=2\), \(f_{2}(S)=5\), \(\rho_{1}^{1}(V\setminus\{1\})=\rho_{2}^{1}(V\setminus\{2\})=0\), \(\rho_{1}^{2}(V\setminus\{1\})=2\), \(\rho_{2}^{2}(V\setminus\{2\})=3\), \(\rho_{3}^{1}(S)=2\), \(\rho_{4}^{1}(S)=3\), \(\rho_{3}^{2}(S)=1\), and \(\rho_{4}^{2}(S)=4\). Note that the function \(\mathbf{i}(S)=\arg\min_{i\in[2]}\frac{f_{i}(S)}{\alpha_{i}}\) is equal to 1, since the first submodular function attains the smaller value at \(S\), \(f_{1}(S)<f_{2}(S)\)._
_Here, the submodular inequality \(\eta\leq 2+2x_{3}+3x_{4}\) is facet-defining. Condition (i) of Proposition 2.2 holds, since \(\rho_{1}^{1}(\{3\})=\rho_{2}^{1}(\{4\})=0\). Condition (ii) of Proposition 2.2 holds, since \(f_{1}(S\cup\{3\})=f_{1}(S)+\rho_{3}^{1}(S)=2+2=4\), \(f_{1}(S\cup\{4\})=f_{1}(S)+\rho_{4}^{1}(S)=2+3=5\), and \(\mathbf{i}(S)=\mathbf{i}(S\setminus\{1\}\cup\{3\})=\mathbf{i}(S\setminus\{2\}\cup\{4\})=1\)._

_From (a)-(c) of the proof of Proposition 2.2, the \(n+1\) affinely independent points \((\eta,x_{1},x_{2},x_{3},x_{4})\) are as follows. The point (2,1,1,0,0) is based on the selection of \(S\), as described in (a). From (b), there exist \(|V\setminus S|=2\) points, (4,1,1,1,0) and (5,1,1,0,1), based on the selection of \(S\cup\{3\}\) and \(S\cup\{4\}\). From (c), there exist \(|S|=2\) points, (4,0,1,1,0) and (5,1,0,0,1), based on the marginal contributions \(\rho_{1}^{1}(\{3\})=\rho_{2}^{1}(\{4\})=0\). We demonstrate the row operation Steps 1 to 4 of the proof as follows, where the final matrix shows that the \(n+1=5\) points are affinely independent._

\[\begin{pmatrix}4&1&1&1&0\\ 5&1&1&0&1\\ 4&0&1&1&0\\ 5&1&0&0&1\\ 2&1&1&0&0\end{pmatrix}\xrightarrow{\text{Step 1}}\begin{pmatrix}4&1&1&1&0\\ 5&1&1&0&1\\ 4&0&1&1&0\\ 5&1&0&0&1\\ -2&-1&-1&0&0\end{pmatrix}\xrightarrow{\text{Step 2}}\begin{pmatrix}2&0&0&1&0\\ 3&0&0&0&1\\ 4&0&1&1&0\\ 5&1&0&0&1\\ -2&-1&-1&0&0\end{pmatrix}\xrightarrow{\text{Step 3}}\]
\[\begin{pmatrix}-2&0&0&-1&0\\ -3&0&0&0&-1\\ 4&0&1&1&0\\ 5&1&0&0&1\\ -2&-1&-1&0&0\end{pmatrix}\xrightarrow{\text{Step 4 and scaling}}\begin{pmatrix}0&0&0&1&0\\ 0&0&0&0&1\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 1&0&0&0&0\end{pmatrix}\]
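The affine independence claimed in Example 2.1 can also be checked mechanically: \(n+1\) points are affinely independent if and only if the differences from any one of them have full rank \(n\). A short numerical check, illustrative only and using numpy, is:

```python
import numpy as np

# The five points (eta, x1, x2, x3, x4) from Example 2.1.
pts = np.array([
    [4, 1, 1, 1, 0],
    [5, 1, 1, 0, 1],
    [4, 0, 1, 1, 0],
    [5, 1, 0, 0, 1],
    [2, 1, 1, 0, 0],
], dtype=float)

# Subtract the last point from the others and test for rank n = 4.
diffs = pts[:-1] - pts[-1]
assert np.linalg.matrix_rank(diffs) == 4
print("the five points are affinely independent")
```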
We note that it may be difficult to find a submodular inequality that simultaneously meets the two conditions of Proposition 2.2. Specifically, given a submodular inequality for \(S\), the computational effort to check whether the two conditions hold may be close to that of generating all \(m\) submodular inequalities corresponding to the set \(S\) (not just the one inequality corresponding to \(\bar{i}\)). However, we are able to derive some computational strategies based on the two conditions of Proposition 2.2. Lemmas 2.1, 2.2, and 2.3 provide important observations to this end.

**Lemma 2.1**: _Given an index \(i\in[m]\), \(\tilde{X}^{\prime\prime}\subseteq\tilde{X}\subseteq V\), and \(\tilde{S}\subseteq V\), where \(\tilde{X}\) and \(\tilde{S}\) satisfy the equality \(f_{i}(\tilde{X}\cup\tilde{S})=f_{i}(\tilde{S})+\sum_{j\in\tilde{X}}\rho^{i}_{j}(\tilde{S})\), if the equality \(\rho^{i}_{j}(\tilde{X}^{\prime\prime}\cup\tilde{S})=\rho^{i}_{j}(\tilde{S})\) holds for \(j\in\tilde{X}\setminus\tilde{X}^{\prime\prime}\), then the relation_

\[\rho^{i}_{j}(\tilde{X}^{\prime\prime}\cup\tilde{S}\cup Z)=\rho^{i}_{j}(\tilde{S}\cup Z) \tag{10}\]

_also holds for all \(Z\subseteq V\)._

Proof. We prove the relation (10) by mathematical induction. Given an index \(i\in[m]\) and \(\tilde{S}\subseteq V\), consider the base case of the induction with \(\tilde{X}^{\prime}=\emptyset\). We have \(\rho^{i}_{j}(\emptyset\cup\tilde{S}\cup Z)=\rho^{i}_{j}(\tilde{S}\cup Z)\), which trivially satisfies (10), for all \(j\in\tilde{X}\setminus\tilde{X}^{\prime}\) and \(Z\subseteq V\).

Now for the case with \(\tilde{X}^{\prime}=\{j_{1},\ldots,j_{\bar{n}-1}\}\) for \(2\leq\bar{n}\leq|\tilde{X}^{\prime\prime}|\), we assume that for all \(j\in\tilde{X}\setminus\tilde{X}^{\prime}\) and \(Z\subseteq V\), the relation

\[\rho^{i}_{j}(\tilde{X}^{\prime}\cup\tilde{S}\cup Z)=\rho^{i}_{j}(\tilde{S}\cup Z) \tag{11}\]

holds. Now consider the case with \(\tilde{X}^{\prime\prime}=\{j_{1},\ldots,j_{\bar{n}-1},j_{\bar{n}}\}\). Equation (11) can be rewritten as

\[f_{i}(\{j\}\cup\tilde{X}^{\prime}\cup\tilde{S}\cup Z)-f_{i}(\tilde{X}^{\prime}\cup\tilde{S}\cup Z)=f_{i}(\{j\}\cup\tilde{S}\cup Z)-f_{i}(\tilde{S}\cup Z).\]

Note that since \(j_{\bar{n}}\in\tilde{X}^{\prime\prime}\setminus\tilde{X}^{\prime}\) implies \(\rho^{i}_{j_{\bar{n}}}(\tilde{X}^{\prime}\cup\tilde{S}\cup Z)=\rho^{i}_{j_{\bar{n}}}(\tilde{S}\cup Z)\) for \(Z\subseteq V\), we can construct a new set \(Z^{\prime}=\{j\}\cup Z\subseteq V\) and manipulate the above equation, for all \(j\in\tilde{X}\setminus\tilde{X}^{\prime\prime}\), as

\[[f_{i}(\{j\}\cup\tilde{X}^{\prime}\cup\tilde{S}\cup Z)+\rho^{i}_{j_{\bar{n}}}(\tilde{X}^{\prime}\cup\tilde{S}\cup Z^{\prime})]-[f_{i}(\tilde{X}^{\prime}\cup\tilde{S}\cup Z)+\rho^{i}_{j_{\bar{n}}}(\tilde{X}^{\prime}\cup\tilde{S}\cup Z)]=[f_{i}(\{j\}\cup\tilde{S}\cup Z)+\rho^{i}_{j_{\bar{n}}}(\tilde{S}\cup Z^{\prime})]-[f_{i}(\tilde{S}\cup Z)+\rho^{i}_{j_{\bar{n}}}(\tilde{S}\cup Z)],\]

which is equivalent to

\[f_{i}(\{j\}\cup\tilde{X}^{\prime\prime}\cup\tilde{S}\cup Z)-f_{i}(\tilde{X}^{\prime\prime}\cup\tilde{S}\cup Z)=f_{i}(\{j\}\cup\tilde{S}\cup Z)-f_{i}(\tilde{S}\cup Z).\]

Therefore, (10) holds for \(\tilde{X}^{\prime\prime}\), which completes the proof. \(\Box\)

**Lemma 2.2**: _Given an index \(i\in[m]\) and \(\tilde{X},\tilde{S}\subseteq V\), if the equality_

\[f_{i}(\tilde{X}\cup\tilde{S})=f_{i}(\tilde{S})+\sum_{j\in\tilde{X}}\rho_{j}^{i}(\tilde{S}) \tag{12}\]

_holds, then the relation_

\[f_{i}(\tilde{X}\cup\tilde{S}\cup\{z\})=f_{i}(\tilde{S}\cup\{z\})+\sum_{j\in\tilde{X}}\rho_{j}^{i}(\tilde{S}\cup\{z\}) \tag{13}\]

_also holds for any \(z\in V\)._

Proof. We prove the relation (13) by mathematical induction. Given an index \(i\in[m]\), \(z\in V\), and \(\tilde{S}\subseteq V\), consider the base case of the induction with a single element \(\tilde{X}^{\prime}=\{j_{1}\}\), where

\[f_{i}(\{j_{1}\}\cup\tilde{S}\cup\{z\})-f_{i}(\tilde{S}\cup\{z\})=\rho_{j_{1}}^{i}(\tilde{S}\cup\{z\}),\]

which follows from the definition of \(\rho\). Now for the case with \(\tilde{X}^{\prime\prime}=\{j_{1},\ldots,j_{\bar{n}-1}\}\) with \(\bar{n}-1<|\tilde{X}|\) elements, we assume that, under the condition

\[f_{i}(\tilde{X}^{\prime\prime}\cup\tilde{S})=f_{i}(\tilde{S})+\sum_{j\in\tilde{X}^{\prime\prime}}\rho_{j}^{i}(\tilde{S}), \tag{14}\]

the relation

\[f_{i}(\tilde{X}^{\prime\prime}\cup\tilde{S}\cup\{z\})=f_{i}(\tilde{S}\cup\{z\})+\sum_{j\in\tilde{X}^{\prime\prime}}\rho_{j}^{i}(\tilde{S}\cup\{z\}) \tag{15}\]

holds.
For the case with \(\bar{n}\) elements, \(\tilde{X}=\{j_{1},\ldots,j_{\bar{n}-1},j_{\bar{n}}\}\), we have

\[f_{i}(\tilde{X}^{\prime\prime}\cup\{j_{\bar{n}}\}\cup\tilde{S})-f_{i}(\tilde{S})=f_{i}(\tilde{X}^{\prime\prime}\cup\tilde{S})+\rho_{j_{\bar{n}}}^{i}(\tilde{X}^{\prime\prime}\cup\tilde{S})-f_{i}(\tilde{S})=f_{i}(\tilde{S})+\sum_{j\in\tilde{X}^{\prime\prime}}\rho_{j}^{i}(\tilde{S})+\rho_{j_{\bar{n}}}^{i}(\tilde{X}^{\prime\prime}\cup\tilde{S})-f_{i}(\tilde{S})=\sum_{j\in\tilde{X}}\rho_{j}^{i}(\tilde{S}),\]

where the second equality follows from (14), and the third equality follows from the condition (12) applied to \(\tilde{X}\). Since \(\sum_{j\in\tilde{X}^{\prime\prime}}\rho_{j}^{i}(\tilde{S})+\rho_{j_{\bar{n}}}^{i}(\tilde{X}^{\prime\prime}\cup\tilde{S})=\sum_{j\in\tilde{X}}\rho_{j}^{i}(\tilde{S})\), we have \(\rho_{j_{\bar{n}}}^{i}(\tilde{X}^{\prime\prime}\cup\tilde{S})=\rho_{j_{\bar{n}}}^{i}(\tilde{S})\). Here, the element \(j_{\bar{n}}\in\tilde{X}\setminus\tilde{X}^{\prime\prime}\), and therefore the relation \(\rho_{j_{\bar{n}}}^{i}(\tilde{S}\cup\{z\})=\rho_{j_{\bar{n}}}^{i}(\tilde{X}^{\prime\prime}\cup\tilde{S}\cup\{z\})\) holds by Lemma 2.1. We have

\[f_{i}(\tilde{X}^{\prime\prime}\cup\{j_{\bar{n}}\}\cup\tilde{S}\cup\{z\})-f_{i}(\tilde{S}\cup\{z\})=f_{i}(\tilde{X}^{\prime\prime}\cup\tilde{S}\cup\{z\})+\rho_{j_{\bar{n}}}^{i}(\tilde{X}^{\prime\prime}\cup\tilde{S}\cup\{z\})-f_{i}(\tilde{S}\cup\{z\})=f_{i}(\tilde{X}^{\prime\prime}\cup\tilde{S}\cup\{z\})+\rho_{j_{\bar{n}}}^{i}(\tilde{S}\cup\{z\})-f_{i}(\tilde{S}\cup\{z\}).\]

From the above relations, since the assumption of the relation (15) holds, we have

\[f_{i}(\tilde{X}\cup\tilde{S}\cup\{z\})=f_{i}(\tilde{X}^{\prime\prime}\cup\{j_{\bar{n}}\}\cup\tilde{S}\cup\{z\})=f_{i}(\tilde{X}^{\prime\prime}\cup\tilde{S}\cup\{z\})+\rho_{j_{\bar{n}}}^{i}(\tilde{S}\cup\{z\})=f_{i}(\tilde{S}\cup\{z\})+\sum_{j\in\tilde{X}^{\prime\prime}}\rho_{j}^{i}(\tilde{S}\cup\{z\})+\rho_{j_{\bar{n}}}^{i}(\tilde{S}\cup\{z\})=f_{i}(\tilde{S}\cup\{z\})+\sum_{j\in\tilde{X}}\rho_{j}^{i}(\tilde{S}\cup\{z\}).\]

This completes the proof. \(\Box\)

**Lemma 2.3**: _Given an index \(i\in[m]\) and \(\tilde{X},\tilde{S}\subseteq V\), if the equality (12) of Lemma 2.2 holds, then the relation_

\[f_{i}(\tilde{X}\cup\tilde{S}\cup Z)=f_{i}(\tilde{S}\cup Z)+\sum_{j\in\tilde{X}}\rho_{j}^{i}(\tilde{S}\cup Z) \tag{16}\]

_also holds for all \(Z\subseteq V\)._

Proof. Suppose that we are given an index \(i\in[m]\), \(Z\subseteq V\), and \(\tilde{X},\tilde{S}\) satisfying the equality (12). The following steps show that adding all elements of \(Z\) to \(\tilde{S}\) recursively does not violate the relation (13) of Lemma 2.2.

1. Pick an element \(z\in Z\).
2. Since the equality (12) of Lemma 2.2 holds, the relation (13) of Lemma 2.2 holds.
3. Set \(\tilde{S}=\tilde{S}\cup\{z\}\) in the relation (13) of Lemma 2.2. The new \(\tilde{S}\) satisfies the equality (12) of Lemma 2.2.
4. Let \(Z=Z\setminus\{z\}\). Go to Step 1 if \(Z\neq\emptyset\); otherwise, stop.

This completes the proof. \(\Box\)

Using this lemma, we provide a proposition that informs a useful computational strategy to select a more compact set of sufficient submodular inequalities. We separate the set \(S\) into two disjoint subsets. From the first subset, we derive a new subset of elements based on condition (i) of Proposition 2.2. Then, we consider the union of the second subset with the new subset and make sure that the submodular inequality (7b) associated with this particular union does not violate Proposition 2.1.
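Before turning to the proposition, we note that Lemmas 2.1-2.3 are easy to sanity-check by brute force on small instances. The sketch below builds a hypothetical four-element coverage instance (not one of our test functions), enumerates all pairs \((\tilde{X},\tilde{S})\) satisfying condition (12), and asserts the conclusion of Lemma 2.3 for every \(Z\subseteq V\):

```python
from itertools import chain, combinations

cover = {0: {"a"}, 1: {"a", "b"}, 2: {"c"}, 3: {"c", "d"}}

def f(S):
    covered = set()
    for j in S:
        covered |= cover[j]
    return len(covered)

def rho(j, S):
    return f(set(S) | {j}) - f(S)

def subsets(U):
    return chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))

V = set(cover)
# Whenever the additivity condition (12) holds for (X, S),
# Lemma 2.3 asserts it also holds for (X, S ∪ Z) for every Z ⊆ V.
for X in map(set, subsets(V)):
    for S in map(set, subsets(V)):
        if f(X | S) == f(S) + sum(rho(j, S) for j in X):
            for Z in map(set, subsets(V)):
                assert f(X | S | Z) == f(S | Z) + sum(rho(j, S | Z) for j in X)
print("Lemma 2.3 verified on this instance")
```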
**Proposition 2.3**: _Given a set \(\bar{X}\subseteq V\) and an index \(i\in[m]\), we define two associated subsets \(\tilde{X}_{i}\subseteq\bar{X}\) and_

\[\mathcal{S}(i,\tilde{X}_{i})=\{j\in V\setminus\tilde{X}_{i}:\exists k\in\tilde{X}_{i}\text{ with }\rho_{j}^{i}(\{k\})=0\}. \tag{17}\]

_If the condition_

\[f_{\mathbf{i}(\bar{X})}(\tilde{X}_{\mathbf{i}(\bar{X})})=f_{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})}))+\sum_{j\in\tilde{X}_{\mathbf{i}(\bar{X})}}\rho_{j}^{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})) \tag{18}\]

_holds for all \(\bar{X}\subseteq V\), then using the mixed-integer set given by_

\[\mathcal{F}^{\prime}=\{(\eta,\mathbf{x})\in\mathbb{R}\times\mathbb{B}^{n}:\eta\leq\frac{1}{\alpha_{\mathbf{i}(\bar{X})}}(f_{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})})-\sum_{j\in\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})}}\rho_{j}^{\mathbf{i}(\bar{X})}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus\{\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})}\}}\rho_{j}^{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})})x_{j}),\ \forall\bar{X}\subseteq V\},\]

_to define the set \(\mathcal{C}\) in Formulation (8) provides an optimal solution to Problem (4)._

Proof. Given \(\bar{X}\subseteq V\) and the associated \(\tilde{X}_{\mathbf{i}(\bar{X})}\subseteq\bar{X}\), we have

\[f_{\mathbf{i}(\bar{X})}(\bar{X})-f_{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})})=f_{\mathbf{i}(\bar{X})}(\tilde{X}_{\mathbf{i}(\bar{X})}\cup\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})})-f_{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})}) \tag{19a}\]
\[=\sum_{j\in\tilde{X}_{\mathbf{i}(\bar{X})}}\rho_{j}^{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})}). \tag{19b}\]

Equality (19a) follows from \(\tilde{X}_{\mathbf{i}(\bar{X})}\subseteq\bar{X}\) and \(f_{\mathbf{i}(\bar{X})}(\tilde{X}_{\mathbf{i}(\bar{X})})=f_{\mathbf{i}(\bar{X})}(\tilde{X}_{\mathbf{i}(\bar{X})}\cup\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})}))\), since for all \(j\in\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\) there exists \(k\in\tilde{X}_{\mathbf{i}(\bar{X})}\) such that \(\rho_{j}^{\mathbf{i}(\bar{X})}(\{k\})=0\), as shown in the definition (17). Equality (19b) follows from condition (18) and Lemma 2.3 as follows. Suppose that \(X^{\prime}=\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})}\).
Equality (19b) then follows since

\[f_{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\tilde{X}_{\mathbf{i}(\bar{X})}\cup X^{\prime})=f_{\mathbf{i}(\bar{X})}(\bar{X})=\sum_{j\in\tilde{X}_{\mathbf{i}(\bar{X})}}\rho_{j}^{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup X^{\prime})+f_{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})})=\sum_{j\in\tilde{X}_{\mathbf{i}(\bar{X})}}\rho_{j}^{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup X^{\prime})+f_{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup X^{\prime}),\]

where we note that \(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\) plays the role of \(\tilde{S}\) in Lemma 2.3, \(\tilde{X}_{\mathbf{i}(\bar{X})}\) the role of \(\tilde{X}\), and \(X^{\prime}\) the role of \(Z\). Consider the given \(\bar{X}\subseteq V\) and \(\tilde{X}_{\mathbf{i}(\bar{X})}\) with the subset \(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\) satisfying (17) and (18). Then

\[\eta\leq\frac{1}{\alpha_{\mathbf{i}(\bar{X})}}(f_{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})})+\sum_{j\in\tilde{X}_{\mathbf{i}(\bar{X})}}\rho_{j}^{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})}))=\frac{f_{\mathbf{i}(\bar{X})}(\bar{X})}{\alpha_{\mathbf{i}(\bar{X})}},\]

where the equality follows from (19) and (17), with \(\rho_{j}^{\mathbf{i}(\bar{X})}(V\setminus\{j\})=0\) for all \(j\in\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\). Finally, Formulation (8) with the mixed-integer set \(\mathcal{F}^{\prime}\) derived from Formulation (5) provides

\[\eta\leq\frac{\theta_{\mathbf{i}(\bar{X})}}{\alpha_{\mathbf{i}(\bar{X})}}\leq\frac{f_{\mathbf{i}(\bar{X})}(\bar{X})}{\alpha_{\mathbf{i}(\bar{X})}}\leq\frac{f_{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})})-\sum_{j\in\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})}}\rho_{j}^{\mathbf{i}(\bar{X})}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus\{\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})}\}}\rho_{j}^{\mathbf{i}(\bar{X})}(\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})})x_{j}}{\alpha_{\mathbf{i}(\bar{X})}}.\]

Following the final argument in the proof of Proposition 2.1, this completes the proof. \(\Box\)

In Proposition 2.3, motivated by condition (i) of Proposition 2.2, we define the set \(\mathcal{S}(i,\tilde{X}_{i})=\{j\in V\setminus\tilde{X}_{i}:\exists k\in\tilde{X}_{i}\text{ with }\rho_{j}^{i}(\{k\})=0\}\) based on an index \(i\in[m]\) and a subset \(\tilde{X}_{i}\subseteq\bar{X}\subseteq V\), where each index \(j\in\mathcal{S}(i,\tilde{X}_{i})\) has at least one associated index \(k_{j}\in\tilde{X}_{i}\) such that \(\rho_{j}^{i}(\{k_{j}\})=0\).
Then, if condition (18) is satisfied, we show that with \(i=\mathbf{i}(\bar{X})\) and \(S=\mathcal{S}(\mathbf{i}(\bar{X}),\tilde{X}_{\mathbf{i}(\bar{X})})\cup\bar{X}\setminus\tilde{X}_{\mathbf{i}(\bar{X})}\), the associated submodular inequality (7b) provides the upper bound \(\frac{f_{\mathbf{i}(\bar{X})}(\bar{X})}{\alpha_{\mathbf{i}(\bar{X})}}\) on RSM (5) for a solution \(\bar{\mathbf{x}}\in\mathcal{X}\). The verification of this upper bound for a solution is necessary to establish that it suffices to consider the set \(\mathcal{F}^{\prime}\) in defining the set \(\mathcal{C}\). This further enhances computational efficiency, as we will show in our computational study. We now consider condition (ii) of Proposition 2.2. Although finding a facet-defining submodular inequality is challenging, we give two results showing, under certain conditions, when a submodular inequality (7b) is redundant and when it defines a facet of \(\mathrm{conv}(\mathcal{F})\). We first show that given \(\bar{X}\subseteq V\), some submodular inequalities based on \(\bar{X}\) may be redundant (i.e., dominated) in RMP (8).

**Proposition 2.4**: _Given \(\bar{X}\subseteq V\) and \(\bar{i},\bar{i}^{\prime}\in[m]\) with \(\bar{i}\neq\bar{i}^{\prime}\), if \(\frac{f_{\bar{i}}(\bar{X})}{\alpha_{\bar{i}}}\leq\frac{f_{\bar{i}^{\prime}}(\bar{X})}{\alpha_{\bar{i}^{\prime}}}\), \(\frac{-\rho_{j}^{\bar{i}}(V\setminus\{j\})}{\alpha_{\bar{i}}}\leq\frac{-\rho_{j}^{\bar{i}^{\prime}}(V\setminus\{j\})}{\alpha_{\bar{i}^{\prime}}}\) for all \(j\in\bar{X}\), and \(\frac{\rho_{j}^{\bar{i}}(\bar{X})}{\alpha_{\bar{i}}}\leq\frac{\rho_{j}^{\bar{i}^{\prime}}(\bar{X})}{\alpha_{\bar{i}^{\prime}}}\) for all \(j\in V\setminus\bar{X}\), then inequality (7b) with \(\bar{X}\subseteq V\) and \(i=\bar{i}^{\prime}\),_

\[\eta\leq\frac{1}{\alpha_{\bar{i}^{\prime}}}(f_{\bar{i}^{\prime}}(\bar{X})-\sum_{j\in\bar{X}}\rho_{j}^{\bar{i}^{\prime}}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus\bar{X}}\rho_{j}^{\bar{i}^{\prime}}(\bar{X})x_{j}),\]

_is redundant in RMP (8)._

Proof. Following the proof of Proposition 2.1 and the relations of Proposition 2.4, we obtain

\[\eta\leq\frac{\theta_{\mathbf{i}(\bar{X})}}{\alpha_{\mathbf{i}(\bar{X})}}\leq\frac{f_{\mathbf{i}(\bar{X})}(\bar{X})}{\alpha_{\mathbf{i}(\bar{X})}}\leq\frac{f_{\bar{i}}(\bar{X})-\sum_{j\in\bar{X}}\rho_{j}^{\bar{i}}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus\bar{X}}\rho_{j}^{\bar{i}}(\bar{X})x_{j}}{\alpha_{\bar{i}}}\leq\frac{f_{\bar{i}^{\prime}}(\bar{X})-\sum_{j\in\bar{X}}\rho_{j}^{\bar{i}^{\prime}}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus\bar{X}}\rho_{j}^{\bar{i}^{\prime}}(\bar{X})x_{j}}{\alpha_{\bar{i}^{\prime}}},\]

so inequality (7b) with \(\bar{X}\subseteq V\) and \(i=\bar{i}\) provides a tighter upper bound than the submodular inequality (7b) with \(\bar{X}\subseteq V\) and \(i=\bar{i}^{\prime}\). This completes the proof. \(\Box\)

**Example 2.2**: _Suppose that we have \(m=3\) submodular functions with \(\alpha_{1}=\alpha_{2}=\alpha_{3}=1\) and \(n=4\) elements \(V=\{1,2,3,4\}\)._
_For \(\bar{X}=\{1,2\}\), we have three associated submodular inequalities_

\[\eta\leq 3+2x_{3}+3x_{4},\]
\[\eta\leq 2+3x_{3}+4x_{4},\text{ and}\]
\[\eta\leq 5+3x_{3}+5x_{4}.\]

_The third inequality is redundant for RMP (8), since \(3+2x_{3}+3x_{4}\leq 5+3x_{3}+5x_{4}\) and \(2+3x_{3}+4x_{4}\leq 5+3x_{3}+5x_{4}\), by Proposition 2.4._

Proposition 2.4 shows that if a submodular inequality's right-hand side (RHS) constant and coefficients are all greater than or equal to those of another submodular inequality, the former is redundant for RMP (8). Based on Propositions 2.2 and 2.4, we also give a corollary stating that, under certain conditions, a given set of submodular inequalities is facet-defining for RMP (8). Given a subset \(S\subseteq V\) and \(I\subseteq[m]\), we define the mixed-integer set of submodular inequalities \(C(S,I)=\{(\eta,\mathbf{x})\in\mathbb{R}\times\mathbb{B}^{n}:\eta\leq\frac{1}{\alpha_{i}}(f_{i}(S)-\sum_{j\in S}\rho_{j}^{i}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{i}(S)x_{j}),\forall i\in I\}\).

**Corollary 2.1**: _Given \(\bar{X}\subseteq V\) and \(I\subseteq[m]\), each submodular inequality defining the set \(C(\bar{X},I)\) is facet-defining for \(\mathrm{conv}(\mathcal{F})\) if the following conditions hold:_

* _for all \(j\in\bar{X}\) and \(i\in I\), there exists at least one element \(k_{j}\in V\setminus\bar{X}\) such that \(\rho_{j}^{i}(\{k_{j}\})=0\) and \(\frac{f_{i}(\bar{X})}{\alpha_{i}}=\frac{f_{\mathbf{i}(\bar{X})}(\bar{X})}{\alpha_{\mathbf{i}(\bar{X})}}=\frac{f_{\mathbf{i}(\bar{X}\setminus\{j\}\cup\{k_{j}\})}(\bar{X}\setminus\{j\}\cup\{k_{j}\})}{\alpha_{\mathbf{i}(\bar{X}\setminus\{j\}\cup\{k_{j}\})}}=\frac{f_{\mathbf{i}(\bar{X}\cup\{k_{j}\})}(\bar{X}\cup\{k_{j}\})}{\alpha_{\mathbf{i}(\bar{X}\cup\{k_{j}\})}}\),_
* _for all \(i\in I\), we have \(\frac{f_{\mathbf{i}(\bar{X})}(\bar{X})}{\alpha_{\mathbf{i}(\bar{X})}}=\frac{f_{i}(\bar{X})}{\alpha_{i}}\),_
* _for any \(\bar{i}^{\prime}\in[m]\setminus I\), given the corresponding inequality defining \(C(\bar{X},[m]\setminus I)\), there must exist an index \(\bar{i}\in I\) with an inequality defining \(C(\bar{X},I)\) such that the relations of Proposition 2.4 hold._

Proof. Condition (i) ensures that all submodular inequalities in \(C(\bar{X},I)\) satisfy condition (i) of Proposition 2.2 for all \(\bar{i}\in I\). Conditions (ii) and (iii) imply that, given \(\bar{X}\subseteq V\) and \(j\in V\setminus\bar{X}\), we have \(\min_{i\in I}\{\frac{f_{i}(\bar{X})+\rho_{j}^{i}(\bar{X})}{\alpha_{i}}\}=\frac{f_{\mathbf{i}(\bar{X}\cup\{j\})}(\bar{X}\cup\{j\})}{\alpha_{\mathbf{i}(\bar{X}\cup\{j\})}}\), since \(C(\bar{X},[m]\setminus I)\) includes only redundant inequalities (from Proposition 2.4). Then the \(n+1\) affinely independent points defined in (a)-(c) of the proof of Proposition 2.2 satisfy conditions (i)-(iii) of this corollary. \(\Box\)

Corollary 2.1 shows that, given \(\bar{X}\subseteq V\), if the inequalities associated with \(\bar{X}\) satisfy condition (i) of Proposition 2.2, it may not be necessary to include the submodular inequalities for all \(i\in[m]\). Next, we derive the following corollary directly from Corollary 2.1.

**Corollary 2.2**: _The inequalities defined by \(C(\emptyset,[m])\) are facets of \(\mathrm{conv}(\mathcal{F})\)._

Proof.
Since \(\bar{X}=\emptyset\), condition (i) of Corollary 2.1 holds vacuously, as there is no \(j\in\bar{X}\) to check. Furthermore, conditions (ii) and (iii) hold because \(I=[m]\) and \(f_{i}(\emptyset)=0\) for all \(i\in I\). \(\Box\)

**Example 2.3**: _Suppose that we have \(m=3\) submodular functions with \(\alpha_{1}=\alpha_{2}=\alpha_{3}=1\) and \(n=3\) elements \(V=\{1,2,3\}\). For the case \(\bar{X}=\emptyset\), we have three associated submodular inequalities \(C(\emptyset,[m])\) given by_

\[\eta\leq 0+2x_{1}+2x_{2}+3x_{3},\]
\[\eta\leq 0+x_{1}+3x_{2}+4x_{3},\text{ and}\]
\[\eta\leq 0+3x_{1}+3x_{2}+x_{3}.\]

_For the point \((x_{1},x_{2},x_{3})=(1,0,0)\), the second inequality provides an upper bound equal to 1 for the variables \(\eta\) and \(\theta_{2}\). For the point \((x_{1},x_{2},x_{3})=(0,1,0)\), the first inequality provides an upper bound equal to 2 for \(\eta\). For the point \((x_{1},x_{2},x_{3})=(0,0,1)\), the third inequality provides an upper bound equal to 1 for \(\eta\). The \(n+1\) affinely independent points \((\eta,x_{1},x_{2},x_{3})\) are (1,1,0,0), (2,0,1,0), (1,0,0,1), and (0,0,0,0)._

### An Analysis of a Special Case of RSM

At the end of Section 1, we highlighted the difficulty of solving Problem (3): to obtain the \(m\) values \(f_{i}(\mathbf{x}_{i}^{*})\) for all \(i\in[m]\), we have to solve \(m\) \(\mathcal{NP}\)-hard problems (1). Let \(F_{i}\) be the mixed-integer set defined by the submodular inequalities for each \(i\in[m]\), i.e., \(F_{i}=\{(\theta_{i},\mathbf{x})\in\mathbb{R}\times\mathbb{B}^{n}:\theta_{i}\leq f_{i}(S)-\sum_{j\in S}\rho_{j}^{i}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{i}(S)x_{j},\forall S\subseteq V\}\). Recall that \(\mathbf{x}_{i}^{*}\) is the optimal solution to the \(i\)-th submodular maximization problem (1) and \(f_{i}(\mathbf{x}_{i}^{*})=\max\{\theta_{i}:(\theta_{i},\mathbf{x})\in F_{i},\ \mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\ \theta_{i}\in\mathbb{R}\}\) for all \(i\in[m]\). Let \(LB\) and \(UB\) be lower and upper bounds on the optimal value of Problem (3), respectively, i.e., \(LB\leq\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}\leq UB\). It may appear that, without solving the \(m\) problems, we cannot solve Problem (3) or even find an optimality gap \(\frac{UB-LB}{UB}\). We show how we can overcome this difficulty based on the following proposition.

**Proposition 2.5**: _Let \(lb_{i}\) and \(ub_{i}\) be lower and upper bounds on the optimal value of the \(i\)-th Problem (1) for all \(i\in[m]\), i.e., \(lb_{i}\leq f_{i}(\mathbf{x}_{i}^{*})\leq ub_{i}\). Let \(\bar{\eta}_{relax}=\max\{\eta:\eta\leq\frac{\theta_{i}}{lb_{i}},\ i\in[m],\ (\theta_{i},\mathbf{x})\in F_{i},\ i\in[m],\ \mathbf{x}\in\mathcal{X},\ \eta\in\mathbb{R},\ \boldsymbol{\theta}\in\mathbb{R}^{m}\}\) be the objective value of the relaxation of RSM (4). For a given \(\bar{\mathbf{x}}\in\mathcal{X}\cap\mathbb{B}^{n}\), we have_

\[\min_{i\in[m]}\frac{f_{i}(\bar{\mathbf{x}})}{ub_{i}}\leq\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}\leq\bar{\eta}_{relax}. \tag{20}\]

Proof. We start by showing that \(\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}\leq\bar{\eta}_{relax}\).
From Formulation (5), we have \(\max\{\eta:\eta\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}\ \forall i\in[m],\ \mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\ \eta\in\mathbb{R}\}=\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}\). Since \(lb_{i}\leq f_{i}(\mathbf{x}_{i}^{*})\) for all \(i\in[m]\), the constraint \(\eta\leq\frac{f_{i}(\mathbf{x})}{lb_{i}}\) is a relaxation of \(\eta\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}\) in Formulation (5). We have the inequality

\[\max\{\eta:\eta\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})},i\in[m],\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}\leq\max\{\eta:\eta\leq\frac{f_{i}(\mathbf{x})}{lb_{i}},i\in[m],\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}.\]

Furthermore, the objective value \(\bar{\eta}_{relax}\) is obtained from the relaxation of \(\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}\) to \(\mathbf{x}\in\mathcal{X}\). Therefore, we conclude that

\[\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}=\max\{\eta:\eta\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})},i\in[m],\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}\]
\[\leq\max\{\eta:\eta\leq\frac{f_{i}(\mathbf{x})}{lb_{i}},i\in[m],\mathbf{x}\in\mathcal{X},\eta\in\mathbb{R}\}\]
\[\leq\max\{\eta:\eta\leq\frac{\theta_{i}}{lb_{i}},i\in[m],\ (\theta_{i},\mathbf{x})\in F_{i},i\in[m],\ \mathbf{x}\in\mathcal{X},\ \eta\in\mathbb{R},\ \boldsymbol{\theta}\in\mathbb{R}^{m}\}=\bar{\eta}_{relax}.\]

Next, we show the remaining part of inequality (20), \(\min_{i\in[m]}\frac{f_{i}(\bar{\mathbf{x}})}{ub_{i}}\leq\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}\). Since \(f_{i}(\mathbf{x}_{i}^{*})\leq ub_{i}\) for all \(i\in[m]\), we have \(\eta\leq\frac{f_{i}(\mathbf{x})}{ub_{i}}\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}\). Thus,

\[\max\{\eta:\eta\leq\frac{f_{i}(\mathbf{x})}{ub_{i}},i\in[m],\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}\leq\max\{\eta:\eta\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})},i\in[m],\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}.\]

In addition, the solution \(\bar{\mathbf{x}}\in\mathcal{X}\cap\mathbb{B}^{n}\) satisfies

\[\max\{\eta:\eta\leq\frac{f_{i}(\bar{\mathbf{x}})}{ub_{i}},i\in[m],\eta\in\mathbb{R}\}\leq\max\{\eta:\eta\leq\frac{f_{i}(\mathbf{x})}{ub_{i}},i\in[m],\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}.\]

From the above relations, we conclude

\[\min_{i\in[m]}\frac{f_{i}(\bar{\mathbf{x}})}{ub_{i}}=\max\{\eta:\eta\leq\frac{f_{i}(\bar{\mathbf{x}})}{ub_{i}},i\in[m],\eta\in\mathbb{R}\}\leq\max\{\eta:\eta\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})},i\in[m],\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}=\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}.\]

This completes the proof. \(\Box\)

Here, we also observe that, under certain conditions, we can solve Problem (3) without exactly solving the \(m\) submodular maximization problems, by instead solving Problem (4) with a particular choice of \(\boldsymbol{\alpha}\) such that \(lb_{i}\leq\alpha_{i}\leq ub_{i}\) for all \(i\in[m]\).
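Inequality (20) suggests a simple recipe for reporting an optimality gap without solving the \(m\) inner problems to optimality. A minimal sketch, assuming the bounds \(ub_{i}\) come from time-limited solves and \(\bar{\eta}_{relax}\) from the relaxation in Proposition 2.5 (the numbers below are made up for illustration):

```python
def optimality_bounds(f_at_xbar, ub, eta_relax):
    """Bounds from inequality (20) of Proposition 2.5.

    f_at_xbar[i] -- f_i(x_bar) for a feasible incumbent x_bar
    ub[i]        -- an upper bound on f_i(x_i^*)
    eta_relax    -- objective value of the relaxation with alpha_i = lb_i
    """
    lower = min(fv / u for fv, u in zip(f_at_xbar, ub))
    upper = eta_relax
    return lower, upper, (upper - lower) / upper

lb, ub_val, gap = optimality_bounds([3.0, 4.0], [5.0, 4.5], 0.75)
print(f"LB = {lb:.3f}, UB = {ub_val:.3f}, gap = {gap:.1%}")  # 20.0% gap
```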
**Proposition 2.6**: _Let \(\bar{\mathbf{x}}^{\prime}\) be an optimal solution of \(\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{lb_{i}}\) and \(\mathbf{i}(\bar{X}^{\prime})=\arg\min_{i\in[m]}\frac{f_{i}(\bar{X}^{\prime})}{lb_{i}}\), where \(\bar{X}^{\prime}\) denotes the support of \(\bar{\mathbf{x}}^{\prime}\). If \(lb_{\mathbf{i}(\bar{X}^{\prime})}\geq ub_{i}\) for all \(i\in[m]\setminus\{\mathbf{i}(\bar{X}^{\prime})\}\), then \(\bar{\mathbf{x}}^{\prime}\) is an optimal solution of Problem (3)._

Proof. Since \(f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x}_{\mathbf{i}(\bar{X}^{\prime})}^{*})\geq lb_{\mathbf{i}(\bar{X}^{\prime})}\geq ub_{i}\) for all \(i\in[m]\setminus\{\mathbf{i}(\bar{X}^{\prime})\}\) and \(\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{lb_{i}}=\min_{i\in[m]}\frac{f_{i}(\bar{\mathbf{x}}^{\prime})}{lb_{i}}\), we have the relation

\[\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\bar{\mathbf{x}}^{\prime})}{f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x}_{\mathbf{i}(\bar{X}^{\prime})}^{*})}\leq\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\bar{\mathbf{x}}^{\prime})}{lb_{\mathbf{i}(\bar{X}^{\prime})}}\leq\frac{f_{i}(\bar{\mathbf{x}}^{\prime})}{ub_{i}}\leq\frac{f_{i}(\bar{\mathbf{x}}^{\prime})}{f_{i}(\mathbf{x}_{i}^{*})}\leq\frac{f_{i}(\bar{\mathbf{x}}^{\prime})}{lb_{i}},\quad i\in[m]\setminus\{\mathbf{i}(\bar{X}^{\prime})\}.\]

Consequently, the formulation \(\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{lb_{i}}\) has the following relation with several optimization problems:

\[\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{lb_{i}}=\min_{i\in[m]}\frac{f_{i}(\bar{\mathbf{x}}^{\prime})}{lb_{i}}=\max\{\eta:\eta\leq\frac{f_{i}(\bar{\mathbf{x}}^{\prime})}{lb_{i}},i\in[m],\eta\in\mathbb{R}\}\]
\[=\max\{\eta:\eta\leq\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\bar{\mathbf{x}}^{\prime})}{lb_{\mathbf{i}(\bar{X}^{\prime})}},\eta\leq\frac{f_{i}(\bar{\mathbf{x}}^{\prime})}{lb_{i}},i\in[m]\setminus\{\mathbf{i}(\bar{X}^{\prime})\},\eta\in\mathbb{R}\}\]
\[=\max\{\eta:\eta\leq\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\bar{\mathbf{x}}^{\prime})}{lb_{\mathbf{i}(\bar{X}^{\prime})}},\eta\leq\frac{f_{i}(\bar{\mathbf{x}}^{\prime})}{ub_{i}},i\in[m]\setminus\{\mathbf{i}(\bar{X}^{\prime})\},\eta\in\mathbb{R}\}\]
\[=\max\{\eta:\eta\leq\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\bar{\mathbf{x}}^{\prime})}{lb_{\mathbf{i}(\bar{X}^{\prime})}},\eta\leq\frac{f_{i}(\bar{\mathbf{x}}^{\prime})}{f_{i}(\mathbf{x}_{i}^{*})},i\in[m]\setminus\{\mathbf{i}(\bar{X}^{\prime})\},\eta\in\mathbb{R}\}\]
\[=\max\{\eta:\eta\leq\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x})}{lb_{\mathbf{i}(\bar{X}^{\prime})}},\eta\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})},i\in[m]\setminus\{\mathbf{i}(\bar{X}^{\prime})\},\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}.\]

From the above relations, since \(\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x})}{lb_{\mathbf{i}(\bar{X}^{\prime})}}\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}\) for all \(i\in[m]\), the inequalities \(\{\eta\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})},\ \forall i\in[m]\setminus\{\mathbf{i}(\bar{X}^{\prime})\}\}\) are redundant while solving \(\max\{\eta:\eta\leq\frac{f_{i}(\bar{\mathbf{x}}^{\prime})}{lb_{i}}\ \forall i\in[m],\ \eta\in\mathbb{R}\}\).
Therefore,

\[\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{lb_{i}}=\min_{i\in[m]}\frac{f_{i}(\bar{\mathbf{x}}^{\prime})}{lb_{i}}=\max\{\eta:\eta\leq\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x})}{lb_{\mathbf{i}(\bar{X}^{\prime})}},\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}.\]

Since \(f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x}_{\mathbf{i}(\bar{X}^{\prime})}^{*})\geq lb_{\mathbf{i}(\bar{X}^{\prime})}\geq ub_{i}\geq f_{i}(\mathbf{x}_{i}^{*})\) for all \(i\in[m]\setminus\{\mathbf{i}(\bar{X}^{\prime})\}\), the inequalities \(\{\eta\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}\ \forall i\in[m]\setminus\{\mathbf{i}(\bar{X}^{\prime})\}\}\) are redundant for \(\max\{\eta:\eta\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}\ \forall i\in[m],\ \mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\ \eta\in\mathbb{R}\}\). Therefore,

\[\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}=\max\{\eta:\eta\leq\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x})}{f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x}_{\mathbf{i}(\bar{X}^{\prime})}^{*})},\eta\leq\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})},i\in[m]\setminus\{\mathbf{i}(\bar{X}^{\prime})\},\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}=\max\{\eta:\eta\leq\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x})}{f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x}_{\mathbf{i}(\bar{X}^{\prime})}^{*})},\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}.\]

From the above relations, we observe that \(lb_{\mathbf{i}(\bar{X}^{\prime})}\) and \(f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x}_{\mathbf{i}(\bar{X}^{\prime})}^{*})\) are two constants and that \(\bar{\mathbf{x}}^{\prime}\) maximizes the function \(f_{\mathbf{i}(\bar{X}^{\prime})}\) over \(\mathcal{X}\cap\mathbb{B}^{n}\). Therefore, the solution \(\bar{\mathbf{x}}^{\prime}\) is also an optimal solution of

\[\max_{\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n}}\min_{i\in[m]}\frac{f_{i}(\mathbf{x})}{f_{i}(\mathbf{x}_{i}^{*})}=\max\{\eta:\eta\leq\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x})}{f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x}_{\mathbf{i}(\bar{X}^{\prime})}^{*})},\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R}\}=\max\{\eta:\eta\leq\frac{f_{\mathbf{i}(\bar{X}^{\prime})}(\bar{\mathbf{x}}^{\prime})}{f_{\mathbf{i}(\bar{X}^{\prime})}(\mathbf{x}_{\mathbf{i}(\bar{X}^{\prime})}^{*})},\eta\in\mathbb{R}\}.\]

This completes the proof. \(\Box\)

Proposition 2.6 shows that solving Problem (4) with certain \(\alpha_{i}\neq f_{i}(\mathbf{x}_{i}^{*})\), \(i\in[m]\), may provide an optimal solution of Problem (3). From Propositions 2.5 and 2.6, we arrive at a corollary for the final analysis of Problem (3).
**Corollary 2.3**: _From Propositions 2.5 and 2.6, for Problem (4) with \(lb_{i}\leq\alpha_{i}\leq ub_{i}\) for all \(i\in[m]\), an optimal solution \(\bar{\mathbf{x}}^{\prime\prime}\) of Problem (4) is also an optimal solution of Problem (3) if one of the following conditions holds:_

* _the solution \(\bar{\mathbf{x}}^{\prime\prime}\) satisfies \(\min_{i\in[m]}\frac{f_{i}(\bar{\mathbf{x}}^{\prime\prime})}{ub_{i}}=\bar{\eta}_{relax}\), or_
* _the condition of Proposition 2.6 holds for \(\bar{\mathbf{x}}^{\prime}=\bar{\mathbf{x}}^{\prime\prime}\)._

Finally, we derive a corollary that provides a strategy for the computational study.

**Corollary 2.4**: _Given \(\bar{X}\subseteq V\), we have \(\max\{\eta:(\eta,\mathbf{x})\in\mathcal{C}\cap C(\bar{X},\{\mathbf{i}(\bar{X})\}),\ \mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\ \eta\in\mathbb{R}\}\geq\max\{\eta:(\eta,\mathbf{x})\in\mathcal{C}\cap C(\bar{X},\mathbf{I}(\bar{X})),\ \mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\ \eta\in\mathbb{R}\}\geq\max\{\eta:(\eta,\mathbf{x})\in\mathcal{C}\cap C(\bar{X},\mathbf{I}(\bar{X})),\ \eta\leq\frac{\theta_{i}}{\alpha_{i}}\ \forall i\in[m],\ (\theta_{i},\mathbf{x})\in F_{i},\ i\in[m],\ \mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\ \eta\in\mathbb{R},\ \boldsymbol{\theta}\in\mathbb{R}^{m}\}\), where \(\mathbf{I}(\bar{X})=\{j\in[m]:\frac{f_{j}(\bar{X})}{\alpha_{j}}=\frac{f_{\mathbf{i}(\bar{X})}(\bar{X})}{\alpha_{\mathbf{i}(\bar{X})}}\}\), \(\alpha_{i}=\max\{\theta_{i}:(\theta_{i},\mathbf{x})\in\bar{F}_{i},\ \mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\ \theta_{i}\in\mathbb{R}\}\), and \(\bar{F}_{i}\supseteq F_{i}\) for all \(i\in[m]\)._

Proof. Given \((\bar{\eta},\bar{\boldsymbol{\theta}},\bar{\mathbf{x}})\), where \(\bar{\mathbf{x}}\in\mathcal{X}\cap\mathbb{B}^{n}\) and \(\hat{X}\) denotes the support of \(\bar{\mathbf{x}}\), we have the relations

\[\bar{\eta}\leq\min_{i\in\mathbf{I}(\bar{X})}\frac{\bar{\theta}_{i}}{\alpha_{i}}\leq\min_{i\in\mathbf{I}(\bar{X})}\left\{\frac{f_{i}(\bar{X})-\sum_{j\in\bar{X}\setminus\hat{X}}\rho_{j}^{i}(V\setminus\{j\})+\sum_{j\in\hat{X}\setminus\bar{X}}\rho_{j}^{i}(\bar{X})}{\alpha_{i}}\right\}\leq\frac{f_{\mathbf{i}(\bar{X})}(\bar{X})-\sum_{j\in\bar{X}\setminus\hat{X}}\rho_{j}^{\mathbf{i}(\bar{X})}(V\setminus\{j\})+\sum_{j\in\hat{X}\setminus\bar{X}}\rho_{j}^{\mathbf{i}(\bar{X})}(\bar{X})}{\alpha_{\mathbf{i}(\bar{X})}},\]

where the above relations follow from the definition of the submodular inequality (6) and from \(\mathbf{i}(\bar{X})\in\mathbf{I}(\bar{X})\). This completes the proof. \(\Box\)

In Corollary 2.2, we establish that the set of submodular inequalities \(C(\emptyset,[m])\) satisfies the facet conditions given in Corollary 2.1. On the other hand, Corollary 2.4 shows that, with the same RHS, adding a set of submodular inequalities provides a tighter bound than adding just one submodular inequality to RMP (8), where the RHS is the value \(\frac{f_{\mathbf{i}(\bar{X})}(\bar{X})}{\alpha_{\mathbf{i}(\bar{X})}}\leq\frac{f_{i}(\bar{X})}{\alpha_{i}}\) for a given \(\bar{X}\subseteq V\). Finally, Corollary 2.4 notes that as we solve a submodular maximization problem \(\alpha_{i}=\max\{\theta_{i}:(\theta_{i},\mathbf{x})\in\bar{F}_{i},\ \mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\ \theta_{i}\in\mathbb{R}\}\), a subset of the submodular inequalities defining the mixed-integer set \(\bar{F}_{i}\supseteq F_{i}\) can be reused to derive a class of valid inequalities of RMP (8) for solving Problem (3). In the next subsection, we design algorithms for solving Problem (4), including the special case of Problem (3).
The idea behind Proposition 2.5 is that if we obtain lower and upper bounds for all \(m\) submodular maximization problems (1), we can calculate an optimality gap for the associated Problem (3). Proposition 2.5 thus provides a strategy for solving Problem (3): we can set a time limit for each submodular maximization problem (1) to obtain upper and lower bounds for it. Using these bounds, for all \(i\in[m]\), we set \(\alpha_{i}\) to the lower bound \(lb_{i}\). By solving the relaxation \(\max\{\eta:(\eta,\mathbf{x})\in\mathcal{C},\ \mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\ \eta\in\mathbb{R}\}\), we obtain an optimality gap for Problem (3).

### Algorithms

In the final part of this section, we summarize the strategies above and provide algorithms for Problem (4), including the special case of Problem (3) with \(\boldsymbol{\alpha}=(f_{1}(\mathbf{x}_{1}^{*}),f_{2}(\mathbf{x}_{2}^{*}),\ldots,f_{m}(\mathbf{x}_{m}^{*}))\). The core algorithm is the delayed constraint generation method described in Algorithm 1. Algorithm 1 takes as input \(\boldsymbol{\alpha}\), a subset of cuts defining \(\mathcal{C}\) (which may be empty), and a Boolean parameter \(reduce\) that determines whether we apply Proposition 2.1 and Corollary 2.4. The value \(reduce=\) True indicates that we consider the mixed-integer set \(\mathcal{F}\) or \(\mathcal{F}^{\prime}\), which includes fewer submodular inequalities than adding the inequalities of \(F_{i}\) for all \(i\in[m]\), as is done when \(reduce=\) False. The termination criteria can be a time limit \(T\) and/or an optimality gap tolerance \(\epsilon\in[0,1]\), where, for a lower bound \(\min(\Lambda)\) on the optimal value and an incumbent objective value \(\bar{\eta}\), the optimality gap is given by \(\frac{\bar{\eta}-\min(\Lambda)}{\bar{\eta}}\). Note that the user can provide warm-start cuts for the set \(\mathcal{C}\) of RMP (8) as input. In particular, in Corollary 2.2, we have shown that the set of submodular inequalities \(C(\emptyset,[m])\) satisfies the facet conditions given in Corollary 2.1. Therefore, in line 2 of Algorithm 1, we add the facet-defining inequalities \(C(\emptyset,[m])\) to \(\mathcal{C}\) as a class of warm-start cuts. In line 4 of the while loop of Algorithm 1, we solve RMP (8) and get an incumbent solution \((\bar{\eta},\bar{\mathbf{x}})\). In line 6, based on the incumbent \(\bar{\mathbf{x}}\), we form a set \(\Lambda\) including the \(m\) values \(\frac{f_{i}(\bar{\mathbf{x}})}{\alpha_{i}}\) for all \(i\in[m]\). We compute \(\min_{i\in[m]}\frac{f_{i}(\bar{\mathbf{x}})}{\alpha_{i}}\) using the function \(\min(\Lambda)\), which returns the minimal element of the set \(\Lambda\). Note that \(\min(\Lambda)\) provides a lower bound for Problem (4) based on the incumbent \(\bar{\mathbf{x}}\). The lower bound is used to compute the optimality gap and to obtain the smallest value of the set \(\left\{\frac{f_{1}(\bar{\mathbf{x}})}{\alpha_{1}},\ldots,\frac{f_{i}(\bar{\mathbf{x}})}{\alpha_{i}},\ldots,\frac{f_{m}(\bar{\mathbf{x}})}{\alpha_{m}}\right\}\), which is essential to determine the set \(\mathcal{F}\) of Proposition 2.1. The for loop in lines 8 to 20 adds the submodular inequalities. Given an incumbent \(\bar{\mathbf{x}}\), if \(reduce=\) True, the for loop adds fewer submodular inequalities to RMP (8), following Proposition 2.1 and Corollary 2.4, than in the case \(reduce=\) False.
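Before the formal listings, the control flow of Algorithm 1 with \(reduce=\) True can be summarized as a plain Python skeleton. The MIP solve and the separation routine are abstracted behind the hypothetical callbacks `solve_rmp` and `find_set`, so this is a sketch of the loop structure rather than of our actual implementation.

```python
def dcg(funcs, alpha, solve_rmp, find_set, tol=1e-6, max_iter=1000):
    """Delayed constraint generation for Problem (4), reduce = True.

    funcs[i](S)     -- evaluates f_i on a set S
    solve_rmp(cuts) -- hypothetical oracle solving RMP (8); returns the
                       incumbent (eta_bar, X_bar) with X_bar a set
    find_set(X, i)  -- the FindSetRoutine of Algorithm 2
    """
    cuts = []  # warm-start cuts C(empty set, [m]) would be appended here
    for _ in range(max_iter):
        eta_bar, X_bar = solve_rmp(cuts)
        ratios = [funcs[i](X_bar) / alpha[i] for i in range(len(funcs))]
        lb = min(ratios)                   # min(Lambda), a lower bound
        if eta_bar - lb <= tol * max(eta_bar, 1.0):
            return eta_bar, X_bar          # optimality gap within tolerance
        i_star = ratios.index(lb)          # the index i(X_bar) of Definition 2.1
        cuts.append((i_star, find_set(X_bar, i_star)))  # one cut per iteration
    return eta_bar, X_bar
```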
Next, in Algorithm 2, we describe the separation routine FindSetRoutine(\(\bar{\mathbf{x}}\), \(i\)) used in this for loop. Recall that, given an incumbent \(\bar{X}\) and \(i\in[m]\), Proposition 2.3 separates the incumbent \(\bar{X}\) into two sets \(\tilde{X}_{i}\subseteq\bar{X}\) and \(\bar{X}\setminus\tilde{X}_{i}\subseteq\bar{X}\), where \(\tilde{X}_{i}\subseteq\bar{X}\) determines the set \(\mathcal{S}(i,\tilde{X}_{i})=\{j\in V\setminus\tilde{X}_{i}:\exists k\in\tilde{X}_{i}\text{ with }\rho_{j}^{i}(\{k\})=0\}\). Given an element \(j\in V\), lines 8 to 17 first evaluate whether there exists an element \(k\in\bar{X}\) with \(\rho_{j}^{i}(\{k\})=0\) for the given \(i\in[m]\). Then, the algorithm determines whether \(j\in V\) can be a candidate for \(S\), which is used to determine the set \(\mathcal{S}(i,\tilde{X}_{i})\), based on the condition shown in line 13, where \(StopPt\in\mathbb{N}\) denotes the number of elements in \(\bar{X}\) with zero marginal contribution. If \(StopPt=0\), then FindSetRoutine(\(\bar{\mathbf{x}}\), \(i\)) returns the original input \(\bar{X}\). Here, line 13 follows the condition (18) of Proposition 2.3, which allows us to consider the mixed-integer set \(\mathcal{F}^{\prime}\) as a valid set of submodular inequalities for the set \(\mathcal{C}\) of RMP (8). Finally, we present Algorithm 3 for solving Problem (3), which is Problem (4) with the special choice \(\boldsymbol{\alpha}=(f_{1}(\mathbf{x}_{1}^{*}),f_{2}(\mathbf{x}_{2}^{*}),\ldots,f_{m}(\mathbf{x}_{m}^{*}))\). Recall that obtaining \(\boldsymbol{\alpha}=(f_{1}(\mathbf{x}_{1}^{*}),f_{2}(\mathbf{x}_{2}^{*}),\ldots,f_{m}(\mathbf{x}_{m}^{*}))\) before solving the corresponding RMP (8) requires the solution of \(m\) \(\mathcal{NP}\)-hard submodular maximization problems (1). However, even if we cannot solve the \(m\) problems optimally, we can use Algorithm 3 to find a feasible solution for RSM (2) along with an optimality gap. Lines 6 to 17 follow a standard method for solving a submodular maximization problem using the submodular inequalities, with some additional features. In lines 8 to 11, when we finish solving a submodular maximization problem, the lower and upper bounds are recorded. Furthermore, since the submodular inequalities for the corresponding submodular maximization problem can be reused for solving Problem (4), we adapt and store the inequalities in the set \(\bar{\mathcal{C}}\) for further use in line 11. After the for loop of Algorithm 3, we call Algorithm 1 with the new vector \(\boldsymbol{\alpha}\) and the set of warm-start cuts \(\bar{\mathcal{C}}\). At the end of Algorithm 3, using the returned incumbent solution of Algorithm 1, we are able to compute an optimality gap for Problem (2), where the computation of the gap follows Proposition 2.5.

```
 1  Input: \(\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{m})\), \(\mathcal{C}\), and a Boolean parameter \(reduce\)
 2  \(\mathcal{C}\leftarrow\mathcal{C}\cap C(\emptyset,[m])\)
 3  while termination criteria not met do
 4      Solve RMP (8) and obtain an incumbent \((\bar{\eta},\bar{\mathbf{x}})\)
 5      for \(i\in[m]\) do
 6          \(\Lambda\leftarrow\Lambda\cup\{\frac{f_{i}(\bar{\mathbf{x}})}{\alpha_{i}}\}\)
 7      end for
 8      for \(i\in[m]\) do
 9          if \(reduce=\) True then
10              if \(\frac{f_{i}(\bar{\mathbf{x}})}{\alpha_{i}}=\min(\Lambda)\) and \(\bar{\eta}>\frac{f_{i}(\bar{\mathbf{x}})}{\alpha_{i}}\) then
11                  \(S\leftarrow\) FindSetRoutine(\(\bar{\mathbf{x}}\), \(i\))
12                  Add the submodular inequality \(\eta\leq\frac{1}{\alpha_{i}}(f_{i}(S)-\sum_{j\in S}\rho_{j}^{i}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{i}(S)x_{j})\) to \(\mathcal{C}\)
13              end if
14          else
15              if \(\bar{\eta}>\frac{f_{i}(\bar{\mathbf{x}})}{\alpha_{i}}\) then
16                  \(S\leftarrow\) FindSetRoutine(\(\bar{\mathbf{x}}\), \(i\))
17                  Add the submodular inequality \(\eta\leq\frac{1}{\alpha_{i}}(f_{i}(S)-\sum_{j\in S}\rho_{j}^{i}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{i}(S)x_{j})\) to \(\mathcal{C}\)
18              end if
19          end if
20      end for
21  end while
22  Return \((\bar{\eta},\bar{\mathbf{x}})\) as the optimal value and solution
```
**Algorithm 1** Delayed Constraint Generation Algorithm (\(\boldsymbol{\alpha}\), \(\mathcal{C}\), \(reduce\))

```
 1  Set a stop point \(StopPt\in\mathbb{N}\)
 2  \(Q\leftarrow\emptyset\)
 3  \(S\leftarrow\emptyset\)
 4  for \(j\in V\) do
 5      if \(\rho_{j}^{i}(\bar{X})=0\) then
 6          \(tmpQ\leftarrow Q\)
 7          \(counter\leftarrow 0\)
 8          for \(k\in\bar{X}\) do
 9              if \(\rho_{j}^{i}(\{k\})=0\) then
10                  \(counter\leftarrow counter+1\)
11                  \(tmpQ\leftarrow tmpQ\cup\{k\}\)
12              end if
13              if \(counter=StopPt\) and \(f_{i}(tmpQ)=f_{i}(S\cup\{j\})+\sum_{l\in tmpQ}\rho_{l}^{i}(S\cup\{j\})\) then
14                  \(S\leftarrow S\cup\{j\}\)
15                  \(Q\leftarrow Q\cup tmpQ\)
16              end if
17          end for
18      end if
19  end for
20  \(S\leftarrow S\cup(\bar{X}\setminus Q)\)
21  Return \(S\)
```
**Algorithm 2** FindSetRoutine(\(\bar{\mathbf{x}}\), \(i\))

```
 1  Let \(UB\leftarrow\infty\) be the upper bound of Problem (3)
 2  Let \(LB\leftarrow 0\) be the lower bound of Problem (3)
 3  \(SubCutReduction\leftarrow\) True
 4  for \(i\in[m]\) do
 5      Let \(\bar{F}_{i}\) be a mixed-integer set derived from a subset of constraints for the \(i\)-th submodular maximization Problem (1)
 6      while True do
 7          Solve a master problem \(\max\{\eta:\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R},(\eta,\mathbf{x})\in\bar{F}_{i}\}\) and get an incumbent \((\bar{\eta},\bar{\mathbf{x}})\)
 8          if termination criteria met then
 9              \(ub_{i}\leftarrow\bar{\eta}\)
10              \(\bar{\alpha}_{i}\leftarrow lb_{i}\leftarrow f_{i}(\bar{\mathbf{x}})\)
11              Modify each submodular inequality of \(\bar{F}_{i}\) to the form \(\eta\leq\frac{1}{\bar{\alpha}_{i}}(f_{i}(S)-\sum_{j\in S}\rho_{j}^{i}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{i}(S)x_{j})\) and add the modified inequalities to \(\bar{\mathcal{C}}\)
12              break
13          else
14              \(S\leftarrow\) FindSetRoutine(\(\bar{\mathbf{x}}\), \(i\))
15              Add a submodular inequality \(\eta\leq f_{i}(S)-\sum_{j\in S}\rho_{j}^{i}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{i}(S)x_{j}\) to \(\bar{F}_{i}\)
16          end if
17      end while
18  end for
19  \(\bar{\boldsymbol{\alpha}}\leftarrow(\bar{\alpha}_{1},\bar{\alpha}_{2},\ldots,\bar{\alpha}_{m})\)
20  \((\bar{\eta},\bar{\mathbf{x}})\leftarrow\) Algorithm 1 (\(\bar{\boldsymbol{\alpha}}\), \(\bar{\mathcal{C}}\), \(SubCutReduction\))
21  \(UB\leftarrow\bar{\eta}\)
22  \(LB\leftarrow\min_{i\in[m]}\frac{f_{i}(\bar{\mathbf{x}})}{ub_{i}}\)
23  \(Gap\leftarrow\frac{UB-LB}{UB}\)
```
**Algorithm 3** Solution Method for Problem (3)

## 3 An Application on a Class of Water Sensor Placement Optimization Problems

In this section, we apply the proposed algorithms to a class of sensor placement optimization problems in a water distribution network, where the goal of the deployed sensors is to detect contaminants in the network. Various objectives have been considered to quantify the effectiveness of the sensor deployment, such as the volume of the contaminated water (Kessler et al., 1998), the contaminant detection time (Kumar et al., 1997; Dorini et al., 2004), or the population affected by the pollutants. We refer the reader to Berry et al. (2005); Ostfeld et al.
(2008) for a detailed introduction to sensor placement optimization in real-world applications. In addition, Watson et al. (2004); Huang et al. (2006); Preis and Ostfeld (2006); Wu and Walski (2006); Dorini et al. (2006); Austin et al. (2009) provide an introduction to multi-objective sensor placement optimization problems.

### A Model for Sensor Placement Optimization Problems

In this subsection, we introduce the outbreak detection model of Leskovec et al. (2007) in a water distribution network (see also Krause et al., 2008a). Let \(J\) be a set of possible contamination events, where event \(j\in J\) corresponds to a source node \(j\) polluting the network with probability \(p_{j}\in[0,1]\). Therefore, a network may have \(|J|\) different contamination sources. Let \(V\) be the set of all possible sensor locations and \(S\subseteq V\) be a set of selected sensor placements. Note that each sensor \(s\in S\) has its own cost \(a_{s}\in\mathbb{R}_{+}\); the total cost \(\sum_{s\in S}a_{s}\) of the selection \(S\) must be less than or equal to a given budget \(b\in\mathbb{R}_{+}\). Let \(W\) be the vector of edge flow times. Let \(T(S,j)\) be the detection time at which a set of sensors \(S\subseteq V\) detects the contamination originating at a source \(j\in J\). By definition, \(T(\{s\},j)\) denotes the time at which a single sensor \(s\in V\) detects the contamination of \(j\).
``` **Algorithm 1**Delayed Constraint Generation Algorithm (\(\mathbf{\alpha}\), \(\mathcal{C}\), \(reduce\)) ``` 1 Set a stop point \(StopPt\in\mathbb{N}\) 2\(Q\leftarrow\emptyset\) 3\(S\leftarrow\emptyset\) 4for\(j\in V\)do 5if\(\rho_{j}^{i}(\bar{X})\) = 0then 6\(tmpQ\gets Q\) 7\(counter\gets 0\) 8for\(k\in\bar{X}\)do 9if\(\rho_{j}^{i}(\{k\})\) = 0then 10\(counter\gets counter+1\) 11\(tmpQ\gets tmpQ\cup\{k\}\) 12 end if 13if\(counter=StopPt\) and \(f_{i}(tmpQ)=f_{i}(S\cup\{j\})+\sum_{l\in tmpQ}\rho_{l}^{i}(S\cup\{j\})\)then 14\(S\gets S\cup j\) 15\(Q\gets Q\cup tmpQ\) 16 end if 17 18 end for 19 20 end for 21 22 end for 23 24 end for 25 26 end for 27 28 end for 29 30 end for 31\(S\gets S\cup\{\bar{X}\setminus Q\}\) 32 Return \(S\) ``` **Algorithm 2**FindSetRoutine(\(\bar{\mathbf{x}}\),\(i\)) ``` 1 Set a stop point \(StopPt\in\mathbb{N}\) 2\(Q\leftarrow\emptyset\) 3\(S\leftarrow\emptyset\) 4for\(j\in V\)do 5if\(\rho_{j}^{i}(\bar{X})\) = 0then 6\(tmpQ\gets Q\) 7\(counter\gets 0\) 8for\(k\in\bar{X}\)do 9\(\rho_{j}^{i}(\{k\})\) = 0then 10\(counter\gets counter+1\) 11\(tmpQ\gets tmpQ\cup\{k\}\) 12 end if 13if\(counter=StopPt\) and \(f_{i}(tmpQ)=f_{i}(S\cup\{j\})+\sum_{l\in tmpQ}\rho_{l}^{i}(S\cup\{j\})\)then 14\(S\gets S\cup j\) 15\(Q\gets Q\cup tmpQ\) 16 end if 17 28 end for 29 30 end for 32\(S\gets S\cup\{\bar{X}\setminus Q\}\) 33 Return \(S\) ``` **Algorithm 3**FindSetRoutine(\(\bar{\mathbf{x}}\),\(i\)) ``` 1 Let \(UB\leftarrow\infty\) be the upper bound of Problem (3) 2 Let \(LB\gets 0\) be the lower bound of Problem (3) 3\(SubCutReduction\leftarrow\) True 4for\(i\in[m]\)do 5 Let \(\bar{F}_{i}\) be a mixed-integer set derived from a subset of constraints for the \(i\)-th submodular maximization Problem (1) 6while\(True\)do 7 Solve a master problem \(\max\{\eta:\mathbf{x}\in\mathcal{X}\cap\mathbb{B}^{n},\eta\in\mathbb{R},(\eta,\mathbf{x})\in\bar{F}_{i}\}\) and get an incumbent \((\bar{\eta},\mathbf{\bar{x}})\) 8ifTermination criteria metthen 9\(ub_{i}\leftarrow\bar{\eta}\) 10\(\bar{\alpha_{i}}\gets lb_{i}\gets f_{i}(\mathbf{\bar{x}})\) 11 Modify each submodular inequality of \(\bar{F}_{i}\) to the form \(\eta\leq\frac{1}{\bar{\alpha_{i}}}(f_{i}(S)-\sum_{j\in S}\rho_{j}^{i}(V\setminus \{j\})(1-x_{j})+\sum_{j\in V\setminus S}\rho_{j}^{i}(S)x_{j})\) and add the modified inequalities to \(\bar{\mathcal{C}}\) 12break; 13 14 end if 15 16else 17\(S\leftarrow\) FindSetRoutine(\(\mathbf{\bar{x}}\),\(i\)) 18 Add a submodular inequality \(\eta\leq f_{i}(S)-\sum_{j\in S}\rho_{j}^{i}(V\setminus\{j\})(1-x_{j})+\sum_{j\in V \setminus S}\rho_{j}^{i}(S)x_{j}\) to \(\bar{F}_{i}\) 19 20 end if 21 22 end for 23 24 end for 25 26\(\mathbf{\bar{\alpha}}\leftarrow(\bar{\alpha_{1}},\bar{\alpha_{2}},\ldots,\bar{ \alpha_{i}})\) 27\((\bar{\eta},\mathbf{\bar{x}})\leftarrow\) Algorithm 1\((\mathbf{\bar{\alpha}},\bar{\mathcal{C}},SubCutReduction)\) 28\(UB\leftarrow\bar{\eta}\) 29\(LB\leftarrow\min_{i\in[m]}\frac{f_{i}(\mathbf{\bar{x}})}{ub_{i}}\) 30\(Gap\leftarrow\frac{UB-LB}{UB}\) ``` **Algorithm 3**Solution Method for Problem (3) \(T(\{s\},j)\) denotes the time that a sensor \(s\in V\) detects the contamination of \(j\). We then derive a relation \(T(S,j)=\min_{s\in S}T(\{s\},j)\) meaning the time for detecting a contamination of \(j\) is the minimal time for the contamination detected by any sensor \(s\in S\). Note that if the contamination of \(j\) cannot be detected by the set \(S\), the function \(T(S,j)\) takes a value of \(\infty\). 
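To make the cut-generation step of Algorithm 1 (lines 12 and 17) concrete, the following minimal Python sketch assembles the constant term and the \(x_{j}\)-coefficients of a submodular inequality for a generic set function given as a callable. It is our own illustration, not part of the implementation evaluated later in this section; the function names and the toy coverage function are assumptions.

```python
def marginal(f, j, S):
    """Marginal contribution rho_j(S) = f(S ∪ {j}) - f(S)."""
    S = frozenset(S)
    return f(S | {j}) - f(S)

def submodular_cut(f, S, V, alpha=1.0):
    """Coefficients of eta <= (1/alpha) * (f(S)
         - sum_{j in S} rho_j(V \\ {j}) * (1 - x_j)
         + sum_{j in V \\ S} rho_j(S) * x_j),
    returned as (constant term, {j: coefficient of x_j})."""
    S, V = frozenset(S), frozenset(V)
    const, coef = f(S), {}
    for j in S:
        r = marginal(f, j, V - {j})
        const -= r      # the (1 - x_j) part moves -rho_j into the constant
        coef[j] = r     # ... and +rho_j onto the coefficient of x_j
    for j in V - S:
        coef[j] = marginal(f, j, S)
    return const / alpha, {j: c / alpha for j, c in coef.items()}

# Toy check with a coverage function (coverage is submodular).
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
f = lambda X: len(set().union(*(cover[j] for j in X))) if X else 0
print(submodular_cut(f, {1}, {1, 2, 3}))  # (1.0, {1: 1.0, 2: 1.0, 3: 1.0})
```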
Following the definition of the detection time, we let \(\beta_{j}(t)\) be a penalty function that denotes the amount of damage caused by a source \(j\in J\) after a time \(t\). Here, the amount of damage can be defined by the user. For example, in a water distribution network, the associated damage can be the number of polluted nodes, the population affected by contamination, or the total cost of the contamination. For the case \(t=\infty\), the function \(\beta_{j}(\infty)\) denotes the total amount of damage caused by the contamination of \(j\in J\). Note that the penalty function is non-decreasing, with \(\beta_{j}(t)\leq\beta_{j}(t^{\prime})\) for \(t\leq t^{\prime}\) and \(t,t^{\prime}\in\mathbb{R}_{+}\). Consider a water distribution network represented by a graph \(G=(V,E,J)\), where \(V\) is a set of nodes, \(E\) is a set of directed edges, and \(J\subseteq V\) is a set of possible contamination sources. Based on the definition of the penalty function, given a set of sensors \(S\) and a contamination source \(j\in J\) in \(G\), the penalty reduction is defined as \(R_{G,W}(S,j)=\beta_{j}(\infty)-\beta_{j}(T(S,j))\). The penalty reduction measures the amount of damage that can be avoided under the contamination of \(j\) after deploying a set \(S\) of sensors in the water distribution network. Recall that the probability of the event \(j\in J\) is \(p_{j}\in[0,1]\). For a set of possible contamination sources \(J\) and a set of deployed sensors \(S\subseteq V\), we consider the expected penalty reduction function \(\mathcal{R}_{G,W}:2^{V}\rightarrow\mathbb{R}\), where \(\mathcal{R}_{G,W}(S)=\sum_{j\in J}p_{j}R_{G,W}(S,j)\) is submodular (see, Leskovec et al., 2007, for a proof of submodularity). Below, we give an example to illustrate the outbreak detection model in a water distribution network.

Figure 1: An example introducing the penalty reduction via a network \(G=(V,E,J)=(\{0,1,2,3\},\{(0,2),(0,3),(1,3)\},\{0,1\})\) with 4 nodes, 3 directed edges, and 2 possible contamination sources \(J=\{0,1\}\), and \(W=(4,1,2)\).

Example 3.1: _Consider the water distribution network shown in Figure 1. The network is represented as \(G=(V,E,J)=(\{0,1,2,3\},\{(0,2),(0,3),(1,3)\},\{0,1\})\), where each directed edge \((i,j)\in E\) indicates the water flow from node \(i\) to node \(j\). Each edge \((i,j)\in E\) has a weight representing the flow time from \(i\) to \(j\). For example, the weight of the edge \((0,2)\) is 4, indicating that it takes 4 hours for the water to flow from node \(0\) to node \(2\). In the network \(G\), we deploy a set of sensors \(S=\{1,2\}\), indicated by two double circles on nodes \(1\) and \(2\). For a set of two contamination sources \(J=\{0,1\}\), we consider the following two cases in Figure 1._

_In the left subfigure of Figure 1, a contamination event is indicated by the red node corresponding to the contamination source \(j=0\). Apart from the polluted red source node 0, the gray nodes 2 and 3, receiving the water flow from the source node \(0\), are polluted if no sensor detects the contamination. Thus, the penalty function \(\beta_{0}(\infty)=3\) captures the number of polluted nodes without any sensors, given by the two gray nodes and the red node. If a sensor is placed at node 2 (indicated by the double circle), the contamination at source node 0 will be detected after 4 hours; however, the sensor deployed at node 1 cannot detect the contamination because there is no water flow from the source node \(0\) to node \(1\)._
_Therefore, we conclude \(T(\{1,2\},0)=\min_{s\in\{1,2\}}T(\{s\},0)=4\), where \(T(\{2\},0)=4\) and \(T(\{1\},0)=\infty\). The associated penalty reduction \(R_{G,W}(\{1,2\},0)=\beta_{0}(\infty)-\beta_{0}(T(\{1,2\},0))=3-2=1\) denotes that under the contamination event at node \(j=0\), one node is not polluted because of the sensors deployed at \(S\) in \(G\). In other words, the set \(S\) saves the damage to one node in \(G\) under this contamination event._

_In the right subfigure of Figure 1, we consider another contamination source at \(j=1\). Two nodes (1 and 3) can be polluted by the water flow from source \(1\). However, since a sensor is placed at source node 1, the contamination event from node \(1\) will be detected immediately. No nodes in \(G\) are polluted by the source \(j=1\) because of the sensor at node 1. Thus, we conclude that the associated penalty reduction is \(R_{G,W}(\{1,2\},1)=\beta_{1}(\infty)-\beta_{1}(T(\{1,2\},1))=2-0=2\), where \(T(\{1,2\},1)=0\)._

_Finally, for all \(j\in J\), we assume that each contamination event of the source \(j\) has the same probability \(p_{j}=\frac{1}{2}\). The expected penalty reduction is \(\mathcal{R}_{G,W}(\{1,2\})=\sum_{j\in\{0,1\}}p_{j}R_{G,W}(\{1,2\},j)=p_{0}R_{G,W}(\{1,2\},0)+p_{1}R_{G,W}(\{1,2\},1)=0.5\times 1+0.5\times 2=1.5\)._

Next, we formulate a robust variant of the outbreak detection problem with uncertain water flow velocity along each edge. The uncertainty is due to hurricane disturbances, clogged pipes, and pump failures that may affect the flow velocity along the pipes. We represent each scenario \(i\), for \(i\in[m]\), by \(g_{i}=(G,W_{i})\), where \(W_{i}\) is a vector of velocities (weights) for the edges in \(G\). Recall that \(\mathcal{R}_{G,W}:2^{V}\rightarrow\mathbb{R}\) is a submodular function. In our experiments, we let \(f_{i}=\mathcal{R}_{g_{i}}\) for all \(i\in[m]\). Furthermore, the set of constraints \(\mathcal{X}\) is given by \(\{x:\sum_{i\in V}a_{i}x_{i}\leq b\}\). Given a scenario \(g_{i}\), the goal of the submodular maximization problem (1) is to find an optimal solution that provides the maximal value of the expected penalty reduction for this scenario under the constraint \(\sum_{i\in V}a_{i}x_{i}\leq b\). In other words, for a scenario \(g_{i}\) and the budget constraint, Problem (1) aims to place a set of sensors that avoids the largest expected amount of damage (i.e., saves the largest expected number of nodes) caused by contamination events in \(J\). In contrast, Problem (2) aims to find an optimal sensor placement that protects the largest expected number of nodes in the worst case over the \(m\) scenarios. On the other hand, in Problem (3), given a scenario \(g_{i}\) and a set of sensors \(S\), we consider the ratio of the expected number of nodes saved by the sensors in \(S\) to the maximal expected number of nodes protected by an optimal placement under scenario \(i\), where the latter value is obtained by solving the \(i\)-th submodular maximization problem (1). In the following subsection, we evaluate our proposed methods shown in Section 2 on real water distribution networks.
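Before turning to the real networks, the numbers in Example 3.1 can be reproduced with a few lines of Python. The sketch below is our own illustration of the model; in particular, the convention that \(\beta_{j}(t)\) counts the nodes polluted strictly before time \(t\) is inferred from the example's arithmetic rather than stated in the model.

```python
import math

# Directed flow times of the network in Figure 1: W = (4, 1, 2) on
# edges (0,2), (0,3), (1,3).
flow_time = {(0, 2): 4, (0, 3): 1, (1, 3): 2}

def travel(j, v):
    """Earliest pollution time of node v for source j (single-hop network)."""
    return 0 if v == j else flow_time.get((j, v), math.inf)

def detection_time(S, j):
    """T(S, j) = min over sensors s in S of the flow time from j to s."""
    return min(travel(j, s) for s in S)

def beta(j, t, nodes=(0, 1, 2, 3)):
    """Damage after time t: number of nodes polluted strictly before t;
    beta_j(inf) counts every node reachable from j, including j itself."""
    return sum(travel(j, v) < t for v in nodes)

S, sources, p = {1, 2}, (0, 1), {0: 0.5, 1: 0.5}
R = sum(p[j] * (beta(j, math.inf) - beta(j, detection_time(S, j)))
        for j in sources)
print(R)  # 1.5, matching Example 3.1
```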
### Computational Results

In this subsection, we report our computational experience with the proposed methods. We first introduce the three water distribution networks used in our computational study. We consider two networks, EN2 and EN3, from EPANET, developed by the United States Environmental Protection Agency. Furthermore, we consider a network, BWSN1, from the Battle of the Water Sensor Networks of Ostfeld et al. (2008). Note that in a water distribution network, a facility, such as a junction, reservoir source, or tank, is represented by a node. A pipe is represented by a node pair \((i,j)\) denoting the direction of an edge from \(i\) to \(j\) (see Node1 and Node2 of PIPES in [http://epanet.de/js/index.html.en](http://epanet.de/js/index.html.en)). The network EN2 includes 36 nodes and 41 edges, EN3 includes 97 nodes and 117 edges, and BWSN1 includes 129 nodes and 168 edges. Based on the three networks, we use the following parameters for Problem (4). We set the number of nodes to \(|V|\in\{36,97,129\}\), where there are \(|J|\in\{25,50\}\) contamination sources for EN3 and BWSN1, and \(|J|\in\{12,25\}\) for the small-size network EN2. The probability of a contamination event at a contamination source \(j\in J\) is \(p_{j}=\frac{1}{|J|}\). We generate \(m\in\{50,100\}\) scenarios for each network, where the weights \(W_{i}\) of the directed edges for a scenario \(g_{i}\) are chosen from a discrete uniform distribution \(\mathcal{U}(1,10)\) for all \(i\in[m]\). We consider a budget \(b\in\{30,50\}\), where the cost of a sensor \(a_{i}\in A\) is drawn from a discrete uniform distribution \(\mathcal{U}(5,10)\) for all \(i\in V\). Note that given a fixed budget \(b\), a different cost set \(A\) may affect the number of sensors deployed in a network. For each setting \((|V|,b,m,|J|)\), we generate three instances and report the average statistics. All algorithms are implemented in Python with the Gurobi 8.1.1 Optimizer. We execute all experiments on a laptop with an Intel Core i5-10210U 1.60 GHz CPU, 8 GB DRAM, x64 processor, and the Windows 10 operating system. The time limit for each instance is set to 1800 seconds. We consider \(\epsilon=0\) and use the default integrality gap (MIPGap) of Gurobi, where a relative MIPGap of \(10^{-4}\) is considered optimal.

First, we consider Problem (2), which is Problem (4) under the case \(\boldsymbol{\alpha}=\mathbf{1}\). Algorithm 1 is used for solving the problem. The Baseline-RSM (2) column provides baseline computational results for solving Problem (2) with Algorithm 1 using the parameter \(reduce=\) False and with \(StopPt=0\) for the associated FindSetRoutine (Algorithm 2). That is, with \(reduce=\) False, Baseline-RSM (2) considers all submodular inequalities of \(F_{i}\), \(i\in[m]\), for \(\mathcal{C}\), instead of considering the set \(\mathcal{F}\) with fewer submodular inequalities, as shown in Proposition 2.1. Also, in Baseline-RSM (2), since the parameter \(StopPt\) of the associated FindSetRoutine is zero, given an incumbent solution \(\bar{\mathbf{x}}\in\mathcal{X}\), the algorithm does not utilize Proposition 2.3, which allows us to find a better set than \(\bar{X}\) for generating the corresponding submodular inequality. We consider two other methods, Poly\(\mathcal{F}\)-Algo 1 and Poly\(\mathcal{F}^{\prime}\)-Algo 1, shown in the other two columns of Table 1, to evaluate the computational benefits of Propositions 2.1 and 2.3 described in Section 2, respectively. In Poly\(\mathcal{F}\)-Algo 1, we consider Algorithm 1 with \(reduce=\) True and the parameter \(StopPt=0\) in the associated FindSetRoutine. That is, RMP (8) uses the set \(\mathcal{F}\) for deriving the cuts in \(\mathcal{C}\). In Poly\(\mathcal{F}^{\prime}\)-Algo 1, Problem (8) considers the set \(\mathcal{F}^{\prime}\) shown in Proposition 2.3 for deriving the cuts in \(\mathcal{C}\).
Note that for the set \(\mathcal{F}^{\prime}\), we let \(reduce=\) True and \(StopPt=2\) in the FindSetRoutine of Algorithm 1. We summarize our computational results in Table 1. The Time-s column denotes the average computational time over three instances (in seconds). Note that the number in parentheses under the Time-s column denotes the number of instances that cannot be solved within the time limit of 1800 seconds. The average gap of the unsolved instances is reported in the Gap-% column, and we use a dash symbol to indicate when all three instances of a setting are solved optimally. The Iteration-# column records the number of iterations needed to solve RMP (8). The Cut-# column reports the number of submodular inequalities added to the set \(\mathcal{C}\) of RMP (8). From the Time-s columns, we observe that Poly\(\mathcal{F}\)-Algo 1 is faster than the baseline, which demonstrates the effectiveness of Proposition 2.1. We note that Baseline-RSM (2) adds more inequalities to RMP (8), leading to many unsolved instances for the EN3 and BWSN1 instances. Comparing Poly\(\mathcal{F}\)-Algo 1 and Poly\(\mathcal{F}^{\prime}\)-Algo 1, we observe that for most instances, Poly\(\mathcal{F}^{\prime}\)-Algo 1 outperforms Poly\(\mathcal{F}\)-Algo 1 in both computational time and the number of added inequalities. This highlights the effectiveness of Proposition 2.3 on these instances.

Next, we consider RSM (3), which is RSM (4) under the case \(\boldsymbol{\alpha}=(f_{1}(\mathbf{x}_{1}^{*}),f_{2}(\mathbf{x}_{2}^{*}),\ldots,f_{m}(\mathbf{x}_{m}^{*}))\). From the preceding experiments, we conclude that using the set \(\mathcal{F}^{\prime}\) in deriving the submodular inequalities is the best strategy for solving RSM (4). Therefore, in Algorithm 3, we set \(reduce=\) True and \(StopPt=2\) for the FindSetRoutine in Algorithm 1. In these experiments, we aim to highlight the benefits of Algorithm 3. That is, we demonstrate different experiments on the **If**-condition of lines 8-11 of Algorithm 3. Baseline-RSM (3) considers the basic method without any of the computational enhancements described in Section 2.3; that is, it runs Algorithm 3 without reusing the submodular inequalities that are generated while solving the \(m\) submodular maximization problems exactly to compute \(\boldsymbol{\alpha}\). For Baseline-RSM (3), we set \(\bar{\mathcal{C}}=\emptyset\) in line 11 and set \(t=\infty\) in Algorithm 3. Here, the parameter \(t=\infty\) indicates that Algorithm 3 has to exactly compute \(\boldsymbol{\alpha}=(f_{1}(\mathbf{x}_{1}^{*}),f_{2}(\mathbf{x}_{2}^{*}),\ldots,f_{m}(\mathbf{x}_{m}^{*}))\) before solving RSM (4). Poly\(\mathcal{F}^{\prime}\)-Algo 3 with a finite \(t\) demonstrates the effectiveness of Proposition 2.5. That is, without completely solving the \(m\) submodular maximization problems, we aim to find a near-optimal solution with a provable optimality gap based on Proposition 2.5. For Poly\(\mathcal{F}^{\prime}\)-Algo 3, we set \(t=15\)s for \(m=100\) and \(t=30\)s for \(m=50\). Note that because the time limit is \(1800\) seconds, if the \(m\) submodular maximization problems take \(t\times m\) seconds, then the time limit of Algorithm 1 embedded in Algorithm 3 is \(1800-t\times m\) seconds. Table 2 provides the computational results of the methods introduced in the previous paragraph. For the instances that can be solved by both Baseline-RSM (3) and Poly\(\mathcal{F}^{\prime}\)-Algo 3 with a finite \(t\), we observe that the setting \(\bar{\mathcal{C}}=\emptyset\) slows down the performance of Algorithm 3.
This shows the effectiveness of line 11 in Algorithm 3. We now consider the unsolved instances (N/A) of Table 2 and observe that Poly\(\mathcal{F}^{\prime}\)-Algo 3 with a finite \(t\) significantly outperforms Baseline-RSM (3). We note that in Baseline-RSM (3), there are many unsolved instances for EN3 and BWSN1, and the unsolved instances cannot provide a gap, as indicated by the N/A symbol in Table 2. However, Poly\(\mathcal{F}^{\prime}\)-Algo 3 with a finite \(t\) overcomes this issue and provides a small optimality gap for the instances unsolved within the time limit. Given that the security of water distribution infrastructure is critical, a high-quality sensor deployment plan with a certifiable performance guarantee that is robust to disruptions, as provided by Algorithm 3, is highly desirable.

## 4 Conclusion

We investigate mixed-integer programming methods and a polyhedral study for a class of robust submodular optimization problems. We start by introducing a fundamental robust submodular optimization problem, where the goal is to deal with the worst case of a set of possible submodular functions. Several propositions, including a facet condition on the submodular inequalities of the associated polyhedral structure, allow us to devise a delayed constraint generation method to solve the problem optimally. We also consider an extension of the fundamental robust submodular optimization problem that generalizes several robust submodular maximization subproblems of interest. For cases in which the submodular maximization subproblems cannot be solved exactly within a time limit, we provide a method for finding a feasible solution with a certifiable optimality gap. Our computational experiments on a sensor placement optimization problem for water distribution networks with real-world datasets demonstrate the effectiveness of the proposed methods.

###### Acknowledgements.

Simge Kucukyavuz is supported, in part, by ONR Grant N00014-22-1-2602. Hao-Hsiang Wu is supported, in part, by NSTC Taiwan 111-2221-E-A49-079 and 109-2222-E-009-005-MY2. Hsin-Yi Huang is supported, in part, by NSTC Taiwan 109-2222-E-009-005-MY2.
2302.06833
VQ3D: Learning a 3D-Aware Generative Model on ImageNet
Recent work has shown the possibility of training generative models of 3D content from 2D image collections on small datasets corresponding to a single object class, such as human faces, animal faces, or cars. However, these models struggle on larger, more complex datasets. To model diverse and unconstrained image collections such as ImageNet, we present VQ3D, which introduces a NeRF-based decoder into a two-stage vector-quantized autoencoder. Our Stage 1 allows for the reconstruction of an input image and the ability to change the camera position around the image, and our Stage 2 allows for the generation of new 3D scenes. VQ3D is capable of generating and reconstructing 3D-aware images from the 1000-class ImageNet dataset of 1.2 million training images. We achieve an ImageNet generation FID score of 16.8, compared to 69.8 for the next best baseline method.
Kyle Sargent, Jing Yu Koh, Han Zhang, Huiwen Chang, Charles Herrmann, Pratul Srinivasan, Jiajun Wu, Deqing Sun
2023-02-14T05:15:16Z
http://arxiv.org/abs/2302.06833v1
# VQ3D: Learning a 3D-Aware Generative Model on ImageNet

###### Abstract

Recent work has shown the possibility of training generative models of 3D content from 2D image collections on small datasets corresponding to a single object class, such as human faces, animal faces, or cars. However, these models struggle on larger, more complex datasets. To model diverse and unconstrained image collections such as ImageNet, we present VQ3D, which introduces a NeRF-based decoder into a two-stage vector-quantized autoencoder. Our Stage 1 allows for the reconstruction of an input image and the ability to change the camera position around the image, and our Stage 2 allows for the generation of new 3D scenes. VQ3D is capable of generating and reconstructing 3D-aware images from the 1000-class ImageNet dataset of 1.2 million training images. We achieve an ImageNet generation FID score of 16.8, compared to 69.8 for the next best baseline method. For video results, please see the project webpage.

## 1 Introduction

3D assets are an important part of popular media formats such as video games, movies, and computer graphics. Given that 3D content can be time-consuming to create by hand, leveraging machine learning techniques to automatically generate 3D content is an active area of research. While machine learning techniques benefit from training on large amounts of data, existing 3D datasets have noisy labels and are orders of magnitude smaller than those of 2D images. To get around the limitations of 3D datasets, recent work has shown the possibility of learning generative models of 3D scenes from images with limited or no 3D labels [22, 5, 15, 23]. These GAN-based approaches demonstrate the promise of learning 3D representations from 2D data. However, these methods require fine-tuning of prior pose distributions for individual models and datasets [22, 15, 4, 23], or the use of ground truth pose data [5], and thereby typically operate on single-class datasets, e.g., human faces [16], animal faces [6], or cars [41]. In contrast, many 2D generative models, such as text-to-image generation models [24, 29, 44] and two-stage image models [12, 43], show impressive performance on very large and diverse image collections. The most recent state-of-the-art 2D models leverage diffusion or vector quantization rather than GANs to scale well to large datasets. This motivates us to pursue vector quantization as an alternative to GANs for learning 3D generative models.

In this paper, we propose VQ3D, a strong 3D-aware generative model that can be learnt from large and diverse 2D image collections, such as ImageNet [7]. To encourage stability and higher reconstruction quality, we forgo GAN-based [14] approaches [22, 5, 15, 23, 4] in favor of the 2-stage autoencoder formulation of VQGAN [12] and ViT-VQGAN [43]. But, different from these 2D autoencoder models, we learn 3D geometry by introducing a conditional NeRF decoder and a modified triplane representation which can handle unbounded scenes, and by training with a novel loss formulation which encourages high-quality geometry and novel views.

Figure 1: Fully generated 3D-aware images from our Stage 2 model on ImageNet. Please see supplemental materials for video results.

Our formulation has three advantages, ensuring that it scales well to ImageNet. First, separating the training into two stages (reconstruction and generation) enables us to directly supervise the first stage training via a novel depth loss, using pseudo-GT depth.
This is possible because in the first stage, as our conditional NeRF decoder learns to reconstruct the input, it also predicts the depth of each image. Second, we do not require hand-tuning of pose sampling distributions or ground-truth pose data, which are required by previous GAN-based approaches [22, 23, 4, 5, 15]. Our training objective simply enforces reconstruction from a canonical camera pose, and plausible novel views within a neighborhood of the canonical pose. While this objective regrettably rules out very large camera motion, it also eliminates the need for excessive tuning of the pose distribution for each dataset, and allows our model to work out-of-the-box for multiple object categories. Thus, our model uses identical pose sampling hyperparameters for each dataset. Finally, our two-stage formulation is simpler and more reliable than existing techniques for training 3D-aware generative models. Previous work [2, 30] has identified difficulties in scaling up GANs to large datasets (such as ImageNet). We verify that baseline 3D-aware GAN methods [23, 4, 5, 15], while working well on single-object datasets, fail to learn good generative models for ImageNet. Our formulation does not use progressive growing [4, 5], a neural upsampler [23, 4, 5], pose conditioning [5, 36], or patch-wise discriminators [31, 36], but still learns meaningful 3D representations. Compared to the best existing 3D-aware baseline, VQ3D attains a 75.9% relative improvement in FID scores for 3D-aware ImageNet images (from 69.8 for StyleNeRF [15] to 16.8 for VQ3D). In summary, we make the following three contributions:

* We present a novel 3D-aware generative model that can be trained on large and diverse 2D image collections. Our model does not require tuning pose hyperparameters for each dataset or ground truth poses, and can leverage a pseudo-depth estimator during training.
* We obtain state-of-the-art generation results on ImageNet and competitive results on CompCars, demonstrating that our 3D-aware generative model is capable of fitting a dataset at the scale and diversity of ImageNet. Our model significantly outperforms the next best baseline.
* Stage 1 of our model enables 3D-aware image editing and manipulation. One forward pass through our network converts a single RGB image into a manipulable NeRF, without the expensive inversion optimization used in prior work [4, 5].

## 2 Related Work

**3D-aware generative models.** Several recent papers tackle the task of modeling 3D-aware generation, primarily through the GAN framework [14]. HoloGAN [22] learns perspective projection and rendering of 3D features, and applies 3D rigid-body transforms to generate new images from different poses. More recently, several papers use NeRF [21] as the 3D backbone [40, 15, 23, 4], which allows the 3D scene to be defined as a 3D volume parameterized by an MLP. EG3D [5] proposes a hybrid triplane representation which scales well with resolution and enables greater generation detail. Disentangled3D [37] learns a 3D-aware GAN from monocular images with disentangled geometry, appearance, and pose. Pix2NeRF [3] proposes a method for unsupervised learning of neural representations with a shared pose prior, which enables rendering of novel views from a single input image. GRAF [31] and EpiGRAF [36] train 3D GANs via patch-wise representations to save on the expense of volume rendering. GRAM [8] proposes learning a set of implicit surfaces, shared for the training object category.
At inference time, images are generated by accumulating the radiance along each ray using ray-surface intersections as samples.

**Conditional NeRF and other 3D representations.** Recent work has focused on the appropriate way to condition NeRF to achieve maximum expressiveness. GIRAFFE [23] demonstrated success with the "conditioning-by-concatenation" approach [35], in which the scene's latent codes are fed into the first layer of the NeRF MLP and not thereafter. Other work such as pi-GAN [4] transforms the latent code into a vector of frequencies and phase shifts for each layer of a SIREN [34]. Other work has used hypernetworks [35, 33] to parameterize 3D representations, and MetaSDF [33] showed that many forms of conditioning are special cases of the hypernetwork approach. Our model can be seen as a conditional NeRF. We show that our novel decoder architecture, consisting of a ViT-L [10] and a contracted triplane representation, is powerful enough to encode and reconstruct all of ImageNet. Given a single image, we show that in a single forward pass and without any optimization, our model can create a NeRF of an input RGB image with reasonable reconstruction at the main view and plausible novel views.

**Quantization models.** Image quantization is a powerful paradigm used in recent state-of-the-art generative models. In this setup, an image is encoded into a discrete latent representation [38], which improves generation quality when paired with an autoregressive generative prior (most often a transformer [39]). This has led to impressive results in image generation [43, 20, 13], text-to-image generation [44, 9, 25], and other tasks. Recent image quantization models improve reconstruction quality by introducing adversarial losses [13], using vision transformers (ViT) [10, 43] as both encoder and decoder, representing discrete codes as a stacked map [20], and more. Such quantization models typically use powerful CNN [12] or ViT [43] encoders and decoders, which show good performance in reconstructing large image datasets; in this paper, we show that our NeRF-based decoder can also work well in the quantization framework. It has the capacity to encode and reconstruct a large and diverse dataset such as ImageNet, and it also learns a discrete latent codebook that can be used to train a powerful, fully generative Stage 2 model.

**Single-view 3D reconstruction and novel view synthesis.** Various approaches for 3D reconstruction or novel view synthesis in the context of generative or auto-encoder models have been proposed. Kato et al. [17] propose an adversarial training scheme using two discriminators for single-view 3D reconstruction. Their scheme, in which the main discriminator critiques real and reconstructed views while an auxiliary discriminator distinguishes between the reconstructed input view and predicted novel views, inspires our use of two discriminators for similar reasons. However, their model cannot sample totally new scenes. More recently, uORF [42] uses NeRFs as 3D object representations to enable 3D scene decomposition. uORF represents a 3D scene as a composition of an object radiance field for each object and a background radiance field for the remainder of the scene. This enables re-rendering and editing of 3D scenes from an input image. However, uORF also cannot sample new scenes, and moreover requires multi-view training datasets.
In the domain of novel scene generation, Generative Query Networks (GQN) [11] use CNNs to represent and generate scenes. GQNs can imagine and re-render scenes from novel viewpoints but, due to their use of CNNs, do not explicitly embed 3D geometry or have any guarantees of scene consistency. NeRF-VAE [19] proposes an improved representation using a VAE which models multiple scenes. This enables efficient inference-time sampling of novel scenes, as well as re-rendering from multiple viewpoints. Unlike GQNs, which have no 3D prior, NeRF-VAE uses NeRF to achieve 3D consistency. However, it relies on multi-view training data. LOLNeRF [28] learns a generative model of 3D face images, but it requires a pretrained keypoint estimator, and its auto-decoder formulation requires an optimization step for examples outside its training set. By contrast, our method can be applied to single RGB images and requires only 2D training data and an off-the-shelf depth estimator for training.

Figure 2: Diagram of our model architecture.

## 3 Model

### Overview of VQ3D

Our model is a vector-quantized autoencoder [12, 43], which is trained in two stages. Stage 1 of our model consists of an encoder and decoder. The encoder encodes RGB images into a learned latent codebook, and the decoder reconstructs them. A diagram of the inputs, outputs, and architecture of the first stage is given at the top of Figure 2. The encoder of our first stage is a ViT similar to VIM [43], but the decoder is a conditional NeRF. The first stage is trained end-to-end by encoding and reconstructing RGB training images while minimizing reconstruction and adversarial losses. Because the decoder is a NeRF, we are able to supervise the NeRF geometry with an additional training loss using pseudo-GT disparity. We also render novel views of decoded images and critique them with an additional adversarial loss. A diagram of the key losses used in Stage 1 training is shown in Figure 3. After training, the first stage can be used to encode unseen single RGB images and then reconstruct them in 3D, which enables novel view synthesis, image editing, and manipulation. Stage 2 is a generative autoregressive transformer which predicts sequences of latent tokens. A diagram of the inputs, outputs, and architecture is shown at the bottom of Figure 2. The architectural and training details are generally the same as in [43]. We train it on the sequences of latent codes produced by our Stage 1 encoder. After training, the autoregressive transformer can be used to generate totally new 3D images by first sampling a sequence of latent tokens and then applying our NeRF-based decoder. Importantly, our Stage 2 model inherits the properties optimized in Stage 1, so the fully generated images have high-quality geometry and plausible novel views.

### Training

We now provide additional training details for the two stages of our model.

**Stage 1.** The goal of the first stage is to learn a model which can compress image pixels into a sequence of discrete indices corresponding to a learnt latent codebook [12, 43]. Since we desire our model to be 3D-aware, we impose several additional criteria:

1. **Good reconstruction from a canonical view.** On ImageNet, ground truth camera extrinsics are unknown and probably not even well-defined due to the presence of deformable and ambiguous object categories and scenes without salient objects.
Therefore, we simply fix a single 'canonical pose' for reconstruction, and our criterion is that our conditional NeRF-based autoencoder should successfully reconstruct the dataset from this view.
2. **Reasonable novel views.** We expect that images decoded at novel views within a specified range of the canonical view will have similar quality to images decoded at the canonical view.
3. **Correct geometry.** The geometry of the scene as represented by the NeRF should correspond to the unknown ground truth geometry of the RGB image up to scale and shift.

Figure 3: Diagram of the key losses in Stage 1 optimization.

We enforce these criteria by introducing several auxiliary models and losses, summarized in Figure 3. To enforce (1) good reconstruction at the canonical view, we train with a combination of the MSE, perceptual, and logit-laplace losses following [43], the combination of which we term \(\mathcal{L}_{\text{rec}}\). To enforce (2) reasonable novel views, we leverage a main and an auxiliary discriminator, similar to [17]. The first discriminator distinguishes between real and reconstructed images at the canonical viewpoint, while the second distinguishes between reconstructed images at the canonical viewpoint and novel views. In this way, the model cannot allocate all its capacity to reconstructing images at the canonical viewpoint without also having high-quality novel views. As noted by [17], the generator may slightly corrupt the main view in order to collaborate with the novel-view branch to fool the discriminator; thus, we add a stop-grad between the main view and the novel view discriminator. Unlike [4, 5, 15, 23], we find it unnecessary to tune a separate distribution of novel views for each dataset, and instead sample novel views uniformly in a disc tangent to a sphere at the canonical camera pose. We use the non-saturating GAN objective \(\mathcal{L}_{\text{gan}}\) [14] for both discriminators. We additionally concatenate the predicted depth as input to the auxiliary discriminator to ensure the distribution of depths does not change depending on the camera viewpoint. To enforce (3) correct geometry, we supervise the NeRF depth with pseudo-GT geometry at the main viewpoint. We employ the pretrained depth prediction transformer model DPT [26], which produces pseudo-GT disparity estimates for the images in our training datasets. Thus, our model is limited to some extent by the quality of the depth estimator chosen. [27] proposed a shift- and scale-invariant \(l_{2}\) loss for training monocular depth estimation, in which the shift and scale are determined by solving a closed-form least squares alignment with the GT depth. We propose a novel formulation of this shift- and scale-invariant loss adapted to the NeRF setting, in which we supervise the weight of every sample along each ray rather than the accumulated depth. For a given image, let \(i\in\{1...N\}\) and \(k\in\{1...L\}\) be indices which range over the image plane and ray samples respectively, let \(D_{ik}\) be the pointwise disparities of the NeRF sample locations, let \(W_{ik}\) be the corresponding NeRF weights from volumetric rendering [21], and let \(d_{i}\) be the pseudo-GT disparity from DPT. Then we define \(s^{*},t^{*}\) to be the closed-form solution of the weighted least squares problem \[s^{*},t^{*}=\arg\min_{s,t}\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{L}W_{ik}(sD_{ik}+t-d_{i})^{2} \tag{1}\] and set our depth loss to be the weighted scale- and shift-invariant loss \[\mathcal{L}_{\text{depth}}=\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{L}W_{ik}(s^{*}D_{ik}+t^{*}-d_{i})^{2} \tag{2}\] Assuming the weights sum to 1 along each ray, this loss is minimized when the NeRF allocates 0 weight to all but one sample location along each ray, and the expectation of the disparity with respect to the weights equals the GT disparity map up to a scale and shift. In this way it functions similarly to the distortion loss proposed in [1] by penalizing weight distributions which are too spread out, but it also encourages the weights to be concentrated near the correct geometry.
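As a concreteness check, the following NumPy sketch (our own illustration, not the authors' code) solves the \(2\times 2\) normal equations of the weighted least-squares problem in Eq. (1) for \(s^{*},t^{*}\) and evaluates the loss of Eq. (2); the one-hot toy at the end illustrates the minimizer discussed above.

```python
import numpy as np

def weighted_depth_loss(D, W, d):
    """D: (N, L) disparities of the NeRF sample locations along each ray.
       W: (N, L) volumetric-rendering weights.
       d: (N,)  pseudo-GT disparity per ray.
       Returns (loss of Eq. (2), s*, t*) via the 2x2 normal equations:
       s*Sum(W D^2) + t*Sum(W D) = Sum(W D d);  s*Sum(W D) + t*Sum(W) = Sum(W d)."""
    dN = d[:, None]  # broadcast the per-ray GT over the L samples
    A = np.array([[np.sum(W * D * D), np.sum(W * D)],
                  [np.sum(W * D),     np.sum(W)]])
    b = np.array([np.sum(W * D * dN), np.sum(W * dN)])
    s, t = np.linalg.solve(A, b)
    loss = np.mean(np.sum(W * (s * D + t - dN) ** 2, axis=1))
    return loss, s, t

# One-hot weights with d an affine function of the selected disparity drive
# the loss to (numerically) zero, matching the concentration argument above.
rng = np.random.default_rng(0)
D = rng.random((8, 16))
W = np.zeros_like(D)
W[np.arange(8), rng.integers(0, 16, size=8)] = 1.0
d = 2.0 * D[W.astype(bool)] + 0.5
print(weighted_depth_loss(D, W, d)[0])  # ~0
```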
Importantly, this formulation still allows for more than one surface along each ray and thus for occlusion and disocclusion, because the penalty is applied to the volumetric rendering weights and not the predicted density. We find this depth loss formulation to be critical for good performance. In particular, supervising the accumulated disparity rather than the pointwise disparities leads to poor performance, and we provide an ablation of this and other design choices in the supplementary material. We additionally introduce two penalties on the scale determined by this alignment: \[\mathcal{L}_{\text{scale}}=\lambda_{s1}\max(0,-s^{*})+\lambda_{s2}\max(s^{*}-1,0) \tag{3}\] Here, \(\lambda_{s1}\) is the weight of a small penalty to prevent the sign of the disparity scale from flipping negative, which, unlike in [27], we found necessary. \(\lambda_{s2}\) weights a penalty preventing the disparity maps from becoming too flat, which encourages perceptually pleasing novel views. We additionally include the same vector-quantization loss \(\mathcal{L}_{\text{vq}}\) as [43], and the distortion and interlevel losses of MipNeRF360 [1], given by \(\mathcal{L}_{\text{nerf}}\). The loss for our autoencoder is thus: \[\mathcal{L}=\mathcal{L}_{\text{rec}}+\mathcal{L}_{\text{gan}}+\mathcal{L}_{\text{depth}}+\mathcal{L}_{\text{scale}}+\mathcal{L}_{\text{vq}}+\mathcal{L}_{\text{nerf}} \tag{4}\]

**Stage 2.** The goal of Stage 2 is to learn an autoregressive model over the discrete encodings produced by the Stage 1 encoder, so that completely new 3D scenes can be generated. Our Stage 2 transformer and training details follow [43]. We verify experimentally that our fully generative Stage 2 model inherits the properties optimized in Stage 1; namely, 3D-consistent novel views and high-quality geometry. We also apply top-\(k\) and top-\(p\) filtering similar to [13].

### Architecture

A full architecture diagram is shown in Figure 2. Similar to [43], we leverage the powerful vision transformer [10] architecture in both the encoder and decoder. Different from [43], which is trained on 2D images, we utilize a novel decoder with a 3D inductive bias to facilitate the learning of 3D representations. We now give an overview of the individual components of our architecture.

**Encoder and triplane generator.** For the encoder, we use a ViT-S model. For the decoder, we use a ViT-L model to decode the latent codes into 3 triplanes of size 512x512 with feature dimension 32. We find that the triplane construction stage of the decoder benefits from the increased capacity of the ViT-L model.
**Contracted triplane representation & NeRF MLP.** We must reconstruct and generate potentially unbounded ImageNet scenes, but we are also motivated to leverage the powerful triplane representation [5]. Therefore, we propose an adapted triplane representation borrowing from both [5] and [1]. We apply the contraction function of MipNeRF360 to bound coordinates within the triplanes before looking up their values, and use the linear-in-disparity sampling scheme with separate proposal and NeRF MLPs. The MLPs convert interpolated triplane features to density and, in the case of the NeRF MLP, RGB color. Similar to [5], our MLPs are lightweight, with 2 layers and 32 hidden units each; unlike [5], we directly render RGB color rather than using a neural upsampler, as we found neural upsampling to be a source of myriad and confusing artifacts not fixable via dual discriminators [5] or consistency losses [15].

**Autoregressive transformer.** We train a transformer [39] to autoregressively predict the next image token. We follow the hyperparameters of the base model of VIM [43]. For ImageNet, we train a conditional model, and for other datasets we train unconditional generative models.

## 4 Experiments

### Main results

We study the performance of our method and the baseline methods on ImageNet. The ImageNet dataset [7] is a well-known classification benchmark which consists of 1.28M images of 1000 object classes. It is a standard benchmark for 2D image generation, for both conditional and unconditional generation. We compare against pi-GAN [4], GIRAFFE [23], EG3D [5], and StyleNeRF [15]. We re-implemented pi-GAN and GIRAFFE using our internal framework, and ran the provided code for EG3D and StyleNeRF. Since ImageNet does not have GT poses and pseudo-GT poses are not possible to compute, we disable generator and discriminator pose conditioning for EG3D and sample from a pre-defined pose distribution. We note that EG3D exhibits significant inter-run variance in ImageNet FID even for the same config, and provide more details in the supplementary material. Our main results for generation on ImageNet compared against the benchmarks are given in Table 1. Notably, our FID score on ImageNet is the best by a wide margin. We show generated examples from our method and the benchmarks in Figure 4 and note that our method generates superior samples. In addition to generating high-quality scenes, Stage 1 of our method can also be used for single-view 3D reconstruction and manipulation. Figure 5 shows single RGB images reconstructed by our Stage 1 with estimated geometry. Our network performs well at reconstruction and needs only a single forward pass to compute a NeRF for an input image, unlike prior work [4, 5] which requires an inversion optimization. Moreover, the reconstructed NeRFs can be manipulated, for instance to render novel views. We show examples of novel views in Figure 6. For our main results on ImageNet, we train for the longest possible time and use the best top-\(p\) and top-\(k\) sampling parameters. We conduct additional analysis experiments on the learning of geometry and model ablation, for which we use consistent Stage 1 and Stage 2 training steps across each study and do not use top-\(p\) or top-\(k\) sampling unless ablating it directly, as in Table 4. First, we study the learning of good geometry, both for our model and the baseline methods. One potential concern may be that the use of pseudo-GT depth limits the comparability of our technique with the baseline GAN methods.
We address this concern by analyzing both the FID score and the depth accuracy metric used in [5, 32]. This metric is defined as the mean- and variance-normalized MSE between the NeRF depth and the predicted depth of the generated image. Table 2 gives the results for generative models with and without depth losses. Note that EG3D's FID without depth loss is different from Table 1 due to the significant inter-run variance of EG3D's performance. For the GAN methods, we find that our pointwise disparity loss works poorly, but the original scale- and shift-invariant MSE loss from [27] improves geometry. For our method, we show the Stage 2 performance with and without our novel pointwise weighted depth loss. While performance on the depth accuracy metric can improve when various depth losses are incorporated into training, the effect on FID is negligible. In this way we see that incorporating pseudo-GT depth is unlikely to meaningfully improve the FID for the baseline methods without substantial changes. We were unable to design a depth loss which prevented flat depths for StyleNeRF.

\begin{table} \begin{tabular}{l c} \hline \hline **Generation** & **FID**\(\downarrow\) \\ \hline pi-GAN [4] & 97.8 \\ GIRAFFE [23] & 132.0 \\ StyleNeRF [15] & 69.8 \\ EG3D [5] & 82.2 \\ \hline \hline VQ3D (Ours) & **16.8** \\ \hline \hline \end{tabular} \end{table}

Table 1: FID scores of 3D generative models on ImageNet. We set a new state of the art on ImageNet with a more than fourfold improvement over the next best baseline.

Better geometry does not imply better FID. Additionally, learning geometry without a depth loss may be unreliable. For example, StyleNeRF [15] found learning of geometry was unreliable without training tricks such as progressive growing. During our ImageNet experiments, we also observed that StyleNeRF is sensitive to hyperparameters and can learn to produce flat depths. EG3D [5] showed that removing GT poses as input to the discriminator is enough to cause the geometry to degenerate to a flat plane.

Figure 4: Generated samples and disparity from models trained on ImageNet. Our model generates high-quality images and geometry.

Figure 5: Reconstructions and estimated disparity on single images by our conditional NeRF-based autoencoder. Though our model is trained on ImageNet and achieves comparable performance on unseen ImageNet images, we show OpenImages results for licensing reasons.

We conduct ablations on our Stage 1 in Table 3, starting from our baseline architecture (row 1). Using a CNN encoder and decoder rather than ViT (row 2) is unstable and leads to divergence. Eliminating the GAN loss (row 3) or the depth scale loss (row 4) leads to a higher learned disparity scale, causing perceptually flat novel views. Removing the GAN loss (row 3) also leads to artifacts in inpainting disoccluded pixels. Eliminating the NeRF loss (row 5) leads to worse depth accuracy and a very high disparity scale. Eliminating the depth loss (row 6) improves reconstruction FID, but causes the depths to collapse to a flat plane and leads to worse depth accuracy. A fully implicit representation instead of triplanes (row 7) gives very poor FID, since we are forced to use a very small MLP due to the expense of volume rendering at 256x256. We analyze the performance of VQ3D with top-\(p\) and top-\(k\) sampling in Table 4, as [12] noted these sampling changes can give significant performance improvements analogous to truncation sampling for GANs [2]; a minimal sketch of this filtering step follows below.
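The filtering itself is standard and easy to sketch. The following NumPy snippet is a generic illustration of top-\(k\)/top-\(p\) (nucleus) filtering over next-token logits, not the paper's implementation; the function name and demo values are our own.

```python
import numpy as np

def top_k_top_p_filter(logits, k=1000, p=1.0):
    """Keep the k highest logits, then the smallest prefix of the remaining
    tokens whose probability mass reaches p; all other logits become -inf."""
    logits = logits.copy()
    if k < logits.size:
        kth = np.partition(logits, -k)[-k]   # value of the k-th largest logit
        logits[logits < kth] = -np.inf
    if p < 1.0:
        order = np.argsort(logits)[::-1]     # tokens sorted by descending logit
        probs = np.exp(logits[order] - np.max(logits))
        probs /= probs.sum()
        # keep a token if the cumulative mass *before* it is still below p
        keep = np.cumsum(probs) - probs < p
        logits[order[~keep]] = -np.inf
    return logits

# Demo: keep the top 3 tokens, then restrict to ~0.6 nucleus mass.
print(top_k_top_p_filter(np.log([0.4, 0.3, 0.2, 0.1]), k=3, p=0.6))
# -> [log 0.4, log 0.3, -inf, -inf]; sample from softmax of the result.
```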
For VQ3D, a top-\(k\) of 1000 and top-\(p\) of 1.0 give the best FID results.

### Other 3D benchmark datasets

Two other prominent 3D-aware benchmark datasets are FFHQ [16] and CompCars [41]. Due to the ethical and legal issues associated with manipulation and generative modeling of faces, we do not study FFHQ. On CompCars, our model is competitive with the state of the art (Table 5). Unlike the GAN baselines, our method does not require tuning hyperparameters of the pose distribution.

We are committed to understanding and promoting positive societal impacts. Although we do not train a generative model on FFHQ and thus avoid many serious ethical considerations, ImageNet does contain some images of humans and human faces, and our model will likely inherit biases which are present in the dataset.

**Conclusion.** We have presented VQ3D, a framework for 3D-aware representation learning and generation. VQ3D sets a new state of the art by a wide margin on the large and diverse ImageNet dataset relative to strong existing geometry-aware generative model baselines. We conduct extensive analysis and ablation, and also show that our model performs competitively on the more standard single-class benchmark CompCars. Our work shows that using large and diverse 2D image datasets to train 3D-aware generative models could be a fruitful path, thereby facilitating 3D content creation.
2306.12369
Towards Efficient MPPI Trajectory Generation with Unscented Guidance: U-MPPI Control Strategy
The classical Model Predictive Path Integral (MPPI) control framework lacks reliable safety guarantees since it relies on a risk-neutral trajectory evaluation technique, which can present challenges for safety-critical applications such as autonomous driving. Additionally, if the majority of MPPI sampled trajectories concentrate in high-cost regions, it may generate an infeasible control sequence. To address this challenge, we propose the U-MPPI control strategy, a novel methodology that can effectively manage system uncertainties while integrating a more efficient trajectory sampling strategy. The core concept is to leverage the Unscented Transform (UT) to propagate not only the mean but also the covariance of the system dynamics, going beyond the traditional MPPI method. As a result, it introduces a novel and more efficient trajectory sampling strategy, significantly enhancing state-space exploration and ultimately reducing the risk of being trapped in local minima. Furthermore, by leveraging the uncertainty information provided by UT, we incorporate a risk-sensitive cost function that explicitly accounts for risk or uncertainty throughout the trajectory evaluation process, resulting in a more resilient control system capable of handling uncertain conditions. By conducting extensive simulations of 2D aggressive autonomous navigation in both known and unknown cluttered environments, we verify the efficiency and robustness of our proposed U-MPPI control strategy compared to the baseline MPPI. We further validate the practicality of U-MPPI through real-world demonstrations in unknown cluttered environments, showcasing its superior ability to incorporate both the UT and local costmap into the optimization problem without introducing additional complexity.
Ihab S. Mohamed, Junhong Xu, Gaurav S Sukhatme, Lantao Liu
2023-06-21T16:25:38Z
http://arxiv.org/abs/2306.12369v2
# Towards Efficient MPPI Trajectory Generation with Unscented Guidance: U-MPPI Control Strategy ###### Abstract The classical Model Predictive Path Integral (MPPI) control framework lacks reliable safety guarantees since it relies on a _risk-neutral_ trajectory evaluation technique, which can present challenges for safety-critical applications such as autonomous driving. Additionally, if the majority of MPPI sampled trajectories concentrate in high-cost regions, it may generate an _infeasible_ control sequence. To address this challenge, we propose the U-MPPI control strategy, a novel methodology that can effectively manage system uncertainties while integrating a more efficient trajectory sampling strategy. The core concept is to leverage the Unscented Transform (UT) to propagate not only the mean but also the covariance of the system dynamics, going beyond the traditional MPPI method. As a result, it introduces a novel and more efficient trajectory sampling strategy, significantly enhancing state-space exploration and ultimately reducing the risk of being trapped in local minima. Furthermore, by leveraging the uncertainty information provided by UT, we incorporate a _risk-sensitive_ cost function that explicitly accounts for risk or uncertainty throughout the trajectory evaluation process, resulting in a more resilient control system capable of handling uncertain conditions. By conducting extensive simulations of 2D aggressive autonomous navigation in both known and unknown cluttered environments, we verify the efficiency and robustness of our proposed U-MPPI control strategy compared to the baseline MPPI. We further validate the practicality of U-MPPI through real-world demonstrations in unknown cluttered environments, showcasing its superior ability to incorporate both the UT and local costmap into the optimization problem without introducing additional complexity. Autonomous vehicle navigation, MPPI, unscented transform, occupancy grid map path planning. ## I Introduction Planning and controlling autonomous vehicles under uncertainty is a partially-solved yet highly challenging problem in robotics. The uncertainty or risk can arise from multiple sources, such as the vehicle's dynamics, the dynamics of other objects in the environment, the presence of unexpected obstacles, and the accuracy of the sensors used to perceive the environment. These factors can lead to unpredictable vehicle behavior, making it challenging to anticipate its movements and actions. Therefore, it is crucial for motion planning and control algorithms to account for the uncertainties of the system's states, sensing, and control actions. This eventually enables autonomous vehicles to adapt to unexpected environmental changes and make reliable decisions in real-time [1]. Model Predictive Control (MPC), also referred to as _receding-horizon_ optimal control, has been introduced as an effective solution for improving system safety and managing random disturbances and uncertainties, owing to its flexibility and ability to handle system constraints and nonlinearities while optimizing system performance. It plans a sequence of optimal control inputs over a finite time-horizon by repeatedly solving the optimal control problem using the _receding-horizon_ principle, with the first control input applied to the system. There are two main categories of MPC frameworks that can handle various forms of uncertainty: Robust MPC (RMPC) and Stochastic MPC (SMPC). 
RMPC considers worst-case scenarios of uncertainty to ensure stability and constraint satisfaction, while SMPC takes into account expected costs and probabilistic constraints (e.g., chance constraints) to optimize performance under uncertain conditions. RMPC is known to generate the safest solutions among MPC categories due to its incorporation of worst-case scenarios into the optimization problem and its emphasis on minimizing worst-case objective functions. However, it may lead to excessively conservative control actions, resulting in low system performance [2, 3]. On the other hand, despite the superior ability of SMPC to leverage the probabilistic nature of uncertainties, many SMPC approaches have performance limitations, including being tailored to specific forms of stochastic noise, requiring dynamics linearization to ensure _real-time_ implementation of the control strategy, and employing chance constraints that may not accurately reflect the severity of constraint violations or potential accidents and can be computationally demanding to evaluate, particularly for complex or high-dimensional probability distributions [4, 5].

Fig. 1: Our proposed sampling strategy, for a ground vehicle model, under the U-MPPI control strategy based on the unscented transform; such a sampling strategy propagates both the mean \(\hat{\mathbf{x}}_{k}\) (blue dots) and covariance \(\mathbf{\Sigma}_{k}\) (gray ellipses) of the state vector at each time-step \(k\); to generate \(M\) sampled trajectories, we propagate \(M_{\sigma}\) batches, where each batch contains \(n_{\sigma}\) trajectories corresponding to the \(n_{\sigma}\) sigma points, with \(M=n_{\sigma}M_{\sigma}\) and \(n_{\sigma}=2n_{x}+1\); red lines refer to the \(2n_{x}\) sigma-point trajectories surrounding the nominal trajectories (blue lines); for the robot used in our validation, \(n_{x}=3\).

Additionally, SMPC has a further limitation in that it relies on a _risk-neutral_ expectation to predict future uncertain outcomes, which may not be reliable when a tail-end probability event (i.e., a rare event with a low probability of occurring) actually happens [6]. To address the challenges posed by uncertainties, Risk-Sensitive MPC (RSMPC) approaches have gained traction in recent years, thanks to their ability to balance the benefits and drawbacks of robust and stochastic MPC methods. By integrating the concept of _risk measures_ or _risk metrics_ into the optimization problem, RSMPC can evaluate the impact of uncertainty and adjust its response according to the level of uncertainty [7, 8, 9]. The Model Predictive Path Integral (MPPI) framework, a type of SMPC method, has emerged as a promising control strategy for complex robotics systems with stochastic dynamics and uncertainties [10, 11, 12, 13]. Such a method solves the stochastic optimal control problem in a _receding-horizon_ control setting by: (i) leveraging Monte Carlo simulation to roll out real-time simulated trajectories propagated from the system dynamics, (ii) evaluating these trajectories, (iii) computing the optimal control sequence by taking a cost-weighted average over the sampled trajectories, and (iv) applying the first control input to the system while using the remaining control sequence to warm-start the optimization in the next time-step, enabling the method to solve the optimization problem effectively [11].
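For reference, the receding-horizon update implied by steps (i)-(iv) can be sketched in a few lines of NumPy; the dynamics, stage cost, and hyperparameter values below are placeholders rather than the configuration used in this paper.

```python
import numpy as np

def mppi_step(u_seq, x0, dynamics, cost, M=1024, lam=1.0, sigma=0.5):
    """One receding-horizon MPPI update: sample M perturbed control
    sequences, roll them out, and re-weight by exponentiated cost.
    dynamics(x, u) -> next state; cost(x, u) -> stage cost. Illustrative only."""
    T, nu = u_seq.shape
    eps = sigma * np.random.randn(M, T, nu)         # control perturbations
    S = np.zeros(M)                                 # trajectory costs
    for m in range(M):                              # (i) roll out trajectories
        x = x0
        for k in range(T):
            u = u_seq[k] + eps[m, k]
            S[m] += cost(x, u)                      # (ii) evaluate
            x = dynamics(x, u)
    w = np.exp(-(S - S.min()) / lam)                # (iii) cost-weighted average
    w /= w.sum()
    u_seq = u_seq + np.einsum("m,mtu->tu", w, eps)
    # (iv) apply u_seq[0]; shift the rest to warm-start the next iteration
    return u_seq[0], np.vstack([u_seq[1:], u_seq[-1:]])
```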
MPPI stands out among alternative MPC methods due to its attractive features, such as being a sampling-based and derivative-free optimization method, not relying on assumptions or approximations of objective functions and system dynamics, being effective for highly dynamic systems, and benefiting from parallel sampling and the computational capabilities of Graphics Processing Units (GPUs) to achieve optimized and _real-time_ performance [14].1 Footnote 1: It is worth noting that a CPU-based MPPI, optimized using vectorization and tensor operations, is now available. See the link for further information: [https://github.com/artofnothingness/mppic](https://github.com/artofnothingness/mppic) While MPPI has appealing characteristics, it may also pose challenges in practice. One particular concern is that, much like any sampling-based optimization algorithm, it could generate an _infeasible_ control sequence if all the resulting MPPI sampled trajectories are concentrated in a high-cost region, which may lead to violations of system constraints or a higher likelihood of being trapped in local minima [15, 16]. In [17], Tube-MPPI was proposed as a solution to alleviate the situation by incorporating an iterative Linear Quadratic Gaussian (iLQG) controller (which, unfortunately, requires the linearization of dynamics) as an ancillary controller to track the MPPI-generated nominal trajectory. Similarly, in [13], an augmented version of MPPI is employed, which includes a nonlinear \(\mathcal{L}_{1}\) adaptive controller to address model uncertainty. Recently, novel sampling techniques have been introduced to enhance the performance of MPPI, as discussed in [15] and [16]. In [15], the MPPI algorithm is enhanced by incorporating the covariance steering (CS) principle, while [16] proposes sampling trajectories from a product of normal and log-normal distributions (NLN mixture), instead of solely using a Gaussian distribution. These methods result in more efficient trajectories than the vanilla MPPI, leading to better exploration of the system's state-space and reducing the risk of encountering local minima. Another constraint of MPPI is its inability to explicitly incorporate risk levels during planning due to its _risk-neutral_ technique in evaluating sampled trajectories during the optimization process, making it challenging to achieve the desired balance between risk and robustness. Additionally, the MPPI optimization problem concentrates solely on minimizing the objective function, which is influenced by a minor perturbation injected into the control input, without explicitly considering any uncertainties or risks associated with the system dynamics or the environment. This can eventually lead to sub-optimal or overly aggressive control actions, as MPPI may select trajectories that appear to have a low expected cost but may actually be riskier or less robust in practice. Consequently, MPPI cannot guarantee safety when environmental conditions change, which limits its applicability for safety-critical applications such as autonomous driving. The Risk-aware MPPI (RA-MPPI) algorithm [18] is a more recent approach that addresses this issue by utilizing Conditional Value-at-Risk (CVaR) to generate risk-averse controls that evaluate real-time risks and account for systematic uncertainties. 
However, such a method employs Monte Carlo sampling to estimate the CVaR, which can be computationally intensive and time-consuming, since accurate Monte Carlo estimation requires generating a large number of random samples. In contrast to existing solutions aimed at mitigating the shortcomings of MPPI, our proposed solution can effectively reflect the uncertainties of system states, sensing, and control actions, in addition to integrating a novel and more efficient trajectory sampling strategy. To this end, we introduce the U-MPPI control strategy, a novel methodology that enhances the classical MPPI algorithm by combining the Unscented Transform (UT) with standard optimal control theory (also known as unscented guidance [19]) to effectively manage system uncertainties. Such a control strategy leverages the UT for two purposes: regulating the propagation of the dynamical system and proposing a new state-dependent cost function formulation that incorporates uncertainty information. To the best of the authors' knowledge, the proposed control strategy has not been previously discussed in the literature. In summary, the contributions of this work can be summarized as follows:

1. While vanilla MPPI variants propagate only the mean value of the system dynamics, as depicted in Fig. 2, we propose a novel trajectory sampling technique equipped with a more effective sampling distribution policy based on UT. This technique utilizes UT to propagate both the mean and covariance of the system dynamics at each time-step, as demonstrated in Figs. 1 and 3 and explained in detail in Section IV-A; by doing so, our new sampling method achieves significantly better exploration of the state-space of the given system and reduces the risk of getting trapped in local minima.
2. Then, by utilizing the propagated uncertainty information (i.e., the state covariance matrix), we introduce a _risk-sensitive_ cost function that explicitly considers risk or uncertainty during the trajectory evaluation process, leading to a safer and more robust control system, especially for safety-critical applications, as discussed in Section IV-B.
3. In Section V, we validate the effectiveness of our U-MPPI control strategy for aggressive collision-free navigation in both known and unknown cluttered environments using intensive simulations; by comparing it with the baseline MPPI, we demonstrate its superiority in producing more efficient trajectories that better explore the state-space of the system, resulting in higher success and task completion rates, reducing the risk of getting trapped in local minima, and ultimately leading the robot to find feasible trajectories that avoid collisions.

## II Stochastic Optimal Control

This section aims to establish the problem statement of stochastic optimal control and present a concise overview of MPPI as a potential solution to address this problem.

### _Problem Formulation_

Within the context of discrete-time stochastic systems, let us consider the system state \(\mathbf{x}_{k}\in\mathbb{R}^{n_{x}}\), the control input \(\mathbf{u}_{k}\in\mathbb{R}^{n_{u}}\), and the underlying non-linear dynamics \[\mathbf{x}_{k+1}=f\left(\mathbf{x}_{k},\mathbf{w}_{k}\right).
\tag{1}\] The actual (i.e., disturbed) control input, \(\mathbf{w}_{k}\), is represented as \(\mathbf{w}_{k}=\mathbf{u}_{k}+\delta\mathbf{u}_{k}\sim\mathcal{N}(\mathbf{u}_ {k},\Sigma_{\mathbf{u}})\), where \(\delta\mathbf{u}_{k}\sim\mathcal{N}(\mathbf{0},\Sigma_{\mathbf{u}})\) is a zero-mean Gaussian noise with covariance \(\Sigma_{\mathbf{u}}\) which represents the injected disturbance into the control input. Within a finite time-horizon \(N\), we denote the control sequence \(\mathbf{U}=\left[\mathbf{u}_{0},\mathbf{u}_{1},\dots,\mathbf{u}_{N-1}\right]^ {\top}\in\mathbb{R}^{n_{u}N}\) and the corresponding state trajectory \(\mathbf{x}=\left[\mathbf{x}_{0},\mathbf{x}_{1},\dots,\mathbf{x}_{N}\right]^ {\top}\in\mathbb{R}^{n_{x}(N+1)}\). Moreover, let \(\mathcal{X}^{d}\) denote the \(d\) dimensional space with \(\mathcal{X}_{rob}\left(\mathbf{x}_{k}\right)\subset\mathcal{X}^{d}\) and \(\mathcal{X}_{obs}\subset\mathcal{X}^{d}\) represent the area occupied by the robot and obstacles, respectively. In this scenario, the objective of the stochastic optimal control problem is to find the optimal control sequence, \(\mathbf{U}\), that generates a collision-free trajectory, guiding the robot from its initial state, \(\mathbf{x}_{s}\), to the desired state, \(\mathbf{x}_{f}\), under the minimization of the cost function, \(J\), subject to specified constraints. The optimization problem at hand can be formulated using the vanilla MPPI control strategy as \[\min_{\mathbf{U}} J(\mathbf{x},\mathbf{u})=\mathbb{E}\left[\phi\left(\mathbf{x}_{N} \right)+\!\!\sum_{k=0}^{N-1}\!\!\left(\!q(\mathbf{x}_{k})+\frac{1}{2}\mathbf{u }_{k}^{\top}R\mathbf{u}_{k}\!\!\right)\!\!\right]\!,\] (2a) s.t. \[\mathbf{x}_{k+1}=f\left(\mathbf{x}_{k},\mathbf{w}_{k}\right), \delta\mathbf{u}_{k}\sim\mathcal{N}(\mathbf{0},\Sigma_{\mathbf{u}}), \tag{2b}\] \[\mathcal{X}_{rob}\left(\mathbf{x}_{k}\right)\cap\mathcal{X}_{obs }=\emptyset,\;h(\mathbf{x}_{k},\mathbf{u}_{k})\leq 0,\] (2c) \[\mathbf{x}_{0}=\mathbf{x}_{s},\;\mathbf{u}_{k}\in\mathbb{U},\; \mathbf{x}_{k}\in\mathbb{X}, \tag{2d}\] where \(R\in\mathbb{R}^{n_{u}\times n_{u}}\) is a positive-definite control weighting matrix, \(\mathbb{U}\) denotes the set of admissible control inputs, and \(\mathbb{X}\) denotes the set of all possible states \(\mathbf{x}_{k}\); the state terminal cost function, \(\phi\left(\mathbf{x}_{N}\right)\), and the running cost function, \(q\left(\mathbf{x}_{k}\right)\), can be defined as arbitrary functions, offering a more flexible and dynamic approach to cost modeling that can be adapted to meet the specific requirements of the system being controlled. ### _Overview of MPPI Control Strategy_ MPPI solves the optimization problem defined in (2) by minimizing the objective function \(J\) (2a), taking into account the system dynamics (2b) and constraints, including collision avoidance and control constraints, detailed in (2c). To this end, at each time-step \(\Delta t\), MPPI employs the Monte Carlo simulation to sample thousands of _real-time_ simulated trajectories, represented by \(M\), propagated from the underlying system dynamics, as illustrated in Fig. 2. 
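As an illustrative instance of the stochastic dynamics in (1), the following sketch rolls out a single disturbed trajectory for a differential-drive kinematics model (the model class adopted later in Section V-A1); the nominal control sequence and the seed are arbitrary placeholders, while the horizon, control rate, and noise covariance mirror the values quoted in Section V-A1:

```
import numpy as np

def f(x, w, dt=1.0 / 30.0):
    """Differential-drive kinematics: state x = [x, y, theta], input w = [v, omega]."""
    px, py, th = x
    v, om = w
    return np.array([px + v * np.cos(th) * dt,
                     py + v * np.sin(th) * dt,
                     th + om * dt])

# One simulated rollout under the disturbed input w_k = u_k + du_k ~ N(u_k, Sigma_u)
rng = np.random.default_rng(0)
N = 240                                      # time-horizon
Sigma_u = np.diag([0.023, 0.028])            # injected control-noise covariance
U = np.tile([1.0, 0.0], (N, 1))              # nominal controls: 1 m/s, no turning
x = np.zeros(3)                              # initial state x_s
for k in range(N):
    du_k = rng.multivariate_normal(np.zeros(2), Sigma_u)
    x = f(x, U[k] + du_k)                    # propagate through (1)
print(x)                                     # final state of this perturbed rollout
```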
Subsequently, within the time-horizon \(N\), the _cost-to-go_ of each trajectory \(\tau_{i}\) can be evaluated as \[\tilde{S}\left(\tau_{i}\right)=\phi\left(\mathbf{x}_{N}\right)+\sum_{k=0}^{N-1}\tilde{q}\left(\mathbf{x}_{k},\mathbf{u}_{k},\delta\mathbf{u}_{k,i}\right),\;\forall i\in\{1,\cdots,M\}, \tag{3}\] where \(\phi(\mathbf{x}_{N})\) refers to the terminal state cost, while the instantaneous running cost \(\tilde{q}\left(\mathbf{x}_{k},\mathbf{u}_{k},\delta\mathbf{u}_{k}\right)\) encompasses both the state-dependent running cost \(q\left(\mathbf{x}_{k}\right)\) and the quadratic control cost \(q\left(\mathbf{u}_{k},\delta\mathbf{u}_{k}\right)\) and is formulated as \[\tilde{q}\left(\mathbf{x}_{k},\mathbf{u}_{k},\delta\mathbf{u}_{k}\right)=q\left(\mathbf{x}_{k}\right)+\frac{1-\nu^{-1}}{2}\delta\mathbf{u}_{k}^{\top}R\,\delta\mathbf{u}_{k}+\mathbf{u}_{k}^{\top}R\,\delta\mathbf{u}_{k}+\frac{1}{2}\mathbf{u}_{k}^{\top}R\,\mathbf{u}_{k}, \tag{4}\] where \(\nu\geq 1\) is the exploration noise that regulates the aggressiveness of state-space exploration. The optimal control sequence is then updated by taking the cost-weighted average over the sampled control perturbations, \[\mathbf{u}_{k}\leftarrow\mathbf{u}_{k}+\frac{\sum_{m=1}^{M}\exp\left(\frac{-1}{\lambda}\left[\tilde{S}\left(\tau_{m}\right)-\tilde{S}_{\min}\right]\right)\delta\mathbf{u}_{k,m}}{\sum_{m=1}^{M}\exp\left(\frac{-1}{\lambda}\left[\tilde{S}\left(\tau_{m}\right)-\tilde{S}_{\min}\right]\right)}, \tag{5}\] where \(\lambda\) denotes the inverse temperature and \(\tilde{S}_{\min}=\min_{m}\tilde{S}\left(\tau_{m}\right)\); the updated sequence is finally smoothed before the first control input is applied to the system.

## III Unscented-Based Optimal Control

### _Unscented Transform_

The Unscented Transform (UT) is a method for approximating the probability distribution function (PDF) of a random variable after it passes through a non-linear transformation using a set of sampled points, known as sigma points [22]. Formally, given the mean \(\bar{\mathbf{x}}_{k}\) and covariance \(\mathbf{\Sigma}_{k}\) of a Gaussian-distributed system state \(\mathbf{x}_{k}\), with \(\mathbf{x}_{k}\sim\mathcal{N}(\bar{\mathbf{x}}_{k},\mathbf{\Sigma}_{k})\), UT approximates the distribution over the next state, \(\mathbf{x}_{k+1}\), by first introducing a set of sigma points \(\left\{\mathcal{X}_{k}^{(i)}\right\}_{i=0}^{2n_{x}}\) around the mean \(\bar{\mathbf{x}}_{k}\) and the corresponding weights \(\left\{w^{(i)}\right\}_{i=0}^{2n_{x}}\in\mathbb{R}^{n_{\sigma}}\), where \(n_{\sigma}=2n_{x}+1\). These sigma points are designed to capture the covariance of the distribution at time-step \(k\) as follows \[\begin{split}\mathcal{X}_{k}^{(0)}&=\bar{\mathbf{x}}_{k},\\ \mathcal{X}_{k}^{(i)}&=\bar{\mathbf{x}}_{k}+\left(\sqrt{(n_{x}+\lambda_{\sigma})\mathbf{\Sigma}_{k}}\right)_{i},\;\forall i=\{1,\ldots,n_{x}\},\\ \mathcal{X}_{k}^{(i)}&=\bar{\mathbf{x}}_{k}-\left(\sqrt{(n_{x}+\lambda_{\sigma})\mathbf{\Sigma}_{k}}\right)_{i},\;\forall i=\{n_{x}+1,\ldots,2n_{x}\},\end{split} \tag{6}\] where \(\left(\sqrt{(n_{x}+\lambda_{\sigma})\mathbf{\Sigma}_{k}}\right)_{i}\) represents the \(i\)th row or column of the square root of the weighted covariance matrix \((n_{x}+\lambda_{\sigma})\mathbf{\Sigma}_{k}\), and \(\lambda_{\sigma}=\alpha^{2}(n_{x}+k_{\sigma})-n_{x}\) is influenced by the scaling parameters \(k_{\sigma}\geq 0\) and \(\alpha\in(0,1]\) that determine how far the sigma points are spread from the mean [23], as demonstrated in Fig. 4.
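A direct implementation of (6) is compact; the sketch below computes the \(n_{\sigma}=2n_{x}+1\) sigma points using a Cholesky factor as the matrix square root (one valid choice among several), with \(\alpha\) and \(k_{\sigma}\) defaulting to the values used later in Section V-A1:

```
import numpy as np

def sigma_points(x_bar, Sigma, alpha=1.0, k_sigma=0.5):
    """Sigma points of N(x_bar, Sigma) per (6); returns a (2*n_x + 1, n_x) array.
    Requires Sigma positive definite and n_x + lambda_sigma > 0."""
    n_x = x_bar.size
    lam = alpha**2 * (n_x + k_sigma) - n_x            # lambda_sigma
    L = np.linalg.cholesky((n_x + lam) * Sigma)       # square root of weighted covariance
    return np.vstack([x_bar, x_bar + L.T, x_bar - L.T])

X0 = sigma_points(np.zeros(3), 0.001 * np.eye(3))
print(X0.shape)   # (7, 3): n_sigma = 2 * n_x + 1 = 7 sigma points for n_x = 3
```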
Each \(\mathcal{X}_{k}^{(i)}\) is associated with two weights, \(w_{m}^{(i)}\) for computing the mean and \(w_{c}^{(i)}\) for determining the covariance of the transformed distribution, computed as \[\begin{split} w_{m}^{(0)}&=\frac{\lambda_{\sigma}}{n_{x}+\lambda_{\sigma}},\\ w_{c}^{(0)}&=w_{m}^{(0)}+(1-\alpha^{2}+\beta),\\ w_{m}^{(i)}&=w_{c}^{(i)}=\frac{1}{2(n_{x}+\lambda_{\sigma})},\;\forall i=\{1,\ldots,2n_{x}\},\end{split} \tag{7}\] where \(\beta\) is a hyper-parameter controlling the relative importance of the mean and covariance information. In other words, \(\beta\) is employed to incorporate prior knowledge about the distribution of the state \(\mathbf{x}\). For Gaussian distributions, the optimal value for \(\beta\) is 2 [24]. In the second step, we propagate the \((2n_{x}+1)\) sigma points through the non-linear system to produce the transformed sigma points \(\mathcal{X}_{k+1}^{(i)}=f(\mathcal{X}_{k}^{(i)})\) at the next time-step. Finally, the mean and covariance of \(\mathbf{x}_{k+1}\) can be estimated using the transformed sigma points and their corresponding weights, as follows \[\begin{split}\bar{\mathbf{x}}_{k+1}&=\sum_{i=0}^{2n_{x}}w_{m}^{(i)}\mathcal{X}_{k+1}^{(i)},\\ \mathbf{\Sigma}_{k+1}&=\sum_{i=0}^{2n_{x}}w_{c}^{(i)}(\mathcal{X}_{k+1}^{(i)}-\bar{\mathbf{x}}_{k+1})(\mathcal{X}_{k+1}^{(i)}-\bar{\mathbf{x}}_{k+1})^{\top}.\end{split} \tag{8}\]

### _Unscented Optimal Control_

By incorporating the unscented transform with standard optimal control, the unscented optimal control, also referred to as unscented guidance [19], presents a novel methodology for addressing the uncertainties in non-linear dynamical systems within an _open-loop_ framework [25, 26]. Given the sigma points \(\mathcal{X}_{k}^{(i)}\) and the disturbed control input \(\mathbf{w}_{k}\) at time-step \(k\), each sigma point can be propagated through the underlying non-linear dynamics given in (1), as follows \[\mathcal{X}_{k+1}^{(i)}=f\left(\mathcal{X}_{k}^{(i)},\mathbf{w}_{k}\right),\;\forall i=0,\cdots,2n_{x}. \tag{9}\] Consider an \(n_{\sigma}n_{x}\)-dimensional vector \(\mathbf{X}\), defined as \(\mathbf{X}=\left[\mathcal{X}^{(0)},\mathcal{X}^{(1)},\ldots,\mathcal{X}^{(2n_{x})}\right]^{\top}\in\mathbb{R}^{n_{\sigma}n_{x}}\). Then, the dynamics of \(\mathbf{X}\) are characterized by \(n_{\sigma}\) instances of the function \(f\), specified as \[\mathbf{X}_{k+1}=\left[\begin{array}{c}f\left(\mathcal{X}_{k}^{(0)},\mathbf{w}_{k}\right)\\ f\left(\mathcal{X}_{k}^{(1)},\mathbf{w}_{k}\right)\\ \vdots\\ f\left(\mathcal{X}_{k}^{(2n_{x})},\mathbf{w}_{k}\right)\end{array}\right]:=\mathbf{f}(\mathbf{X}_{k},\mathbf{w}_{k}). \tag{10}\] With these preliminaries, the original stochastic optimal control problem described in (2) can be re-formulated within the context of the unscented guidance framework as \[\min_{\mathbf{U}}\quad\mathbf{J}(\mathbf{X},\mathbf{u})=\mathbb{E}\left[\mathbf{\Phi}\left(\mathbf{X}_{N}\right)+\sum_{k=0}^{N-1}\left(\mathbf{q}\left(\mathbf{X}_{k}\right)+\frac{1}{2}\mathbf{u}_{k}^{\top}R\mathbf{u}_{k}\right)\right],\] (11a) s.t.
\[\mathbf{X}_{k+1}=\mathbf{f}\left(\mathbf{X}_{k},\mathbf{w}_{k}\right),\;\delta\mathbf{u}_{k}\sim\mathcal{N}(\mathbf{0},\Sigma_{\mathbf{u}}), \tag{11b}\] \[\mathcal{X}_{rob}\left(\mathbf{X}_{k}\right)\cap\mathcal{X}_{obs}=\emptyset,\;\mathbf{h}(\mathbf{X}_{k},\mathbf{u}_{k})\leq 0,\] (11c) \[\mathbf{X}_{0}=\left[\mathcal{X}_{0}^{(0)},\ldots,\mathcal{X}_{0}^{(2n_{x})}\right]^{\top},\;\mathbf{u}_{k}\in\mathbb{U},\;\mathbf{X}_{k}\in\mathbb{X}, \tag{11d}\] where \(\mathbf{\Phi}\left(\mathbf{X}_{N}\right)=\left[\phi\left(\mathcal{X}_{N}^{(0)}\right),\ldots,\phi\left(\mathcal{X}_{N}^{(2n_{x})}\right)\right]^{\top}\in\mathbb{R}^{n_{\sigma}}\), and \(\mathbf{q}\left(\mathbf{X}_{k}\right)=\left[q\left(\mathcal{X}_{k}^{(0)}\right),\ldots,q\left(\mathcal{X}_{k}^{(2n_{x})}\right)\right]^{\top}\in\mathbb{R}^{n_{\sigma}}\). The objective of our proposed U-MPPI control strategy, as detailed in Section IV, is to minimize the objective function, \(\mathbf{J}\), in (11a) by finding the optimal control sequence, \(\mathbf{U}=\{\mathbf{u}_{k}\}_{k=0}^{N-1}\), while taking into account (i) the system constraints previously discussed in Section II-A, and (ii) the uncertainties associated with both system states and control actions.

## IV U-MPPI Control Strategy

As previously outlined in Section II-B, the control noise variance \(\Sigma_{\mathbf{u}}\) is not updated by MPPI, and the state-space exploration is performed by adjusting \(\nu\) (refer to (4)). Nevertheless, an excessively high value of \(\nu\) can cause control inputs with considerable chatter. Similarly, increasing \(\Sigma_{\mathbf{u}}\) might violate system constraints and lead to eventual divergence from the desired state [16]. Additionally, the MPPI problem, as stated in (2), focuses solely on minimizing the cost function that is affected by a minor perturbation injected into the control input, represented by \(\delta\mathbf{u}_{k}\), without explicitly incorporating the uncertainties that may be associated with either the system states or the surrounding environment. To be effective in practice, the motion control strategy should be able to reflect the uncertainties of system states, sensing, and control actions. To this end, we introduce the U-MPPI control strategy, a new technique that leverages the unscented transform to deal with these uncertainties. More precisely, the Unscented Transform (UT) is utilized for the purpose of regulating the propagation of the dynamical system, thereby introducing a novel and more efficient trajectory sampling strategy than the standard MPPI variants, as demonstrated in Section IV-A. Furthermore, as discussed in Section IV-B, UT proposes a new cost function formulation incorporating uncertainty information, leading to a safer and more robust control system, especially for safety-critical applications.

### _Unscented-Based Sampling Strategy_

By leveraging Monte Carlo simulation, the vanilla MPPI algorithm simulates a large number of _real-time_ trajectories \(M\), propagated from the system dynamics defined in (1), by solely manipulating the injected Gaussian noise into the mean control sequence (see Fig. 2). Additionally, the computation of the propagated states over the time period \(N\) is restricted to the mean value, or first moment, with respect to the initial state \(\mathbf{x}_{0}\), without propagating the covariance of \(\mathbf{x}_{k}\).
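Before turning to the proposed sampling strategy, a compact sketch of one full UT propagation step — the weights of (7), the batch dynamics of (9)-(10), and the moment recovery of (8) — is given below for reference; it reuses `sigma_points` and the differential-drive `f` from the earlier sketches, and it is an illustrative reading of the equations rather than the GPU implementation used in this work:

```
import numpy as np

def ut_weights(n_x, alpha=1.0, k_sigma=0.5, beta=2.0):
    """Mean (w_m) and covariance (w_c) weights of the sigma points, cf. (7)."""
    lam = alpha**2 * (n_x + k_sigma) - n_x
    w_m = np.full(2 * n_x + 1, 0.5 / (n_x + lam))
    w_c = w_m.copy()
    w_m[0] = lam / (n_x + lam)
    w_c[0] = w_m[0] + (1.0 - alpha**2 + beta)
    return w_m, w_c

def ut_step(x_bar, Sigma, w_k, f):
    """One UT propagation step: sigma points (6) -> batch dynamics (9)-(10) -> moments (8)."""
    w_m, w_c = ut_weights(x_bar.size)
    X = sigma_points(x_bar, Sigma)                    # helper from the earlier sketch
    X_next = np.array([f(Xi, w_k) for Xi in X])       # propagate every sigma point
    x_next = w_m @ X_next                             # transformed mean, cf. (8)
    d = X_next - x_next
    Sigma_next = (w_c[:, None] * d).T @ d             # transformed covariance, cf. (8)
    return x_next, Sigma_next
```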
To enhance the performance of the classic MPPI algorithm, we therefore propose a new trajectory sampling technique that utilizes the Unscented Transform (UT), as sketched above, to propagate both the mean \(\bar{\mathbf{x}}_{k}\) and covariance \(\mathbf{\Sigma}_{k}\) of the state vector \(\mathbf{x}_{k}\) at each time-step \(k\). In Fig. 3, we illustrate how the sigma points propagate through the nonlinear dynamical system within the U-MPPI control framework, leading to a total of \(M\) rollouts. These rollouts are achieved by sampling \(M_{\sigma}\) sets of batches, also referred to as cones, with each batch comprising \(n_{\sigma}\) trajectories, such that \(M=n_{\sigma}M_{\sigma}\). The propagation process of our proposed sampling strategy can be summarized in the following steps. At time-step \(k=0\), we compute \(n_{\sigma}\) sigma points \(\left\{\mathcal{X}_{0}^{(i)}\right\}_{i=0}^{2n_{x}}\) using (6), given the initial state \(\mathbf{x}_{0}\sim\mathcal{N}(\bar{\mathbf{x}}_{0},\mathbf{\Sigma}_{0})\). We then apply the underlying non-linear dynamics expressed in (9) to the sigma points, which can be merged into a single vector using (10). Finally, we use the resulting sigma points \(\mathbf{X}_{1}\) at \(k=1\) to estimate the first and second moments, namely \(\bar{\mathbf{x}}_{1}\) and \(\mathbf{\Sigma}_{1}\), of the propagated state vector \(\mathbf{x}_{1}\) by applying (8). This propagation process is repeated until \(k=N-1\), resulting in a sequence of state vectors denoted as \(\left\{\mathbf{X}_{k}\right\}_{k=0}^{N}\), which represent the \(n_{\sigma}\) propagated sigma points. This entire process is carried out for each batch, resulting in a total of \(M\) trajectories. In this study, during time-step \(k\), we assume that all sigma points belonging to the \(m^{\text{th}}\) batch are affected by the same Gaussian control noise, represented by \(\delta\mathbf{u}_{k,m}\). Figure 1 provides a visual representation of the unscented-based sampling strategy schematized in Fig. 3. The primary objective of the new sampling strategy is to achieve significantly better exploration of the state-space of the given system and more efficient sampling of trajectories compared to MPPI, while utilizing the same injected control noise \(\Sigma_{\mathbf{u}}\). To exemplify how leveraging the UT for trajectory sampling can enhance the performance of the MPPI algorithm and to investigate the effect of UT parameters on the distribution of sampled rollouts, we present a concrete example in Fig. 4. In particular, using the discrete-time kinematics model of a differential wheeled robot from [16] and the control scheme parameters listed in Section V-A1, we generate \(210\) rollouts by sampling \(\delta\mathbf{u}_{k}\) from a zero-mean Gaussian distribution with a covariance of \(0.025\mathbf{I}_{2}\) under the classical MPPI framework, as illustrated in Fig. 4(a), where \(\mathbf{I}_{n}\) denotes an \(n\times n\) identity matrix. Similarly, we employ the unscented-based sampling strategy to draw \(210\) trajectories, considering: (i) the same injected disturbance into the control input, i.e., \(\delta\mathbf{u}_{k}\sim\mathcal{N}(\mathbf{0},0.025\mathbf{I}_{2})\), and (ii) setting the UT scaling parameters to \(\alpha=1\) and \(k_{\sigma}=0\), along with an initial state covariance matrix \(\mathbf{\Sigma}_{0}\) of \(0.01\mathbf{I}_{3}\), as illustrated in Fig. 4(b). It is noteworthy to observe in Fig.
4(b) that our proposed sampling strategy is more efficient than the classical MPPI sampling strategy, as it generates more spread-out trajectories that cover a larger state space. This enables the robot to explore the environment more extensively and find better solutions, thereby reducing the likelihood of getting trapped in local minima, as revealed in Section V-A4.

Fig. 3: Schematic illustration of nonlinear dynamical system propagation under the proposed U-MPPI control strategy for \(M\) sampled trajectories over a finite time-horizon \(N\), where \(M=n_{\sigma}M_{\sigma}\).

Nevertheless, as depicted in Fig. 4(c), employing higher values of the UT parameters (specifically, \(\alpha,k_{\sigma},\mathbf{\Sigma}_{0}\)) may result in a loss of precision and continuity in the distribution of trajectories across the state-space, as the sigma points become more spread out from the mean \(\mathcal{X}_{k}^{(0)}\). This can impact the resulting control actions of the system and, in the context of autonomous navigation, potentially lead to collisions with obstacles. In contrast, Fig. 4(d) demonstrates that using lower values of \(\alpha\) and \(k_{\sigma}\) results in a sampling strategy that closely resembles MPPI. Similarly, if trajectories are sampled only from \(\mathcal{X}_{k}^{(0)}\) (as depicted by the blue trajectories in Fig. 1), while excluding the other sigma-point trajectories, the same sampling strategy can be achieved. We refer to this approach as sampling mode 0 (\(\mathtt{SM}_{0}\)), while the default strategy that includes all sigma points is referred to as \(\mathtt{SM}_{1}\).

### _Risk-Sensitive Cost_

One of the main limitations of the vanilla MPPI is that it typically assumes a _risk-neutral_ approach when assessing the sampled trajectories during the optimization process, without explicitly considering risk or uncertainty in the trajectory evaluation process, as outlined in Section II-B, particularly in (4). The commonly employed method in sampling-based MPC algorithms involves using a quadratic cost function to guide the current state \(\mathbf{x}_{k}\) towards its desired state \(\mathbf{x}_{f}\), denoted as \(q_{\text{state}}(\mathbf{x}_{k})\) and expressed mathematically as follows \[q_{\text{state}}(\mathbf{x}_{k})=\left(\mathbf{x}_{k}-\mathbf{x}_{f}\right)^{\top}\mathrm{Q}\left(\mathbf{x}_{k}-\mathbf{x}_{f}\right)=\left\|\mathbf{x}_{k}-\mathbf{x}_{f}\right\|_{Q}^{2}, \tag{12}\] where \(\mathrm{Q}\) is a positive definite weighting matrix. Whittle introduced in [27] an interesting method for incorporating risk in decision-making by replacing the expected quadratic cost with a risk-sensitive benchmark in the form of an exponential-quadratic function. The _risk-sensitive_ (RS) cost \(q_{\text{rs}}\left(\mathbf{x}_{k}\right)\) can be obtained by evaluating the log-expectation of the exponentiated quadratic cost as follows \[\begin{split} q_{\text{rs}}(\mathbf{x}_{k})&=-\frac{2}{\gamma}\log\mathbb{E}\left[\exp\left(-\frac{1}{2}\gamma q_{\text{state}}(\mathbf{x}_{k})\right)\right]\\ &=-\frac{2}{\gamma}\log\mathbb{E}\left[\exp\left(-\frac{1}{2}\gamma\left\|\mathbf{x}_{k}-\mathbf{x}_{f}\right\|_{\mathrm{Q}}^{2}\right)\right],\end{split} \tag{13}\] where \(\gamma\) is a real scalar denoted as the _risk-sensitivity_ parameter, dictating how the controller reacts to risk or uncertainty. For example, when \(\gamma>0\), the controller exhibits _risk-seeking_ or _risk-preferring_ behavior. Conversely, when \(\gamma<0\), the controller demonstrates _risk-averse_ or _risk-avoiding_ behavior.
When \(\gamma=0\), the controller is considered _risk-neutral_. We refer to [7] for more details and clear visualizations demonstrating the relationship between the sign of \(\gamma\) and the RS cost \(q_{\text{rs}}\). Our proposed U-MPPI control strategy assumes that the system state \(\mathbf{x}_{k}\sim\mathcal{N}(\mathbf{\bar{x}}_{k},\mathbf{\Sigma}_{k})\) follows a Gaussian distribution. With this assumption, we can approximate the RS state-dependent cost expressed in (13) as \[q_{\text{rs}}(\mathbf{x}_{k})=\frac{1}{\gamma}\log\det\left(\mathbf{I}+\gamma Q\mathbf{\Sigma}_{k}\right)+\left\|\mathbf{\bar{x}}_{k}-\mathbf{x}_{f}\right\|_{Q_{\text{rs}}}^{2}, \tag{14}\] where \(Q_{\text{rs}}\) represents the _risk-sensitive_ penalty coefficients matrix or adaptive weighting matrix, given by \(Q_{\text{rs}}(\mathbf{\Sigma}_{k})=\left(Q^{-1}+\gamma\mathbf{\Sigma}_{k}\right)^{-1}\). The derivation of (14) is given in Appendix A. Such a new formulation is employed by U-MPPI to assess each batch of sigma-point trajectories, where the predicted mean state \(\mathbf{\bar{x}}_{k}\) is replaced with the predicted sigma points \(\left\{\mathcal{X}_{k}^{(i)}\right\}_{i=0}^{2n_{x}}\). Therefore, the modified RS cost for the \(i^{\text{th}}\) U-MPPI sampled trajectory within a certain batch is defined by \[q_{\text{rs}}\!\left(\mathcal{X}_{k}^{(i)};\mathbf{\Sigma}_{k}\right)=\frac{1}{\gamma}\log\det\left(\mathbf{I}+\gamma Q\mathbf{\Sigma}_{k}\right)+\left\|\mathcal{X}_{k}^{(i)}-\mathbf{x}_{f}\right\|_{Q_{\text{rs}}}^{2}. \tag{15}\] It is noteworthy to observe in (15) that \(q_{\text{rs}}(\cdot)\) incorporates the uncertainty information \(\mathbf{\Sigma}_{k}\) as feedback into the weighting matrix \(Q\), which measures the difference between the predicted sigma point \(\mathcal{X}_{k}^{(i)}\) and the desired state \(\mathbf{x}_{f}\). By incorporating this uncertainty feedback mechanism, the proposed control strategy exhibits a risk-sensitive behavior that effectively minimizes the RS cost over the time-horizon \(N\). This, in turn, enables the development of a more robust and risk-conscious control policy, as empirically validated through the intensive simulations outlined in Section V. To be more precise, if \(\gamma<0\), the penalty coefficients matrix \(Q_{\text{rs}}\) utilized for tracking the desired state increases with the level of system uncertainty \(\mathbf{\Sigma}_{k}\). This can be expressed mathematically over a finite time-horizon \(N\) as \(Q_{\text{rs}}(\mathbf{\Sigma}_{0})<Q_{\text{rs}}(\mathbf{\Sigma}_{1})<\cdots<Q_{\text{rs}}(\mathbf{\Sigma}_{N-1})\) for all \(\gamma<0\). On the other hand, if \(\gamma>0\), the penalty matrix decreases as the uncertainty level increases. Additionally, when \(\gamma=0\), the penalty remains constant and equal to the weighting matrix \(Q\), regardless of the level of the uncertainty [8]. A more in-depth analysis of how the U-MPPI performance is influenced by the sign of \(\gamma\) is discussed in Section V-A1.
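Numerically, the closed form (14)-(15) is inexpensive to evaluate; the following self-contained sketch (with placeholder values for \(\gamma\) and the states, and \(Q\) set to the values quoted in Section V-A1) illustrates how, for \(\gamma>0\), a larger uncertainty \(\mathbf{\Sigma}_{k}\) shrinks the tracking penalty, matching the discussion above:

```
import numpy as np

def q_rs(x, x_f, Sigma, Q, gamma):
    """Risk-sensitive state cost, cf. (14)-(15); gamma < 0 is risk-averse,
    gamma > 0 risk-seeking, and gamma -> 0 recovers the constant penalty Q."""
    e = x - x_f
    if abs(gamma) < 1e-9:                                     # risk-neutral limit
        return e @ Q @ e
    Q_rs = np.linalg.inv(np.linalg.inv(Q) + gamma * Sigma)    # adaptive weighting matrix
    log_det = np.log(np.linalg.det(np.eye(x.size) + gamma * Q @ Sigma))
    return log_det / gamma + e @ Q_rs @ e

# With gamma > 0, larger uncertainty Sigma_k lowers the tracking penalty
Q = np.diag([2.5, 2.5, 2.0])
x, x_f = np.zeros(3), np.array([1.0, 1.0, 0.0])
print(q_rs(x, x_f, 0.001 * np.eye(3), Q, gamma=1.0))   # small uncertainty
print(q_rs(x, x_f, 0.1 * np.eye(3), Q, gamma=1.0))     # larger uncertainty, smaller cost
```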
By incorporating the system uncertainty \(\mathbf{\Sigma}_{k}\) into (3), we can obtain the modified _cost-to-go_ for each trajectory \(\tau_{m}^{(i)}\) in batch \(m\) as \[\begin{split}\tilde{S}\left(\tau_{m}^{(i)}\right)&=\phi\left(\mathcal{X}_{N}^{(i)}\right)+\sum_{k=0}^{N-1}\tilde{q}\left(\mathcal{X}_{k}^{(i)},\mathbf{\Sigma}_{k},\mathbf{u}_{k},\delta\mathbf{u}_{k,m}\right),\\ &\qquad\qquad\forall m\in\{1,\cdots,M_{\sigma}\},\quad\forall i\in\{0,\cdots,2n_{x}\},\end{split} \tag{16}\] where the instantaneous running cost \(\tilde{q}\) is a combination of the state-dependent running cost \(q\left(\mathcal{X}_{k}^{(i)},\mathbf{\Sigma}_{k}\right)\), which relies on the RS cost defined in (15) (as shown in (18) as an example), as well as the quadratic control cost \(q(\mathbf{u}_{k},\delta\mathbf{u}_{k})\) from (4). Note that the _cost-to-go_ for all sigma-point trajectories in the \(m^{\text{th}}\) batch can be expressed in vector form as \(\mathbf{\tilde{S}}\left(\tau_{m}\right)=\left[\tilde{S}\left(\tau_{m}^{(0)}\right),\ldots,\tilde{S}\left(\tau_{m}^{(2n_{x})}\right)\right]^{\top}\in\mathbb{R}^{n_{\sigma}}\). Similarly, \(\mathbf{\tilde{q}}\left(\mathbf{X}_{k},\mathbf{\Sigma}_{k},\mathbf{u}_{k},\delta\mathbf{u}_{k,m}\right)=\left[\tilde{q}\left(\mathcal{X}_{k}^{(0)},\cdot\right),\ldots,\tilde{q}\left(\mathcal{X}_{k}^{(2n_{x})},\cdot\right)\right]^{\top}\).

### _Real-Time U-MPPI Control Algorithm_

We are now prepared to describe the _real-time_ control-loop of our U-MPPI algorithm, as depicted in Algorithm 1, employing the default sampling strategy that considers all sigma points (referred to as \(\mathtt{SM}_{1}\)). At every time-step \(\Delta t\), the algorithm estimates the current system state \(\mathbf{\bar{x}}_{0}\) with an external state estimator, generates \(N\times M_{\sigma}\) random control perturbations \(\delta\mathbf{u}\) on the GPU using CUDA's random number generation library, and then produces \(M_{\sigma}\) sets of batches in parallel on the GPU (lines \(2:4\)). Subsequently, for each batch and starting from the actual (i.e., initial) state \(\mathbf{x}_{0}\sim\mathcal{N}(\mathbf{\bar{x}}_{0},\mathbf{\Sigma}_{0})\), the algorithm samples and propagates \(n_{\sigma}\) sigma-point trajectories by applying the non-linear dynamics in (10) to the sigma points computed via (6) (lines \(5:9\)). These trajectories are then evaluated using (16), and the first and second moments of the propagated state are estimated by applying (8) (lines \(10:15\)). Then, the algorithm updates the optimal control sequence \(\{\mathbf{u}_{k}\}_{k=0}^{N-1}\), applies a Savitzky-Golay filter for smoothing, and applies the first control \(\mathbf{u}_{0}\) to the system (lines \(16:19\)). It then slides down the remaining sequence of length \(N-1\) to be utilized at the next time-step (lines \(20:23\)). It is noteworthy that when executing the algorithm in sampling mode 0 (\(\mathtt{SM}_{0}\)), where only the mean \(\mathcal{X}_{k}^{(0)}\) is used for trajectory sampling, it is essential to set \(M_{\sigma}\) to \(M\) instead of \(int(\frac{M}{n_{\sigma}})\). Additionally, it is necessary to evaluate solely the _cost-to-go_ of the nominal trajectory \(\tilde{S}\left(\tau_{m}^{(0)}\right)\).
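For illustration, a direct (non-vectorized) reading of the batch evaluation in (16) might look as follows, reusing `q_rs` from the sketch above; the quadratic control-cost term follows (4), and `terminal` stands in for the terminal cost \(\phi(\cdot)\):

```
import numpy as np

def batch_cost_to_go(X_traj, Sigma_traj, U, dU_m, x_f, Q, R, gamma, nu, terminal):
    """Cost-to-go of the m-th batch of sigma-point trajectories, cf. (16).
    X_traj: (N+1, n_sigma, n_x) propagated sigma points of the batch;
    Sigma_traj: (N, n_x, n_x) propagated covariances; dU_m: (N, n_u) batch noise."""
    N = len(U)
    S = np.zeros(X_traj.shape[1])                # one cost per sigma-point trajectory
    for k in range(N):
        for i in range(S.size):
            S[i] += q_rs(X_traj[k, i], x_f, Sigma_traj[k], Q, gamma)  # RS state cost (15)
        # quadratic control cost shared by all trajectories in the batch, cf. (4)
        S += (0.5 * (1.0 - 1.0 / nu) * dU_m[k] @ R @ dU_m[k]
              + U[k] @ R @ dU_m[k] + 0.5 * U[k] @ R @ U[k])
    for i in range(S.size):
        S[i] += terminal(X_traj[N, i])           # terminal cost phi(X_N^(i))
    return S
```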
```
Given: M, M_σ, N: number of trajectories, batches, time-horizon,
       f, n_x, Δt: dynamics, state dimension, time-step size,
       φ, q, Q, γ, λ, ν, Σ_u, R: cost/control parameters,
       λ_σ, k_σ, n_σ, α, β, Σ_0: UT parameters,
       SGF: Savitzky-Golay (SG) convolutional filter.
Input: U = [u_0, u_1, ..., u_{N-1}]^T: initial control sequence,
       SM: U-MPPI sampling mode (SM_0 or SM_1).
 1: while task not completed do
 2:   x̄_0 ← StateEstimator()                       ▷ x̄_0 ∈ R^{n_x}
 3:   δu ← RandomNoiseGenerator(0, Σ_u)             ▷ δu ∈ R^{N × M_σ}
 4:   for m ← 1 to M_σ in parallel do
 5:     (x̄, Σ) ← (x̄_0, Σ_0)                         ▷ actual state x_0 ~ N(x̄_0, Σ_0)
        S̃(τ_m) ← [0, ..., 0]^T                       ▷ S̃(τ_m) ∈ R^{n_σ}
 6:     for k ← 0 to N − 1 do
 7:       X_k ← Moments2SigmaPoints(x̄_k, Σ_k)
 8:       X_{k+1} ← X_k + f(X_k, u_k + δu_{k,m}) Δt
 9:       S̃(τ_m) ← S̃(τ_m) + q̃(X_k, Σ_k, u_k, δu_{k,m})
10:       (x̄_{k+1}, Σ_{k+1}) ← SigmaPoints2Moments(X_{k+1})
11:     end for
12:     S̃(τ_m) ← S̃(τ_m) + Φ(X_N)
13:   end for
14: end for
15: S̃_min ← min_m [S̃(τ_m)], ∀ m = {1, ..., M_σ}
16: for k ← 0 to N − 1 do
17:   u_k ← SGF( u_k + Σ_{m=1}^{M_σ} exp((−1/λ)[S̃(τ_m) − S̃_min]) δu_{k,m}
                     / Σ_{m=1}^{M_σ} exp((−1/λ)[S̃(τ_m) − S̃_min]) )
18: end for
19: u_0 ← SendToActuators(U)
20: for k ← 1 to N − 1 do
21:   u_{k−1} ← u_k
22: end for
23: u_{N−1} ← ControlSequenceInitializer(u_{N−1})
24: Check for task completion
25: end while
```
**Algorithm 1** _Real-Time_ U-MPPI Control Algorithm

## V Simulation-Based Evaluation

In this section, we evaluate the effectiveness of our proposed control strategy by comparing it to the standard MPPI control framework. Herein, we focus on two goal-oriented autonomous ground vehicle (AGV) navigation tasks in 2D cluttered environments. The first task, presented in Section V-A, involves maneuvering the AGV within a given map. This experiment allows us to understand the algorithmic advantages of the proposed U-MPPI control scheme without adding real-world complexity. Section V-B introduces a more complex and realistic scenario where the map is unknown a priori.
It examines the adaptability and robustness of our proposed control strategy, providing a thorough evaluation of its potential in real-world applications.

### _Aggressive Navigation in Known Cluttered Environments_

#### V-A1 Simulation Setup

In this experiment, we use the kinematics model of a differential drive robot presented in [16] for sampling trajectories in the conventional MPPI and propagating sigma points in the proposed U-MPPI technique, as outlined in (9). The model's state includes its position and orientation in the world frame, given by \(\mathbf{x}=[x,y,\theta]^{\top}\in\mathbb{R}^{3}\). The control input consists of the robot's desired linear and angular velocities, denoted by \(\mathbf{u}=[v,\omega]^{\top}\in\mathbb{R}^{2}\). To ensure a fair comparison, both MPPI and U-MPPI simulations were conducted under consistent parameters. A prediction horizon of \(8\,\mathrm{s}\) and a control rate of \(30\,\mathrm{Hz}\) were employed, resulting in \(240\) control time-steps (i.e., \(N=240\)). At each time-step \(\Delta t\), a total of \(2499\) rollouts were sampled, accompanied by an exploration noise of \(\nu=1200\). The control weighting matrix \(R\) was formulated as \(\lambda\Sigma_{\mathbf{u}}^{-\frac{1}{2}}\). Additionally, the inverse temperature parameter was set to \(\lambda=0.572\), while the control noise variance matrix \(\Sigma_{\mathbf{u}}=\mathrm{Diag}\left(\sigma_{v}^{2},\sigma_{w}^{2}\right)\) was defined as \(\Sigma_{\mathbf{u}}=\mathrm{Diag}\left(0.023,0.028\right)\). To smooth the control sequence, we utilized the Savitzky-Golay (_SG_) convolutional filter with a quintic polynomial function, i.e., \(n_{sg}=5\), and a window length \(l_{sg}\) of \(61\). As described in Section IV-A, U-MPPI has additional parameters in the unscented transform to regulate the spread of the sigma points. These parameters are set to \(\alpha=1\), \(k_{\sigma}=0.5\), and \(\beta=2\). Additionally, the initial state covariance matrix \(\mathbf{\Sigma}_{0}\) is set to \(0.001\mathbf{I}_{3}\). The baseline and U-MPPI are implemented using Python and incorporated within the Robot Operating System (ROS) framework. They are executed in _real-time_ on an NVIDIA GeForce GTX 1660 Ti laptop GPU. In the context of the 2D navigation task, MPPI employs a commonly-used instantaneous state-dependent cost function described in (17). This cost function consists of two terms. The first term, denoted as \(q_{\text{state}}(\mathbf{x}_{k})\), encourages the robot to reach the desired state; its formulation, which employs a quadratic expression, is provided in (12). The second term, \(q_{\text{crash}}(\mathbf{x}_{k})=w_{\text{crash}}\mathbb{I}_{\text{crash}}\), serves as an indicator function that imposes a high penalty when the robot collides with obstacles, with \(\mathbb{I}_{\text{crash}}\) being a Boolean variable and \(w_{\text{crash}}\) representing the collision weighting coefficient. While implementing the proposed U-MPPI approach, we replace the quadratic cost \(q_{\text{state}}(\mathbf{x}_{k})\), as depicted in (18), with our newly introduced RS cost denoted as \(q_{\text{rs}}\left(\mathcal{X}_{k}^{(i)},\mathbf{\Sigma}_{k}\right)\) and defined in (15). Within the scope of this work, we set the values of \(Q\) and \(w_{\text{crash}}\) as follows: \(Q=\mathrm{Diag}(2.5,2.5,2)\) and \(w_{\text{crash}}=10^{3}\).
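Putting these pieces together, the two running costs compared in this section — the risk-neutral cost (17) and its risk-sensitive counterpart (18), displayed below — can be sketched as follows, again reusing `q_rs` from the earlier sketch; the `is_collision` flag stands in for the Boolean indicator \(\mathbb{I}_{\text{crash}}\), which in practice would be computed from the map:

```
import numpy as np

Q = np.diag([2.5, 2.5, 2.0])   # state weighting matrix used in this section
w_crash = 1e3                  # collision weighting coefficient

def q_mppi(x, x_f, is_collision):
    """Risk-neutral MPPI running cost, cf. (17): quadratic tracking + crash penalty."""
    e = x - x_f
    return e @ Q @ e + w_crash * float(is_collision)

def q_umppi(x_i, x_f, Sigma_k, gamma, is_collision):
    """U-MPPI running cost, cf. (18): the RS term (15) replaces the quadratic one."""
    return q_rs(x_i, x_f, Sigma_k, Q, gamma) + w_crash * float(is_collision)
```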
It is worth noting that in this particular task, assigning positive values to \(\gamma\) (i.e., \(\gamma>0\)) ensures a trade-off between compelling the robot to reach its desired state and minimizing the risk of collisions with obstacles. This trade-off arises from the fact that the penalty coefficients matrix \(Q_{\text{rs}}\), utilized for tracking the desired state, decreases as the system uncertainty \(\mathbf{\Sigma}_{k}\) increases over the time-horizon \(N\). Therefore, we have chosen \(\gamma=1\) to maintain this trade-off and strike a balance between task completion and collision avoidance. On the other hand, we believe that assigning negative values to \(\gamma\) in applications such as autonomous racing [28] and visual servoing [14] could enhance the performance of U-MPPI. In such tasks, it is crucial to prioritize forcing the current state to reach its desired state, which can be achieved by assigning a higher penalty coefficients matrix \(Q_{\text{rs}}\). \[q(\mathbf{x}_{k})=q_{\text{state}}(\mathbf{x}_{k})+q_{\text{crash}}(\mathbf{x}_{k}). \tag{17}\] \[q\left(\mathcal{X}_{k}^{(i)},\mathbf{\Sigma}_{k}\right)=q_{\text{rs}}\left(\mathcal{X}_{k}^{(i)},\mathbf{\Sigma}_{k}\right)+q_{\text{crash}}(\mathbf{x}_{k}). \tag{18}\]

#### V-A2 Simulation Scenarios

In order to assess the effectiveness of the proposed control framework within cluttered environments, three distinct scenarios with different difficulty levels were examined. In each scenario, we randomly generate one unique forest type consisting of \(25\) individual forests, resulting in a total of \(\mathcal{N}_{T}=25\) tasks. Each forest represents a cluttered environment with dimensions of \(50\,\mathrm{m}\times 50\,\mathrm{m}\). In the first scenario (referred to as _Scenario #1_), the average distance between obstacles was \(1.5\,\mathrm{m}\), denoted as \(d_{\text{min}}^{\text{obs}}=1.5\,\mathrm{m}\), while in the second and third scenarios (i.e., _Scenario #2_ and _Scenario #3_), obstacles were placed at average distances of \(2\,\mathrm{m}\) and \(3\,\mathrm{m}\), respectively. Additionally, we set the maximum desired velocity \(v_{\max}\) of the robot based on the degree of clutter in each scenario. Specifically, \(v_{\max}\) is set to \(2\,\mathrm{m}/\mathrm{s}\), \(3\,\mathrm{m}/\mathrm{s}\), and \(4\,\mathrm{m}/\mathrm{s}\) in _Scenarios #1_, _#2_, and _#3_, respectively.

#### V-A3 Performance Metrics

To achieve a fair comparison between the two control strategies, we use the following criteria: (i) firstly, in all simulation instances, the robot is required to reach the designated desired pose, denoted as \(\mathbf{x}_{f}=[50,50,0]^{\top}\), from a predetermined initial pose \(\mathbf{x}_{0}=[0,0,0]^{\top}\), measured in \(([\mathrm{m}],[\mathrm{m}],[\mathrm{deg}])\); (ii) secondly, a comprehensive set of metrics is defined to evaluate the overall performance [16], including the _task completion percentage_ \(\mathcal{T}_{\text{c}}\), the _success rate_ \(\mathcal{S}_{R}\), the _average number of collisions_ \(\mathcal{N}_{\text{c}}\), the _average number of local minima occurrences_ \(\mathcal{R}_{\text{lim}}\), the _average distance_ \(d_{\text{av}}\) traversed by the robot to reach the desired state \(\mathbf{x}_{f}\) from its initial state \(\mathbf{x}_{0}\), the _average linear velocity_ \(v_{\text{av}}\) of the robot during the execution of its task in the cluttered environment, and the _average execution time per iteration_ \(t_{\text{exec.}}\) of the control algorithm.
Successful task completion is characterized by the robot reaching the desired pose without colliding with obstacles within a predefined finite time, i.e., \(\mathcal{T}_{\text{c}}=100\%\), \(\mathcal{N}_{\text{c}}=0\), and \(\mathcal{R}_{\text{lim}}=0\). Furthermore, in all the given scenarios, if the robot fails to reach the desired pose within a duration of \(70\,\mathrm{s}\) while successfully avoiding collisions with obstacles, we classify the simulation episode as reaching a local minimum, indicated by \(\mathcal{R}_{\text{lim}}=1\).

#### V-A4 Simulation Results

Table I presents the performance analysis of the proposed U-MPPI and the baseline MPPI control strategies, considering the three predefined scenarios. For each scenario, two trials were conducted over the 25 individual forests, resulting in a total of 50 tasks (\(\mathcal{N}_{T}=50\)). In _Scenario #1_, where \(d_{\text{min}}^{\text{obs}}=1.5\,\mathrm{m}\), it is noteworthy that U-MPPI outperforms MPPI. Specifically, U-MPPI achieves a notably higher task completion percentage (\(\mathcal{T}_{c}=98.78\%\)) compared to MPPI (\(\mathcal{T}_{c}=92.86\%\)), effectively avoids collisions (\(\mathcal{N}_{c}=0\)), mitigates local minimum occurrences (\(\mathcal{R}_{\text{lim}}=2\)), and achieves a significantly higher success rate (\(\mathcal{S}_{R}=96\%\) vs. \(\mathcal{S}_{R}=78\%\) when MPPI is utilized). Furthermore, it successfully navigates the cluttered environment with a slightly improved average linear velocity \(v_{\text{av}}\), which exhibits a very low standard deviation and approaches the maximum desired speed \(v_{\max}\) of \(2\,\mathrm{m}/\mathrm{s}\) (likewise observed in the other two scenarios). Similarly, in _Scenario #2_, with a minimum obstacle distance of \(d_{\text{min}}^{\text{obs}}=2\,\mathrm{m}\), U-MPPI achieves a perfect task completion rate of \(100\%\), outperforming MPPI's \(98\%\), as it successfully avoids collisions and local minima, surpassing the baseline MPPI, which experienced one collision with obstacles (\(\mathcal{N}_{c}=1\)) and encountered two instances of local minima (\(\mathcal{R}_{\text{lim}}=2\)). In the least cluttered scenario, _Scenario #3_, both control strategies effectively complete all assigned tasks while successfully avoiding obstacles in the cluttered environment. However, U-MPPI stands out by offering a slightly more direct route towards the desired pose, with the robot traveling an average distance \(d_{\text{av}}\) of approximately \(71.98\,\mathrm{m}\), compared to \(72.19\,\mathrm{m}\) when utilizing MPPI. On the contrary, in _Scenarios #1_ and _#2_, MPPI demonstrates an enhanced performance in terms of the average distance traveled \(d_{\text{av}}\) by the robot when compared to our proposed U-MPPI. In the last column of Table I, despite both control methods ensuring _real-time_ performance (since \(t_{\text{exec.}}<33.33\,\mathrm{ms}\)), it is worth emphasizing that the average execution time \(t_{\text{exec.}}\) of our proposed U-MPPI control strategy is slightly shorter than that of MPPI. This can be attributed to the parallel implementation of the U-MPPI algorithm on GPU, where each thread is responsible for computing the dynamics and costs of the entire batch when sampling from all sigma points (\(\mathtt{SM}_{1}\)).
On the other hand, the parallel implementation of MPPI, as well as U-MPPI with sampling mode 0 (\(\mathtt{SM}_{0}\)), employs a single thread to compute each sampled trajectory, resulting in a relatively longer execution time, as evidenced by the intensive simulations in Table I, as well as Tests #3 and #4 in Table II, where only the mean \(\mathcal{X}_{k}^{(0)}\) is used for trajectory sampling (i.e., \(\mathtt{SM}_{0}\)). To summarize, the intensive simulations clearly demonstrate that our U-MPPI method consistently outperforms the baseline MPPI control framework in all tested scenarios, particularly in environments with higher levels of clutter.

\begin{table} \begin{tabular}{|c||c|c|c|c|c|c|} \hline Scheme & \(\mathcal{R}_{\text{lim}}(\mathcal{N}_{c})\) & \(\mathcal{S}_{R}\) [\%] & \(\mathcal{T}_{c}\) [\%] & \(d_{\text{av}}\) [\(\mathrm{m}\)] & \(v_{\text{av}}\) [\(\mathrm{m}/\mathrm{s}\)] & \(t_{\text{exec.}}\) [\(\mathrm{ms}\)] \\ \hline \hline \multicolumn{7}{|c|}{_Scenario \#1:_ \(v_{\max}=2\,\mathrm{m}/\mathrm{s}\), \(d_{\text{min}}^{\text{obs}}=1.5\,\mathrm{m}\), \(\gamma=1\), \(w_{\text{crash}}=10^{3}\)} \\ \hline MPPI & 9 (2) & 78 & 92.86 & 75.18 & \(1.85\pm 0.21\) & 9.68 \\ U-MPPI & 2 (0) & 96 & 98.78 & 75.34 & \(1.85\pm 0.18\) & 9.12 \\ \hline \hline \multicolumn{7}{|c|}{_Scenario \#2:_ \(v_{\max}=3\,\mathrm{m}/\mathrm{s}\), \(d_{\text{min}}^{\text{obs}}=2\,\mathrm{m}\), \(\gamma=1\), \(w_{\text{crash}}=10^{3}\)} \\ \hline MPPI & 2 (1) & 94 & 98 & 75.31 & \(2.49\pm 0.74\) & 10.03 \\ U-MPPI & 0 (0) & 100 & 100 & 75.78 & \(2.53\pm 0.49\) & 8.87 \\ \hline \hline \multicolumn{7}{|c|}{_Scenario \#3:_ \(v_{\max}=4\,\mathrm{m}/\mathrm{s}\), \(d_{\text{min}}^{\text{obs}}=3\,\mathrm{m}\), \(\gamma=1\), \(w_{\text{crash}}=10^{3}\)} \\ \hline MPPI & 0 (0) & 100 & 100 & 72.19 & \(3.52\pm 0.74\) & 8.55 \\ U-MPPI & 0 (0) & 100 & 100 & 71.98 & \(3.54\pm 0.59\) & 7.7 \\ \hline \end{tabular} \end{table} TABLE I: Performance comparisons of the two control schemes, where the gray cells represent better results.

These remarkable results can be credited to two key factors: the effective utilization of an unscented-based sampling strategy, which provides more flexible and efficient trajectories, and the incorporation of a _risk-sensitive_ (RS) cost function that explicitly takes into account risk and uncertainty during the trajectory evaluation process; thanks to the incorporation of these crucial components, our approach ensures a significantly enhanced exploration of the state-space of the controlled system, even while leveraging the same injected Gaussian noise \(\delta\mathbf{u}_{k}\) into the mean control sequence, effectively reducing the likelihood of being trapped in local minima and yielding a safer and more resilient control system that is suitable for aggressive navigation in highly complex cluttered environments. To achieve a comprehensive understanding of how the behavior of the U-MPPI control strategy is affected by integrating the proposed sampling strategy and RS cost function, we expanded our intensive simulations in Table II to include varying operating conditions and hyper-parameters, differing from those utilized in Section V-A1.
More precisely, in the first four intensive simulations (namely, Test #1 to Test #4), we investigate the potential benefits of integrating the RS cost function into the U-MPPI control strategy through two approaches: (i) reducing the collision weighting coefficient \(w_{\text{crash}}\) (specifically, Tests #1 and #2), and (ii) adopting sampling mode 0 (\(\mathtt{SM}_{0}\)) as an alternative to the default mode \(\mathtt{SM}_{1}\) (i.e., Tests #3 and #4). Additionally, in the subsequent four tests, we extensively analyze the influence of the UT parameters on the performance of U-MPPI. For Tests #1 and #2, we replicated the U-MPPI simulations presented in Table I, specifically for _Scenarios #1_ and _#2_, by assuming a reduced collision weighting coefficient \(w_{\text{crash}}\) of \(500\), representing half of its nominal value. We can clearly observe that lowering the value of \(w_{\text{crash}}\) has no significant impact on the success rate \(\mathcal{S}_{R}\) (as also depicted in Fig. 5(a)) and the task completion rate \(\mathcal{T}_{c}\). Nevertheless, it demonstrates improved performance in the robot's average travel distance \(d_{\text{av}}\) for completing the assigned tasks in both scenarios, outperforming both U-MPPI and MPPI as indicated in Table I. As an example, in _Scenario #2_ shown in Fig. 5(b), we can observe that \(d_{\text{av}}\) is approximately \(1.23\,\mathrm{m}\) shorter than that of U-MPPI when \(w_{\text{crash}}\) is set to \(10^{3}\). On the contrary, we empirically observed that reducing \(w_{\text{crash}}\) in the case of MPPI, which utilizes a _risk-neutral_ technique for evaluating sampled trajectories (as expressed in (17)), does not lead to a performance improvement, as depicted in Fig. 5(c). For instance, in _Scenario #1_, it can be noted from Fig. 5(c) that the success rate \(\mathcal{S}_{R}\) experiences a decline from 78% to 72% with the reduction of \(w_{\text{crash}}\). By employing sampling mode 0 (\(\mathtt{SM}_{0}\)) in Tests #3 and #4 as an alternative to the default sampling mode \(\mathtt{SM}_{1}\) in U-MPPI, a slight decrease in performance is observed. However, U-MPPI continues to demonstrate impressive capabilities in successfully accomplishing assigned tasks and navigating around obstacles, outperforming the classical MPPI, particularly in _Scenario #1_ (refer to Fig. 5(d)), owing to the integration of our efficient RS cost function for trajectory assessment. Furthermore, the comprehensive simulations performed in Tests #3 and #4 highlight the importance of employing the default sampling strategy \(\mathtt{SM}_{1}\), which takes into account all sigma points, in extremely challenging scenarios. This strategy leads to enhancements in both the success rate and the trajectory quality of the robot, as depicted in the illustrative example presented in Fig. 6(a). We now investigate how the key UT parameters (namely, \(\mathbf{\Sigma}_{0},k_{\sigma},\alpha\)) affect the distribution of sampled rollouts and their subsequent influence on the performance of the U-MPPI algorithm; to achieve this, we explore their effects in highly cluttered environments, with a specific focus on _Scenario #1_. In Test #5, the initial state covariance matrix \(\mathbf{\Sigma}_{0}\) is increased from \(0.001\mathbf{I}_{3}\) to \(0.005\mathbf{I}_{3}\).
It is observed that this increase in \(\mathbf{\Sigma}_{0}\) results in a more conservative yet safer trajectory, with a success rate \(\mathcal{S}_{R}\) of 100% and an average traveled distance \(d_{\text{av}}\) of \(77.19\,\mathrm{m}\), compared to a success rate of 96% and an average distance of \(75.34\,\mathrm{m}\) when \(\mathbf{\Sigma}_{0}\) is set to \(0.001\mathbf{I}_{3}\), as shown in Table I.

\begin{table} \begin{tabular}{|c||c|c|c|c|c|c|} \hline Test No. & \(\mathcal{R}_{\text{lim}}(\mathcal{N}_{c})\) & \(\mathcal{S}_{R}\) [\%] & \(\mathcal{T}_{c}\) [\%] & \(d_{\text{av}}\) [m] & \(v_{\text{av}}\) [m/s] & \(t_{\text{exec.}}\) [ms] \\ \hline \hline \multicolumn{7}{|c|}{_Scenarios \#1 and \#2:_ \(w_{\text{crash}}=500\) instead of \(w_{\text{crash}}=10^{3}\)} \\ \hline Test \#1 & 2 (0) & 96 & 99.4 & 74.43 & \(1.86\pm 0.15\) & 8.69 \\ Test \#2 & 0 (0) & 100 & 100 & 74.55 & \(2.56\pm 0.56\) & 8.9 \\ \hline \hline \multicolumn{7}{|c|}{_Scenarios \#1 and \#2:_ \(\mathtt{SM}_{0}\) instead of \(\mathtt{SM}_{1}\)} \\ \hline Test \#3 & 3 (0) & 94 & 98.24 & 75.88 & \(1.84\pm 0.23\) & 12.87 \\ Test \#4 & 1 (0) & 98 & 99.8 & 76.87 & \(2.47\pm 0.68\) & 12.59 \\ \hline \hline \multicolumn{7}{|c|}{_Scenario \#1:_ Impact of UT parameters (\(\mathbf{\Sigma}_{0},k_{\sigma},\alpha\))} \\ \hline Test \#5 & 0 (0) & 100 & 100 & 77.19 & \(1.79\pm 0.24\) & 8.80 \\ Test \#6 & 1 (0) & 98 & 98.7 & 74.65 & \(1.80\pm 0.28\) & 8.88 \\ Test \#7 & 4 (0) & 92 & 96.88 & 75.75 & \(1.81\pm 0.28\) & 9.46 \\ Test \#8 & 9 (0) & 82 & 92.91 & 75.25 & \(1.83\pm 0.24\) & 9.02 \\ \hline \end{tabular} \end{table} TABLE II: Influence of the collision weighting coefficient \(w_{\text{crash}}\), sampling mode (i.e., \(\mathtt{SM}_{0}\) and \(\mathtt{SM}_{1}\)), and UT parameters (namely, \(\mathbf{\Sigma}_{0},k_{\sigma},\alpha\)) on U-MPPI performance.

Fig. 5: Impact of decreasing the collision weighting coefficient \(w_{\text{crash}}\) on (a) U-MPPI success rate \(\mathcal{S}_{R}\), (b) average distance traveled by the robot \(d_{\text{av}}\) in U-MPPI, and (c) MPPI success rate \(\mathcal{S}_{R}\), as well as (d) the effect of utilizing sampling mode 0 (\(\mathtt{SM}_{0}\)) on the U-MPPI success rate \(\mathcal{S}_{R}\) compared to the MPPI success rate.

During Tests #6 and #7, we adjust the UT scaling parameters (\(k_{\sigma}\) and \(\alpha\)), which control the spread of the sigma points; specifically, we increase \(k_{\sigma}\) from \(0.5\) to \(3\) in Test #6 and decrease \(\alpha\) from \(1\) to \(0.1\) in Test #7, while keeping all other parameters constant. It is noteworthy to observe in Test #6 that assigning a higher value to \(k_{\sigma}\), along with \(\alpha=1\), generates more widely spread trajectories that cover a larger state space. This enables the robot to explore the environment more extensively and find better solutions, while improving the quality of the robot trajectory (as \(d_{\text{av}}=74.65\,\mathrm{m}\)), thereby reducing the likelihood of getting trapped in local minima. Conversely, reducing the spread of the sigma points by assigning very low values to \(\alpha\) increases the likelihood of the robot becoming trapped in local minima, as illustrated by the simulations conducted during Test #7.
In Test #8, we replicated the simulations from Test #7 using the same reduced UT scaling parameters (\(k_{\sigma}=0.5\) and \(\alpha=0.1\)), while excluding the _risk-sensitive_ behavior by setting the _risk-sensitivity_ parameter \(\gamma\) to 0, resulting in a constant penalty coefficients matrix \(Q_{\text{rs}}\) that is equal to the weighting matrix \(Q\). Such a setup yields a sampling strategy closely resembling that of MPPI, resulting in the worst performance among the U-MPPI tests, albeit still marginally better than the MPPI performance reported in Table I. In Fig. 6, we showcase the behavior of MPPI and U-MPPI (utilizing the U-MPPI-specific sampling modes) in one of the randomly generated \(50\,\mathrm{m}\times 50\,\mathrm{m}\) cluttered environments with obstacles placed \(2\,\mathrm{m}\) apart, i.e., _Scenario #2_. As shown in Fig. 6(a), both control strategies successfully achieve collision-free navigation in the cluttered environment. However, U-MPPI in the default sampling mode 1 (\(\mathtt{SM}_{1}\)) demonstrates a significantly shorter route to the desired pose, with a robot trajectory length \(d_{\text{av}}\) of \(75.45\,\mathrm{m}\), compared to \(77.39\,\mathrm{m}\) for the classical MPPI and \(78.33\,\mathrm{m}\) for U-MPPI in sampling mode 0 (\(\mathtt{SM}_{0}\)). In the given cluttered environment, as depicted in Fig. 6(b), MPPI demonstrates a smoother velocity profile compared to U-MPPI (see Figs. 6(c) and 6(d)), with an average traveling speed \(v_{\text{av}}\) of \(2.63\,\mathrm{m/s}\), as opposed to \(v_{\text{av}}=2.52\,\mathrm{m/s}\) for U-MPPI (\(\mathtt{SM}_{1}\)) and \(v_{\text{av}}=2.57\,\mathrm{m/s}\) for U-MPPI (\(\mathtt{SM}_{0}\)). Furthermore, it is worth noting that none of the control strategies violate the control (namely, velocity) constraint, which is defined as \(v\leq v_{\max}=3\,\mathrm{m/s}\), as observed from the velocity profiles.

### _Aggressive Navigation in Unknown Environments_

Although the extensive simulations of our proposed control strategies in Section V-A were performed using the fully autonomous ClearPath Jackal robot to ensure their effectiveness and validate their performance in realistic scenarios, the autonomous navigation tasks in cluttered environments were carried out under the assumption that the costmap representing the environment is known beforehand. Such a configuration can be limiting in many real-world scenarios, where autonomous robot systems often need to operate in _partially_ observed environments due to the constraints of limited sensor range and the impracticality of obtaining a complete map before task deployment [10]. To tackle this challenge, the robot needs to construct a _real-time_ 2D local costmap, also known as a 2D occupancy grid map, centered at its current state; this grid map is utilized to store information about obstacles in the robot's surrounding area, gathered from the incoming sensory data acquired by its on-board sensor. To this end, we utilize the _costmap_2d_ ROS package to generate the occupancy grid based on the sensory data, as depicted in Fig. 7(b) [29]. Afterward, the local costmap is incorporated into the optimization problem of the sampling-based MPC algorithm to assess sampled trajectories, aiming to achieve collision-free navigation. We refer to our previous work [16] and the corresponding open-source code.2 In this work, we successfully integrated the 2D grid map into the MPPI algorithm.
Footnote 2: [https://github.com/IhabMohamed/log-MPPI_ros](https://github.com/IhabMohamed/log-MPPI_ros)

#### V-B1 Simulation Setup
We employed the same simulation setup for both control strategies previously presented in Section V-A1, with the exception that: (i) the collision indicator function \(q_{\text{crash}}(\mathbf{x})\) is herein calculated based on the robot-centered 2D grid map, and (ii) the prediction time is reduced to \(6\,\mathrm{s}\), resulting in \(N=180\), to be compatible with the size of the grid map. In this study, a 16-beam Velodyne LiDAR sensor mounted on the Clearpath Jackal AGV is used to construct the grid map (local costmap), with the costmap having dimensions of \(240\,\mathrm{cells}\times 240\,\mathrm{cells}\) and a resolution of \(0.05\,\mathrm{m/cell}\).

#### V-B2 Simulation Scenarios
For the performance evaluation, we utilize two types of forest-like cluttered environments within the Gazebo simulator, each measuring \(50\,\mathrm{m}\times 50\,\mathrm{m}\). The first type, referred to as _Forest #1_, consists of trees of different sizes with a density of \(0.1\,\mathrm{trees/m^{2}}\), whereas the second type, known as _Forest #2_, contains tree-shaped obstacles with a density of \(0.2\,\mathrm{trees/m^{2}}\). For _Forest #1_, the desired velocity \(v_{\text{max}}\) is set to \(3\,\mathrm{m/s}\), while for _Forest #2_, it is set to \(2\,\mathrm{m/s}\).

Fig. 6: Performance analysis of MPPI and U-MPPI in a \(50\,\mathrm{m}\times 50\,\mathrm{m}\) cluttered environment with \(2\,\mathrm{m}\) obstacle spacing (Scenario #2), utilizing U-MPPI-specific sampling modes (\(\mathtt{SM}_{0}\) and \(\mathtt{SM}_{1}\)).

#### V-B3 Performance Metrics
To assess the efficiency of the U-MPPI control scheme in unknown environments, we compare it with the baseline MPPI by evaluating the predefined set of performance metrics outlined in Section V-A3, namely: \(\mathcal{R}_{\text{lm}}\), \(\mathcal{N}_{c}\), \(\mathcal{S}_{R}\), \(\mathcal{T}_{c}\), \(d_{\text{av}}\), \(v_{\text{av}}\), and \(t_{\text{exec}}\). In the first forest-like environment (_Forest #1_), the robot is directed to autonomously navigate from an initial pose \(G_{0}=[0,0,0]^{\top}\) to a sequence of desired poses expressed in ([m], [m], [deg]): \(G_{1}=[20,20,45]^{\top}\), \(G_{2}=[-18,2,0]^{\top}\), \(G_{3}=[20,-21,90]^{\top}\), \(G_{4}=[20,20,0]^{\top}\), and ultimately reaching a stop at \(G_{5}=[0,0,100]^{\top}\). Meanwhile, in _Forest #2_, for the sake of simplicity, the robot navigates solely from \(G_{0}\) to \(G_{3}\), where it comes to a stop.

#### V-B4 Simulation Results
In Table III, we present a comparison of performance statistics for the proposed control strategies in achieving goal-oriented autonomous navigation in both _Forest #1_ and _Forest #2_, where the statistics are averaged over \(10\) trials for each environment. The obtained results validate the anticipated superiority of U-MPPI over MPPI in all scenarios, owing to its efficient unscented-based sampling distribution policy and _risk-sensitive_ trajectory evaluation technique.
This superiority is evident in several aspects, including: (i) achieving a higher task completion rate \(\mathcal{T}_{c}\), such as \(\mathcal{T}_{c}=100\%\) with U-MPPI versus 91.01% when using MPPI in _Forest #1_, (ii) reducing the probability of encountering local minima, as demonstrated by the comparison in _Forest #2_, where U-MPPI leads to \(\mathcal{R}_{\text{lm}}=1\) compared to \(\mathcal{R}_{\text{lm}}=4\) with MPPI, and (iii) improving the quality of the generated robot trajectory, as evidenced by a significantly shorter average distance traveled by the robot \(d_{\text{av}}\), particularly noticeable in _Forest #2_. Furthermore, it is worth emphasizing that both control methods guarantee _real-time_ performance, highlighting the strength of the sampling-based MPC algorithm, particularly our proposed U-MPPI, in incorporating not only the local costmap but also the unscented transform into the optimization problem without introducing additional complexity.

## VI Conclusion and Future Work
In this paper, we proposed the U-MPPI control strategy, a novel methodology that enhances the vanilla MPPI algorithm by leveraging the unscented transform for two primary objectives. Firstly, it regulates the propagation of the dynamical system, resulting in a more effective sampling distribution policy that propagates both the mean \(\mathbf{\bar{x}}_{k}\) and covariance \(\mathbf{\Sigma}_{k}\) of the state vector \(\mathbf{x}_{k}\) at each time-step \(k\). Secondly, it incorporates a _risk-sensitive_ cost function that explicitly accounts for risk or uncertainty throughout the trajectory evaluation process. Through extensive simulations and real-world demonstrations, we demonstrated the effectiveness of U-MPPI in achieving aggressive collision-free navigation in both known and unknown cluttered environments. Compared to MPPI, our approach achieved a substantial improvement in state-space exploration while utilizing the same injected Gaussian noise \(\delta\mathbf{u}_{k}\) in the mean control sequence; as a result, it yielded higher success and task completion rates, effectively minimized the likelihood of getting trapped in local minima, and enabled the robot to identify feasible trajectories that avoid collisions. Our future plan involves incorporating chance constraints [4] into the U-MPPI control architecture to effectively address uncertainties in system dynamics and the environment, including moving obstacles, thereby enhancing the safety and robustness of the control system under uncertain conditions, especially in safety-critical applications.
2305.13527
Aligning the Norwegian UD Treebank with Entity and Coreference Information
This paper presents a merged collection of entity and coreference annotated data grounded in the Universal Dependencies (UD) treebanks for the two written forms of Norwegian: Bokmål and Nynorsk. The aligned and converted corpora are the Norwegian Named Entities (NorNE) and Norwegian Anaphora Resolution Corpus (NARC). While NorNE is aligned with an older version of the treebank, NARC is misaligned and requires extensive transformation from the original annotations to the UD structure and CoNLL-U format. We here demonstrate the conversion and alignment processes, along with an analysis of discovered issues and errors in the data - some of which include data split overlaps in the original treebank. These procedures and the developed system may prove helpful for future corpus alignment and coreference annotation endeavors. The merged corpora comprise the first Norwegian UD treebank enriched with named entities and coreference information.
Tollef Emil Jørgensen, Andre Kåsen
2023-05-22T22:44:53Z
http://arxiv.org/abs/2305.13527v2
# Aligning the Norwegian UD Treebank with Entity and Coreference Information

###### Abstract
This paper presents a merged collection of entity and coreference annotated data grounded in the Universal Dependencies (UD) treebanks for the two written forms of Norwegian: Bokmål and Nynorsk. The aligned and converted corpora are the _Norwegian Named Entities_ (NorNE) and _Norwegian Anaphora Resolution Corpus_ (NARC). While NorNE is aligned with an older version of the treebank, NARC is misaligned and requires extensive transformation from the original annotations to the UD structure and CoNLL-U format. We here demonstrate the conversion and alignment processes, along with an analysis of discovered issues and errors in the data - some of which include data split overlaps in the original treebank. These procedures and the developed system may prove helpful for future corpus alignment and coreference annotation endeavors. The merged corpora comprise the first Norwegian UD treebank enriched with named entities and coreference information.

## 1 Introduction
Resources for the Norwegian language have drastically increased in the last few years. Large text corpora such as the Norwegian Newspapers Corpus1 and the Norwegian Colossal Corpus (Kummervold et al., 2022) supported the development of transformer-based models: _NB-BERT_ (Kummervold et al., 2021) and _NorBERT_ (Kutuzov et al., 2021). Moreover, there are task-specific resources for document-level and fine-grained sentiment analysis (Velldal et al., 2018; Barnes et al., 2019; Øvrelid et al., 2020), dependency syntax, part-of-speech, morphological features, lemmatization (Solberg et al., 2014; Øvrelid and Hohle, 2016), named entity recognition (Jørgensen et al., 2019) and coreference resolution (Mæhlum et al., 2022).

Footnote 1: [https://www.nb.no/sprakbanken/ressurskatalog/gai-nb-no-sbr-4/](https://www.nb.no/sprakbanken/ressurskatalog/gai-nb-no-sbr-4/)

In addition to UD Norwegian Bokmål and UD Norwegian Nynorsk, there are two more available treebanks: 1) _Language Infrastructure made Accessible_ (LIA) (Øvrelid et al., 2018) and 2) the _Norwegian Dialect Corpus_ (NDC) (Kåsen et al., 2022). These are based on speech transcripts rather than written sources like the former two. LIA has also been converted to UD with the procedure from Øvrelid and Hohle (2016). Currently, no up-to-date baselines2 exist for Norwegian coreference resolution, which motivated this work, in part, to conform to the CorefUD initiative (Nedoluzhko et al., 2022), whose goal is to unify coreference corpora in a standardized CoNLL-U format.

Footnote 2: There is, however, an earlier effort for Norwegian coreference found in: Borthen (2004), Nøklestad and Johansson (2006), Hølen (2007), Johanson and Nøklestad (2008) and Nøklestad (2009)

The following sections describe related work, an overview of data sources and statistics, conversion, alignment with UD, error analysis, conclusions, and limitations.

## 2 Related Work
NARC is annotated using the BRAT annotation tool (Stenetorp et al., 2012). While conversion scripts are available for the resulting pairs of _.ann_ and _.txt_ files, such as the official ones from BRAT3, none sufficed for the annotation scheme used in NARC, due to cases like discontinuous mentions, validation checks for self-referring clusters, and more. An example of BRAT outputs and CoNLL can be found in the LitBank corpus (Bamman et al., 2019), but the initial annotations used in BRAT there differ from those of NARC, and no conversion code is available.
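For orientation before the pipeline described next: BRAT's standoff format stores, next to each _.txt_ file, an _.ann_ file whose text-bound annotations are tab-separated lines such as `T1<TAB>Markable 10 17<TAB>Oslo` (relation lines start with `R`). A minimal reader covering only these simple cases, our illustration rather than the conversion code developed in this work, could look like this:

```python
def read_ann(path):
    """Parse text-bound annotations (T...) and relations (R...) from a BRAT .ann file.

    Returns mentions as {id: (label, start, end, text)} and relations as
    {id: (label, arg1_id, arg2_id)}. Discontinuous spans ("10 17;20 25") and
    the other NARC-specific cases discussed here are deliberately skipped.
    """
    mentions, relations = {}, {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            ann_id, body, *rest = line.rstrip("\n").split("\t")
            if ann_id.startswith("T"):
                if ";" in body:        # discontinuous mention: needs special handling
                    continue
                label, start, end = body.split(" ")
                mentions[ann_id] = (label, int(start), int(end), rest[0] if rest else "")
            elif ann_id.startswith("R"):
                label, arg1, arg2 = body.split(" ")
                relations[ann_id] = (label, arg1.split(":")[1], arg2.split(":")[1])
    return mentions, relations
```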
We set up a conversion pipeline to the commonly used JSON line format for coreference resolution, as popularized by Lee et al. (2018), and finally to CoNLL-U4, conforming to the CorefUD standards and validation requirements (Nedoluzhko et al., 2022). The procedures were validated throughout the alignment process using tools from UD5 and Udapi (Popel et al., 2017).

Footnote 4: [https://universaldependencies.org/format.html](https://universaldependencies.org/format.html)

Footnote 5: [https://github.com/UniversalDependencies/tools](https://github.com/UniversalDependencies/tools)

## 3 Data
Three key data sources are involved in this project: the UD treebanks for Bokmål and Nynorsk, NARC, and NorNE. Following are brief descriptions along with statistics on the merging process.

### Universal Dependencies
The current UD treebank is based on the Norwegian treebank (Solberg et al., 2014), one of the first widely used resources for Norwegian, initially developed within an in-house framework corresponding to the theories and practices described and documented in Faarlund et al. (1998). The inventory of part-of-speech tags follows those defined for the Oslo-Bergen tagger (Hagen et al., 2000). The treebank was later converted and included in Universal Dependencies (Øvrelid and Hohle, 2016). It is structured in the CoNLL-U format, bound by sentence identifiers without document-level bounds, as shown in Appendix A.1. As of April 2023, the UD treebanks for both Bokmål6 and Nynorsk7 have been updated to the latest version of UD (version 2.12).

Footnote 6: [https://github.com/UniversalDependencies/UD_Norwegian-Bokmaalfchangelog](https://github.com/UniversalDependencies/UD_Norwegian-Bokmaalfchangelog)

Footnote 7: [https://github.com/UniversalDependencies/UD_Norwegian-Nynorsk&changelog](https://github.com/UniversalDependencies/UD_Norwegian-Nynorsk&changelog)

### NARC
NARC (Mæhlum et al., 2022) is the first openly available corpus for Norwegian coreference resolution. The corpus consists mainly of news texts (85%), the rest being government reports, parliamentary transcripts, and blog posts. Its annotations include markables, either as singleton mentions or as referred relational mentions, the latter subdivided into four types: anaphoric, cataphoric, split antecedent and bridging relations. There are three major issues regarding conversion: 1) NARC is released per document, lacking sentence identifiers for direct alignment with UD. 2) It is annotated on a character-level basis, whereas the CoNLL-U format requires word-level annotations. 3) Some documents do not exist in the UD treebanks. We will revisit these issues in Section 4.

### NorNE
NorNE (Jørgensen et al., 2019) is one of the most extensive corpora for Norwegian named entities, annotated with persons, organizations, locations, geo-political entities, products, and events, in addition to a separate _derived_ class for nominals derived from a name. While the NorNE corpus is already an enrichment of the UD treebank, UD has since received updates, mostly in terms of corrected token HEADs. The alignment process only included extracting the CoNLL-U _MISC_ field (the named entities) from NorNE and placing the entities at their matching token indices in UD. For an experimental exploration of NorNE, the reader is advised to consult Aasmoe (2019). Earlier efforts for Norwegian with respect to NER can be found in Johannessen et al. (2005), Haaland (2008) and Johansen (2019). The mentioned update of UD ensures that NorNE, through the conversion processes described in this paper, inherits all updated values.
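Since NorNE stores its entity annotation in the MISC column (the tenth CoNLL-U field), the transfer onto the updated UD treebank can be as simple as copying that field token by token once sentences are matched. The sketch below is a simplified illustration of this idea, assuming already-paired token lines; it is not the system's actual code.

```python
def transfer_misc(norne_lines, ud_lines):
    """Copy the MISC column (10th CoNLL-U field, holding NorNE's entity tags)
    from NorNE token lines onto the matching, already-aligned UD token lines,
    keeping UD's updated HEAD values."""
    merged = []
    for norne, ud in zip(norne_lines, ud_lines):
        if ud.startswith("#") or not ud.strip():   # comments / sentence breaks
            merged.append(ud)
            continue
        n_cols, u_cols = norne.split("\t"), ud.split("\t")
        assert n_cols[1] == u_cols[1], "FORM mismatch between NorNE and UD"
        u_cols[9] = n_cols[9]                      # MISC carries the entity tag
        merged.append("\t".join(u_cols))
    return merged
```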
### Statistics
As the annotated documents in NARC contain a subset of the existing UD documents, there is an obvious information loss. Full statistics on the number of sentences, tokens and more, across UD, NorNE and NARC, can be found in Appendix B. The information loss from NARC to the aligned final corpora is shown in Table 1. We cannot reduce these losses, as the texts simply do not occur in UD. However, much of the lost data were unrelated terms preceding the document; an example of this is shown in Appendix A.2.

\begin{table}
\begin{tabular}{l r r r}
\hline \hline NARC alignment loss & Bokmål (\%) & Nynorsk (\%) & Total \\ \hline
Sentences & 789 (4.8\%) & 281 (2.2\%) & 1,070 \\
Tokens & 13,510 (5.2\%) & 6,562 (3.1\%) & 20,073 \\
Markables & 2,410 (4.4\%) & 1,071 (2.3\%) & 3,483 \\
Mentions & 3,582 (4.6\%) & 1,522 (2.4\%) & 5,104 \\
SplitAnte clusters & 6 (4.3\%) & 1 (1.2\%) & 7 \\
Bridging clusters & 35 (3.4\%) & 27 (3.1\%) & 62 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Information loss during the alignment of NARC

We remind the reader that the corpus contains ~85% news texts, which often include topics, categories, and other text that may not be related to the article's main body. As such, the raw numbers may not represent an equal loss regarding usability and realistic use cases. All numbers are extracted using Udapi (Popel et al., 2017), both its command-line tool and the Python integration8 (corefud.MiscStats and corefud.Stats modules). The _NARC_ column represents the converted CoNLL-U formatted NARC, whereas the _Aligned_ column represents the aligned train/test/dev splits. While the statistics differ from those presented in the original paper (Mæhlum et al., 2022), the categories are described as follows:

Footnote 8: [https://github.com/udapi/udapi-python](https://github.com/udapi/udapi-python)

* Markables are all unique entities in the document (including singletons)
* Mentions are all occurrences of and references to the markables
* Bridging and split antecedent clusters refer to the count of grouped clusters of each respective mention type - not the number of relations within each group. See Appendix B.1 for examples.

## 4 Coreference Conversion and Alignment
The initial part of aligning NARC is converting the original annotation files (_.ann/.txt_ pairs) to the CoNLL-U format. A natural step along the way was to parse these files into the JSON line format with sentence, token, and clustering information. The JSON line files are then converted to CoNLL-U and aligned with the UD treebanks. The steps involved are:

**Ann\(\rightarrow\)JSON conversion**
1. Extract markables and mentions, bridging and split antecedents; group discontinuous mentions
2. Find connected clusters by building a graph of coreference links
3. Map character-based indices to word indices (see the sketch below)
4. Restructure word-indexed markables and clusters into a JSON line (one .jsonl per .ann)
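Step 3 above, mapping character offsets to word indices, can be done by walking over tokens with known character offsets. A minimal version, our illustration under the assumption that tokens carry their start offsets in the raw text, is:

```python
def char_span_to_token_span(tokens, char_start, char_end):
    """Map a character-level span to (first_token, last_token) indices.

    `tokens` is a list of (token_text, char_offset) pairs covering the raw
    document text, e.g. collected while reading the BRAT .txt file.
    """
    first = last = None
    for i, (text, offset) in enumerate(tokens):
        tok_start, tok_end = offset, offset + len(text)
        if first is None and tok_end > char_start:
            first = i
        if tok_start < char_end:
            last = i
    if first is None or last is None or first > last:
        raise ValueError("span does not align with tokenization")
    return first, last

tokens = [("Jonas", 0), ("Gahr", 6), ("Støre", 11), ("smiler", 17)]
assert char_span_to_token_span(tokens, 0, 16) == (0, 2)
```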
**JSON\(\rightarrow\)CoNLL-U conversion**
1. Adjust markables spanning tokens not in their equivalent UD spans
2. Iteratively add markables and mention clusters token-wise, ensuring correct ordering of multi-entity spans according to UD standards (see UD's _Level 6_ validation for coreference and named entities9)
3. Restructure according to the CoNLL-U format guidelines, populating the MISC column and leaving out empty fields to be filled by the UD treebank.

Footnote 9: [https://github.com/UniversalDependencies/tools/blob/master/validate.py#L2112](https://github.com/UniversalDependencies/tools/blob/master/validate.py#L2112)

**NARC\(\rightarrow\)UD alignment**
A highly compressed overview of the alignment process can be described as follows:
(a) Map UD sentence text \(\rightarrow\) UD index
(b) Map UD index \(\rightarrow\) train/test/dev split
(c) Process NARC documents and extract UD index candidate sentences (one-to-many)
(d) For every sentence with multiple candidates, extract its sentence identifiers in both NARC (\(N\)) and UD (\(U\)) and build a cost matrix based on the distances to neighboring indices: \(C_{i,j}=\) sent_to_UD_dist_score\((N_{i},U_{j})\). We then disambiguate by minimizing sentence distances, solving the linear assignment problem for \(C\) (Jonker and Volgenant, 1988).
(e) Verify whether a sentence index is part of more than one UD split. If so, discard the document.
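The disambiguation in step (d) is a classic assignment problem: once the cost matrix is built, an off-the-shelf solver finds the minimum-cost one-to-one matching. The sketch below uses SciPy's solver with a deliberately simplified distance score that merely compares positions relative to the first occurrence on each side; it stands in for the more involved `sent_to_UD_dist_score`.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def disambiguate(narc_ids, ud_ids):
    """Match ambiguous NARC sentence indices to UD candidate indices.

    The cost compares positions relative to the first occurrence on each
    side; this simplified score stands in for sent_to_UD_dist_score.
    """
    C = np.array([[abs((n - narc_ids[0]) - (u - ud_ids[0])) for u in ud_ids]
                  for n in narc_ids], dtype=float)
    rows, cols = linear_sum_assignment(C)   # solves the linear assignment problem
    return {narc_ids[r]: ud_ids[c] for r, c in zip(rows, cols)}

# Two occurrences of the same sentence text and two UD candidate indices:
print(disambiguate(narc_ids=[15, 26], ud_ids=[114, 125]))  # {15: 114, 26: 125}
```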
"kostar **vi** mykje" (costs we a lot) where **vi** (we) is **oss** (us) in UD Nynorsk test, ID 017342. 2. ifdann~20100305-5007021, sentence 15. "ordforar" (mayor) is "ordforaren" (the mayor) in UD Nynorsk train, ID 005311. vtbnn~20031111-1592 has a unique error, where the conjunction "at" (that) is in place of the adposition "ved" (by), token 26 of UD Nynorsk train, ID 012440. #### 5.2.4 Data split overlap Eleven documents were found to span train/test/dev splits in the original treebanks (6 Bokmal, 5 Nynorsk). Although comprising one coherent text, these documents have two parts (with no logical separation), each in a different split in UD. The suggested correction is to update the original treebanks to contain the entire document. Details are found in Appendix C. ## 6 Limitations While the system may be applied to other UD-related expansions, task specific details must be customized in the pipeline. Further, there are likely more UD alignment errors to uncover for data sources other than those described here. ## 7 Conclusions We have presented the merging and alignment of NARC, NorNE, and UD for Norwegian Bokmal and Nynorsk, along with statistics of the final corpora. The processes are modular in the sense that updates to any of the corpora will be supported and will still align with their root in UD. With the developed system supporting the conversion of BRAT annotation files and the alignment of treebanks, we have been able to maximize the included data throughout the merging process. Future work is twofold: 1) correct the data split overlaps in UD and 2) adjust the NARC annotation files according to the findings here to avoid future errors. All related code can be found in the repository UD-NARC10. Footnote 10: [https://github.com/tollefj/UD-NARC](https://github.com/tollefj/UD-NARC) ## Acknowledgements Thanks to Michal Novak and Daniel Zeman for valuable feedback throughout the conversion and alignment process. \begin{table} \begin{tabular}{l l} \hline \hline NARC sentence & UD sentence \\ \hline Illustrasjonsfoto. & Illustrasjonsfoto \\ Illustrasjonsfoto! & Illustrasjonsfoto \\ Illustrasjonsfoto! & Illustrasjonsfoto. \\ Nei! & - Nei? \\ Nei! & - Nei. \\ - Ja. & Ja. \\ \hline \hline \end{tabular} \end{table} Table 2: Examples of tokenization mismatch
2310.01395
Requirements' Characteristics: How do they Impact on Project Budget in a Systems Engineering Context?
Background: Requirements engineering is of a principal importance when starting a new project. However, the number of the requirements involved in a single project can reach up to thousands. Controlling and assuring the quality of natural language requirements (NLRs), in these quantities, is challenging. Aims: In a field study, we investigated with the Swedish Transportation Agency (STA) to what extent the characteristics of requirements had an influence on change requests and budget changes in the project. Method: We choose the following models to characterize system requirements formulated in natural language: Concern-based Model of Requirements (CMR), Requirements Abstractions Model (RAM) and Software-Hardware model (SHM). The classification of the NLRs was conducted by the three authors. The robust statistical measure Fleiss' Kappa was used to verify the reliability of the results. We used descriptive statistics, contingency tables, results from the Chi-Square test of association along with post hoc tests. Finally, a multivariate statistical technique, Correspondence analysis was used in order to provide a means of displaying a set of requirements in two-dimensional graphical form. Results: The results showed that software requirements are associated with less budget cost than hardware requirements. Moreover, software requirements tend to stay open for a longer period indicating that they are "harder" to handle. Finally, the more discussion or interaction on a change request can lower the actual estimated change request cost. Conclusions: The results lead us to a need to further investigate the reasons why the software requirements are treated differently from the hardware requirements, interview the project managers, understand better the way those requirements are formulated and propose effective ways of Software management.
Panagiota Chatzipetrou, Michael Unterkalmsteiner, Tony Gorschek
2023-10-02T17:53:54Z
http://arxiv.org/abs/2310.01395v1
Requirements' Characteristics: How do they Impact on Project Budget in a Systems Engineering Context?

###### Abstract
Background: Requirements engineering is of principal importance when starting a new project. However, the number of requirements involved in a single project can reach into the thousands. Controlling and assuring the quality of natural language requirements (NLRs), in these quantities, is challenging. Aims: In a field study, we investigated with the Swedish Transportation Agency (STA) to what extent the characteristics of requirements had an influence on change requests and budget changes in the project. Method: We chose the following models to characterize system requirements formulated in natural language: the Concern-based Model of Requirements (CMR), the Requirements Abstraction Model (RAM) and the Software-Hardware Model (SHM). The classification of the NLRs was conducted by the three authors. The robust statistical measure Fleiss' Kappa was used to verify the reliability of the results. We used descriptive statistics, contingency tables, and results from the Chi-Square test of association along with post hoc tests. Finally, a multivariate statistical technique, Correspondence analysis, was used in order to provide a means of displaying a set of requirements in two-dimensional graphical form. Results: The results showed that software requirements are associated with less budget cost than hardware requirements. Moreover, software requirements tend to stay open for a longer period, indicating that they are "harder" to handle. Finally, more discussion or interaction on a change request can lower its actual estimated cost. Conclusions: The results point to a need to further investigate the reasons why software requirements are treated differently from hardware requirements, interview the project managers, better understand the way those requirements are formulated, and propose effective ways of software management.

Requirements Engineering, Natural Language Requirements (NLRs), Project budget, Software Management

## I Introduction
Requirements specifications written in natural language text are ubiquitous in the public sector, where projects are implemented following a request-for-tender process. Formal contracts between the ordering party and suppliers serve as the basis for formulating requirements statements, which, in turn, are the starting point for more refined technical requirements and design specifications. In other words, natural language requirements bridge the gap between the needs expressed by the public (government) and the solutions that fulfill those needs, designed and implemented by contractors [19]. We have performed a field study at the Swedish Transport Agency (STA) to investigate the characteristics of natural language requirements (NLRs) and their impact on project execution, as reflected in change requests on these initially defined NLRs. Our goal was to identify any associations between particular types of requirements and budget changes. This information would be beneficial for STA, as it could be used to focus on problematic requirement types early in the project by monitoring their design and implementation more closely. In the studied case, there were dozens of contractors, and the requirements were used not only to prepare for tendering but also to coordinate between all project parties. In the scientific literature, several different models for analyzing and controlling NLRs have been introduced.
We adopted the following three models: the Concern-based Model of Requirements (CMR) [8] proposed by Glinz, the Requirements Abstraction Model (RAM) [9] proposed by Gorschek and Wohlin, and the Software-Hardware Model (SHM), which we incorporated from the systems and software engineering standard for architecture descriptions, ISO/IEC/IEEE 42010 [11]. We chose these models because of their general applicability in the systems engineering context and their successful use in the past to characterize requirements. The classification of the NLRs was conducted by the three authors; each author independently categorized an equal number of requirements. The robust statistical measure Fleiss' Kappa was used to verify the reliability of the results. Moreover, we chose to analyse requirements for which we had a record of how they developed over time in relation to their budget and the employees' interaction (i.e., written comments on the change requests). In our study we aimed to investigate how change requests to NLRs impact a project's budget, but also to understand how system requirements in large infrastructure projects are managed over time. For that reason, we conducted an exploratory case study, in which we applied and present results from descriptive statistics, contingency tables, and the Chi-Square test of association along with post hoc tests. In addition, a multivariate statistical technique, Correspondence analysis, was used in order to illustrate the sets of requirements in two-dimensional graphical form. The results showed that software requirements are treated differently from hardware requirements; in particular, they are associated with less budget while being more difficult to handle. The remainder of the paper is structured as follows: Section II provides an outline of the related work. Section III provides a brief description of the requirements' characterization models that were used in the study. Section IV presents the research methodology of our work. Section V presents the results of the analysis. Finally, in Section VI discussion and conclusions are provided.

## II Related Work
The association between requirements engineering and other activities in the software product development process, project success, and product quality has been the focus of several studies in the past. Damian and Chisan [14] investigated in a case study how improvements in the requirements engineering process affect other development processes. They observed that the introduced practices (feature decomposition, requirements traceability, group analysis sessions, cross-functional teams, structured requirements, testing according to requirements) had payoffs in increased productivity, quality, and improved risk management. Zowghi and Nurmuliani [15] conducted a survey among 52 software development companies in Australia, studying the relationship between requirements volatility, i.e., the potential for change in the business environment or fluctuation in the stakeholders' understanding of the requirements, and project performance measured by the accuracy of schedule and cost estimates. They found support for their hypothesis that requirements volatility leads to budget and schedule overruns. Particular errors in requirements specifications and their impact on project outcomes have also been studied. Veras et al.
[16] classified 2,188 requirements from the aerospace domain and found a surprisingly high number of defective requirements (10%), the most frequent types of errors being external conflicts/consistency, traceability, external completeness and requirement completeness. While the authors suggested application scenarios for their findings and proposed classification (training requirements reviewers, requirements error estimation, checklists for requirements verification, benchmarking requirements specifications), they did not investigate the impact of requirements errors and their type on downstream development. Chari and Agrawal [17] studied how change requests on incorrect or incomplete requirements affect software quality and development effort. They analyzed data from 49 management information system projects and found that, while the resolution of incorrect requirements led to fewer defects, new requirements were generated at the same time, which were associated with an increase in delivered defects and effort. Similarly, Kamata and Tamai [18] investigated, across 72 projects of a software company in Japan, to what degree requirements quality is associated with project performance. They found that the quality of the introductory section of a requirements specification (purpose, scope, definitions, references, overview) is associated with project performance in terms of cost overruns. Unfortunately, the authors did not illustrate how the organization's quality assurance team assessed the quality of requirements. All these studies provide valuable insight into how particular requirements engineering aspects (requirements management activities, requirements quality, and requirements errors) affect downstream development. The purpose of the study presented in this paper is to extend that body of knowledge with the perspective of requirement types, classified according to the models presented in Section III. Moreover, we investigate the impact of the requirements' inherent characteristics, i.e., the frequency of discussions and the overall analysis time, on budget changes. Therefore, the contribution of the paper is twofold: first, we describe the process and the design of the study; second - and this is the main contribution - we seek to understand and explain how software requirements are managed. The case study was applied to real industry data so as to draw interesting and useful conclusions regarding requirements management in relation to the project budget.

## III Requirements characterization models
The purpose of classifying a set of requirements according to a given model is to enhance our understanding of the nature of the requirements that are specified in large-scale infrastructure projects. We therefore chose models that were simple enough (structurally and conceptually) to be applied to realistic requirement statements written by stakeholders unaware of these models. Another criterion was that the chosen models contain operative guidelines or rules on how to classify requirements.

### _Concern-based Model of Requirements (CMR)_
Glinz [8] proposes classifying requirements into four categories:
1. _Functional requirements_, which are related to a functional concern
2. _Non-functional requirements_
   * _Performance requirements_, which are related to a performance concern,
   * _Specific quality requirements_, which are related to a quality concern
3. _Constraints_, which refer to requirements that constrain the solution space beyond what is necessary for meeting the given functional, performance, and specific quality requirements.

Glinz [8] suggests using the following questions, applied in this order, to classify requirements. Does the requirement specify...
1. ... some of the system's behavior, data, input, or reaction to input stimuli - regardless of the way how this is done? \(\implies\) Functional
2. ... restrictions about timing, processing or reaction speed, data volume, or throughput? \(\implies\) Performance
3. ... a specific quality that the system or component shall have? \(\implies\) Specific quality
4. ... any other restriction about what the system shall do, how it shall do it, or any prescribed solution or solution element? \(\implies\) Constraint

We used these questions, allowing however for requirements to be classified into multiple categories, as we soon realized that real-world requirements may cover multiple concerns. A requirement could thus be part of more than one category: when the three authors independently categorized the 215 requirements, they assigned 1 or 0 to each requirement depending on whether it belonged to the respective category, i.e., Functional requirements, Non-functional or Quality requirements, Constraint requirements, Functional AND Quality requirements, etc.

### _Requirements Abstraction Model (RAM)_
Gorschek and Wohlin [9] propose classifying requirements into four abstraction levels: product, feature, function and component level.
1. _Product Level_: requirements on this, most abstract, level do not fit the normal definition of a requirement (e.g., testable and unambiguous). Product Level requirements are considered abstract enough to be comparable directly to the product strategies and indirectly to the organizational strategies.
2. _Feature Level_: requirements on this level are features that the product supports, i.e., the requirements are usually an abstract description of the feature itself.
3. _Function Level_: requirements on this level describe what a user should be able to do: actions that are possible to perform, or non-functional requirements that should be fulfilled.
4. _Component Level_: requirements on this level are of a detailed nature, depicting information that is closer to (or even examples of) how something should be solved.

Similar to the CMR, Gorschek and Wohlin [9] provide a set of questions as a means to operationalize the model and classify requirements:
1. Is the requirement functional or does it describe testable characteristics that the product should have? \(\implies\) Function level
2. Does the requirement consist of a specific suggestion of _how_ something should be solved? \(\implies\) Component level
3. Is the requirement abstract enough to be comparable to the product strategies? \(\implies\) Product level
4. Does the requirement describe a feature that should be supported? \(\implies\) Feature level

We used these questions to classify requirements into _Problem oriented_ requirement formulations (Product, Feature and Function level) and _Solution oriented_ requirement formulations (Component level).
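Because the RAM questions are applied in a fixed order, they can be encoded literally as a small decision function. In our study the yes/no answers came from human judgment, so the sketch below only documents the decision logic rather than automating the classification:

```python
def ram_level(is_functional_or_testable, suggests_how,
              strategy_comparable, describes_feature):
    """Literal encoding of the four RAM questions, applied in the order above.

    Each argument is a yes/no answer; in the study these answers came from
    human judgment, so this function only documents the decision logic.
    """
    if is_functional_or_testable:
        return "Function level"
    if suggests_how:
        return "Component level"
    if strategy_comparable:
        return "Product level"
    return "Feature level" if describes_feature else "unclassified"

def orientation(level):
    """Component level is solution oriented; the other levels are problem oriented."""
    return "Solution oriented" if level == "Component level" else "Problem oriented"

print(orientation(ram_level(False, True, False, False)))  # Solution oriented
```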
### _Software-Hardware Model (SHM)_
Since we were studying a case from the systems engineering context, we were interested in the degree to which the requirements describe hardware, software and mixed product aspects. Hence, we chose to classify the requirements using definitions from the systems and software engineering standard ISO/IEC/IEEE 42010 [11]:
1. _Software intensive requirements_ belong to "any system where software contributes essential influences to the design, construction, deployment, and evolution of the system as a whole". Moreover, requirements that refer to hardware controlled by software are also included in this category.
2. _Hardware intensive requirements_ include only hardware actions or properties that are not controlled by software.

## IV Research Methodology
Our work is driven by the following research questions:

_RQ1: Which requirement characteristics are associated with budget changes within the studied project?_ We wanted to explore whether particular requirements' characteristics, defined by the models described in Section III, can be associated with budget changes during project execution.

_RQ2: Which requirements' inherent characteristics are associated with budget changes within the studied project?_ We wanted to explore whether particular requirements' inherent characteristics, presented in Section IV-B, can be associated with budget changes during project execution.

The above RQs are the starting point of our investigation; they drive our research towards understanding whether, and in which ways, the requirements' characteristics (inherent or not) affect the project budget.

### _Design of the study_
To ensure that the requirements were classified objectively across the three requirements' characterization models, we involved all three authors in the classification process. To assess the reliability of the agreement between the three authors, we applied the robust statistical measure Fleiss' Kappa [1, 2]. Different classifications have been suggested for assessing how good the strength of agreement is, based on the value of the kappa coefficient. Following the guidelines from [3] and [4] (Table I), we considered that there is agreement when the kappa value is above 0.5.

The design of the study is depicted in Fig. 1. The three authors had a kick-off meeting where they discussed the nature of the requirements received from the Swedish Transport Agency (STA). At the same meeting they discussed the structure of each of the three requirements' characterization models in order to ensure that all three shared the same understanding. In the next step, 20 requirements from our data set were chosen randomly and were independently categorized by the authors. The authors returned one week later with their classifications, and the robust statistical measure Fleiss' Kappa [1, 2] was applied to assess the reliability of the agreement between them. Since we considered agreement to be reached when the kappa value is above 0.5, in the opposite case (kappa lower than 0.5) we concluded that we disagreed in our classifications and discussed the requirements' characterization models again. At the end of such a meeting, a new set of 20 different requirements was chosen, and the three authors repeated the classification, again independently. We repeated this process until we reached good agreement (Fleiss' Kappa above 0.5), which happened after 3 iterations. At that point we split the whole data set into three equal parts, each author was assigned one part, and the classification of all the requirements was completed. The statistical analysis and the results are presented in the next section.
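The per-iteration agreement check can be reproduced with standard tooling. The sketch below uses statsmodels on made-up ratings for three raters and 20 requirements; the real ratings are, of course, the authors' classifications, and the four categories are only illustrative.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Illustrative ratings: ratings[i, r] = category chosen by rater r for
# requirement i (three raters, 20 requirements, four made-up categories).
rng = np.random.default_rng(0)
ratings = rng.integers(0, 4, size=(20, 3))

# aggregate_raters turns per-rater labels into an items x categories count table
table, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(table)
print(f"Fleiss' kappa = {kappa:.2f} -> " + ("proceed" if kappa > 0.5 else "discuss and retry"))
```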
### _Data set_
The studied requirements originate from a large infrastructure project that commenced in 2007 and was finally completed in 2017.

#### IV-B1 Description of the data set
The data set contained important information about the requirements, i.e., budget, comments from the employees, the time sequence of change requests, and the dates on which a change request was decided and on which the requirement was last changed. We refer to these as the _inherent characteristics_ of the requirements. Initially, our data set contained 5,073 requirements. Since our aim was to study the requirements with an assigned budget and change requests in relation to the different requirements' models, the original data set was screened to ensure it contained the necessary data for our research. Specifically:
* Requirements with no change requests were excluded,
* For investigating the reasons behind the change requests and the amount of budget assigned, we included in our analysis only the requirements which contain comments.

The final data set included 215 requirements, which were used in the subsequent analyses.

#### IV-B2 Descriptive statistics
The distribution of the requirements within the three different above-mentioned models is shown in Fig. 2, Fig. 3 and Fig. 4. In particular, regarding the Concern-based Model of Requirements (Fig. 2), the requirements were categorized with almost the same distribution between the two dominant categories, i.e., Functional requirements and Quality requirements (almost 42% in each category). The same holds for the Software-Hardware intensive model (Fig. 4): almost half of the requirements (52%) are categorized as Hardware, while 44% are categorized as Software requirements. Among the 4.6% of the requirements that were not categorized, the most frequent cases included requirements that were vague, irrelevant, or implied to belong to both categories. On the other hand, when we categorize the 215 requirements according to the RAM model (Fig. 3), the majority of the requirements (almost 85%) were characterized as Requirements, i.e., belong to the Problem level, while only 10% were characterized as Solutions, i.e., belong to the last level of abstraction.

Fig. 1: Design of the study
Fig. 2: Concern-based Model of Requirements (CMR)

Furthermore, the important inherent characteristics of the requirements that we used in our analysis are: a) Budget and b) Comments from the employees. The distribution of the requirements within Budget and Comments from the Employees is shown in Fig. 5 and Fig. 6, respectively. The results showed that the majority of the change requests (65.5%) required the budget to be increased in order to be addressed, while 35.5% of the change requests could be addressed without extra budget cost. We also noticed that for a small percentage of the change requests (only 2%), the project budget was decreased when they were addressed. Moreover, we categorized the requirements according to the number of comments they received from the employees. The majority of the requirements received only one comment (72.5%), while 20% received two comments and just 7.4% received three or more comments.

Furthermore, the extracted data were used to calculate two important variables for our study: a) _Analysis Time_ and b) _Time sequence of change requests and comments_. The (a) _Analysis Time_ is the time between the date on which a change request was decided and the last date on which the requirement was changed. Because a change in a requirement may have occurred before the change request was decided, negative values exist. The time is computed in total days.
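Both derived variables reduce to simple date arithmetic. For instance, the Analysis Time in whole days, negative values included, can be computed as in this minimal sketch with hypothetical dates and illustrative names:

```python
from datetime import date

def analysis_time_days(change_request_decided, last_requirement_change):
    """Analysis Time in whole days; negative when the requirement was changed
    before the change request was decided, as observed in the data."""
    return (last_requirement_change - change_request_decided).days

# Hypothetical dates for one requirement:
print(analysis_time_days(date(2012, 3, 1), date(2012, 2, 20)))  # -10
```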
To investigate whether employees' comments are crucial and affect the existence of the change requests, we calculated the second variable, (b) _Time sequence of change requests and comments_, which deals with the chronological order of the date on which the decision on a change request was made and the date on which the first comment was made. Three categories were identified (Fig. 7):
* In case A, first we have one or more comments and then a change request is decided.
* In case B, first we have a change request decided and then one or more comments follow.
* In case C, we don't have any information on the dates.

The results showed that only 25% of the requirements were commented on by employees before a change request was received. Most of the cases were commented on after a change request had occurred (more than 70%).

Fig. 3: Requirements Abstraction Model (RAM)
Fig. 4: Software intensive - Hardware intensive Model
Fig. 5: Budget of the requirements with Change requests
Fig. 6: Comments of the employees (in numbers)
Fig. 7: Time sequence between comments and change requests

### _Data Analysis_
#### IV-C1 Chi-square test of association
To test if there are any associations between the different models' categories and the other factors, i.e., Budget and Time, we used the Chi-Square test of association. To prevent Type I errors, we used exact tests, and more specifically, the Monte-Carlo test of statistical significance based on 10,000 sampled tables and assuming \(p=0.05\) [10]. To examine the strength of associations, we use Cramer's V test. Cramer's V is a measure of the strength of association of a nominal-by-nominal relationship. It ranges in value from 0 to +1, with a value of 0 indicating no association and a value of 1 indicating complete association. Cohen [7] suggested guidelines for interpreting Cramer's V (see Table II). However, finding an association did not provide us with further details about this association (e.g., which cells are 'responsible' for it). Therefore, following up our statistically significant results, we performed post hoc testing using adjusted standardized residuals [6, 12]. By analyzing these values, we had a cell-by-cell comparison of the expected versus observed frequencies, which helped us understand which cells deviated from independence. We consider an adjusted residual significant if its absolute value is above 1.96, as suggested by [6].

#### IV-C2 Correspondence Analysis
Correspondence analysis provides an interpretation approach for illustrating our results. As the name of the method suggests, it is a way to explore the "system of associations" between the elements of two sets. Correspondence analysis is a statistical visualisation method for picturing the associations between the categorical variables of a two-way contingency table. These relationships are described by projecting the values of the variables as points in a two-dimensional space, in such a way that the resulting plot describes simultaneously the relationships between the variables. For each variable, the distances between points in the plot reflect the relationships between them [20].

#### IV-C3 Non-parametric tests
A non-parametric test, the Kruskal-Wallis H test, was performed to determine whether there are statistically significant differences in a continuous variable across two or more categories [21].
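For reference, the association measures used in IV-C1 can be computed from a contingency table as follows. This generic sketch uses SciPy's asymptotic chi-square test rather than the Monte-Carlo exact variant applied in the study, and the toy table is purely illustrative, not our data:

```python
import numpy as np
from scipy.stats import chi2_contingency

def association_tests(table):
    """Chi-square test, Cramer's V, and adjusted standardized residuals."""
    table = np.asarray(table, dtype=float)
    chi2, p, dof, expected = chi2_contingency(table)
    n = table.sum()
    cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
    row_p = table.sum(axis=1, keepdims=True) / n      # row marginals
    col_p = table.sum(axis=0, keepdims=True) / n      # column marginals
    adj_res = (table - expected) / np.sqrt(expected * (1 - row_p) * (1 - col_p))
    return p, cramers_v, adj_res

# Toy 2x3 table: requirement type (SW/HW) vs budget (decreased/unchanged/increased)
p, v, res = association_tests([[5, 40, 30], [2, 25, 60]])
print(f"p = {p:.4f}, Cramer's V = {v:.2f}")
print("cells with |adjusted residual| > 1.96:")
print(np.abs(res) > 1.96)
```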
### _Validity Threats_
The validity threats are distinguished between four aspects of validity, according to [13]:

#### IV-D1 Construct validity
Construct validity reflects the extent to which the operational measures represent the study subject. In the present study, all the data were acquired from company archives, thus representing objective measures. No subjective measures were used, such as those elicited through interviews or surveys.

#### IV-D2 Internal validity
Internal validity refers to the examination of causal relations, which is the intended outcome of our investigation. In our case study, we investigated the impact of a number of factors, i.e., budget, time and discussion, in order to understand how those requirements' characteristics impact the project budget.

#### IV-D3 External validity
The external validity threat concerns to what extent the results could be valid for other companies. In our case, the study is clearly exploratory, and by no means can the findings from an isolated company be generalized.

#### IV-D4 Reliability
This aspect is concerned with the extent to which the data and the analysis are dependent on the specific researchers, i.e., with avoiding the respondents' and researchers' personal biases. To address this threat, we conducted a number of workshops between the authors. The purpose of those workshops was twofold: first, to ensure that all the authors shared the same knowledge and understanding of the chosen requirements models; second, to evaluate the reliability of their agreement by having them independently categorize random sets of requirements and assessing that agreement with the robust statistical measure Fleiss' Kappa.

## V Results
As already mentioned, the study is exploratory; thus, a number of different factors were analyzed in relation to the budget of the change requests. The most interesting results are presented in this section.

### _Software - Hardware intensive requirements VS Budget_
The results from the contingency tables and Correspondence analysis (Fig. 8) showed that functional software requirements are associated with less budget change cost than functional hardware requirements (\(p<0.05\), Cramer's \(V=0.300\)). In particular, among the functional requirements, more than 85% of the hardware intensive requirements were assigned an increasing budget change, while more than half of the software requirements were assigned no budget change. A possible interpretation is that hardware changes have actual, tangible costs for the company, which the company can therefore easily estimate. On the other hand, software cost estimates tend to be zero, i.e., no budget change, which indicates that either the company wrongly perceives software changes as not costly, or it is very difficult to estimate the correct cost (Fig. 8).

### _Software - Hardware intensive requirements VS Analysis Time_
In order to understand the previous finding, a follow-up analysis was conducted regarding the analysis time of the software and hardware requirements. The analysis time is the time between the date on which a change request was decided and the last date on which the requirement was changed. Because a change in a requirement may have occurred before the change request was decided, negative values exist. The time is computed in total days.
The results from the non-parametric test showed that there is a statistically significant difference in analysis time between software and hardware requirements (\(p<0.05\)). Moreover, as shown in Fig. 9, the analysis time presents more variation for software than for hardware requirements. A possible interpretation is that since software changes are open for longer periods of time, they are harder to handle. Another interpretation could be that the company faces difficulties in managing the software changes, and a solution would be to acquire more input from outside the company, e.g., from a software vendor, in order to manage software changes satisfactorily.

### _Number of Comments VS Budget_
A chi-square test of independence was conducted between the number of comments and the budget of the change request. The results showed that Budget is statistically significantly associated with the number of comments (\(p<0.05\), Cramer's \(V=0.200\)). In particular, the contingency tables show that almost 80% of the change requests connected with an increased budget were commented on only once. On the other hand, among the change requests with no budget change, more than half were commented on more than twice. These results indicate that a longer conversation (more comments) on a change request tends to lower the cost of that change request. The visualization of these results from Correspondence analysis is available in Fig. 10.

### _Time sequence of change requests and comments VS Budget_
A follow-up analysis was conducted regarding the time sequence of change requests and comments and the budget of the change requests. The results from the contingency tables, the chi-square test of independence, and Correspondence analysis showed that the budget is associated with the chronological order in which the comments and the change request happened (\(p<0.05\), Cramer's \(V=0.300\)). More specifically, for 80% of the requirements related to an increased budget, a change request took place before any discussion existed. On the other hand, for almost 60% of the requirements related to no budget change, a discussion among the employees existed prior to the change request.

Fig. 8: Software - Hardware intensive requirements VS Budget
Fig. 9: Software - Hardware intensive requirements VS Analysis Time
Fig. 10: Number of Comments VS Budget
Fig. 11: Time sequence of change requests and comments VS Budget

## VI Discussion-Conclusions
In this case study we worked with the Swedish Transportation Agency (STA), a leading domestic company in the field of transportation, and investigated, in particular, whether the quality of a requirement has an influence on how it was implemented. The present empirical study is exploratory and focuses on investigating whether, and in which ways, the requirements' inherent characteristics impact the project budget. Even though the case study is exploratory and the findings from an isolated company or case study cannot be generalized, they can offer a pattern of how the different requirements' characteristics actually impact the project budget. The first finding showed that software requirements are associated with less budget when a change request occurs; the results refer only to changes in the budget. Based on the analysis results and the natural language requirements, a possible interpretation is that the estimation of software budget changes seems to be more difficult in comparison to hardware budget changes.
Thus, software change requests are "harder" to handle, and therefore to manage, since many software requirements' budget changes are not even estimated. A potential reason could be that the budget for change requests for software requirements is not estimated by the employees as analysis costs, probably because they are outsourced to a vendor. The second, follow-up, finding indicated that software changes stay open longer than hardware changes. Thus, software management is more difficult, and the company may require input and help from an outside vendor in order to manage software requirements in a more satisfactory way. An interesting finding arose regarding employee discussions on the change requests: the more they discuss and interact on a change request, the lower the actual estimated budget change is. A possible interpretation could be that by discussing a change request one may find cheaper solutions for solving the problem, i.e., potentially find smarter ways with a smaller cost for the company. Finally, when requirements are complemented with comments before a change is requested, the company may "save" money. This may happen due to a better understanding of the needs, i.e., the change requests, instead of immediately asking for a change of the requirements. The data gathered in studies such as ours are affected by various sources of variation and are therefore subject to large variability. The statistical analysis of such data can reveal significant differences, trends, disagreements and groupings between the practitioners, and can constitute a valuable aid for understanding the attitudes and opinions of the stakeholders, and therefore a tool for better software management. As future work, we will continue investigating how the inherent characteristics of the requirements impact the project's budget and try to explain the reasons behind this. Our focus is to provide support to companies for improving the ways they manage their software requirements' changes. In the context of our work, we plan to continue research and efforts towards efficient and effective software management in software engineering.

## VII Acknowledgements
The work is supported by the KKS foundation through the S.E.R.T. Research Profile project at Blekinge Institute of Technology (BTH). Moreover, it was partially supported by a research grant of the ERSAK project at Trafikverket, the Swedish Transport Agency (STA). The authors have no conflicts of interest to declare.
2307.04622
Correlations between QPO frequencies and spectral parameters of GRS 1915+105 using AstroSat observations
In this work, we study the correlation between Quasi-periodic Oscillation (QPO) frequency and the spectral parameters during various X-ray states in the black hole binary GRS 1915+105, which matches well with the predicted relativistic dynamical frequency (i.e. the inverse of the sound crossing time) at the truncated radii. We have used broadband data of the LAXPC and SXT instruments onboard AstroSat. Spectral fitting shows that the accretion rate varies from $\sim 0.1$ to $\sim 5.0 \times 10^{18}$ gm/s and the truncated radius changes from the last stable orbit of an almost maximally spinning black hole, $\sim$ 1.2, to $\sim$ 19 gravitational radii. For this wide range, the frequencies of the C-type QPOs (2 - 6 Hz) follow the trend predicted by the relativistic dynamical frequency model and, interestingly, the high-frequency QPO at $\sim$ 70 Hz also follows the same trend, suggesting that it originates from the innermost stable circular orbit through the same mechanism as the more commonly observed C-type QPOs. While the qualitative trend is as predicted, there are quantitative deviations between the data and the theory, and the possible reasons for these deviations are discussed.
Ruchika Dhaka, Ranjeev Misra, JS Yadav, Pankaj Jain
2023-07-10T15:08:32Z
http://arxiv.org/abs/2307.04622v1
Correlations between QPO frequencies and spectral parameters of GRS 1915+105 using AstroSat observations ###### Abstract In this work, we study the correlation between Quasi-periodic Oscillation (QPO) frequency and the spectral parameters during various X-ray states in the black hole binary GRS 1915+105, which matches well with the predicted relativistic dynamical frequency (i.e. the inverse of the sound crossing time) at the truncated radii. We have used broadband data of the LAXPC and SXT instruments onboard AstroSat. Spectral fitting shows that the accretion rate varies from \(\sim 0.1\) to \(\sim 5.0\times 10^{18}\) gm/s and the truncated radius changes from the last stable orbit of an almost maximally spinning black hole, \(\sim 1.2\), to \(\sim 19\) gravitational radii. For this wide range, the frequencies of the C-type QPOs (2-6 Hz) follow the trend predicted by the relativistic dynamical frequency model and, interestingly, the high-frequency QPO at \(\sim 70\) Hz also follows the same trend, suggesting that it originates from the innermost stable circular orbit through the same mechanism as the more commonly observed C-type QPOs. While the qualitative trend is as predicted, there are quantitative deviations between the data and the theory, and the possible reasons for these deviations are discussed. keywords: accretion, accretion discs - black hole physics - stars: black holes - X-rays: binaries - relativistic processes

## 1 Introduction

The Black Hole X-ray Binary (BHXB) GRS 1915+105 was discovered on August 15, 1992, as a transient by the WATCH all-sky monitor onboard the Granat observatory. It was the first Galactic object to show a superluminal jet (Mirabel & Rodriguez, 1994). The binary system contains a black hole of 12.4 solar masses (Reid et al., 2014). The source is located at a distance D = 8.6 kpc (Reid et al., 2014), and its relativistic jets are directed at an angle \(i=70^{\circ}\) from the line of sight (Mirabel & Rodriguez, 1994). It is an outstanding source because of its huge variability (Castro-Tirado et al., 1992; Belloni et al., 2000, 1997b; Yadav et al., 1999). The source has been observed in 14 different X-ray classes, defined on the basis of the X-ray flux, the Color-Color Diagram (CCD) and the hardness ratio (Belloni et al., 2000; Klein-Wolt et al., 2002; Hannikainen et al., 2005). Some of these classes are named \(\phi\), \(\chi\), \(\theta\), \(\lambda\), \(\rho\), etc. Among the 14 classes, the most commonly observed is \(\chi\); it is the least variable class, with no large-amplitude or long-term X-ray flux variability. Most of the time since its discovery in 1992, GRS 1915+105 has been seen in bright X-ray states such as the High Soft state (HS) and the high HIMS state (also called the Steep Power Law (SPL) state). The source has been in a decline phase since 2018 (the lower branch of the HIMS and the Low Hard state (LS)). X-ray binaries exhibit variability on rapid time scales. Fourier analysis is often used to study fast variability and quasi-periodic oscillations (QPOs) by computing power density spectra (PDS) (Klis, 1989). Numerous patterns have been observed in the PDS (Belloni et al., 2002, 1997a), ranging from various types of broad-band noise to much narrower structures known as QPOs, which appear as sharp peaks in the power spectrum.
QPOs with frequencies ranging from a few mHz to \(\sim\) 70 Hz have been observed for GRS 1915+105 (Morgan et al., 1997; Belloni & Altamirano, 2013a; Paul et al., 1997; Yadav et al., 1999; Sreehari et al., 2020). The centroid frequencies of these QPOs during specific spectral states and transitions can be associated with physical processes occurring in these systems. Typically, there are two types of QPOs: low-frequency QPOs have a centroid frequency \(\lesssim 30\) Hz, whereas high-frequency QPOs have a centroid frequency \(\gtrsim 60\) Hz (up to a few hundred Hz) (Belloni, 2009; Belloni & Altamirano, 2013b). Low-frequency QPOs are further subdivided into A-, B-, and C-type QPOs based on differences in their power spectral properties and phase-lag behavior, and they occur in various spectral states (Homan et al., 2001; Remillard et al., 2002; Wijnands et al., 1999; Casella et al., 2004). However, the precise physical origin of QPOs in BHXBs is so far not well understood. Misra et al. (2020) studied the dependence of the QPO frequency \(f\) on the inner radius \(r\) of the truncated accretion disk. They found that \(f/\dot{M}\) is well correlated with \(r\), where \(\dot{M}\) is the accretion rate. Remarkably, the relationship between the two is well described in terms of the dynamical frequency arising from normal modes of disk oscillations (Misra et al., 2020). The dynamical frequency is defined as the inverse of the sound crossing time (\(f_{dyn}\sim c_{s}(r)/r\)), where the sound crossing time is the ratio of the truncation radius to the sound speed at the inner disc. According to the standard relativistic disc model proposed by Novikov & Thorne (1973), the sound speed depends on several factors, including the mass accretion rate (\(\dot{M}\)), the spin, and the inner radius (\(r\)) of the disc. This leads to the following formula for the dynamical frequency (Misra et al., 2020): \[\frac{f_{dyn}}{\dot{M}}=N\,8979\,\mathrm{Hz}\,\left(\frac{r}{r_{g}}\right)^{-2.5}\left(\frac{M}{12.4M_{\odot}}\right)^{-2}\times A^{1}B^{-2}D^{-0.5}E^{-0.5}L \tag{1}\] where \(r_{g}=GM/c^{2}\) is the gravitational radius, \(r\) is the inner disc radius, and \(N\) is a normalisation factor accounting for the assumptions made in standard accretion disc theory. The parameters \(A\), \(B\), \(D\), \(E\), and \(L\) are functions of the inner disc radius and the spin parameter, as described in Novikov & Thorne (1973) and Page & Thorne (1974). All these parameters become important at small radii, \(r<10\,r_{g}\); as a result, in this regime the functional form of \(f_{dyn}\) differs considerably from its Newtonian dependence. Using spectral and timing analysis, one can determine the mass accretion rate, the inner disc radius, and the QPO frequency; thus the interpretation, and in particular Eqn 1, can be verified with such an analysis. Misra et al. (2020) carried out such an analysis using AstroSat data collected on 28 March 2017 and 1 April 2017, when GRS 1915+105 was in the low HIMS state (i.e., the lower horizontal track of the HIMS). The source showed C-type QPOs in the frequency range 3.5-5.4 Hz during those observations. A similar analysis was undertaken for Insight-HXMT observations of GRS 1915+105 when it exhibited low-frequency C-type QPOs (Liu et al., 2021). For a wider range of QPO frequencies, 2.6-4.3 Hz, and inferred accretion rates of 0.2-1.2\(\times 10^{18}\) gm/s, they confirmed the results obtained by Misra et al. (2020).
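To make Equation 1 concrete, a minimal numerical sketch is given below. It is not the authors' code: the relativistic factors \(A\), \(B\), \(D\), \(E\) and \(L\) default to their Newtonian limit of unity, a rough approximation that breaks down for \(r \lesssim 10\,r_{g}\), where their Novikov-Thorne forms suppress the predicted frequency.

```python
import numpy as np

def dynamical_frequency(mdot_18, r_rg, mass=12.4, N=0.1,
                        A=1.0, B=1.0, D=1.0, E=1.0, L=1.0):
    """Dynamical frequency of Eq. (1), in Hz.

    mdot_18 : accretion rate in units of 1e18 g/s
    r_rg    : inner disc radius in gravitational radii
    mass    : black hole mass in solar masses
    A..L    : relativistic factors, defaulting to the Newtonian limit of 1
    """
    rel = A * B**-2 * D**-0.5 * E**-0.5 * L
    return N * 8979.0 * mdot_18 * r_rg**-2.5 * (mass / 12.4)**-2 * rel

# Example: mdot ~ 0.8e18 g/s at r ~ 5 r_g gives ~13 Hz in this limit;
# the relativistic factors reduce this toward the observed few Hz.
print(dynamical_frequency(0.8, 5.0))
```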
Apart from these C-type QPOs, GRS 1915+105 also shows a QPO at \(\sim 69\) Hz, which is remarkable in having a nearly constant frequency (Morgan et al., 1997; Belloni et al., 2001, 2006; Belloni & Altamirano, 2013a). This QPO has also been reported in AstroSat data, where it varied slightly from 67.4 to 72.3 Hz (Belloni et al., 2019). In this paper, we perform an extensive spectro-temporal analysis of various X-ray states observed in GRS 1915+105 using AstroSat data. GRS 1915+105 has so far shown only one outburst, which started in 1992 and is still continuing; the source has never been seen in the rising phase of an outburst. Our data include a low hard state (Obs. 7), which has not been reported before. The motivation here is to study the dependence of the QPO frequency on the spectral parameters over a wider range of inner disc radii, accretion rates and QPO frequencies. In Section 2 of this work, we describe the observations and the data reduction techniques using the LAXPC and SXT pipeline software. In Section 3, we explain the various analytical and modelling techniques used to analyse the temporal and spectral features of GRS 1915+105. In Section 4, we describe the outcomes of the study and draw conclusions based on those results.

| Observation No. | Observation Date | Observation ID | Start Time (hh:mm:ss) | LAXPC Exposure (ks) | SXT Exposure (ks) |
| --- | --- | --- | --- | --- | --- |
| 1 | 03 Mar 2016 (MJD 57450) | T01_030T01_900000358 | 09:54:20 | 22.840 | 7.822 |
| 2 | 25 Apr 2016 (MJD 57503) | G05_214101_9000000428 | 03:57:56 | 8.209 | 3.045 |
| 3 | 27 Apr 2016 (MJD 57505) | G05_167T02_9000000432 | 17:22:26 | 14.440 | 7.358 |
| 4 | 28 Mar 2017 (MJD 57840) | G06_033T01_9000001116 | 18:03:19 | 32.050 | 13.190 |
| 5 | 01 Apr 2017 (MJD 57844) | G07_046T01_900001124 | 11:50:10 | 11.420 | 5.452 |
| 6 | 15 Apr 2017 (MJD 57858) | G07_028T01_9000001166 | 22:39:28 | 9.668 | 4.974 |
| 7 | 21 Mar 2019 (MJD 58563) | A05_173T01_9000002812 | 19:14:36 | 63.498 | 20.929 |

Table 1: Details of the observations of GRS 1915+105 made by AstroSat between 2016 and 2019, listing the observation IDs alongside the dates, start times, and exposure times.

Figure 1: One-day binned light curve of GRS 1915+105 using MAXI and Swift/BAT data from 13 January 2016 (MJD 57400) to 27 April 2019 (MJD 58600). The vertical solid lines mark the AstroSat observations used for the analysis, with dates provided in Table 1. (The vertical lines for 25 and 27 April 2016, at around MJD 57500, nearly overlap and appear as one line.)

## 2 Observation and Data Reduction

AstroSat is a multi-wavelength observatory launched for astronomical studies of various celestial objects in the near and far UV, soft (0.3-8 keV) and hard (3-100 keV) X-rays (Agrawal, 2006). It has four science payloads: 1) the Soft X-ray Telescope (SXT) (Singh et al., 2016, 2017), 2) the Ultra-Violet Imaging Telescope (UVIT) (Tandon et al., 2017), 3) the Cadmium Zinc Telluride Imager (CZTI) (Bhalerao et al., 2017) and 4) the Large Area X-ray Proportional Counter (LAXPC) (Yadav et al., 2016a; Antia et al., 2017). The Large Area X-ray Proportional Counters (LAXPC) consist of three identical but independent proportional counter units (LAXPC 10, LAXPC 20 and LAXPC 30) with an effective area of 6000 cm\({}^{2}\) at 15 keV and a time resolution of 10 \(\mu\)s in the energy range 3.0-80.0 keV, with a dead time of about 42 \(\mu\)s (Yadav et al., 2016a,b; Agrawal et al., 2017).
A simultaneous fit of the SXT and LAXPC data provides a broadband spectrum of the source. We have analysed observations with simultaneous SXT and LAXPC data spanning 1094 days, starting from 3 March 2016. Out of all the AstroSat observations we examined, we selected those that show QPOs in their power density spectra. We included only observations during which the source flux was more or less steady; GRS 1915+105 often shows strong flares during which the flux can change by a factor of a few (Belloni et al., 1997b; Yadav et al., 1999), and such flaring intervals are not included in this study. All transient black hole binary outbursts should follow a q-diagram. GRS 1915+105 has shown only one outburst so far, starting with its discovery on 15 August 1992 and still ongoing, i.e., for approximately 31 years; the rising phase of the outburst has never been observed. Our observations cover the period from 2016 to 2019, when the source remained mostly in luminous X-ray states. Thus our observations trace only part of the q-diagram, mostly the vertical left and bottom horizontal branches, and partly when QPOs are present. The source's variability is complex, as it stays in the highly luminous X-ray states most of the time. We selected seven observations covering four distinct states: the High Soft (HS) state, the low HIMS state, the high HIMS state, and the Low Hard (LH) state. The data used in this work consist of seven observations made on 3 March 2016 (Obs. 1), 25 April 2016 (Obs. 2), 27 April 2016 (Obs. 3), 28 March 2017 (Obs. 4), 1 April 2017 (Obs. 5), 15 April 2017 (Obs. 6), and 21 March 2019 (Obs. 7). Table 1 presents the effective LAXPC and SXT exposure times of the observations used in this study. The Burst Alert Telescope (Swift/BAT) Hard X-ray Transient Monitor and the Monitor of All-sky X-ray Image (MAXI) provide continuous coverage of GRS 1915+105 in soft and hard X-rays. To follow the evolution of the source, we extracted the MAXI flux in the 2-20 keV energy range and the Swift/BAT flux in the 15-50 keV energy range, as shown in Fig. 1. The Swift/BAT flux is scaled by a factor of 30 so that both X-ray band light curves of GRS 1915+105, from 13 January 2016 to 27 April 2019, can be seen clearly. The vertical lines in the figure mark the AstroSat observations used for this study; their sequence is identical to that presented in Table 1. Each observation was further divided into segments such that each segment is continuous, without gaps. The HID of GRS 1915+105, covering the period from 13 January 2016 (MJD 57400) to 27 April 2019 (MJD 58600), is illustrated in Fig. 2, where the 2-20 keV MAXI flux is plotted against the X-ray colour (HR). The location of the source in the HID broadly reflects the state of the system. Also marked in Fig. 2 are the locations of the AstroSat observations: Obs. 2 and 3 correspond to the soft state, while the high flux of Obs. 1 shows that it is in the hard intermediate state (high HIMS); Obs. 4, 5 and 6 correspond to the low HIMS state; and Obs. 7 represents the Low Hard state of the source.

### SXT Data Reduction

Level 1 Photon Counting mode data of the SXT instrument were processed through the official SXT pipeline AS1SXTLevel2-1.4b to produce Level 2 data.
The Photon Counting (PC) mode data were chosen for the analysis of all the observation sets listed in Table 1. Using the Julia-based SXTvermerger script, we merged all the events belonging to one observation set into a single event file. The HEASoft (version 6.29) tool XSELECT was used to generate the spectra, light curves and images. The response matrix file (RMF) "sxt_pc_mat_g0to12_RM.rmf", the standard background spectrum "SkyBkg_comb_EL3p5_Cl_Rdl16p0_v01.pha" and the ancillary response file (ARF) "sxt_pc_excl00_v01_20190608_mod_16oct21.arf" were used for the analysis. The sxtARFModule provided by the SXT instrument team (tools available at [https://www.tifr.res.in/astrosat_sxt/sxtpipeline.html](https://www.tifr.res.in/astrosat_sxt/sxtpipeline.html)) was used to apply a correction for the offset pointing. In order to carry out a simultaneous analysis, we ensured that the LAXPC 20 observations were available over the same Good Time Intervals (GTIs) as the SXT observations; we therefore used the simultaneous data segments to generate the light curves, images and spectra of GRS 1915+105.

Figure 2: Hardness-intensity diagram (HID) of GRS 1915+105 showing the evolution of the hardness ratio (10-20 keV / 4-10 keV) with the MAXI 2-20 keV flux from 13 Jan 2016 (MJD 57400) to 27 Apr 2019 (MJD 58600). The diagram highlights the AstroSat observations we used.

For Obs. 4, Obs. 5, Obs. 6 and Obs. 7 (low X-ray flux states), there was no pile-up near the centre of the image due to the low flux (<40 counts per second, as mentioned in the AstroSat Handbook, [https://www.itoca.in/astrosat/AstroSat_handbook.pdf](https://www.itoca.in/astrosat/AstroSat_handbook.pdf)). The average count rates in Obs. 1, Obs. 2 and Obs. 3 were 91.33, 84.25, and 90.00 counts/sec, respectively. Therefore, to account for the pile-up at the centre of the image caused by the high flux (\(\sim\) 1 Crab) of the source on the charge-coupled device (CCD), the inner radius of the annular extraction region was set to 2 arcmin.

### LAXPC Data Reduction

Level 2 event files were extracted from the Level 1 event mode data utilising the official LAXPC software version released on 04 Aug 2020 ([http://astrosat-ssc.ucucaa.in/laxpcData](http://astrosat-ssc.ucucaa.in/laxpcData)). The LAXPC data were processed to obtain the light curves and spectra of the source (Agrawal et al., 2018; Sreehari et al., 2019). Details of the response matrix (RMF) and background spectrum generation for proportional counters 10, 20, and 30 can be found in Antia et al. (2017). Out of the three LAXPC detectors (LAXPC 10, LAXPC 20 and LAXPC 30), we used only LAXPC 20 data for the energy spectral studies of all the observations given in Table 1.

## 3 Data Analysis

### X-ray Light Curve and Timing Analysis

We produced background-subtracted light curves for the four distinct observation types in the 4.0-50 keV energy range using LAXPC 20 data, at the minimum time resolution of the SXT, which is 2.378 seconds. The left panel of Fig. 3 shows 800-sec-long background-subtracted light curves for the HS state (Obs. 3), the SPL state (Obs. 1), the low HIMS (Obs. 4), and the LH state (Obs. 7). The right panel of Fig. 3 shows 800-sec SXT light curves in the 0.3-8 keV energy range for the same segments used to generate the LAXPC 20 light curves in the left panel. In order to study the properties of the QPOs, we analyse the data in the frequency domain by generating power density spectra (PDS).
The PDS were generated by dividing the light curve of each segment into parts and averaging the power spectra of the parts. We used all three LAXPC detector units (LAXPC 10, LAXPC 20, and LAXPC 30) to produce the PDS for the HS state (Obs. 3, Seg. 2); for the remaining observations we used only LAXPC 20. The PDS for the HS state is shown in the upper left panel of Fig. 4 in the 10-110 Hz frequency range and is modelled using several Lorentzian functions (Belloni et al., 2002; Nowak, 2000) and a power-law component to account for very low frequency noise (VLFN); it shows an HFQPO at \(\sim\) 70 Hz, while no QPO is seen at lower frequencies. The upper right panel of Fig. 4 shows the PDS of the low HIMS state (Obs. 4, Seg. 6) in the frequency range 0.1-20 Hz. The lower panels of Fig. 4 show the PDS for the SPL state (left; Obs. 1, Seg. 5) and the LH state (right; Obs. 7, Seg. 7). The broad-band noise components of these three PDS (Obs. 4, Obs. 1, and Obs. 7) were modelled using only a few Lorentzians. The QPO frequencies, with errors, were estimated and are tabulated in the third column of Table 2. All three panels show LFQPOs along with their harmonics.

Figure 3: 800-sec background-subtracted light curves generated from LAXPC 20 for the observations made on 27 Apr 2016 (Obs. 2), 3 Mar 2016 (Obs. 1), 28 Mar 2017 (Obs. 4), and 21 Mar 2019 (Obs. 7), shown in the left panel in four different colours. The right panel shows the SXT light curves for the same observations.
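The part-averaging step described above can be sketched generically as follows; this is a standard Leahy-normalised estimator in Python, given for illustration, and is not the authors' pipeline.

```python
import numpy as np

def averaged_pds(counts, dt, part_len):
    """Average Leahy-normalised power spectra over equal-length parts.

    counts   : 1-D array of counts per time bin
    dt       : width of a time bin in seconds
    part_len : bins per part; sets the frequency resolution 1/(part_len*dt)
    """
    n_parts = len(counts) // part_len
    powers = []
    for i in range(n_parts):
        part = counts[i * part_len:(i + 1) * part_len]
        ft = np.fft.rfft(part)
        powers.append(2.0 * np.abs(ft) ** 2 / part.sum())  # Leahy normalisation
    freqs = np.fft.rfftfreq(part_len, d=dt)
    return freqs[1:], np.mean(powers, axis=0)[1:]  # drop the zero-frequency bin
```

With parts of length \(\sim\)10 s, for example, the lowest accessible frequency is 0.1 Hz, matching the low-frequency end of the PDS shown above.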
Table 2: Best-fit spectral and timing parameters for each observation segment: LAXPC 20 and SXT exposure times (s), segment number, QPO frequency (Hz), \(N_{H}\) (\(10^{22}\) cm\({}^{-2}\)), accretion rate (\(10^{18}\) gm/s), inner radius (\(R_{g}\)), fraction scattered, photon index (Gamma), iron-line flux (\(10^{-2}\) photons cm\({}^{-2}\) s\({}^{-1}\)), and \(\chi^{2}\)/dof. For the SPL/HIMS segments of Obs. 1 (Segs. 1-8), the QPO frequency spans 2.56-4.94 Hz, the accretion rate 4.17-5.25, and the inner radius 8.96-18.16 \(R_{g}\), with \(N_{H}\) and the scattered fraction fixed at 4.0 and 0.61, respectively. The HS-state segments show the \(\sim\)70 Hz QPO with accretion rates of \(\sim\)1.7-2.0 and \(N_{H}\) left free. The low HIMS segments of Obs. 5 (Segs. 23-31) span 4.04-5.31 Hz with accretion rates of 0.70-0.86 and inner radii of 3.64-5.86 \(R_{g}\), and the LH segments (Segs. 32-36) span 2.01-2.42 Hz with accretion rates of 0.14-0.16 and inner radii of \(\sim\)2.7-3.3 \(R_{g}\).
### Spectral Analysis

We performed simultaneous spectral fitting of the SXT and LAXPC 20 spectra using XSPEC 12.12.0 in the broad energy range 1-50 keV (SXT: 1-5 keV; LAXPC 20: 4-50 keV) for observation sets 4, 6 and 7 listed in Table 1. The energy range above 50.0 keV was ignored because of the low signal-to-noise (S/N) ratio. For the rest of the observation sets, we used the combined SXT and LAXPC energy range 1.0-20.0 keV; during these observations the source spectrum is soft and the signal-to-noise ratio deteriorates rapidly above 20 keV. Energies below 1 keV were not considered in any of the observations due to uncertainties in the effective area and response of the SXT. The left panels of Fig. 5 display the energy spectra of the HS state (Obs. 3, Seg. 2; top) and the SPL state (Obs. 1; bottom) in the energy range 1-20 keV. The low HIMS and LH state spectra for Obs. 4 (Seg. 6) and Obs. 7 (Seg. 6), respectively, are shown in the top right and bottom right panels of Fig. 5 in the energy range 1-50 keV. A relative normalisation constant was used for the simultaneous fitting of LAXPC and SXT data. As recommended by the LAXPC team, a 3% systematic error was incorporated for uncertainties in background estimation when fitting LAXPC and SXT data together (Antia et al., 2017). A gain correction was applied to the SXT data using the gain fit command in XSPEC with the slope fixed to 1; the best-fit offset value ranged from 0 to 35.68 eV. SXT data were grouped with the fgrouppha tool of FTOOLS; among the several ways of binning the input PHA data, we applied optimal binning. The spectra were fitted using the model combination constant*tbabs*(kerrdisk + simpl*kerrd). Absorption by the interstellar medium (ISM) was taken into account with the TBabs model (Wilms et al., 2000), implemented with the galactic absorption abundances. The hydrogen column density was kept fixed at \(4\times 10^{22}\) cm\({}^{-2}\) for the data sets of the HIMS, SPL and LH states listed in Table 3, as there was no significant difference in the best fit when keeping this parameter free (Misra et al., 2020; Liu et al., 2021). \(N_{H}\) was kept free for the HS-state data sets and was found to vary from \(4.47\times 10^{22}\) cm\({}^{-2}\) to \(4.65\times 10^{22}\) cm\({}^{-2}\). The convolution comptonization model "simpl" (Steiner et al., 2009) was used to account for the Comptonization of the disc photons in the inner flow; simpl processes any input spectrum and transforms a fraction \(f_{\rm sc}\) of the source photons into a power-law distribution. The inner radius of the disc and the mass accretion rate were estimated from the best-fit values obtained with the relativistic disc model "kerrd" (Ebisawa et al., 2003).

Figure 4: The top left panel shows the power density spectrum of the HS state (Obs. 3, Seg. 2) in the 10-110 Hz frequency range, utilising all three LAXPC detectors. The top right panel shows the PDS of the low HIMS state (Obs. 4, Seg. 6) in the frequency range 0.1-20 Hz. The bottom panels show the PDS in the 0.1-20 Hz range for the SPL state (Obs. 1, Seg. 5; left) and the LH state (Obs. 7, Seg. 7; right); only LAXPC 20 data are used for these.
The black hole mass, disc inclination angle, and distance to the source were fixed at 12.4 M\({}_{\odot}\), 60\({}^{\circ}\), and 8.6 kpc, respectively (Reid et al., 2014). The spectral hardening factor of kerrd was fixed at 1.7 (Shimura & Takahara, 1995). For the kerrdisk model, the emissivity index for both the inner and outer portions of the disc was fixed at 1.8 (Blum et al., 2009). The rest-frame energy of the iron line was set to 6.4 keV (Blum et al., 2009). As GRS 1915+105 hosts a highly spinning black hole, we set the spin parameter for kerrdisk to 0.98 (Blum et al., 2009). Keeping these parameters free does not significantly affect the best-fit values of the other parameters. The break radius separating the inner and outer parts of the disc was fixed at 6 \(r_{g}\) (gravitational radii). The radius parameter in kerrd is measured in units of the gravitational radius (\(r_{g}\)), while for kerrdisk it is in units of the radius of marginal stability, or innermost stable circular orbit (ISCO); the inner radius in kerrdisk was therefore normalised to that used for kerrd by dividing by a factor of 1.235. The fraction-scattered parameter for the data from 3 March 2016 was not constrained; we therefore fixed it at 0.6. For the HS-state observations, the photon index and the iron-line flux were not constrained; we thus fixed them at 4.5 and \(1\times 10^{-2}\) photons cm\({}^{-2}\) s\({}^{-1}\), respectively. Table 2 presents the best-fit values of the spectral parameters, including the absorption column density, inner disc radius, accretion rate, scattered fraction, photon index (gamma), and flux in the iron emission line.

Figure 5: The left panels show the energy spectra of the HS state (Obs. 3, Seg. 2; top) and the SPL state (Obs. 1, Seg. 5; bottom) in the energy range 1-20 keV. The low HIMS and LH state spectra for Obs. 4 (Seg. 6) and Obs. 7 (Seg. 7), respectively, are shown in the top right and bottom right panels in the energy range 1-50 keV. The residuals are displayed beneath each spectrum. The SXT and LAXPC data points are shown in black and red, respectively.
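For illustration, a minimal PyXspec sketch of this fitting setup is given below. Only the model expression follows the text; the spectrum file names are placeholders, and the parameter handling is schematic rather than the authors' exact script (e.g., the gain fit and systematic-error settings are omitted).

```python
from xspec import AllData, Fit, Model

# Load the simultaneous SXT and LAXPC 20 spectra into two data groups
# (file names are hypothetical placeholders).
AllData("1:1 sxt_seg6.pha 2:2 laxpc20_seg6.pha")
# Restrict to SXT 1-5 keV and LAXPC 4-50 keV, as described in the text.
AllData.ignore("1:**-1.0 5.0-** 2:**-4.0 50.0-**")

m = Model("constant*TBabs*(kerrdisk + simpl*kerrd)")
m.TBabs.nH = 4.0            # N_H in units of 1e22 cm^-2
m.TBabs.nH.frozen = True    # fixed for the HIMS, SPL and LH data sets

Fit.statistic = "chi"
Fit.query = "yes"
Fit.perform()
```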
## 4 Results

An overview of the observations used in this work, including the date of observation, X-ray flux, hardness ratio, X-ray state, QPO frequency, accretion rate, and inner disc radius, is given in Table 3. The X-ray flux observed by the LAXPC 20 detector is presented in column 2 of Table 3. The value of HR2 is shown in column 3, where HR2 is defined as the ratio of the X-ray flux in the 13-60 keV band to that in the 3-5 keV band. We observe that the hardness ratio decreases continuously as the source moves from the Low Hard (LH) state to the HS state via the SPL state and the low HIMS state. The accretion rate, shown in column 6 of Table 3, generally increases as the energy spectra become softer; it is highest during the SPL state and lowest during the LH state. Columns 5 and 7 of Table 3 list the ranges of QPO frequencies and inner radii of the truncated disc for the different observations.

Fig. 6 shows the variation of QPO frequency with accretion rate (top left panel) and with inner disc radius (top right panel), and the variation of accretion rate with inner disc radius (bottom panel). While for some of the individual data sets (i.e., for observations taken during a particular spectral state, such as Obs. 4 and Obs. 1) correlations between these parameters are evident, there is in general no correlation when all the observations are considered. Next, we consider the possibility that the QPO frequency depends on both the accretion rate and the inner disc radius, in particular in the form suggested by Equation 1, i.e., that the QPO frequency divided by the accretion rate depends on the inner disc radius, as suggested by Misra et al. (2020). This is illustrated in Fig. 8, where the QPO frequency divided by the mass accretion rate is plotted against the inner radius of the accretion disc. In this case, a clear trend is visible for all the observations. The solid violet line in Fig. 8 represents the best-fit standard accretion disc model for low-frequency QPOs (LFQPOs) with spin parameter 0.97 and normalisation constant 0.01 (earlier work by Misra et al. (2020), who used only low HIMS data). For all the data sets, we find that the relationship is consistent with that predicted by the dynamical frequency model (given in Equation 1 with \(a=0.999\) and \(N=0.1\)), shown by the solid black line in Fig. 8. Note that the high spin value is already implied by the small inner radii of \(\sim 1.2\,R_{g}\) obtained from the spectral fitting. This work extends the earlier results to different spectral states, covering a large variation in accretion rate from \(0.1\times 10^{18}\) gm/s to \(5.0\times 10^{18}\) gm/s and a truncated radius changing from the last stable orbit of a maximally spinning black hole, \(\sim 1.2\), to \(\sim 19\) gravitational radii. Over this wide range, the frequencies of the C-type QPOs follow the trend predicted by the relativistic model and, interestingly, the high-frequency QPO at \(\sim 70\) Hz (an obvious outlier in the top panels of Fig. 6) also follows the same trend, suggesting a common origin. While the qualitative trend is as predicted, there are quantitative deviations, which we discuss in the next section.

We have so far studied the QPO frequency divided by \(\dot{M}\) as a function of the inner disc radius, based on the interpretation that the QPO frequency is the dynamical one given by Equation 1. To generalise, we define a variable \(Y=f_{QPO}/\dot{M}^{p}\) and check whether values of \(p\) other than unity would also represent the data, by testing whether \(Y\) is correlated with the inner disc radius. The absolute magnitude of the Spearman rank correlation reaches a maximum of 0.99 for \(p\) ranging between 0.8 and 1.2. The variation of the Spearman rank correlation with \(p\) is plotted in Fig. 7, which shows that the correlation does not change significantly for \(p\) values within 0.8 to 1.2.

Figure 6: The top left and top right panels illustrate the variability of the QPO frequencies with accretion rate and inner disc radius, respectively. The bottom panel shows the relationship between the accretion rate and the inner disc radius.

Figure 7: Variation of the Spearman rank correlation coefficient for \(Y=f_{QPO}/\dot{M}^{p}\) with inner disc radius, as a function of \(p\).

Figure 8: QPO frequency divided by accretion rate over a broad range of inner disc radii. The black solid line indicates the relativistic standard accretion disc model for dimensionless spin parameter \(a=0.999\) and normalisation constant \(N=0.1\). The solid plum-coloured line shows the best-fit model presented by Misra et al. (2020) for the low-frequency QPOs, for which the best-fit spin parameter was 0.973 and \(N\) was 0.11.
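The exponent scan behind Fig. 7 can be reproduced with a short sketch like the following; it assumes arrays of the fitted QPO frequencies, accretion rates (in \(10^{18}\) gm/s) and inner radii, and is an illustration rather than the authors' script.

```python
import numpy as np
from scipy.stats import spearmanr

def scan_exponent(f_qpo, mdot, radius, p_grid=np.linspace(0.2, 2.0, 91)):
    """|Spearman rho| between Y = f_QPO / mdot**p and the inner radius,
    evaluated for each trial exponent p."""
    rho = np.array([abs(spearmanr(f_qpo / mdot**p, radius).correlation)
                    for p in p_grid])
    return p_grid, rho

# The best p is where |rho| peaks; the text reports |rho| ~ 0.99 for p in 0.8-1.2.
# p_grid, rho = scan_exponent(f_qpo, mdot, radius)
# print(p_grid[rho.argmax()], rho.max())
```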
## 5 Discussion

In order to put the results of this work into perspective, it is first necessary to enumerate the various possible reasons why the data points in Fig. 8 show some deviations from the predicted values. It has been assumed that the colour factor \(f\) is a constant \(=1.7\). The colour factor depends on the local vertical radiative transfer in the disc and has been numerically estimated to be approximately 1.7 by Shimura & Takahara (1995) for black hole binaries. The radiative transfer depends on the vertical structure of the disc and on the fairly uncertain viscous energy dissipation as a function of height. Moreover, a corona on top of the disc and irradiation will also affect the colour factor. The effect of changing the colour factor is more prominent for observations with a larger inner truncated disc radius. For example, if the colour factor is increased to 2, the mass accretion rates and inner radii change only slightly for the soft-state data collected on 25 April 2016 and 27 April 2016: the mass accretion rate changes from \(1.95^{+0.06}_{-0.02}\) to \(1.93^{+0.10}_{-0.048}\times 10^{18}\) g/sec and the inner radius from \(1.04^{+0.42}_{-0.15}\) to \(1.32^{+0.62}_{-0.08}\,R_{g}\). On the other hand, for the low HIMS (15 Apr 2017), the accretion rate changes from \(0.74^{+0.07}_{-0.06}\) to \(2.4^{+0.3}_{-0.2}\times 10^{18}\) g/sec while the inner radius changes from \(4.6^{+0.3}_{-0.3}\) to \(9.6^{+0.3}_{-0.3}\,R_{g}\). An increase in the colour factor results in an increase in the accretion rate and inner radius, making the HIMS points (Obs. 4, 5, 6) in Fig. 8 move right and downwards. We have checked that if the colour factor is changed to 2, the predicted curve still matches the data points, but the normalisation factor increases from 0.1 to 0.15. Note that we have also assumed that the colour factor is independent of the accretion rate and radius, which may not be the case; some of the deviations of the data points from the predicted values could be due to such a dependence. It should be emphasised that the theoretical formula for the dynamical frequency (Equation 1) is an order-of-magnitude estimate, the uncertainty of which is parameterised by the normalisation factor \(N\). Thus, one may expect \(N\) to vary not only between different observations (with different accretion rates and inner disc radii) but also with radius, leading to deviations when the data are compared with a constant-\(N\) prediction. The theoretical prediction is based on the standard accretion disc, in which the disc extends to the last stable orbit and is not truncated. The sound speed at a given radius may differ when the disc is truncated at that radius compared to when it is not, and this difference may be a function of the accretion rate and radius. A related issue is the assumption of standard accretion disc theory that the viscous dissipation goes to zero at the last stable orbit, which is incorporated both in the form of Equation 1 and in the spectral model _kerrbb_ used in this work. This assumption forces the temperature (and hence the sound speed) to go to zero at the last stable orbit. However, this assumption may not correctly describe the system; instead, the accretion flow should necessarily pass through a sonic point, which leads to deviations from the standard theory near the last stable orbit (Abramowicz & Fragile, 2013).
Apart from these theoretical considerations, another potential reason for the deviation between the data and the predicted values is that the source may not be in a steady state. Out of the seven observations used in this work, the source shows significant short-term variability (on the hour/orbital time scale) during three observations (3 March 2016, 28 March 2017 and 1 April 2017; Obs. 1, 4 & 5) (Yadav et al., 2016; Rawat et al., 2018), as reflected in Table 2. During these observations, the values of the QPO frequency, the inner disc radius and the photon index clearly show a trend with time (for different orbits). Thus, spectra averaged over a whole observation may not provide accurate values of the accretion rate and inner disc radius. Moreover, when the system is dynamic, it may not be correct to model the time-averaged spectra with a steady-state model, as assumed when using a disc model like _kerrbb_. These three data sets show the largest deviations from the theory, as seen in Figure 8, since the disc was not in a steady state. The 15 April 2017 (Obs. 6) data support this argument: these data do not show any trend with time/orbit and fall in the middle of the points of Obs. 4 & 5 in Figure 8, with little deviation (see also Table 2). Given all the possibilities listed above, which may cause the data points not to follow the theoretical predictions accurately, it is quite remarkable that the overall predicted trend is seen for such a wide range of accretion rates, inner disc radii and QPO frequencies. Indeed, as mentioned earlier, the general result that for an empirical form \(Y=f_{QPO}/\dot{M}^{p}\) the best anti-correlation with radius is obtained for \(p\sim 1\) indicates that the QPO frequency can be identified with the dynamical one. It is also remarkable that the high-frequency QPO at \(\sim 70\) Hz follows the trend of the low-frequency ones; the explanation for the observed high frequency is that, for the high-frequency QPO, the accretion rate is significantly higher and the inner radius is close to the last stable orbit. Interpreting the QPO frequency as the dynamical one is an alternative to the model in which the QPO is due to precession of the inner flow at the Lense-Thirring frequency (Stella and Vietri, 1997; Ingram et al., 2009; Ingram and Motta, 2019; Motta et al., 2022; Motta, 2016; You et al., 2020). In that interpretation, the QPO frequency is expected to be a function only of the truncation radius and not of the accretion rate. Moreover, there is some evidence that the energy-dependent properties of some of the QPOs vary with the inclination angle of the binary (Van den Eijnden et al., 2016; Motta et al., 2015; Schnittman et al., 2006; Heil et al., 2015; Arur and Maccarone, 2020), which would be more readily explained by a precessing inner flow. At present, this evidence is limited to a few sources owing to the difficulty of estimating the inclination angle and the energy-dependent QPO properties. A more detailed theoretical analysis of the predicted inclination dependence of these two interpretations, along with better data, would be able to differentiate between them.

| Observations | X-ray flux for LXP20 (c/s) | HR2 (13-60 keV / 3-5 keV) | X-ray State | QPO freq. (Hz) | Accretion rate (\(10^{18}\) gm/s) | Inner radii (\(R_{g}\)) |
| --- | --- | --- | --- | --- | --- | --- |
| Obs. 1 | \(\sim 1840\) | \(\sim 0.36\) | SPL / HIMS | 2.56 - 4.94 | 4.17 - 5.25 | 8.96 - 18.15 |
| Obs. 2 & Obs. 3 | \(\sim 3324\) | \(\sim 0.13\) | HS | 69.81 - 71.43 | 1.69 - 1.97 | 1.37 - 1.69 |
| Obs. 4 | \(\sim 850\) | \(\sim 0.64\) | HIMS (low) | 3.44 - 5.41 | 0.66 - 0.86 | 3.31 - 6.60 |
| Obs. 5 | \(\sim 780\) | \(\sim 0.57\) | HIMS (low) | 4.04 - 5.31 | 0.70 - 0.86 | 3.78 - 5.86 |
| Obs. 6 | \(\sim 720\) | \(\sim 0.46\) | HIMS (low) | 4.48 - 4.52 | 0.74 - 0.77 | 4.60 - 4.70 |
| Obs. 7 | \(\sim 320\) | \(\sim 0.99\) | LH | 2.01 - 2.42 | 0.11 - 0.16 | 2.35 - 3.58 |

Table 3: Overview of the analysis of GRS 1915+105 done in this work.
Note that in the interpretation used in this work, the QPO frequency is not expected to depend on the inclination angle of the disc. The wide-band spectral and rapid temporal capabilities of AstroSat and Insight-HXMT have shown that the frequencies of the C-type QPOs of GRS 1915+105 can be identified with general relativistic dynamical ones. In this work, we extend these results using AstroSat to a broader range of accretion rates and inner radii, and have shown that the high-frequency QPO may also be of a similar origin. This work needs to be extended to other observations of GRS 1915+105 and to other black hole systems. Apart from AstroSat and Insight-HXMT observations, such work can also be done with NICER, perhaps with high-energy spectral coverage from simultaneous NuSTAR data. Such a systematic, multi-observatory study will give a clearer picture of the origin of the QPO phenomenon in black hole systems.

## Acknowledgements

The authors would like to thank the anonymous reviewer for the insightful remarks and suggestions that considerably enhanced the quality of the manuscript. This work has used data from the Soft X-ray Telescope (SXT) developed at TIFR Mumbai, and the SXT POC at TIFR is acknowledged for verifying and releasing the data through the Indian Space Science Data Centre (ISSDC) and providing the required software tools. We would also like to thank the LAXPC POC and SXT POC teams for their support. In addition, this study utilised the Monitor of All-sky X-ray Image (MAXI) and Swift/BAT data provided by the MAXI and BAT teams. This research has used software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), a service of the Astrophysics Science Division at NASA.

## Data Availability

The software and packages utilised for data analysis are available at NASA's HEASARC website ([https://heasarc.gsfc.nasa.gov/docs/software/heasoft/patch.html](https://heasarc.gsfc.nasa.gov/docs/software/heasoft/patch.html)). The data used in this article are available at the AstroSat-ISSDC website ([https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp](https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp)), the MAXI website ([http://maxi.riken.jp/top/index.html](http://maxi.riken.jp/top/index.html)), and NASA's Swift website for the Swift/BAT observations ([https://swift.gsfc.nasa.gov/results/transients/](https://swift.gsfc.nasa.gov/results/transients/)).
2303.07804
Toward Standardized Performance Evaluation of Flow-guided Nanoscale Localization
Nanoscale devices with Terahertz (THz) communication capabilities are envisioned to be deployed within human bloodstreams. Such devices will enable fine-grained sensing-based applications for detecting early indications (i.e., biomarkers) of various health conditions, as well as actuation-based ones such as targeted drug delivery. Associating the locations of such events with the events themselves would provide an additional utility for precision diagnostics and treatment. This vision yielded a new class of in-body localization coined under the term "flow-guided nanoscale localization". Such localization can be piggybacked on THz communication for detecting body regions in which biological events were observed based on the duration of one circulation of a nanodevice in the bloodstream. From a decades-long research on objective benchmarking of "traditional" indoor localization, as well as its eventual standardization (e.g., ISO/IEC 18305:2016), we know that in the early stages the reported performance results were often incomplete (e.g., targeting a subset of relevant performance metrics), benchmarking experiments were carried out in different evaluation environments and scenarios, and inconsistent performance indicators were utilized. To avoid such a "lock-in" in flow-guided localization, in this paper we propose a workflow for the standardized performance evaluation of such localization. The workflow is implemented in the form of an open-source simulation framework that is able to jointly account for the mobility of the nanodevices, in-body THz communication with on-body anchors, and energy-related and other technological constraints (e.g., pulse-based modulation) at the nanodevice level. Accounting for these constraints, the framework is able to generate the raw data that can be streamlined into different flow-guided localization solutions for generating standardized performance benchmarks.
Arnau Brosa López, Filip Lemic, Jakob Struye, Jorge Torres Gómez, Esteban Municio, Carmen Delgado, Gerard Calvo Bartra, Falko Dressler, Eduard Alarcón, Jeroen Famaey, Sergi Abadal, Xavier Costa Pérez
2023-03-14T11:20:17Z
http://arxiv.org/abs/2303.07804v2
# Toward Standardized Performance Evaluation of Flow-guided Nanoscale Localization

###### Abstract

Nanoscale devices featuring Terahertz (THz)-based wireless communication capabilities are envisioned to be deployed within human bloodstreams. Such devices are envisaged to enable fine-grained sensing-based applications for detecting events (i.e., biomarkers) providing early indications of various health conditions, as well as actuation-based ones such as targeted drug delivery. Intuitively, associating the locations of such events with the events themselves would provide an additional utility for precision diagnostics and treatment. This vision recently yielded a new class of in-body localization coined under the term "flow-guided nanoscale localization". Such localization can be piggybacked on THz-based communication for detecting the body regions in which events were observed, based on the duration of one circulation of a nanodevice in the bloodstream. From decades-long research on objective benchmarking of "traditional" indoor localization, as well as its eventual standardization (e.g., ISO/IEC 18305:2016), we know that in the early stages the reported performance results were often incomplete (e.g., targeting a subset of relevant performance metrics), benchmarking experiments were carried out in different evaluation environments and scenarios, and inconsistent performance indicators were utilized. To avoid such a "lock-in" in flow-guided nanoscale localization, in this paper we discuss a workflow for the standardized performance evaluation of such localization. The workflow is implemented in the form of an open-source framework that is able to jointly account for the mobility of the nanodevices in the bloodstream, in-body THz communication between the nanodevices and on-body anchors, and energy-related and other technological constraints (e.g., pulse-based modulation) at the nanodevice level. Accounting for these constraints, the framework is able to generate the raw data that can be streamlined into different flow-guided localization solutions for generating standardized performance benchmarks.

Flow-guided nanoscale localization, Terahertz, performance evaluation methodology, precision medicine.

## I Introduction

Recent advances in nanotechnology are paving the way toward nanoscale devices with integrated sensing, computing, and data and energy storage capabilities [1]. Among others, such devices will find applications in precision medicine [2, 3]. A subset of such applications envisions the nanodevices being deployed in the patients' bloodstreams. As such, these nanodevices will have to abide by the environmental constraints limiting their physical size to that of the red blood cells (i.e., smaller than 5 microns). Due to such constrained sizes, their sole powering option will be to scavenge environmental energy (e.g., from heartbeats or through ultrasound-based power transfer) utilizing nanoscale energy-harvesting entities such as Zinc-Oxide (ZnO) nanowires [1]. Due to constrained powering, such devices are expected to flow passively within the patients' bloodstreams. Recent advances in the development of novel materials, primarily graphene and its derivatives [4], herald nanoscale wireless communication in the THz region (i.e., 0.1-10 THz) [3]. In the context of the above-discussed nanodevices, wireless communication capabilities will enable two-way communication between them and the outside world [5].
Fully integrated nanodevices with communication capabilities are paving the way toward sensing-based applications such as oxygen sensing within the bloodstream for detecting hypoxia (i.e., a biomarker for cancer diagnosis), as well as actuation-based ones such as non-invasive targeted drug delivery for cancer treatment. As recognized in recent literature, nanodevices with communication capabilities will also provide a primer for flow-guided localization in the bloodstream [3, 6]. Intuitively, such localization would enable associating the location of a nanodevice with a detected event (e.g., hypoxia, a target for targeted drug delivery), providing medical benefits along the lines of non-invasiveness, early and precise diagnostics, and reduced costs [6, 7, 8]. Flow-guided localization is in an early research phase, with only a few works targeting the problem [6, 7, 8]. The main challenges include i) the centimeter-level range of THz-based in-body wireless communication at the nanoscale, ii) energy-related constraints stemming from energy harvesting being the sole powering option of the nanodevices, and iii) the high mobility of the nanodevices within the bloodstream, with their speeds reaching 20 cm/sec. Flow-guided localization proposals have made encouraging progress in addressing the above challenges, yet we argue that research on such localization is still to flourish and that further advances are needed. Based on the above argument and the knowledge generated through decades of research on "traditional" indoor localization, we posit that, at this early stage, there is a need for a framework for the objective performance evaluation of flow-guided THz-based nanoscale localization. Specifically, the research on indoor localization in its early stages suffered from an inability to compare the performance of different approaches in an objective way. In other words, the reported performance results were often incomplete (e.g., targeting a single metric such as localization accuracy and ignoring other important ones such as the latency in reporting location estimates), utilized different performance indicators (e.g., mean vs. median accuracy), and relied on different evaluation environments and scenarios. These issues were eventually recognized in the community and addressed through projects such as the EU Evaluation of RF-based Indoor Localization Solutions for the Future Internet (EVARILOS) [9] and the NIST Performance Evaluation of Smartphone Indoor Localization Apps (PerfLoc) [10], as well as through indoor localization competitions such as the one organized by Microsoft at the ACM/IEEE IPSN conference [11], eventually resulting in the development of an ISO/IEC standard for the objective benchmarking of indoor localization approaches [12]. With this article, we aim at avoiding the initial "lock-in" in the comparability of flow-guided localization by proposing a framework for the standardized performance evaluation of such localization approaches. Specifically, we discuss the fundamentals of flow-guided nanoscale localization, provide a categorization of existing approaches, and discuss the limitations of their current performance assessments. This is followed by a proposal of a workflow for the standardized and objective performance assessment of flow-guided localization. In addition, an open-source network simulator is provided that implements the discussed workflow, giving the community the first tool for the realistic and objective assessment of flow-guided localization.
Finally, we demonstrate the performance of the simulator by evaluating the performance of the current state-of-the-art flow-guided localization solution. ## II Related Works ### _Performance Evaluation of THz Nanoscale Systems_ As argued in [13], simulating the performance of a given system allows for completely controllable experimental conditions and environments. In combination with repeatability and cost-efficiency, these advantages make simulations a valuable tool to evaluate new algorithms, especially at early research stages. Given that the research on flow-guided localization is still in a preliminary stage, simulating the operation of such systems can be considered as a natural first step in the assessment of their performance. This was only meagerly recognized in the scientific community, with BloodVoyagerS [13] being the first tool that provides a simplified bloodstream model for simulating the mobility of the nanodevices within it. The simulator covers 94 vessels and organs, with the origin of the coordinate system placed in the center of the heart. The spatial depth of all organs is equated, with the reference thickness of 4 cm mimicking the depth of a kidney, resulting in the z-coordinates of the nanodevices being in the range between 2 and -2 cm (cf., Figure 1). The simulator further assumes that the arteries and veins are set anterior and posterior, respectively. Transitions from the arteries to veins happen in the organs, limbs, and head. In the heart, the blood transitions from the veins to arteries, i.e., the blood model transitions from posterior to anterior. The flow rate is modeled through the relationship between pressure difference and flow resistance. This results in average blood speeds of 20, 10, and 2-4 cm/sec in the aorta, arteries, and veins, respectively. Transitions between the arteries and veins are simplified by utilizing the constant velocity of 1 cm/sec.

Figure 1: Nanodevice mobility in the BloodVoyagerS [13]

TeraSim [14] is the first simulation platform for modeling THz communication networks that captures the capabilities of nanodevices and the peculiarities of THz propagation. TeraSim is built as a module for ns-3 (i.e., a discrete-event network simulator), implementing physical and link layer solutions tailored to nanoscale THz communications. Specifically, at the physical layer the simulator features pulse-based communications with an omnidirectional antenna over distances shorter than 1 m, assuming a single, almost 10 THz wide transmission window. At the link layer, TeraSim implements two well-known protocols, i.e., ALOHA and CSMA, while a common THz channel module implements a frequency selective channel model, assuming in-air wireless communication. We will utilize BloodVoyagerS and TeraSim as the starting point in the development of the envisioned simulator. ### _Evaluation Methodologies for Flow-guided Localization_ As argued, research lessons on the performance evaluation of indoor localization systems can to an extent be applied for objective and standardized assessment of flow-guided localization. The EU EVARILOS project was among the early efforts aiming at such performance assessment for RF-based indoor localization [9]. 
Within the project, a performance assessment methodology was developed that included a number of evaluation scenarios, envisioned to capture the performance of evaluated solutions along a heterogeneous set of metrics (including localization accuracy, latency, and energy consumption) and to assess and mitigate the negative effects of RF interference on the performance of the evaluated solutions. The project also yielded a web platform populated with raw data that can be fed into an indoor localization solution for its streamlined performance assessment along a number of standardized scenarios. A similar approach was followed in the NIST PerfLoc project, albeit with the set of possible solutions to be evaluated extending beyond Radio Frequency (RF)-based ones to Inertial Measurement Unit (IMU)-based, Global Positioning System (GPS)-supported, and other hybrid approaches. Finally, the IPSN/Microsoft Indoor Localization Competition [11] was among the first efforts to support back-to-back evaluation of different indoor localization approaches along the same set of conditions. The above-discussed and subsequent efforts yielded the following lessons: i) performance comparison of different indoor localization approaches can be carried out in an objective way by following the same evaluation methodology, i.e., utilizing the same environments, scenarios, and evaluation metrics, ii) such evaluation can be streamlined by providing a set of raw data captured along a standardized evaluation methodology, which is envisioned to be used as an input to an indoor localization solution, and iii) the performance of RF-based indoor localization can be degraded by both self-interference and interference from neighboring RF-based systems operating in the same frequency band. In the current state of performance assessment of flow-guided localization, the approaches from [7] and [8] are evaluated in a rather simplified way, accounting solely for the mobility of the nanodevices as modeled by BloodVoyagerS. As such, their performance assessments ignore many potential effects of wireless communication (e.g., RF interference), as well as energy-related constraints stemming from energy-harvesting and, consequently, the intermittent operation of a nanodevice [1]. It is also worth mentioning that [6] carried out a limited performance evaluation assessing the number of nanodevices needed for localizing a nanodevice at any location in the body in a multi-hop fashion. The derived assessments can, therefore, at this point only serve as a rough indication due to their low levels of realism and subjective evaluation methodologies. In this work, we enhance the realism of such assessments by jointly accounting for the mobility of the nanodevices, in-body nanoscale THz communication between the nanodevices and the outside world, and energy-related and other technological constraints (e.g., pulse-based modulation) of the nanodevices. ## III Flow-guided Localization Fundamentals RF-based in-body localization approaches can be categorized based on the type of applications they support, as depicted in Figures 2 and 3. Intuitively, there is a need for localization of in-body devices that are either mobile or nomadic within the body; otherwise their locations could be derived during deployment. The nomadic or mobile devices in the body are envisioned to support three main types of applications [15]. 
The first is the localization of macroscale devices within the body, specifically for localizing gastric capsules (nb., as there is a clear diagnostic benefit of assigning the measurements of the gastrointestinal system with the locations at which they were taken) and implants (nb., for detecting their movements away from the intended deployment locations). Such devices are not envisaged to feature nanoscale dimensions and their expected levels of mobility are either low (i.e., several cm/hour in the gastrointestinal system) or there is potentially no mobility in case of the implants. This reduces the localization requirements compared to the other two categories in Figure 2, primarily due to the fact that localization can be performed using RF signals in sub-6GHz frequency bands. Thus, there are no stringent requirements in terms of the devices' physical sizes, hence they can feature batteries and do not experience intermittent behavior. A representative of this approach is [15], in which out-of-band aliasing of signals transmitted by an out-of-body anchor at the central frequency of 1 GHz is utilized for localizing a static backscattering diode in the body, reporting cm-level accuracy of the procedure. The second category targets localizing nanoscale devices that feature low mobility levels, utilized in applications such as tracking fiducial markers (nb., devices that provide accurate target location for tumors or organs which move with respect to surrounding anatomy) and other types of miniaturized implants. Although a subset of such applications can be enabled through devices that do not feature nanoscale dimensions [15], enabling their full set will require nanoscale entities (e.g., early targeted treatment of small-scale tumors), hence this type is categorized separately in Figure 2.

Figure 2: Categorization of RF-based in-body localization approaches, corresponding applications, their requirements, and relevant performance metrics

Here, a representative is an early effort in [6], where the authors assume the nanodevices are densely deployed and passively flowing in the bloodstream. For such a scenario, the authors propose an iterative localization concept in which the nanodevices closer to the body surface are localized first with the support from on-body anchors, followed by the usage of the localized nanodevices as both anchors and relays for localizing the nanodevices deeper in the body (cf., Figure 3.b). The authors assume energy-harvesting nanodevices operating at THz frequencies due to size constraints in the bloodstream. Such an approach could conceptually be applied for localizing nanoscale implants within the body. However, further research is needed for addressing the associated challenges (e.g., stringent latency-related constraints for multi-hop communication). Both of the above-discussed categories of RF-based in-body localization target localizing a (nano)device within the body. This can be viewed as analogous to indoor localization where, within an indoor environment, the goal is to localize a device (e.g., smartphone) at an unknown location. Therefore, evaluation methodologies applicable to traditional indoor localization can also be applied for localization of an in-body (nano)device. 
Taking the EVARILOS Benchmarking Methodology [9] as an example, the metrics of interest are the point accuracy of localization (i.e., the Euclidean distance between true and estimated locations), the latency and energy consumption required for localizing the device, and the reliability of such localization (i.e., the probability of reporting a location estimate upon request), as depicted in Figure 2. The final category is the flow-guided nanoscale localization considered in this work. Here, the goal is to use the nanodevices to detect and localize a target event, not necessarily to localize themselves (cf., Figure 3.c). As discussed earlier, the work in [6] can conceptually support this type of scenarios and is, therefore, included in this category. Nonetheless, the representatives of such localization are [7, 8]. In these approaches, the authors utilize machine learning models for distinguishing a region through which each nanodevice passed during one circulation through the bloodstream. The authors in [8] base this procedure on tracking the distances traversed by a nanodevice in its circulations through the bloodstream by utilizing a conceptual nanoscale IMU. However, this poses challenges in terms of resources available at the nanodevice level for storing and processing IMU-generated data, and challenges related to the vortex flow of blood negatively affecting the accuracy of IMU readings. The authors in [7] mitigate these issues by tracking the time needed for each circulation through the bloodstream. The captured distance or time is then envisioned to be reported to a beaconing anchor deployed in the proximity of the heart utilizing short range THz-based backscattering at the nanodevice level. Given that only the body region through which the nanodevice traversed is detected, these localization approaches are (in contrast to [6]) not designed to provide point localization of the target. This is despite the fact that point localization of the target event would be immensely beneficial for healthcare diagnostics. Moreover, the region detection accuracy and reliability of localization can intuitively be enhanced with an increase in the number of circulations the nanodevices make in the bloodstream. As a trade-off, such an increase would negatively affect the energy consumption of the localization procedure. Therefore, in flow-guided localization the relevant performance metrics such as the point and region accuracies, reliability, and energy consumption should be considered as a function of the application-specific delay allowed for localizing target events (cf., Figure 2).

Figure 3: Schematics of different types of RF-based in-body localization approaches

## IV Framework for Standardized Performance Evaluation of Flow-guided Localization ### _Evaluation Workflow_ As discussed previously, enabling flow-guided localization of the nanodevices flowing in the bloodstream requires at least a single anchor mounted on the patient's body. Flow-guided localization approaches in [7, 8] can be enabled with a single anchor strategically positioned in the proximity of the heart. This is because the heart is the only location through which each nanodevice is guaranteed to pass in each circulation through the bloodstream. Additional anchors can be introduced into the system by specifying their coordinates in the configuration file of the simulator, as indicated in Figure 4. 
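As an aside, the circulation-time principle that the evaluated approaches [7, 8] build on can be illustrated with a deliberately simplified, non-learning sketch: each body region corresponds to a loop through the bloodstream with a characteristic duration, and a reported circulation time is mapped to the region with the closest expected duration. The region names and durations below are invented placeholders; the actual approaches train machine learning models on such raw data instead.

```python
# Toy region inference from a reported circulation time. The expected loop
# durations are invented placeholders, not physiological values.
EXPECTED_LOOP_SEC = {"head": 25.0, "left arm": 40.0, "right leg": 60.0, "kidneys": 30.0}

def infer_region(circulation_time_sec):
    """Return the region whose expected loop duration is closest to the report."""
    return min(EXPECTED_LOOP_SEC,
               key=lambda region: abs(EXPECTED_LOOP_SEC[region] - circulation_time_sec))

print(infer_region(58.2))  # -> "right leg"
```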
The on-body anchors are expected to feature batteries or similar powering sources, hence they are assumed to be continuously operational. Their main roles are to transmit beacon packets and receive the backscattered responses from the nanodevices. The nanodevices are assumed to feature capacitors for energy storage and ZnO nanowires as the energy-harvesting entities. The capacitor charging is modeled as an exponential process accounting for the energy-harvesting rate and interval (e.g., 6 pJ per sec and per 20 ms for harvesting from heartbeats and ultrasound-based power transfer, respectively [1]), as well as the capacitor's storage capacity. The nanodevices are assumed to feature intermittent behavior due to harvesting and storage constraints. This behavior is modeled through the _Turn ON_ threshold, i.e., if the current energy level of a nanodevice is above the threshold, the nanodevice is turned on. Once its energy is fully depleted, the nanodevice turns off, followed by a turn on when its energy increases above the _Turn ON_ threshold. Moreover, if the nanodevices are turned on, they are assumed to periodically carry out a sensing or actuation task with a given frequency. Each execution of a task is expected to consume a certain constant amount of energy, hence the more frequent the task, the more energy will be consumed by each nanodevice. The location(s) of the event(s) to be detected is (are) envisioned to be hard-coded by the experimenter, abiding by the constraints of the scenario. Specifically, this location has to be in or near the bloodstream in order to eventually be detected by the nanodevices. The event is assumed to be detected by a nanodevice if i) the Euclidean distance between its location and the location of the nanodevice at the time of the execution of a task is smaller than the predefined threshold (nb., configured to 1 cm in the reported experiments), and ii) the nanodevice is turned on. Communication between an anchor and a nanodevice is based on passive reception of a beacon, followed by active (i.e., energy-consuming) transmission of a response packet from the nanodevice, as assumed in the representative work from the literature [7]. The anchor is beaconing with a constant beaconing frequency and transmit power. In each beacon packet, the anchor advertises its Medium Access Control (MAC) address. In the backscattered packets, the nanodevices report their MAC addresses, the time elapsed since their last passage through the heart, and an event bit. The time elapsed since the last passage through the heart and the event bit represent the raw data that can be fed into a flow-guided localization approach for localizing a target. Each time a nanodevice passes through the heart, the time elapsed since the last passage is re-initialized to zero in order not to compound multiple circulations. The event bit is assumed to be a logical "1" in case of a successful detection of a target event and "0" otherwise. Similarly, the event bit is reinitialized to "0" in each passage through the heart. ### _Framework Design and Implementation_ The framework for standardized performance evaluation of flow-guided localization is depicted in Figure 4. The input to the framework is a set of parameters defining an evaluation scenario. The inputs are envisioned to be passed to the ns-3-based simulator for the generation of raw data to be used for streamlined evaluation of a given flow-guided localization solution for the assumed scenario, resulting in a performance benchmark, as indicated in Figure 4. 
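Returning to the nanodevice energy model described above, the following minimal sketch captures its main ingredients: exponential capacitor charging toward a maximum, a _Turn ON_ threshold, constant per-task energy consumption, and event detection gated on both the distance threshold and the ON state. All constants are illustrative assumptions rather than the simulator's defaults.

```python
import math

E_MAX, TURN_ON, TASK_COST = 800e-12, 10e-12, 1e-12  # storage, threshold, per-task (J)
HARVEST_RATE = 6e-12          # ~6 pJ per sec, as quoted for heartbeat harvesting [1]
DETECTION_RADIUS_CM = 1.0

def charge(energy, dt_sec):
    """Exponential-saturation charging of the capacitor toward E_MAX."""
    return E_MAX - (E_MAX - energy) * math.exp(-(HARVEST_RATE / E_MAX) * dt_sec)

def run_task(energy, is_on, nanodevice_pos, event_pos):
    """One sensing task; returns the updated (energy, is_on, event_bit)."""
    if not is_on and energy >= TURN_ON:
        is_on = True                 # turn on once above the Turn ON threshold
    if is_on and energy >= TASK_COST:
        energy -= TASK_COST          # each task consumes a constant amount of energy
        detected = math.dist(nanodevice_pos, event_pos) < DETECTION_RADIUS_CM
        return energy, is_on, detected
    return energy, False, False      # insufficient energy: off until recharged
```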
Each streamlined performance benchmark consists of a set of relevant performance metrics, in turn allowing for an objective back-to-back comparison of different approaches in a consistent environment along the same set of scenarios and performance metrics. The architecture of the simulator follows a well-established ns-3 layered model, as depicted in Figure 4. The _AnchorApplication_ module implements continuous beaconing with a predefined period (nb., with 100 ms being a default value). Each beacon packet is forwarded to the _THzNetDevice_ module toward the communication stack implemented within the TeraSim simulator. The link and physical layers implement the ALOHA protocol and TS-OOK modulation, respectively. The THz channel is modeled by calculating the receive power for each communicating pair of devices and scheduling the invocation of the _ReceivePacket()_ method accounting for the corresponding propagation time. The channel model entails in-body path-loss and Doppler terms [8]. The path-loss is calculated using the attenuation and thickness parameters of the vessel, tissue, and skin. The Doppler term is accounted for by evaluating the change in relative positions between the nanodevices and anchors with time. The _ReceivePacket()_ method checks for potential collisions by calculating the SINR and discarding the packet if the SINR is below the predefined threshold for reception. Otherwise, the packet is passed all the way up to the application layer of the nanodevice. At the nanodevice level, the receive power of the beacon is used for setting up the transmission power of the packet to be backscattered. This is followed by backscattering the response packet from the nanodevice toward the anchor by utilizing the same procedure as for the transmission of the beacon.

Figure 4: Overview of the framework for standardized performance evaluation of flow-guided localization

The anchors are assumed to be static entities and feature sufficient energy for continuous operation. The nanodevices are assumed to be energy-harvesting entities that are mobile within the bloodstream. To model their mobility, we have integrated BloodVoyagerS in our simulator, as visible in Figure 4. Invoking a BloodVoyagerS execution results in generating a Comma Separated Value (CSV) file that specifies the locations of the nanodevices in the bloodstream within a simulation time frame, sampled at 1 Hz. Since ns-3 is an event-driven simulator, at each BloodVoyagerS-originating location of a nanodevice, the nanodevice is assumed to carry out a sensing/actuation task. Given that for certain applications carrying out such tasks could be required more frequently, we provide an upsampler for the BloodVoyagerS-originating locations sampled at 1 Hz. As the vessels in BloodVoyagerS are modeled using straight lines, the upsampling is based on linear interpolation with a small random component drawn from a zero-mean Gaussian distribution, representing the vortex flow of blood and minor changes in the diameters of veins, arteries, etc. At each new location, the nanodevice is expected to carry out a task for detecting an event of interest. 
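A minimal sketch of the location upsampler just described: 1 Hz BloodVoyagerS positions are linearly interpolated to the task rate, with a zero-mean Gaussian perturbation standing in for the vortex flow of blood and small variations in vessel diameters. The noise scale is an assumption for illustration.

```python
import numpy as np

def upsample(positions_1hz, factor=3, noise_std_cm=0.05, seed=0):
    """positions_1hz: (N, 3) per-second locations; returns ((N-1)*factor + 1, 3)."""
    rng = np.random.default_rng(seed)
    p = np.asarray(positions_1hz, dtype=float)
    t = np.arange(len(p))
    t_fine = np.linspace(0, len(p) - 1, (len(p) - 1) * factor + 1)
    fine = np.column_stack([np.interp(t_fine, t, p[:, k]) for k in range(p.shape[1])])
    # perturb only the interpolated interior points, keeping endpoints anchored
    fine[1:-1] += rng.normal(0.0, noise_std_cm, size=fine[1:-1].shape)
    return fine
```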
### _A Snapshot of Framework-generated Outputs_ A snapshot of outputs generated using the framework is depicted in Figures 5 and 6. In the generation of the outputs, we have utilized a single anchor positioned in the center of the heart, 64 nanonodes sampling for target events at 3 samples per second, ultrasound-based energy-harvesting at the nanonode level [1], the overall simulation duration of 1000 sec, and the Euclidean distance for detecting a target event of 1 cm. Figure 5 depicts the raw data generated by an example nanonode during one simulation runtime. The raw data consists of the _circulation_time_ parameter indicating the time passed since the last reception of a beacon from the anchor and the _event_bit_ suggesting if the target event was detected since the last beacon reception. The main takeaway from Figure 5 is that, for some raw data instances, the _circulation_time_ is larger than 90 sec, which is the maximum circulation time that might occur in a single loop through the bloodstream. This implies that in some circulations the raw data is not reported to the anchor and, when the data is eventually reported, it contains the compound of multiple such circulations. Such behavior is a result of one of the following: i) intermittent operation of a nanonode due to energy-harvesting, resulting in the nanonode sometimes not featuring sufficient energy for sensing or transmission, and ii) self-interference from the other nanonodes and anchors, resulting in reception and transmission errors. In addition, random paths of the nanonodes in the vicinity of the target event (i.e., in an organ, limb, or head) can result in the nanonodes missing the event due to their Euclidean distance from the event never being smaller than the threshold of 1 cm, despite the fact that they went through the loop that contained the event. This implies that the _event_bit_ parameter might in some cases be erroneous.

Fig. 5: An example raw data output

Figure 6 depicts a set of performance metrics generated in a streamlined fashion using the framework. In the generation of the results, we have utilized a modified approach from [7] and 20 randomly sampled evaluation points (i.e., target events) in the bloodstream. The modification in the approach pertains to the random selection of the left or right regions, given that the approach, assuming a single anchor, is by design unable to distinguish between such regions for certain parts of the body (e.g., limbs). As visible from the figure, the reliability of localization increases as a function of localization delay. As an example, the reliability is increased from less than 50% to more than 90% if the delay is increased from 2 to 15 min. Our results again reveal that certain assumptions made in earlier works on flow-guided nanoscale localization ignore several phenomena that are expected to occur in practice, pertaining to unreliable THz-based communication between in-body nanonodes and on-body anchors and intermittent operation of the nanonodes due to energy-harvesting. When these are accounted for, as done when utilizing the proposed framework, our results further reveal relatively poor performance of the evaluated flow-guided localization solution in the considered scenario. Specifically, the region detection accuracy is at most 40% and features only a small increase with the delay. Given that the approach from [7] cannot report point estimates but solely the estimated regions, in the calculation of the point accuracy we have utilized the centroid of a region as its point estimate. This procedure is well-established in the domain of benchmarking of proximity-based indoor localization solutions [9]. 
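These metrics can be computed from the framework outputs in a straightforward way; the hedged sketch below derives reliability as the fraction of target events for which a region estimate was produced within the allowed delay, and the point error as the Euclidean distance from the true event location to the centroid of the estimated region, following the proximity-based convention from [9]. The data layout is an assumption for illustration.

```python
import numpy as np

def reliability(estimates):
    """estimates: list of estimated region names, or None if no estimate was produced."""
    return sum(e is not None for e in estimates) / len(estimates)

def point_errors(true_locations, estimates, region_centroids):
    """Euclidean distances between true event locations and estimated region centroids."""
    return [float(np.linalg.norm(np.asarray(loc) - region_centroids[est]))
            for loc, est in zip(true_locations, estimates) if est is not None]
```

Per-delay distributions of such errors are then what the box-plots in Figure 6 summarize.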
In Figure 6, the depicted point accuracy can be considered irrelevant, given the low region detection accuracy. In other words, the point accuracy should be derived only for the correctly detected regions in order to express the fine-grained ability of localizing target events. We nonetheless depict the point accuracy even for the case of incorrectly detected regions to draw readers' attention to this issue. The point accuracy is depicted in a regular box-plot fashion, where each box-plot depicts the distribution of localization errors for the 20 considered target events and a given delay. Finally, the time-dependent energy level of an example nanonode depicted in Figure 6 indicates the energy consumption of different tasks at the nanonode level. Such indications are necessary for energy-aware optimizations of the task scheduling to maximize the operational time of the intermittently-operating nanonodes in a similar way as in [1]. ## V Conclusion We argue that there is a need for objective evaluation of the performance of flow-guided nanoscale localization. We further argue that such objectiveness can be achieved by utilizing the same evaluation environment, scenarios, and performance metrics. This is achieved by proposing a workflow for performance assessment of flow-guided localization and its implementation in the form of a simulator, providing the community with the first tool for objective evaluation of flow-guided localization. Our results reveal relatively poor accuracy of the evaluated solution in the considered scenario. This is due to unreliable THz communication between in-body nanonodes and on-body anchors and intermittent operation of the nanonodes due to energy-harvesting. Accuracy enhancements are envisioned as a part of our future work along the lines of introducing additional anchors at strategic locations on the body (e.g., wrists) and developing a more suitable machine learning model that accounts for the fact that the raw data might be erroneous (e.g., compounding circulation times). Regardless of the poor accuracy, our results indicate that the proposed workflow and the simulator can be utilized for capturing the performance of flow-guided localization approaches in a way that allows objective comparison with other approaches. ## Acknowledgments This research was supported by the German Research Foundation (DFG, NaBoCom project, grant nr. DR 639/21-2).
2304.02703
Wavelength and phase considerations for multi-pulse plasma generation of terahertz
We present a numerical study on plasma generation of THz radiation utilizing multiple light pulses of various wavelengths in an optical scheme that is readily achievable in a tabletop environment. To achieve coherent THz emission it is necessary to carefully consider all the wavelengths involved in a multi-pulse setup. Previous theoretical work has explored ideal waveforms and electric field symmetries for optimal efficiency in generating THz from plasma [Phys. Rev. Lett. 114 183901 (2015)]. In practice such setups are quite delicate and prone to instability. We show that wavelength combinations with lower theoretical efficiency can more easily produce stable THz pulses in a tabletop environment combining readily available near-infrared wavelengths.
Clayton D. Moss, Shayne A. Sorenson, Jeremy A. Johnson
2023-04-05T18:55:02Z
http://arxiv.org/abs/2304.02703v1
# Wavelength and Phase Considerations for Multi-Pulse Plasma Generation of Terahertz ###### Abstract We present a numerical study on plasma generation of THz radiation utilizing multiple light pulses of various wavelengths in an optical scheme that is readily achievable in a tabletop environment. To achieve coherent THz emission it is necessary to carefully consider all the wavelengths involved in a multi-pulse setup. Previous theoretical work has explored ideal waveforms and electric field symmetries for optimal efficiency in generating THz from plasma [Phys. Rev. Lett. **114** 183901 (2015)]. In practice such setups are quite delicate and prone to instability. We show that wavelength combinations with lower theoretical efficiency can more easily produce stable THz pulses in a tabletop environment combining readily available near-infrared wavelengths. ## 1 Introduction Terahertz (THz) radiation is a useful spectroscopic tool, often employed to study ultrafast electronic and lattice dynamics. [1] One method of generating stable THz pulses for phase-resolved spectroscopy is focusing a two-color beam in air to generate a plasma. [2, 3, 4] One advantage of THz pulses generated from plasma is their extremely broad spectral bandwidths - a consequence of the highly nonlinear nature of the plasma generation process and the lack of any constraints imposed by a solid generation medium. Experimental and computational efforts have led to more optimal configurations; parameters considered include beam focusing geometry [5, 6], fundamental pump wavelength [7, 8], and the use of multiple colors of light beyond a standard two-color setup [9, 10, 11]. Notably, Martinez et al. [9] proposed a "sawtooth" waveform made from multiple harmonics of light. While an ideal sawtooth shape is not feasible in a typical tabletop experiment, advances have been made to incorporate a third harmonic in plasma generation [10, 12]. We and others have demonstrated that non-resonant three-color schemes can feasibly increase THz output in a tabletop setup with minimal additional equipment. [13, 14, 15, 16] These involve adding an 800 nm beam to an existing two-color beam with an infrared (IR) fundamental. Excess 800 nm light is often available in a setup, e.g., discarded during down-conversion to IR or lost through reflective optics; even small amounts of 800 nm light can be used to increase plasma THz output. For example, we showed that when a 1450 nm fundamental is combined with its second harmonic (725 nm), adding additional 800 nm light can increase the THz output by up to a factor of 30 for certain fluence combinations. [14] We consider such experiments in this study, as diagrammed in Fig. 1a. In brief, an 800 nm laser is down-converted to the 1200-1800 nm range, which is more efficient for two-color plasma generation. [7, 8] The resulting IR beam is used as the fundamental wavelength and is focused through a second harmonic generation (SHG) crystal to form the two-color plasma. Often much of the initial 800 nm intensity persists after down-conversion and is discarded as "waste" light, as is the case in an optical parametric amplifier (OPA). The discarded waste light is recombined with the IR fundamental. In certain fluence and delay combinations the addition of the 800 nm beam increases THz output. An interesting result of these experiments is that inherent carrier-envelope phase (CEP) instabilities do not affect the coherence of the generated THz pulses. 
[14, 16] One possible explanation is that the frequency beating caused by using incommensurate wavelengths allows for this stability. We expand on the experimental results of enhanced THz output from additional beams using the transverse photocurrent model to explore more possible generation schemes. In particular, we look at how THz output and stability are affected by the choice of fundamental IR wavelength. We discover that when the fundamental wavelength approaches twice the 800 nm wavelength, the relative phase stability of all three pulses becomes vital. If the CEP is stable between the three beams, the highest theoretical gains in THz efficiency are realized. Interestingly, the relative CEP relationship between the three pulses becomes unimportant at certain wavelength combinations, showing that enhancement can still be achieved without relative CEP stability. These contrasting scenarios show that CEP stable laser systems are needed to obtain maximum THz generation enhancement; however, specific wavelength combinations can be deployed to provide significant enhancement without a CEP-stable source. ## 2 Methods We employ the photocurrent model of plasma-generated THz emission,[17] which has been shown to be the primary generation mechanism by more robust, computationally expensive particle-in-cell simulations. [3] Modeling THz emission with only the photocurrent model has proven sufficient to provide qualitative analysis of experimental trends. [14, 12, 10] In the photocurrent model, the number of carriers in the plasma is calculated according to: \[\frac{dN(t)}{dt}=w(t)[N_{g}-N(t)]-r_{t}N(t), \tag{1}\] with \(N(t)\) being the number of carriers at any given time \(t\), \(N_{g}\) being the initial number density of available carriers, and \(r_{t}\) representing the estimated recombination constant. Recombination occurs on longer time scales than generation; thus the recombination term can often be ignored. The tunneling rate \(w(t)\) is calculated in response to the magnitude of the driving field \(E(t)\) by: \[w(t)=\frac{\alpha}{\hat{E}(t)}\exp\left(-\frac{\beta}{\hat{E}(t)}\right), \tag{2}\] where \(\alpha=4\omega_{a}r_{H}^{5/2}\), \(\beta=(2/3)r_{H}^{3/2}\), \(\omega_{a}=4.134\times 10^{16}s^{-1}\) is the atomic frequency unit and \(r_{H}=U_{N_{2}}/U_{H}\) is the ionization potential of nitrogen gas (15.6 eV) relative to that of atomic hydrogen (13.6 eV). We also define \(\hat{E}(t)=|E(t)|/E_{a}\) as the absolute value of the electric field of the laser in atomic units (\(E_{a}=5.14\times 10^{11}\) V/m). The emitted THz field is proportional to the derivative of the electron shift current that occurs in the plasma, which is simply the product of the carrier generation function and the electric field: \[E_{emit}\propto\frac{e^{2}}{m_{e}}E(t)N(t). \tag{3}\]

Figure 1: a) Simulated experimental setup, adapted from [14]. Separate beam lines are necessary to optimize the relative delay between the pulses before they are focused collinearly to make the plasma. b) Representation of simulation parameters; the primary variables are the IR fundamental frequency \(\omega\), relative delay \(\Delta t\), and 800 nm internal phase \(\phi\).

From this modeled emission we calculate THz yield as the square root of the absolute value of the Fourier transform squared, with the frequency bounded from 0.1 to 10 THz. The experimental setup which we simulate in our calculations is depicted in Fig. 1. 
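For concreteness, Eqs. (1)-(3) can be integrated numerically along the following lines. This is a hedged sketch, with forward-Euler integration, recombination neglected, and the emitted field taken as the time derivative of \(E(t)N(t)\) up to constants; it is not the authors' code.

```python
import numpy as np

OMEGA_A = 4.134e16              # atomic frequency unit (1/s)
E_A = 5.14e11                   # atomic field unit (V/m)
R_H = 15.6 / 13.6               # N2 ionization potential relative to hydrogen
ALPHA = 4.0 * OMEGA_A * R_H**2.5
BETA = (2.0 / 3.0) * R_H**1.5

def emitted_field(t, E, N_g=1.0):
    """t (s) and E (V/m) on a uniform grid; returns d[E*N]/dt up to constants."""
    Ehat = np.maximum(np.abs(E) / E_A, 1e-12)   # guard against division by zero
    w = (ALPHA / Ehat) * np.exp(-BETA / Ehat)   # tunneling rate, Eq. (2)
    dt = t[1] - t[0]
    N = np.zeros_like(E)
    for i in range(len(t) - 1):                 # Eq. (1), recombination neglected
        N[i + 1] = N[i] + dt * w[i] * (N_g - N[i])
    return np.gradient(E * N, dt)               # proportional to E_emit, Eq. (3)
```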
As performed in Sorenson et al. [14], a variable IR fundamental is chosen and focused through an optimally aligned SHG crystal. The excess 800 nm light taken from the OPA is reintroduced along the same focusing line with variable relative delay. At certain delays and relative powers, the optimized THz generation of the IR beam is enhanced further by the addition of the 800 nm beam. A detailed look at the phase considerations of the driving pulse is shown in Fig. 1b. The infrared fundamental frequency (\(\omega\)) and its second harmonic (\(2\omega\)) have their relative phase fixed at \(\pi/2\); this optimizes THz output and is achieved in practice by alignment of the SHG crystal. The reference THz yield is calculated using only these two colors. The fundamental is varied over a range of 1100 nm to 2000 nm, representative of feasible driving frequencies from an OPA. This is then compared to the output with the third 800 nm beam present. The relative delay is varied in addition to the internal phase of the pulse (\(\phi\)). For this study we keep the fluence of the IR fundamental constant, corresponding to a pulse energy of 240 \(\mu\)J; the fluence of the 800 nm beam corresponds to a pulse energy of 1930 \(\mu\)J (see SI of [14]). All pulses are modeled using Gaussian beam profiles with 75 fs set as the full width at half maximum. The relative phase of the 800 nm pulse is of particular interest to this work; in an experimental setup where beams follow different optical paths, it is difficult to ensure a fixed relative phase due to optical jitter and fluctuations. Regardless of whether fixed relative phases between all three pulses can be achieved, our calculations consider both fixed and random phase scenarios. In the fixed case all pulses are CEP stable. To model the random phase scenario, we model the THz emission for a range of relative phase delays (\(\phi\)) and then average the current produced by each phase (see Fig. 2). Averaging the current across different phases replicates the experimental condition of shot averaging. ## 3 Results and Discussion The modeled results of both conditions with and without CEP stability are shown in Fig. 2 for a 1450 nm fundamental and a 1600 nm fundamental. As shown in Fig. 2a, at optimal relative delays the emitted THz pulse is enhanced on average by a phase-unstable 800 nm pulse due to more favorable carrier ionization and subsequent current generation. If the 800 nm beam arrives before the two-color beam (negative relative delays), THz emission is suppressed as carriers are liberated without the asymmetric push needed to form a net current drift. For a primary IR wavelength that is far away from being harmonic with the third pulse (1450 nm, blue dotted lines, see Fig. 2b), the specific phase of the 800 nm pulse does not matter at optimal delay and there is consistent enhancement when averaging over every possible phase. However, when the primary wavelength is commensurate (1600 nm, red dashed lines, see Fig. 2c), the outcomes vary greatly for different fixed phases of the 800 nm pulse (\(\phi\)). The overall symmetry of the driving pulse is much more sensitive to the addition of the third beam, and we see that the average result is less desirable for the commensurate (1600 nm) compared to the incommensurate (1450 nm) case. However, if the phase of the third beam can be held stable and suitably optimized, then the greatest potential gains in THz emission are possible. For example, the solid green line corresponding to a phase of 4\(\pi\)/10 in Fig. 
2c exceeds a maximum enhancement factor of four, larger than the predicted outcome for 1450 nm (Fig 2b). Figure 2: We calculate THz enhancement factor as the ratio of the THz yields with the 800 nm pulse present and absent from the driving ionization field. Positive relative delays correspond with the 800 nm pulse arriving after the pulse containing the IR fundamental its second harmonic. a) Averaged, phase-unstable enhancement for the commensurate (1600 nm, red dashed line) and incommensurate (1450 nm, blue dotted line) cases. b) Individual fixed CEP stable (solid lines) and random average phase (dotted line) calculations for 1450 nm fundamental. There is little variation at optimal relative delay. c) Fixed CEP stable and random average phase (dashed line) calculations for 1600 nm fundamental. Each individual phase has drastically different outcomes. Note that the average in Fig. 2c (red dashed line) is different than the expected mean of the phases shown; this is because enhancement factor is calculated from scalar yields. Averaging accounts for both phase and magnitude derived from the electron current. In Fig. 3 we plot several calculated electron currents for a 1600 nm fundamental. While we use the derivative of the electron current to calculate emission (Eq. 3), the electron current, (in particular the trailing current tail seen after 150 fs in Fig. 3), gives a good sense of the magnitude and phase of a resulting THz pulse. The phase averaged current shift (blue dotted line) is not much larger than the calculated shift of the two-beam reference (black line). The optimal single fixed phase (light orange line) results in a much larger current and overall displacement, but different phases can create currents with displacement in the opposite direction (purple dashed line). In Fig. 4 we show the results from extending our model over a large range of IR fundamental wavelengths. Here, we report the enhancement factor for only the optimal relative time delay between driving pulses. This again assumes a fundamental with second harmonic and additional 800 nm. The largest gains in THz field strength are achieved at commensurate wavelengths, evidenced by the large feature centered around 1600 nm, but only in fixed-phase conditions. The individual fixed-phase results (thin lines) caution that reduced THz yield is equally possible, in the same situations that a fixed-phase scheme would have the highest gain. As the wavelength combinations become less commensurate, the optimal single-phase and phase-averaged cases converge. As seen at 1100 nm, 1400 nm, and 1800 nm, the difference between the CEP stable and random averaged cases is minimal. Conversely, we see larger discrepancies at full and fractional harmonics. [18] We also note that while THz emission is more efficient at longer wavelengths it becomes increasingly difficult to create the optimal plasma sparks required for THz generation. [7] Our method accounts for increased Keldysh ionization tunneling due to the enhancement 800 nm pulse being present, but does not consider focusing difficulties that can arise using longer wavelengths. Our results suggest two paths towards optimal THz generation beyond the two-color scheme. To achieve the highest theoretical THz yields it is necessary to have commensurate colors with carefully chosen relative phases for all beams. This allows for the pursuit of previously discussed optimal driving field shapes, such as the sawtooth waveform. 
[9] It is worth noting that in cases with fewer beams different phase combinations may be more favorable. [11] However, if phase stability cannot be achieved experimentally, choosing incommensurate colors can still boost THz output while avoiding CEP stability concerns. ## 4 Conclusions In conclusion we hope to bridge optimal theoretical generation scenarios and practical experimental considerations in multi-beam plasma generation of THz. We demonstrate that non-optimized, incommensurate wavelength scenarios can give better results within practical constraints that make relative CEP stability between driving pulses impossible. As multi-harmonic THz plasma generation schemes continue to improve we hope to encourage flexibility and creativity to achieve stable THz output. Figure 3: Calculated electron photocurrents for a 1600 nm fundamental, which determine the phase and magnitude of resulting THz pulses. The low-frequency displacement from zero, observed clearly after 150 fs, is largely responsible for THz emission. The average of all possible 800 nm third beam phases (blue dotted line) is a small improvement over it not being present at all (black solid line). The optimal fixed-phase result (light orange line) is a significant improvement over the average. Certain phases can result in current shifts in the opposite direction (purple line), which explains the discrepancy between the optimal single phase and the phase-averaged cases.
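As a closing illustration, the driving field and yield metric used throughout this study can be assembled as follows. This is a sketch under simplifying assumptions (scalar Gaussian pulses, illustrative amplitudes standing in for the quoted pulse energies), with the second harmonic offset by the \(\pi/2\) phase and the 800 nm pulse carrying the relative delay \(\Delta t\) and internal phase \(\phi\) of Fig. 1b.

```python
import numpy as np

C = 2.998e8  # speed of light (m/s)

def pulse(t, wavelength_m, amplitude, fwhm=75e-15, delay=0.0, phase=0.0):
    """Gaussian pulse whose field envelope has the stated FWHM (amplitude illustrative)."""
    omega = 2 * np.pi * C / wavelength_m
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    envelope = amplitude * np.exp(-((t - delay) ** 2) / (2 * sigma**2))
    return envelope * np.cos(omega * (t - delay) + phase)

def driving_field(t, lam_ir=1450e-9, a1=1.0, a2=0.5, a3=0.8, dt3=0.0, phi=0.0):
    return (pulse(t, lam_ir, a1)                            # IR fundamental
            + pulse(t, lam_ir / 2, a2, phase=np.pi / 2)     # SHG, pi/2 relative phase
            + pulse(t, 800e-9, a3, delay=dt3, phase=phi))   # third (800 nm) beam

def thz_yield(t, emitted):
    """Square root of the spectral power summed over the 0.1-10 THz band."""
    freqs = np.fft.rfftfreq(len(t), t[1] - t[0])
    power = np.abs(np.fft.rfft(emitted)) ** 2
    band = (freqs >= 0.1e12) & (freqs <= 10e12)
    return float(np.sqrt(power[band].sum()))
```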
2310.11186
Efficiently Visualizing Large Graphs
Most existing graph visualization methods based on dimension reduction are limited to relatively small graphs due to performance issues. In this work, we propose a novel dimension reduction method for graph visualization, called t-Distributed Stochastic Graph Neighbor Embedding (t-SGNE). t-SGNE is specifically designed to visualize cluster structures in the graph. As a variant of the standard t-SNE method, t-SGNE avoids the time-consuming computations of pairwise similarity. Instead, it uses the neighbor structures of the graph to reduce the time complexity from quadratic to linear, thus supporting larger graphs. In addition, to suit t-SGNE, we combined Laplacian Eigenmaps with the shortest path algorithm in graphs to form the graph embedding algorithm ShortestPath Laplacian Eigenmaps Embedding (SPLEE). Performing SPLEE to obtain a high-dimensional embedding of the large-scale graph and then using t-SGNE to reduce its dimension for visualization, we are able to visualize graphs with up to 300K nodes and 1M edges within 5 minutes and achieve approximately 10% improvement in visualization quality. Codes and data are available at https://github.com/Charlie-XIAO/embedding-visualization-test.
Xinyu Li, Yao Xiao, Yuchen Zhou
2023-10-17T12:07:14Z
http://arxiv.org/abs/2310.11186v1
# Efficiently Visualizing Large Graphs ###### Abstract Most existing graph visualization methods based on dimension reduction are limited to relatively small graphs due to performance issues. In this work, we propose a novel dimension reduction method for graph visualization, called t-Distributed Stochastic Graph Neighbor Embedding (t-SGNE). t-SGNE is specifically designed to visualize cluster structures in the graph. As a variant of the standard t-SNE method, t-SGNE avoids the time-consuming computations of pairwise similarity. Instead, it uses the neighbor structures of the graph to reduce the time complexity from quadratic to linear, thus supporting larger graphs. In addition, to suit t-SGNE, we combined Laplacian Eigenmaps with the shortest path algorithm in graphs to form the graph embedding algorithm ShortestPath Laplacian Eigenmaps Embedding (SPLEE). Performing SPLEE to obtain a high-dimensional embedding of the large-scale graph and then using t-SGNE to reduce its dimension for visualization, we are able to visualize graphs with up to 300K nodes and 1M edges within 5 minutes and achieve approximately \(10\%\) improvement in visualization quality. Codes and data are available at [https://github.com/Charlie-XIAO/embedding-visualization-test](https://github.com/Charlie-XIAO/embedding-visualization-test). Large Graphs Graph Layout Graph Embedding Dimension Reduction Clusters Neighbor Structure ## 1 Introduction Graph data are widely used nowadays, and visualization of graph data is an important problem in various fields. For example, interactions of users on social media websites can be represented by graphs with users as nodes and their following relations as edges. Analysis of such social network graphs can give important information such as interpersonal ties, structural holes, and local online communities [5]. Various different types of methods for graph visualization have been proposed over the past few decades, many of which are reviewed and compared by Herman et al. in a survey on graph visualization and navigation [9]. Important techniques include force directed methods [11] and spectral drawing methods [12], etc. Most of these techniques can be easily distracted by noise when plotting the cluster structures of graphs. They also run too slowly or take too much space on large graph datasets consisting of over 10K nodes. This severely limits their applicability to large graph data and real-world data where thousands of noisy nodes may exist. To handle large-scale data, we found that a faster approach from recent years can be applied. We can first compute a high dimensional embedding where each node in the original graph is marked by a vector. After that we use dimension reduction methods to convert this embedding into a 2D layout. There are various existing graph embedding methods, which can be found in the survey by Goyal and Ferrara [7]. Existing popular dimension reduction methods include PCA [21], t-SNE [15] and UMAP [16]. However, these methods are designed for data visualization and cannot be directly applied to graph visualization. Moreover, although they can be combined with graph embedding to visualize graphs, their runtime can still be improved. Also, their visualization quality is still a problem, especially regarding cluster structures. In this paper, we focus on the simplest case of graph visualization: undirected, unweighted graphs without node attributes, and leave the generalization for future work. 
Motivated by the objective of preserving the cluster structures of graphs, we propose ShortestPath and ShortestPath Laplacian Eigenmaps Embedding (SPLEE), which take into account graph-theoretical distances and lay out clusters more clearly. Also, ShortestPath is a fast algorithm that can deal with larger datasets. We also put forward a fast and cluster-preserving dimension reduction technique called t-SGNE based on the original t-SNE, which takes advantage of the graph neighbor structures. Finally, in order to compare the results of different methods on different datasets, we propose two quantitative measures for testing, including Normalized Mutual Information (NMI) and Aesthetic Quality (AQ). These measures respectively evaluate the clustering accuracy and how well clusters are distributed in the layout. Organization. In the remaining part of this section, we outline the previous works related to our work. In Section 2, we discuss details of our methods, including graph embedding methods, dimension reduction methods and quantitative testing standards. In Section 3, we present the testing results of both previous methods and our methods on a variety of datasets, regarding visualization quality and running time. We also provide the repository of our codes on GitHub.1 In Section 4, we discuss possible future improvements on our work. Footnote 1: [https://github.com/Charlie-XIAO/embedding-visualization-test](https://github.com/Charlie-XIAO/embedding-visualization-test) ### Related Work #### 1.1.1 Graph Layout A layout of a graph is a two dimensional embedding of the graph, where each node is assigned a coordinate on the 2D plane for visualization purposes. There are mainly three types of methods to compute the layout of a graph: force-directed methods, spectral methods, and dimension reduction methods. * **Force directed methods** model nodes as particles that repel each other, and edges as springs that connect the particles. Thus, the graph layout can be computed by simulating the corresponding particle-spring system and minimizing the energy of the system [11]. This can produce aesthetically pleasing results with less edge crossing and uniform distribution of nodes. However, the optimization of this complex system is time-consuming and computation-intensive, which means we cannot directly apply force-directed methods to graphs of large scale. Moreover, the aesthetic criteria used in force-directed methods fail to reflect how the cluster structure of the graph is represented in the layout. In fact, empirically, for a graph with more than 10K nodes, force-directed methods usually produce a "hairy ball" with no identifiable clusters. * **Spectral methods** use eigenvectors of some matrices related to the graph as the graph layout. Compared with force-directed methods, spectral methods are much faster, most of which run in quadratic time. Also, spectral methods are deterministic, presenting an exact mathematical formula that draws the layout. The most famous spectral drawing method is to compute the lowest eigenvectors of the Laplacian matrix of the graph [12]. The correctness of this method can be verified with an optimization problem. Spectral drawing methods tend to place each node at the centroid of its neighbors with some deviation. This preserves the graph-theoretical distance between nodes pretty well. However, this may also result in lots of crossings between edges and ambiguity of the borders of clusters. Different clusters may mix up a lot for large graphs. 
* **Dimension reduction methods** have a wide range of usage. They can be applied to graph drawing with certain modifications. Dimension reduction methods aim to project the given high dimensional data to a lower dimension while preserving some form of information of the original high dimensional data (typically represented by an objective function). To apply a dimension reduction method to graph drawing, one can first compute a high dimensional embedding of the graph, then reduce the dimension of the embedding to two while minimizing some form of difference between the high and low dimensional embeddings. The output can be treated as a graph layout. For each node, a typical choice of high dimensional embedding would be its graph-theoretic distance to all the nodes in the graph. Pivot MDS, proposed by Brandes and Pich, is a classic example [4]. It first samples some pivot nodes and uses the distances of each node to these pivots as high dimensional embeddings. Then, it applies Multi-Dimensional Scaling (MDS) to reduce the dimension to two, where the objective function is a so-called stress function that measures the discrepancy between the pairwise graph-theoretic distance in the graph and the pairwise Euclidean distance on the 2D plane. Methods based on dimension reduction are suitable for the visualization of large-scale graphs because there are various approximations of the objective functions of dimension reduction methods. However, layouts produced by dimension reduction methods tend to be less aesthetic compared to those produced by force-directed methods. #### 1.1.2 t-SNE t-SNE is a nonlinear, statistical model for dimension reduction. It aims to map some known high dimensional data to a low dimension in a way that preserves the neighbor structure in a probabilistic way, whereas traditional dimension reduction methods like MDS preserve the more intuitive distance structure. Loosely speaking, t-SNE assumes that similarity between points is captured by their distance in high dimensional space. Thus, a point is similar to its neighbors and dissimilar to points far away. The low dimensional embedding is constructed so that, in the low dimensional space, points that are similar in high dimensional space are closer and dissimilar points are farther apart with high probability. The primary use of t-SNE is to visualize data in high dimensional space, thus a common choice of the low dimensional space is \(\mathbb{R}^{2}\). Compared to previous data visualization methods, t-SNE can preserve both local structure and global structure such as clusters at different scales [15]. To apply to large datasets, t-SNE uses an approximation based on random walks on a neighborhood graph, where the neighborhood graph is constructed from the high dimensional embedding. Although this enables the application of t-SNE to larger datasets, the computation of the neighborhood graph creates a computational bottleneck and restricts t-SNE to datasets of size about 100K [1]. #### 1.1.3 Graph Embedding Graph embedding is a powerful method for reducing the dimensions of graph data while preserving certain graph structures. More specifically, a graph embedding is a mapping that maps each node to a representation vector, whose dimension is much smaller than the number of nodes. In an effective graph embedding method, the representation vectors of nodes within the same community should be similar, so that closely-related nodes tend to lie close to each other after dimension reduction as well. There are various types of graph embedding methods. 
DeepWalk [17] is a random walk based embedding method. It applies truncated random walks to walk through the nodes and obtain samples. A truncated random walk starts at a certain node and randomly visits one of its neighbors and so on, until the length of the walk reaches a default value. This describes the cooccurrence relations of nodes, which are the key to this method. DeepWalk then uses word2vec [6] to create the representation vectors of each node. Word2vec is a common method for word embedding in NLP, which learns the cooccurrence relations among words from sentences and vectorizes each word. DeepWalk is hence able to vectorize each node by learning their cooccurrence relations. Node2Vec [8] is another random walk based embedding method. It is similar to DeepWalk but samples random walks differently. It introduces two parameters, which control the probability of performing a breadth-first search or a depth-first search when randomly choosing the next node to visit. Breadth-first search better records the similarity of closely-related nodes. Depth-first search may preserve some global structures of the original graph. Laplacian Eigenmaps [2] is a different type of graph embedding method. It constructs relations between nodes from a local perspective. Since the objective is to keep the closely-related nodes close to each other in the lower-dimensional space, Laplacian Eigenmaps tries to minimize \(\sum_{i,j}A_{ij}\lVert y_{i}-y_{j}\rVert^{2}\), where \(y_{i}\) and \(y_{j}\) are data points in the lower-dimensional space. The problem becomes minimizing the trace of \(Y^{T}LY\) subject to \(Y^{T}DY=I\) after some transformation, where \(L=D-A\) is called the unnormalized Laplacian matrix, \(A\) is the adjacency matrix, and \(D\) is a diagonal matrix with entries \(D_{ii}=\sum_{j}A_{ij}\). By the trace derivative law, \(LY=DY\Lambda\) gives the optimized result. This can be rewritten as the generalized eigenvalue problem \(Ly=\lambda Dy\). Hence, Laplacian Eigenmaps first computes the eigenvalues and eigenvectors of the Laplacian matrix of the graph, then takes the \(d\) eigenvectors corresponding to the \(d\) smallest nonzero eigenvalues as the \(d\)-dimensional output (\(d\ll|V|\) is the dimension of the high dimensional embedding). Geometric Laplacian Eigenmaps [20] is similar to Laplacian Eigenmaps while taking the eigenvectors corresponding to the \(d\) largest nonzero eigenvalues. The intuition is that these correspond to the best approximation to the Laplacian through singular value decomposition. 
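A minimal Laplacian Eigenmaps sketch following this derivation, using a dense generalized eigensolver for clarity (sparse solvers would be used for large graphs); it assumes a connected graph with no isolated nodes, so that \(D\) is positive definite and only one eigenvalue is zero.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(A, d):
    """A: (n, n) symmetric 0/1 adjacency matrix; returns an (n, d) embedding."""
    A = np.asarray(A, dtype=float)
    D = np.diag(A.sum(axis=1))
    L = D - A                        # unnormalized Laplacian
    vals, vecs = eigh(L, D)          # generalized problem L y = lambda D y, ascending
    return vecs[:, 1 : d + 1]        # skip the trivial eigenvector of eigenvalue 0
```

For Geometric Laplacian Eigenmaps, one would instead keep `vecs[:, -d:]`, the eigenvectors of the \(d\) largest eigenvalues.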
### Our Contribution Our work involves three parts: graph embedding using ShortestPath (SP) and ShortestPath Laplacian Eigenmaps Embedding (SPLEE), dimension reduction using t-SGNE, and two quantitative testing standards, Normalized Mutual Information (NMI) and Aesthetic Quality (AQ). Using SPLEE for graph embedding first and then using t-SGNE to perform dimension reduction, we show that we can visualize graphs with up to 300K nodes and 1M edges within \(5\) minutes and achieve approximately \(10\%\) improvement in visualization quality. ## 2 Method The problem of graph drawing can be formulated as follows. Let \(G=(V,E)\) be an undirected, unweighted graph where \(|V|=n\) (for more general types of graphs, see Section 4). Graph Drawing defines a function \(GD:G\mapsto Y\), where \(Y=\{y_{i}\in\mathbb{R}^{2}\mid i=1,\cdots,n\}\subset\mathbb{R}^{2}\) is the two-dimensional embedding of the graph \(G\). There are two steps to apply the method of dimension reduction to graph drawing. The first step is Graph Embedding (\(GE\)): given a graph \(G\), we want to find a function \(GE:G\mapsto X\), where \(X=\{x_{i}\in\mathbb{R}^{d}\mid i=1,\cdots,n\}\subset\mathbb{R}^{d}\) is the high dimensional embedding of the graph \(G\) and \(d\gg 2\). Next, we perform Dimension Reduction (\(DR\)) on \(X\), that is, we apply a function \(DR:X\mapsto Y\) to \(X\). Thus, we have the composition \[GD=DR\circ GE. \tag{1}\] ### t-SNE t-SNE is a \(DR\) method. We will first state the original t-SNE algorithm, then describe its approximation based on random walks on a neighborhood graph, as is done in the original paper. The original t-SNE consists of three parts: constructing a probability distribution \(\mathcal{P}\) in \(\mathbb{R}^{d}\), constructing a probability distribution \(\mathcal{Q}\) in \(\mathbb{R}^{2}\), and optimizing \(\mathcal{Q}\) to approximate \(\mathcal{P}\). First, for \(x_{i},x_{j}\in X\), we can compute the probability \(p_{j|i}\) that \(x_{i}\) would pick \(x_{j}\) as its neighbor as follows: \[p_{j|i}=\frac{\exp(-\|x_{i}-x_{j}\|^{2}/2\sigma_{i}^{2})}{\sum_{k\neq i}\exp(-\|x_{i}-x_{k}\|^{2}/2\sigma_{i}^{2})}\cdot\mathbbm{1}_{\{i\neq j\}}, \tag{2}\] where \(\mathbbm{1}_{A}\) is the indicator function of set \(A\). Then, we can define the distribution \(\mathcal{P}\) as \[p_{ij}=\frac{p_{j|i}+p_{i|j}}{2n}, \tag{3}\] where \(\sigma_{i}\) is chosen using a binary search so that the perplexity \(\mathrm{Perp}(\mathcal{P}_{i})=2^{H(\mathcal{P}_{i})}\) equals a user-defined value (\(\mathrm{Perp}(\mathcal{P}_{i})\) is typically between \(5\) and \(50\); scikit-learn chooses \(30\) by default), and \(H(\mathcal{P}_{i})=-\sum_{j}p_{j|i}\log p_{j|i}\) is the Shannon entropy, as is suggested in SNE [10]. Next, we construct \(\mathcal{Q}\) in a similar way, except we use the Student t-distribution instead of the Gaussian distribution: \[q_{ij}=\frac{(1+\|y_{i}-y_{j}\|^{2})^{-1}}{\sum_{k}\sum_{l\neq k}(1+\|y_{k}-y_{l}\|^{2})^{-1}}\cdot\mathbbm{1}_{\{i\neq j\}}. \tag{4}\] The Student t-distribution uses a degree of freedom equal to 1, which results in a Cauchy distribution. This distribution is heavy-tailed (its first moment is undefined), which helps alleviate the crowding problem of Stochastic Neighbor Embedding (SNE), in which data points are all mapped toward the center and become indistinguishable [15]. To find the \(\mathcal{Q}\) that best approximates \(\mathcal{P}\), we use the KL divergence of \(\mathcal{P}\) from \(\mathcal{Q}\) as an objective: \[\min_{Y}\mathrm{KL}(\mathcal{P}\|\mathcal{Q})=\sum_{i,j}p_{ij}\log\left(\frac{p_{ij}}{q_{ij}}\right). \tag{5}\] To apply t-SNE to larger datasets, we use an approximation of \(\mathcal{P}\) based on a neighborhood graph and random walks. First, we construct a neighborhood graph \(G_{knn}=G_{knn,X}\) from \(X\), where \(x_{i}\) is linked to its first \(k\) nearest neighbors, denoted as \(N(i)\). We perform a fixed large number of random walks of fixed length on the graph for each node, where the probability of transiting from \(x_{i}\) to \(x_{j}\) is proportional to \(\exp(-\|x_{i}-x_{j}\|^{2})\). From the random walks, we can construct an approximation of \(p_{j|i}\) as follows: \[p_{j|i}=\frac{\text{number of random walks from $i$ to $j$}}{\text{number of random walks starting from $i$}}. \tag{6}\] The rest is the same as what we stated above.
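The exact construction of \(\mathcal{P}\) and \(\mathcal{Q}\) (Eqs. (2)-(5)) can be sketched in a few lines of NumPy; note that, as a simplification of our own, a single fixed \(\sigma\) replaces the per-point perplexity search:

```python
import numpy as np

def tsne_distributions(X, Y, sigma=1.0):
    """Build P (Eqs. 2-3) from the embedding X and Q (Eq. 4) from the layout Y,
    and return the KL objective (Eq. 5)."""
    n = X.shape[0]
    D2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # ||x_i - x_j||^2
    P_cond = np.exp(-D2 / (2 * sigma ** 2))
    np.fill_diagonal(P_cond, 0.0)                          # p_{i|i} = 0
    P_cond /= P_cond.sum(axis=1, keepdims=True)            # row-wise p_{j|i}
    P = (P_cond + P_cond.T) / (2 * n)                      # symmetrized joint

    d2 = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)
    Q = 1.0 / (1.0 + d2)                                   # Student-t kernel
    np.fill_diagonal(Q, 0.0)
    Q /= Q.sum()

    mask = P > 0
    return P, Q, np.sum(P[mask] * np.log(P[mask] / Q[mask]))

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(100, 64)), rng.normal(size=(100, 2))
P, Q, kl = tsne_distributions(X, Y)
```

In practice the layout \(Y\) is then optimized by gradient descent on this KL objective.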
Time complexity. t-SNE mainly involves three steps. First, the construction of \(G_{knn,X}\) involves the computation of pairwise Euclidean distances, which has a time complexity of \(O(d|V|^{2})\): for each node, we need to compute and rank its distance to the rest of the nodes, and select the first \(k\) nearest nodes to construct the neighborhood graph. Second, the simulation of random walks runs in \(O(|V|)\) time. Finally, the optimization of \(\mathrm{KL}(\mathcal{P}\|\mathcal{Q})\) has a time complexity of \(O(|V|^{2})\), as is suggested in the original article [15]. ### t-SGNE t-SGNE is a simple yet effective modification of t-SNE that can be applied to graph data with high dimensional embeddings. Given a graph \(G\) and its high dimensional embedding \(X\), we perform t-SNE with the nearest neighbor approximation, except that the neighborhood graph is constructed from \(G\) instead of \(X\), i.e., \[G_{knn}=G_{knn,G}, \tag{7}\] where \(v_{i},v_{j}\in G\) are connected in \(G_{knn,G}\) if \(v_{i}\) is one of the first \(k\) nearest neighbors of \(v_{j}\) on the graph \(G\), where the nearest neighbors are computed by a breadth-first search of \(k\) steps. To justify this simple modification, we need to answer a nontrivial question: is \(G_{knn,G}\) a valid substitute for \(G_{knn,X}\)? The answer depends on our choice of \(GE\) method. The goal of t-SNE is to map points closer (farther) in \(\mathbb{R}^{d}\) to closer (farther) positions in \(\mathbb{R}^{2}\). When applied to large datasets, \(G_{knn,G}\) with random walks approximates the distance relation of points in \(\mathbb{R}^{d}\): if \(x_{i}\) and \(x_{j}\) are closer in \(\mathbb{R}^{d}\), a random walk starting from \(x_{i}\) is more likely to reach \(x_{j}\), which results in a larger \(p_{j|i}\). However, points closer in \(G\) are not necessarily closer in \(\mathbb{R}^{d}\), and vice versa. For example, Structural Deep Network Embedding (SDNE) is a \(GE\) method that captures structural similarity instead of distance/neighbor relations between nodes in a graph [10]. Nodes are assigned to closer positions in \(\mathbb{R}^{d}\) not because they are closer in the graph but because they have similar structural properties (e.g., both are central to a cluster, or both are hubs between two clusters). In this case, simply comparing the ratio of random walk paths from \(v_{i}\) to \(v_{j}\) will not reflect the structural similarity or dissimilarity between these nodes. To judge whether t-SGNE is suitable, we need to ask another question: when we map a graph \(G\) to \(X\subset\mathbb{R}^{d}\), what kind of points should be closer in the high dimensional space \(\mathbb{R}^{d}\)? Typically, t-SGNE can only be combined with \(GE\) methods that map nodes of small geodesic distance (shortest path length) to closer embeddings. For \(GE\) methods that aim to preserve more complex structures like structural similarity, \(G_{knn,G}\) is not a good approximation of \(G_{knn,X}\). In other words, with an appropriate \(GE\) method, t-SGNE is suitable for visualizing the cluster structures of a graph, which are captured by the geodesic distance relation in \(G\). Time complexity. The construction of \(G_{knn,G}\) has a time complexity of \(O(k|V|)\), since for each node we need to perform a \(k\)-step BFS to determine its \(k\) neighbors. Compared with t-SNE, which involves a quadratic-time computation of \(G_{knn,X}\), t-SGNE only requires linear time to construct the neighborhood graph. The rest is the same as t-SNE.
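A minimal sketch of the construction of \(G_{knn,G}\) by truncated BFS follows (our own illustration, assuming NetworkX; the value \(k=15\) is arbitrary):

```python
import networkx as nx

def knn_graph_from_G(G, k=15):
    """t-SGNE's neighborhood graph G_knn,G (Eq. 7): link each node to its k
    nearest neighbors in graph distance, found by a BFS truncated at depth k."""
    knn = nx.Graph()
    knn.add_nodes_from(G)
    for v in G:
        dist = nx.single_source_shortest_path_length(G, v, cutoff=k)
        nbrs = [u for u, d in sorted(dist.items(), key=lambda t: t[1]) if u != v]
        for u in nbrs[:k]:
            knn.add_edge(v, u)
    return knn
```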
### ShortestPath ShortestPath is a \(GE\) method that gives the graph embedding of each node based on its shortest path lengths to certain target nodes. The motivation for introducing this method is that we want the embedding to reflect the distances between nodes in the original data, which are mostly represented by the shortest path lengths between nodes. First, we pick the target nodes. Let \(d\) be the dimension of the high dimensional embedding. Through experiments, we found that randomly choosing \(d\) targets shows the overall best result. Then, to calculate the length of the shortest paths between each node in the graph and each target, we apply BFS starting from each target node. To minimize the negative effect of nodes that are far apart in the final plot, we also apply a threshold \(l_{0}\): if the length of the shortest path is at most \(l_{0}\), we use it in the embedding; otherwise, we regard the length as \(l_{0}+1\). The default value of \(l_{0}\) is \(\sqrt{|E|}\), where \(|E|\) is the number of edges in the graph. Such a threshold also reduces the number of computations, thus increasing efficiency. Hence, the ShortestPath embedding is defined as follows: we first obtain an embedding matrix \[E_{ij}=\begin{cases}d_{ij},&\text{if }d_{ij}\leq l_{0},\\ l_{0}+1,&\text{if }d_{ij}>l_{0}\text{ or no path exists between }v_{i}\text{ and }t_{j},\end{cases} \tag{8}\] where \(d_{ij}\) represents the shortest path length between the node \(v_{i}\in V\) and the \(j\)-th target node \(t_{j}\). Each row vector is then taken as the embedding vector of node \(v_{i}\). Time complexity. Since BFS is applied \(d\) times in total, and each run traverses each edge at most once, the total time complexity of ShortestPath is \(O(d|E|)\).
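A minimal sketch of the ShortestPath embedding (Eq. (8)); target sampling, the default threshold, and the clipping follow the description above, while the function and variable names are our own:

```python
import random
import numpy as np
import networkx as nx

def shortest_path_embedding(G, d=128, l0=None, seed=0):
    """ShortestPath embedding (Eq. 8): BFS from each of d random targets;
    distances above the threshold l0 are clipped to l0 + 1. Requires d <= |V|."""
    nodes = list(G.nodes)
    index = {v: i for i, v in enumerate(nodes)}
    if l0 is None:
        l0 = int(np.sqrt(G.number_of_edges()))   # default threshold sqrt(|E|)
    targets = random.Random(seed).sample(nodes, d)
    E = np.full((len(nodes), d), l0 + 1, dtype=float)  # far or unreachable
    for j, t in enumerate(targets):
        # truncated BFS from the target; the cutoff keeps each run cheap
        for v, dist in nx.single_source_shortest_path_length(G, t, cutoff=l0).items():
            E[index[v], j] = dist
    return E
```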
### SPLEE ShortestPath Laplacian Eigenmaps Embedding (SPLEE) is a \(GE\) method that combines ShortestPath and Laplacian Eigenmaps. The motivation is to take graph-theoretic distances into account, rather than just node connections, when doing spectral embedding. The original version of Laplacian Eigenmaps [2] can be applied to more general types of high dimensional data, but such data need to be transformed into weighted graphs before applying the method: links exist if the corresponding data points are close to each other, and weights are determined by some function of the Euclidean distances between data points, as described below. When applying Laplacian Eigenmaps to graph data, however, no notion of distance is taken into consideration. Hence, SPLEE uses shortest path lengths between nodes to make up for this. The algorithm involves two main steps. The first step is to obtain a special distance matrix \(W\). As in the original Laplacian Eigenmaps, a heat kernel is applied to the Euclidean distances to approximate the Gaussian [2]. Similarly, SPLEE applies the heat kernel to the shortest path lengths between nodes. Furthermore, \(W_{ij}=0\) if the shortest path length between the nodes \(v_{i}\) and \(v_{j}\) is beyond some threshold \(l_{0}\), since nodes that are far apart should not be linked, as in the original version of Laplacian Eigenmaps. Hence, the distance matrix \(W\) is defined as: \[W_{ij}=\begin{cases}\exp(-\epsilon d_{ij}^{2}),&\text{if }d_{ij}\leq l_{0},\\ 0,&\text{if }d_{ij}>l_{0}\text{ or no path exists between }v_{i}\text{ and }v_{j},\end{cases} \tag{9}\] where \(d_{ij}\) represents the shortest path length between \(v_{i}\) and \(v_{j}\). According to experimental results, \(\epsilon\) is recommended to be around 5.0 to 7.0. The threshold \(l_{0}\) can be chosen as the same default value \(\sqrt{|E|}\) as in ShortestPath. The shortest path lengths are computed by BFS for unweighted graphs (or by Dijkstra's algorithm for weighted graphs). The second step is to compute eigenvectors to use as the high dimensional node embeddings. We generalize the original definition of the Laplacian matrix \(L\) as: \[L=D-W, \tag{10}\] where \(W\) is the distance matrix above, and \(D\) is a diagonal matrix with entries \(D_{ii}=\sum_{j}W_{ij}\). Then the eigenvectors corresponding to the smallest \(d\) nonzero eigenvalues of the generalized eigenvalue problem for \(L\) are taken as the \(d\)-dimensional embedding (\(d\ll|V|\)). Time complexity. SPLEE uses the same technique for computing shortest path lengths as ShortestPath, which takes \(O(d|E|)\) time. To compute the lowest \(d\) eigenvectors of the Laplacian matrix, the state-of-the-art algorithm takes \(O(k|V|^{2})\) time, where \(k\) is the number of iterations. Hence, SPLEE runs in a total time complexity of \(O(d|E|+k|V|^{2})\).
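A minimal SPLEE sketch of our own (Eqs. (9)-(10)); a dense generalized eigensolver is used for readability, whereas large graphs require the sparse solvers discussed in the complexity analysis:

```python
import numpy as np
import networkx as nx
from scipy.linalg import eigh

def splee_embedding(G, d=128, eps=6.0, l0=None):
    """SPLEE (Eqs. 9-10): heat kernel on truncated shortest-path lengths,
    then the d smallest nonzero generalized eigenvectors of L y = lambda D y.
    Dense O(|V|^2) memory: fine for small graphs only."""
    nodes = list(G.nodes)
    n = len(nodes)
    index = {v: i for i, v in enumerate(nodes)}
    if l0 is None:
        l0 = int(np.sqrt(G.number_of_edges()))
    W = np.zeros((n, n))
    for v in nodes:
        for u, duv in nx.single_source_shortest_path_length(G, v, cutoff=l0).items():
            if u != v:
                W[index[v], index[u]] = np.exp(-eps * duv ** 2)   # heat kernel
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = eigh(L, D)          # generalized eigenproblem, ascending order
    return vecs[:, 1:d + 1]          # drop the trivial eigenvector
```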
### Normalized Mutual Information This is a measure of the clustering accuracy of the 2D layout. Normalized Mutual Information (NMI) is a measure that compares two network partitions. It ranges from 0 to 1, and the larger the NMI, the more similar the two partitions. Here we cluster both the original graph and the 2D graph layout, and compare the clustering results. There are two main steps. First, we use Louvain's algorithm [3] to cluster the original graph. In this step, we output the clustering label for each node as well as the number of clusters created. One important reason to choose this algorithm is that it works directly on graph data structures and does not require the number of clusters as a parameter. In this way, we can obtain an optimal clustering of the original graph to use as the reference community partition. Second, we apply the k-means clustering algorithm on the 2D graph layout, with \(k\) chosen as the number of clusters output in the previous step. Hence, we keep the number of clusters consistent between the graph and the layout; the two sets of clustering labels then have the same number of distinct values, and the result is more intuitive and accurate. Now we compute the NMI as follows: \[\mathrm{NMI}(PR\,;TR)=\frac{2\cdot I(PR\,;TR)}{H(PR)+H(TR)}, \tag{11}\] where \(PR\) and \(TR\) denote the predicted labels from the 2D layout and the reference labels from the original graph respectively, \(H(\cdot)\) denotes the entropy, and \(I(\cdot\,;\cdot)\) denotes the mutual information between the two arguments. ### Aesthetic Quality This is a measure of the aesthetic quality of the 2D layout, focusing on cluster structures. It examines whether clusters are clearly divided. It ranges from 0 to 1, and the larger the value, the better the clusters are distributed in the layout. The basic idea is to divide the plot into \(k\times k\) grids. For each grid, we calculate the portion that nodes of each cluster take up and compare it with a user-defined threshold \(p\). If the result is larger than \(p\), we regard the grid as a good grid with a clear dominating cluster; otherwise we ignore it. Finally, we give an overall result by calculating the ratio of good grids. Specifically, the x-length of each grid is given by \[x_{0}=\frac{X_{\max}-X_{\min}}{k}, \tag{12}\] where \(X_{\max}\) is the largest value along the x-axis of the point positions in the layout and \(X_{\min}\) is the smallest. Similarly, we define the y-length of each grid as \[y_{0}=\frac{Y_{\max}-Y_{\min}}{k}. \tag{13}\] Thus, we know the bounds of each grid. Now, by going over the position of every node in the layout, we can determine which grid it belongs to. Then, by calculating the portion each kind of node takes up in the grid, we can examine whether it is a good grid. The result of this measure, \(AQ\), is thus given by \[AQ=\frac{\text{number of good grids}}{k^{2}}. \tag{14}\]
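Both measures can be sketched compactly; the snippet below is our own illustration, assuming NetworkX's Louvain implementation, scikit-learn's k-means and NMI, integer node ids \(0,\ldots,n-1\), non-negative integer cluster labels, and illustrative values for \(k\) and \(p\):

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def nmi_score(G, Y, seed=0):
    """NMI (Eq. 11) between Louvain clusters of G and k-means clusters of the
    layout Y; Y is an (n, 2) array ordered like the node ids of G."""
    comms = nx.community.louvain_communities(G, seed=seed)
    ref = np.empty(G.number_of_nodes(), dtype=int)
    for label, comm in enumerate(comms):
        for node in comm:
            ref[node] = label
    pred = KMeans(n_clusters=len(comms), n_init=10, random_state=seed).fit_predict(Y)
    return normalized_mutual_info_score(ref, pred)

def aq_score(Y, labels, k=10, p=0.6):
    """AQ (Eq. 14): fraction of the k x k grid cells dominated by one cluster."""
    x0 = (Y[:, 0].max() - Y[:, 0].min()) / k          # cell width (Eq. 12)
    y0 = (Y[:, 1].max() - Y[:, 1].min()) / k          # cell height (Eq. 13)
    gx = np.clip(((Y[:, 0] - Y[:, 0].min()) / x0).astype(int), 0, k - 1)
    gy = np.clip(((Y[:, 1] - Y[:, 1].min()) / y0).astype(int), 0, k - 1)
    good = 0
    for cx in range(k):
        for cy in range(k):
            cell = labels[(gx == cx) & (gy == cy)]
            if len(cell) and np.bincount(cell).max() / len(cell) > p:
                good += 1
    return good / k ** 2
```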
## 3 Experiments In this section we present experimental analyses of our methods. We first compare the visualization quality of different combinations of \(GE\) and \(DR\) methods, then we take a specific combination (ShortestPath and t-SGNE) for testing on larger graph datasets. Testing code, datasets, and part of the experimental results are available on GitHub.[1] ### Visualization quality of different combinations We evaluate our new \(GE\) methods ShortestPath and SPLEE and our new \(DR\) method t-SGNE. We also compare with the existing \(GE\) method DeepWalk and \(DR\) method t-SNE as a baseline. This experiment focuses on comparing the visualization quality of different combinations of methods, hence only small graph datasets (< 10K nodes) are used. Larger datasets will be tested in the next experiment. We experiment on three different datasets from the Network Data Repository [19]. * EmailUniv is a network of a small collection of emails sent within a university. It has 1133 nodes and 5451 edges, with an average degree of 9. It has clear cluster structures representing the frequent sending and receiving of emails within certain communities. * Wiki is a Wikipedia-based network constructed from different Wikipedia categories. It is originally a labeled graph, but we do not take its labels into account. It has 2405 nodes and 12761 edges, with no clear borders between cluster structures due to the high interpenetration between the topics collected in this dataset. * Lastfm is a heterogeneous network of user relations on the music website Last.fm. Each node represents an Asian user of Last.fm, and the edges represent the following relations between them. The network consists of 7624 nodes and 27806 edges, with a small density of 0.001. Cluster structures are clear in this network, representing the online communities of Asian Last.fm users. Running time. As for the running time of the \(GE\) methods, ShortestPath takes the lead, with a runtime linear in the number of edges. DeepWalk takes longer than ShortestPath since it needs to train the random walk model, but it is less affected by the increase in graph size. Due to the quadratic runtime of SPLEE, it runs much slower than the other two \(GE\) methods when the graph gets bigger, so it may not be a good choice for large graphs with 10K nodes or more. As for the running time of the \(DR\) methods, t-SNE and t-SGNE do not differ much, since the size of the high dimensional embedding is set to 128 here. However, if the high dimensional embedding gets larger, t-SGNE may outperform t-SNE since it takes advantage of graph neighbors rather than computing pairwise similarities of nodes. While the running time gives an indication of the applicability of the methods to large graphs, the main focus of this experiment is on the NMI and AQ scores, which represent the visualization quality. Clustering accuracy. The NMI scores are used to test the clustering accuracy of graph visualization. The combination of SPLEE with t-SNE shows the best quality performance on all three datasets. The baseline combination of DeepWalk and t-SNE also gives good results, but not as good as the previous combination. The advantage of SPLEE combined with t-SNE is most clearly shown on the EmailUniv dataset, as can be seen in Figure 1. \(GE\) methods combined with t-SGNE fall below the baseline, but they are almost as good as the baseline combination, which is still pleasing with regard to quality. It is worth noticing that when the cluster structures in the original graph are clear (e.g., EmailUniv and Lastfm), different \(GE\) methods give exactly the same layout when combined with t-SGNE. This may be because t-SGNE takes both the high dimensional embedding and the original graph as inputs, which reduces the impact of the embedding on the layout. Another remarkable observation is that t-SGNE largely improves the NMI performance of ShortestPath. Combined with t-SNE, ShortestPath falls far behind the other two \(GE\) methods, but combined with t-SGNE, its performance is almost as good as the other two. Figure 1: Graph layouts of EmailUniv dataset using SPLEE combined with t-SNE, compared with the baseline combination of DeepWalk and t-SNE. Aesthetics quality. From the results, we can see that the AQ scores almost coincide with the NMI scores, with the combination of SPLEE and t-SNE still taking the lead. Different \(GE\) methods combined with t-SGNE perform as well as this combination on some datasets (e.g., EmailUniv) and less satisfyingly on others, but overall their performance is pleasing for application. We also directly compare the layouts of these different combinations visually. Here we pick the Wiki dataset, as shown in Figure 2, since its cluster structures are ambiguous, so even slight differences in aesthetic performance can be clearly observed. SPLEE combined with t-SNE (Figure 2(c)) gives the best layout of Wiki, separating the cluster structures most clearly among all combinations. As a baseline, DeepWalk combined with t-SNE (Figure 2(a)) separates some of the clusters on the periphery, but the borders are not as clear. When using t-SGNE as the \(DR\) method (Figures 2(d), 2(e), 2(f)), clusters are separated, but the clusters seem bloated and mixed together with no clear borders. Figure 2: Graph layouts of Wiki dataset using different combinations of \(GE\) and \(DR\) methods. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Dataset** & \multicolumn{2}{c|}{**Method (\(GE\) / \(DR\))**} & \multicolumn{2}{c|}{**Time (\(GE\) / \(DR\))**} & **NMI** & **AQ** \\ \hline \multirow{6}{*}{EmailUniv} & DeepWalk & t-SNE & 10.102 & 5.715 & 0.4951 & 0.50 \\ & ShortestPath & t-SNE & 0.394 & 5.848 & 0.2112 & 0.30 \\ & SPLEE & t-SNE & 3.231 & 5.916 & **0.6106** & **0.57** \\ & DeepWalk & t-SGNE & 9.530 & 6.219 & 0.4917 & **0.57** \\ & ShortestPath & t-SGNE & 0.298 & 6.442 & 0.4917 & **0.57** \\ & SPLEE & t-SGNE & 3.369 & 6.226 & 0.4917 & **0.57** \\ \hline \multirow{6}{*}{Wiki} & DeepWalk & t-SNE & 24.279 & 17.410 & 0.6146 & **0.59** \\ & ShortestPath & t-SNE & 0.883 & 15.690 & 0.3627 & 0.24 \\ & SPLEE & t-SNE & 20.382 & 16.925 & **0.6147** & **0.59** \\ & DeepWalk & t-SGNE & 21.433 & 14.974 & 0.5988 & 0.56 \\ & ShortestPath & t-SGNE & 0.817 & 15.273 & 0.5883 & 0.53 \\ & SPLEE & t-SGNE & 17.759 & 14.508 & 0.5944 & 0.50 \\ \hline \multirow{6}{*}{Lastfm} & DeepWalk & t-SNE & 65.456 & 41.130 & 0.6822 & **0.67** \\ & ShortestPath & t-SNE & 2.881 & 37.680 & 0.5210 & 0.56 \\ & SPLEE & t-SNE & 343.962 & 40.694 & **0.6894** & **0.67** \\ & DeepWalk & t-SGNE & 71.484 & 44.128 & 0.6746 & 0.62 \\ & ShortestPath & t-SGNE & 2.920 & 44.354 & 0.6746 & 0.62 \\ & SPLEE & t-SGNE & 343.272 & 44.723 & 0.6746 & 0.62 \\ \hline \end{tabular} \end{table} Table 1: Comparisons of running time (in seconds), NMI, and AQ scores of different combinations of \(GE\) and \(DR\) methods. The running time is measured on a single machine with a 2.3 GHz Intel Core i7 CPU and 16 GB of memory.
Though below the baseline, the results of t-SGNE are still acceptable, since cluster structures can indeed be distinguished visually, and this problem is covered up in graphs with clearer cluster structures (e.g., EmailUniv), as can be seen from the AQ scores in Table 1. Hence, we can conclude that SPLEE combined with t-SNE produces layouts of the best quality. However, we also notice that SPLEE cannot handle large datasets decently due to its quadratic running time. Considering larger datasets, ShortestPath combined with t-SGNE is the best alternative choice. In the next experiment on large datasets, ShortestPath and t-SGNE will be the selected pair of \(GE\) and \(DR\) methods. ### Runtime of t-SNE and t-SGNE In this experiment, we compare the runtime of t-SNE and t-SGNE. Based on the results of Experiment 3.1, we fix our \(GE\) method to be ShortestPath and apply t-SNE and t-SGNE respectively on synthetic datasets of increasing size to compare their runtime. The datasets are generated using the Lancichinetti-Fortunato-Radicchi (LFR) benchmark, an algorithm for generating synthetic networks with adjustable cluster structure [13]. The LFR benchmark is typically used to compare different community detection algorithms; it can generate datasets of arbitrary size with _a priori_ known clusters. The data generation involves three parameters [13]: * \(\tau_{1}\): power law exponent of the degree distribution of the graph; * \(\tau_{2}\): power law exponent of the community size distribution of the graph; * \(\mu\in[0,1]\): fraction of inter-community edges incident to each node (the inter-community degree is \(\mu\cdot\deg(u)\)). Smaller \(\mu\) values typically correspond to graphs with more separated clusters. In this experiment, we use the implementation of the LFR benchmark in the scikit-learn library of Python. We set \(\tau_{1}=4\), \(\tau_{2}=4\), and \(\mu=0.18\). The parameters are tuned so that the generated graphs have clearly distinguishable cluster structures. Table 2 shows the results of this experiment. (During this experiment, there were other processes running on the same machine; to reduce their influence on the results, we performed each experiment three times and report the average.) \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Dataset** & **Size (Nodes / Edges)** & **ShortestPath / t-SNE / Total** & **ShortestPath / t-SGNE / Total** \\ \hline Lfr\_30k\_0.18 & 30,000 / 75,643 & 0:00:04 / 0:01:33 / 0:01:37 & 0:00:04 / **0:00:23** / **0:00:27** \\ Lfr\_100k\_0.18 & 100,000 / 253,036 & 0:00:17 / 0:08:12 / 0:08:29 & 0:00:17 / **0:01:21** / **0:01:39** \\ Lfr\_300k\_0.18 & 300,000 / 756,664 & 0:00:57 / 0:32:43 / 0:33:40 & 0:01:03 / **0:03:54** / **0:04:57** \\ \hline \end{tabular} \end{table} Table 2: Runtime comparison of t-SNE and t-SGNE on LFR-generated large datasets in the format of h:mm:ss. The machine we use for this experiment has a 32-core Intel Xeon Gold 6338 Processor CPU and 16GB of memory. The results of this experiment are in accordance with our analysis in Section 2. The linear-time construction of the neighborhood graph in t-SGNE significantly reduces the running time as the size of the graph increases, compared with t-SNE, in which the construction runs in quadratic time. Hence, t-SGNE is a more suitable algorithm for visualizing large graph datasets.
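For reference, a sketch of the generation step; we assume NetworkX's `LFR_benchmark_graph` here, and the degree and community-size settings are our own illustrative additions (the generator can fail to converge for some parameter combinations):

```python
import networkx as nx

# Parameters mirror the experiment above; average_degree and min_community
# are illustrative values required by this implementation, and the call may
# raise ExceededMaxIterations for unlucky settings.
G = nx.LFR_benchmark_graph(
    n=30_000, tau1=4, tau2=4, mu=0.18,
    average_degree=5, min_community=50, seed=0,
)
# Ground-truth communities are stored as a node attribute:
communities = {frozenset(G.nodes[v]["community"]) for v in G}
print(G.number_of_nodes(), G.number_of_edges(), len(communities))
```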
### Visualization of large graphs In this experiment, we further push the limit of the combination of ShortestPath and t-SGNE on larger graph datasets. Based on the results of Experiments 3.1 and 3.2, this combination of \(GE\) and \(DR\) methods has the best runtime performance and a satisfying layout quality. We apply this combination to graph datasets involving both generated and real-world data, with sizes ranging from 10K to 4M nodes, as follows: * Lfr_30k_0.18, Lfr_300k_0.18, and Lfr_3m_0.18 are datasets generated by the LFR benchmark as in Experiment 3.2, with 30K, 300K, and 3M nodes respectively. * TwitchGamers [18] is a network of Twitch users, where the nodes represent the users and the edges represent their following relationships. It has 168,114 nodes and 6,797,557 edges. * Dblp [22] is a co-authorship network of researchers publishing papers in Computer Science. Nodes represent the authors, and a link exists if two authors have at least one publication in common. The network has 317,080 nodes and 1,049,866 edges. * YoutubeComm [22] is a network of YouTube users, where the nodes represent the users and the edges represent online friendships. It consists of 1,134,890 nodes and 2,987,624 edges, with 8385 user-defined communities. * LiveJournal [14] is a network of LiveJournal users. Nodes represent the users, and a link exists if a user declares another member a friend. The network has 3,997,962 nodes and 34,681,189 edges. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Type** & **Dataset** & **Size (Nodes / Edges)** & **ShortestPath** & **t-SGNE** \\ \hline \multirow{3}{*}{Generated} & Lfr\_30k\_0.18 & 30,000 / 75,643 & 0:00:02 & 0:00:13 \\ & Lfr\_300k\_0.18 & 300,000 / 756,664 & 0:00:28 & 0:02:31 \\ & Lfr\_3m\_0.18 & 3,000,000 / 4,937,941 & 0:05:26 & 0:49:02 \\ \hline \multirow{4}{*}{Real-world} & TwitchGamers & 168,114 / 6,797,557 & 0:01:18 & 0:24:34 \\ & Dblp & 317,080 / 1,049,866 & 0:00:27 & 0:12:33 \\ & YoutubeComm & 1,134,890 / 2,987,624 & 0:02:54 & 3:09:23 \\ & LiveJournal & 3,997,962 / 34,681,189 & 0:45:11 & 2:05:51 \\ \hline \end{tabular} \end{table} Table 3: Runtime of ShortestPath combined with t-SGNE on large datasets in the format of h:mm:ss. The machine we use for the experiment has an 8-core Apple M2 CPU and 16 GB of memory. As can be seen from the results in Table 3, the runtime of ShortestPath combined with t-SGNE is overall satisfying on large graph datasets, finishing in less than a few hours as the size of the graph scales up to 4M nodes. No out-of-memory failures occurred at the million scale either. We do notice that the runtime of t-SGNE involves randomness on certain real-world datasets (e.g., YoutubeComm). This may be due to the randomness added to the stochastic machine learning algorithms, which guarantees different models for each training run. However, such an abnormal increase in the runtime will not exceed an acceptable scale, since the maximum number of iterations is limited to a constant during the learning process. ## 4 Discussion In this section, we briefly discuss some problems and deficiencies of our currently proposed methods and potential future improvements to them. General types of graphs. Although we restrict our discussion to undirected, unweighted graphs with no node attributes, t-SGNE can be applied in a more general setting. For weighted graphs, one possible modification is to construct a weighted neighborhood graph \(G_{knn,G}\) from \(G\), where the weight of the edge from \(x_{i}\) to \(x_{j}\) is the average of the products of weights on the paths of random walks from \(x_{i}\) to \(x_{j}\). One can also come up with a more sophisticated function of the weights on the path to avoid the potential quick decay of \(w_{ij}\) as \(k\) grows.
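One possible realization of this weighted modification, sketched under our own illustrative parameter choices (this is speculative future work, not an implemented component of our pipeline):

```python
import random
import networkx as nx

def weighted_knn_graph(G, k=15, walks_per_node=50, walk_len=4, seed=0):
    """Weighted G_knn,G: the weight of edge (i, j) is the average product of
    edge weights accumulated along sampled random walks from i that visit j."""
    rng = random.Random(seed)
    knn = nx.DiGraph()
    knn.add_nodes_from(G)
    for i in G:
        total, count = {}, {}
        for _ in range(walks_per_node):
            node, prod = i, 1.0
            for _ in range(walk_len):
                nbrs = list(G[node])
                if not nbrs:
                    break
                nxt = rng.choice(nbrs)
                prod *= G[node][nxt].get("weight", 1.0)
                node = nxt
                if node != i:
                    total[node] = total.get(node, 0.0) + prod
                    count[node] = count.get(node, 0) + 1
        avg = {j: total[j] / count[j] for j in total}
        for j in sorted(avg, key=avg.get, reverse=True)[:k]:
            knn.add_edge(i, j, weight=avg[j])
    return knn
```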
Other \(GE\) methods. The reason why t-SGNE is not suitable for \(GE\) methods like SDNE is that \(G_{knn,G}\) cannot approximate the distance structures in \(X\). This raises a natural question: can we perform graph exploration algorithms more sophisticated than BFS, or more sophisticated analyses of the results of random walks, so that we can capture more complex distance structures? For example, a comparison of node degrees might capture the structural similarity of nodes. Other combinations of \(GE\) and \(DR\) methods. ShortestPath combined with t-SGNE is currently the selected combination for visualizing large graphs due to its pleasing visualization quality and reasonable runtime. However, SPLEE combined with t-SNE is actually the best combination in terms of visualization quality. Is it possible to improve the quality of the combination of ShortestPath and t-SGNE, or to reduce the runtime of the combination of SPLEE and t-SNE, in order to obtain a better layout in reasonable time?
2305.15156
SyNDock: N Rigid Protein Docking via Learnable Group Synchronization
The regulation of various cellular processes heavily relies on the protein complexes within a living cell, necessitating a comprehensive understanding of their three-dimensional structures to elucidate the underlying mechanisms. While neural docking techniques have exhibited promising outcomes in binary protein docking, the application of advanced neural architectures to multimeric protein docking remains uncertain. This study introduces SyNDock, an automated framework that swiftly assembles precise multimeric complexes within seconds, showcasing performance that can potentially surpass or be on par with recent advanced approaches. SyNDock possesses several appealing advantages not present in previous approaches. Firstly, SyNDock formulates multimeric protein docking as a problem of learning global transformations to holistically depict the placement of chain units of a complex, enabling a learning-centric solution. Secondly, SyNDock proposes a trainable two-step SE(3) algorithm, involving initial pairwise transformation and confidence estimation, followed by global transformation synchronization. This enables effective learning for assembling the complex in a globally consistent manner. Lastly, extensive experiments conducted on our proposed benchmark dataset demonstrate that SyNDock outperforms existing docking software in crucial performance metrics, including accuracy and runtime. For instance, it achieves a 4.5% improvement in performance and a remarkable millionfold acceleration in speed.
Yuanfeng Ji, Yatao Bian, Guoji Fu, Peilin Zhao, Ping Luo
2023-05-23T08:57:18Z
http://arxiv.org/abs/2305.15156v2
# SyNDock: N Rigid Protein Docking via Learnable Group Synchronization ###### Abstract The regulation of various cellular processes heavily relies on the protein complexes within a living cell, necessitating a comprehensive understanding of their three-dimensional structures to elucidate the underlying mechanisms. While neural docking techniques have exhibited promising outcomes in binary protein docking, the application of advanced neural architectures to multimeric protein docking remains uncertain. This study introduces SyNDock, an automated framework that swiftly assembles precise multimeric complexes within seconds, showcasing performance that can potentially surpass or be on par with recent advanced approaches. SyNDock possesses several appealing advantages not present in previous approaches. Firstly, SyNDock formulates multimeric protein docking as a problem of learning global transformations to holistically depict the placement of chain units of a complex, enabling a learning-centric solution. Secondly, SyNDock proposes a trainable two-step SE(3) algorithm, involving initial pairwise transformation and confidence estimation, followed by global transformation synchronization. This enables effective learning for assembling the complex in a globally consistent manner. Lastly, extensive experiments conducted on our proposed benchmark dataset demonstrate that SyNDock outperforms existing docking software in crucial performance metrics, including accuracy and runtime. For instance, it achieves a 4.5% improvement in performance and a remarkable millionfold acceleration in speed. ## 1 Introduction Protein complexes serve as vital components in numerous biological processes, regulating the expression of essential functions like DNA transcription [1], mRNA translation [2], and signal transduction [3]. Remarkably, experimental evidence has shown that these functions depend not only on binary complexes but also on the involvement of multimeric complexes. Structural information about complexes provides critical insight for understanding protein function, cellular components, and mechanisms, aiding biomedical research in identifying drug targets, designing therapies, and advancing our knowledge of diseases. However, detecting the structures of multimeric protein complexes using experimental techniques such as X-ray diffraction [4] is often slow, costly, and technically more challenging than solving the structures of single protein chains. Therefore, the advancement of computational methods for predicting the docking of multimeric protein complexes is of immense importance and offers significant benefits to biomedical research. Figure 1: **Task illustration:** (a) Input multiple proteins and (b) the predicted N-body complex structure. Research on the computational prediction of protein complex structures has been conducted for several decades. Most popular docking software [5; 6; 7; 8; 9; 10; 11; 12] is typically limited to the assembly of two protein structures (also known as pairwise docking), and only a few methods [13; 14; 15; 16] can solve multimeric protein docking, some of them with many restrictions (i.e., homomeric, symmetric).
Specifically, these algorithms largely follow the steps of coarsely generating a large number (e.g., millions) of possible pairwise docking candidates, combining the pairwise solutions using a combinatorial optimization algorithm (e.g., a heuristic or genetic algorithm), and further fitting and refining the top-ranked complex structures based on an energy model (e.g., Monte Carlo [17]). However, all of these methods are still computationally expensive, often taking between hours and days per prediction, with no guarantee of accurately finding complex structures. In this paper, we address the problem of _N rigid protein docking_, which refers to predicting the 3D structure of a multimeric complex from the unbound states of individual proteins (Figure 1). This problem assumes that the proteins remain rigid during the docking process, without undergoing any deformation, which is a reasonable and widely applicable assumption in various biological contexts. As an effective solution, we propose SyNDock, a novel learning-centered approach to multimeric protein docking. Our method formulates the problem as learning global \(SE(3)\) transformations for each input protein to recover the exact placement and orientation of protein units within the target complex. The pipeline of SyNDock, as shown in Figure 2, consists of several steps. First, a graph-based deep model called the Independent \(SE(3)\)-Equivariant Multi-Graph Matching Network (IEMMN) extracts informative features from the protein chains. Next, the relative transformations between pairs of protein chains are estimated, along with the corresponding confidence values. Finally, a differentiable transformation synchronization module learns to refine the relative transformations to obtain accurate absolute transformations under a "self-consistency" constraint, overcoming potential noise in the input estimations. These modules are interconnected, trained in an end-to-end fashion, and can be iterated to improve performance. In summary, this paper makes three main **contributions**. First, we formulate multimeric protein docking as a problem of learning global transformations to place the chain units of a complex, and we propose the first end-to-end solution, opening a new perspective on multimeric complex docking. Second, we present a carefully designed, fast, end-to-end two-step pipeline that enables effective learning to assemble the complex in a globally consistent manner. Third, we contribute a hetero-multimeric complex dataset curated from the DIPS dataset [18]. Extensive experiments show that SyNDock outperforms recent advanced methods in both accuracy and speed. For example, SyNDock achieves a 4.5% performance improvement and a millionfold speedup over Multi-Zerd [17] on the tetrameric protein docking task. These contributions collectively demonstrate the effectiveness and practicality of our approach in advancing the field of multimeric protein docking. Figure 2: **Overview of SyNDock,** which comprises three main components: (a) The IEMMN backbone models proteins as graphs and extracts feature embeddings. (b) The pairwise pose estimation module and confidence estimation module utilize the feature embeddings to estimate relative transformations and docking confidence scores between protein pairs. (c) The transformation synchronization module combines these predictions in a learnable manner to recover absolute transformations. The prediction of the multimeric complexes can easily be recovered by applying affine transformations.
## 2 Related Work Protein Structure Prediction. Recently, the deep learning based methods AlphaFold2 [19] and RoseTTAFold [20] have made a profound impact on the field of structural biology [21]. These methods have enabled the prediction of protein structure from primary amino acid sequences and have demonstrated a promising avenue for the tertiary structure determination of proteins. Inspired by the success of these two methods, there have been several attempts to use pre-trained AlphaFold2 to predict the structure of protein complexes [22; 23; 24] by inserting a linker between two isolated protein sequences. These methods rely on the slow construction of multiple sequence alignment (MSA) features and are often unable to handle complex docking tasks involving more than two chains. Protein-Protein Docking. Classical experimental methods for determining the structure of protein complexes include X-ray crystallography [25; 26], nuclear magnetic resonance spectroscopy (NMR) [27; 28], and cryogenic electron microscopy (Cryo-EM) [29; 30; 31]. However, such assays are laborious and expensive, and researchers are trying to solve this problem with computational methods. As a focal point of activity in computational and structural biology, structure-based docking approaches [32] are attracting great research interest [33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. They are mainly designed to solve the assembly of paired protein structures, also known as pairwise docking, which typically consists of several main steps: sampling a large number of candidate protein complexes and ranking these candidates with a scoring function; the top candidates are then further refined using energy or geometric models [43]. Recently, EquiDock [44] proposed an end-to-end rigid pairwise protein docking method that is free of the candidate sampling constraint and thus greatly accelerates prediction. Despite the observed steady progress in the field, methods for assembling three or more chains have received less attention. The pioneering works [45; 17] enabled the assembly of multimeric proteins by using optimal combination algorithms on pairwise docked candidates generated by standard binary docking methods. A series of subsequent papers [14; 15] made improvements in various aspects to achieve even better performance. However, these methods follow the classic paradigm of sampling a large number of complex candidates and further optimizing them using hand-crafted geometric or chemical features to obtain the final structure, resulting in inefficient predictions and unsatisfactory performance. Transformation Synchronization. Given a collection of pairwise estimates, synchronization seeks to recover the absolute estimates of the latent values that best explain them. In this paper, we focus on transformation synchronization, which is applicable to both \(SO(3)\) and \(SE(3)\). For docking multimeric proteins, one could naively consider only adjacent protein pairs and aggregate the transformations sequentially. However, this only works if all pairwise estimates are accurate; an inaccurate or non-existent pairwise alignment will cause the result to fail. Various approaches have been proposed to more accurately determine the optimal global transformations by considering the additional information contained in the relative transformations, such as the level of confidence.
The methods of [46; 47] propose a closed-form solution for \(SO(3)\) and \(SE(3)\) synchronization based on the eigendecomposition of a weighted pairwise transformation matrix. Follow-up approaches [48; 49] build on these ideas and integrate transformation synchronization into a supervised end-to-end multi-view point cloud registration pipeline. We extend transformation synchronization for the first time to effectively and efficiently predict the structure of multimeric protein complexes. ## 3 Preliminaries and Background Problem Statement. We consider a set of \(N\) potentially interacting proteins \(\mathcal{S}=\left\{\mathcal{G}_{k}\right\}_{k=1}^{N}\) as input, which forms a multimeric complex. Each protein \(\mathcal{G}_{k}\) consists of \(n_{k}\) residues, represented in their bound (docked) state as 3D point clouds \(\mathbf{X}_{k}^{*}\in\mathbb{R}^{3\times n_{k}}\), where the position of each residue is given by the coordinate of its corresponding \(\alpha\) carbon atom. In the unbound state, each undocked protein is arbitrarily rotated and translated in space, resulting in a point cloud \(\mathbf{X}_{k}\in\mathbb{R}^{3\times n_{k}}\) with modified random positions and orientations. Given arbitrary proteins and their unbound positions \(\mathbf{X}_{k}\) as input, the rigid protein docking task seeks to calculate the absolute transformations \(\mathbf{T}_{k}\) such that the result of the affine transformation \(\mathbf{X}_{k}\otimes\mathbf{T}_{k}\) is equal to the desired result \(\mathbf{X}_{k}^{*}\). Concretely, we represent a transformation \(\mathbf{T}\in SE(3)\) by a \(4\times 4\) matrix. Unless otherwise noted, we denote the rotation and translation components of \(\mathbf{T}\) as \(\mathbf{R}\in SO(3)\subset\mathbb{R}^{3\times 3}\) and \(\mathbf{t}\in\mathbb{R}^{3}\), respectively. Rather than regressing the global transformations directly, we first estimate the pairwise transformations \(\mathbf{T}_{k,l}\) for each pair of proteins \(k\) and \(l\), and subsequently exploit transformation synchronization to accurately infer the optimal global transformations \(\mathbf{T}_{k}\). Please refer to Table 5 in Appendix A for the comprehensive notation of the paper. Equivariance and Invariance. To achieve a reliable prediction of the complex structure, we need to satisfy two constraints: (1) _equivariance constraint_: we desire the predicted structures of the proteins to be independent of their initial positions, orientations, and order of input; (2) _invariance constraint_: we require the final complex structure to be the same after superposition, regardless of the random transformations that are applied. Formally, we wish to guarantee that: **Definition 3.1** (Equivariance).: Let \(\mathrm{T}_{g}:\mathcal{X}\mapsto\mathcal{X}\) be a set of transformations on \(\mathcal{X}\) for the abstract group \(g\in G\). We say that a function \(\phi:\mathcal{X}\mapsto\mathcal{Y}\) is equivariant to \(g\) if there exists an equivalent transformation on its output space, \(\mathrm{S}_{g}:\mathcal{Y}\mapsto\mathcal{Y}\), such that \(\phi(\mathrm{T}_{g}(\mathbf{x}))=\mathrm{S}_{g}(\phi(\mathbf{x}))\). **Definition 3.2** (Rotation Equivariance).: We say that a function \(\phi\) is rotation equivariant if for any orthogonal matrix \(\mathbf{Q}\in\mathbb{R}^{3\times 3}\), rotating the input \(\mathbf{X}\) results in an equivalent rotation of the output \(\phi(\mathbf{X})\), i.e., \(\mathbf{Q}\phi(\mathbf{X})=\phi(\mathbf{Q}\mathbf{X})\).
**Definition 3.3** (Translation Equivariance).: We say that a function \(\phi\) is translation equivariant if for any translation vector \(\mathbf{g}\in\mathbb{R}^{3}\), translating the input \(\mathbf{X}\) by \(\mathbf{g}\) results in an equivalent translation of the output \(\phi(\mathbf{X})\), i.e., letting \(\mathbf{X}+\mathbf{g}:=(\mathbf{x}_{1}+\mathbf{g},\ldots,\mathbf{x}_{n}+\mathbf{g})\), we have \(\phi(\mathbf{X})+\mathbf{g}=\phi(\mathbf{X}+\mathbf{g})\). **Definition 3.4** (\(SE(3)\)-Equivariance).: We say that a function \(\phi\) is \(SE(3)\)-equivariant if it is translation equivariant and rotation equivariant. **Definition 3.5** (Permutation Equivariance).: We say that a function \(\phi\) is permutation equivariant if for any column index permutation operator \(\sigma\), permuting the input \(\mathbf{X}\) results in the same permutation of the output, i.e., \(\sigma(\phi(\mathbf{X}))=\phi(\sigma(\mathbf{X}))\). **Definition 3.6** (Invariance).: Let \(\mathrm{T}_{g}:\mathcal{X}\mapsto\mathcal{X}\) be a set of transformations on \(\mathcal{X}\) for the group \(g\in G\). We say that a function \(\phi:\mathcal{X}\mapsto\mathcal{Y}\) is invariant to \(g\) if \(\phi(\mathbf{X})=\phi(\mathrm{T}_{g}(\mathbf{X}))\). ## 4 SyNDock Overview. The overall structure of SyNDock is outlined in Figure 2. We begin by training an encoder network \(\Phi\), represented by an \(SE(3)\)-equivariant graph neural network (GNN), to extract features from the \(N\) given protein chains \(\mathcal{S}\). Then, the pairwise relative transformations \(\{\mathbf{T}_{k,l}\}_{k=1,k\neq l}^{N}\) between the \(\binom{N}{2}\) pairs of protein chains are estimated along with the corresponding confidence values \(\{c_{k,l}\}\), and the predicted relative parameters are then synchronized to recover the global absolute transformation \(\{\mathbf{T}_{k}\}_{k=1}^{N}\) for each of the individual proteins. This overall procedure can be further refined through an iterative process, allowing for incremental improvements in the results. We provide the implementation details in Algorithm 2 in Appendix B for better clarity. Protein Representation. Following the work of [50; 44], we encode each input protein as a 3D proximity graph; the graph of protein \(k\) is denoted as \(\mathcal{G}_{k}=(\mathcal{V}_{k},\mathcal{E}_{k})\), where each node \(i\in\mathcal{V}_{k}\) corresponds to an amino acid with scalar and vector features \(\mathbf{h}_{i}\in\mathbb{R}^{f_{1}}\) and position \(\mathbf{x}_{i}\in\mathbb{R}^{3}\) corresponding to the Cartesian coordinates of the \(\alpha\) carbon atom. An edge \((i,j)\in\mathcal{E}_{k}\) exists if vertex \(j\) is one of the ten nearest neighbors of vertex \(i\) based on the Euclidean distance between the coordinates of the two vertices. Each edge \((i,j)\) also has edge features that encode both scalar and vector features. To satisfy the equivariance constraint, we use the \(SE(3)\)-invariant feature engineering proposed by [44] to construct the additional features \(\mathbf{f}_{i}\in\mathbb{R}^{f_{2}}\) and \(\mathbf{f}_{j\to i}\in\mathbb{R}^{f_{3}}\) of each node \(i\in\cup_{k=1}^{N}\mathcal{V}_{k}\) and each edge \((i,j)\in\cup_{k=1}^{N}\mathcal{E}_{k}\), respectively. For more details on protein encoding and feature engineering, please refer to Appendix B. As shown in Figure 3, we implement the encoder network \(\Phi\) for representation learning by extending the design of [44].
We name it the Independent \(SE(3)\)-Equivariant Multi-Graph Matching Network (IEMMN). Figure 3: **IEMMN** message passing. Specifically, \(\Phi\) performs node coordinate and feature embedding updates for the input protein graphs \(\left\{\mathcal{G}_{k}\right\}_{k=1}^{N}\), including inter- and intra-graph message passing as well as \(SE(3)\)-equivariant coordinate updates. The \(t\)-th layer of \(\Phi\) performs the update of node feature embeddings \(\{\mathbf{h}_{i}^{(t)}\}_{i\in\cup_{k}^{N}\mathcal{V}_{k}}\) and node coordinate embeddings \(\{\mathbf{x}_{i}^{(t)}\}_{i\in\cup_{k}^{N}\mathcal{V}_{k}}\) as: \[\mathbf{m}_{j\to i}^{(t)} =\phi^{e}\left(\mathbf{h}_{i}^{(t)},\mathbf{h}_{j}^{(t)},\exp(-\left\|\mathbf{x}_{i}^{(t)}-\mathbf{x}_{j}^{(t)}\right\|^{2}/\sigma)\mathbf{f}_{j\to i}\right),\quad\forall(i,j)\in\cup_{k=1}^{N}\mathcal{E}_{k} \tag{1}\] \[\boldsymbol{\mu}_{j\to i}^{(t)} =a_{j\to i}^{(t)}\mathbf{W}\mathbf{h}_{j}^{(t)},\quad\forall i\in\mathcal{V}_{k},\;j\in\cup_{l=1,l\neq k}^{N}\mathcal{V}_{l} \tag{2}\] \[\mathbf{m}_{i}^{(t)} =\frac{1}{|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}(i)}\mathbf{m}_{j\to i}^{(t)},\quad\forall i\in\cup_{k=1}^{N}\mathcal{V}_{k} \tag{3}\] \[\boldsymbol{\mu}_{i}^{(t)} =\sum_{j\in\cup_{l=1,l\neq k}^{N}\mathcal{V}_{l}}\boldsymbol{\mu}_{j\to i}^{(t)},\quad\forall i\in\mathcal{V}_{k},\;k=1,\ldots,N \tag{4}\] \[\mathbf{x}_{i}^{(t+1)} =\sum_{j\in\mathcal{N}(i)}\left(\mathbf{x}_{i}^{(t)}-\mathbf{x}_{j}^{(t)}\right)\phi^{x}\left(\mathbf{m}_{j\to i}\right)+\eta\mathbf{x}_{i}^{(0)}+(1-\eta)\mathbf{x}_{i}^{(t)},\quad\forall i\in\cup_{k=1}^{N}\mathcal{V}_{k} \tag{5}\] \[\mathbf{h}_{i}^{(t+1)} =(1-\beta)\cdot\mathbf{h}_{i}^{(t)}+\beta\cdot\phi^{h}\left(\mathbf{h}_{i}^{(t)},\mathbf{m}_{i}^{(t)},\boldsymbol{\mu}_{i}^{(t)},\mathbf{f}_{i}\right),\quad\forall i\in\cup_{k=1}^{N}\mathcal{V}_{k} \tag{6}\] where \(\mathcal{N}(i)\) denotes the neighbors of node \(i\); \(\phi^{x}\) is a real-valued (scalar) function; \(\mathbf{W}\) is a learnable matrix; \(\phi^{h},\phi^{e}\) are functions outputting a vector in \(\mathbb{R}^{d}\); \(\mathbf{f}_{j\to i}\) and \(\mathbf{f}_{i}\) are the original edge and node features (extracted \(SE(3)\)-invariantly from the residues); and \(a_{j\to i}\) is an attention-based coefficient with trainable shallow neural networks \(\psi^{q}\) and \(\psi^{k}\): \[a_{j\to i}^{(t)}=\frac{\exp\left(\left\langle\psi^{q}\left(\mathbf{h}_{i}^{(t)}\right),\psi^{k}\left(\mathbf{h}_{j}^{(t)}\right)\right\rangle\right)}{\sum_{j^{\prime}}\exp\left(\left\langle\psi^{q}\left(\mathbf{h}_{i}^{(t)}\right),\psi^{k}\left(\mathbf{h}_{j^{\prime}}^{(t)}\right)\right\rangle\right)} \tag{7}\] The output of the encoder for each protein is then denoted as \(\mathbf{Z}=[\mathbf{z}_{1},\ldots,\mathbf{z}_{n}]\in\mathbb{R}^{3\times n}\), \(\mathbf{H}=[\mathbf{h}_{1}^{(T)},\ldots,\mathbf{h}_{n}^{(T)}]\in\mathbb{R}^{d\times n}\), where \(\mathbf{z}_{i}=\mathbf{x}_{i}^{(T)}\) for all \(i\in\cup_{k=1}^{N}\mathcal{V}_{k}\). \(\mathbf{Z}\) and \(\mathbf{H}\) will be used as the input for subsequent modules.
It is then straightforward to prove the following: **Proposition 4.1**.: \(\mathbf{m}_{j\to i},\mathbf{m}_{i},\boldsymbol{\mu}_{j\to i},\boldsymbol{\mu}_{i}\)_, and \(\mathbf{h}_{i}\) in the message passing of IEMMN are invariant to any orthogonal rotation \(\mathbf{Q}\in\mathbb{R}^{3\times 3}\) and any translation vector \(\mathbf{g}\in\mathbb{R}^{3}\)._ **Proposition 4.2**.: _IEMMN, denoted as \(\Phi_{\text{IEMMN}}\), is rotation equivariant, i.e., for any orthogonal rotation \(\mathbf{Q}\in\mathbb{R}^{3\times 3}\), at each layer \(t\) we have: \(\mathbf{Q}\mathbf{X}^{(t+1)}=\mathbf{Q}\Phi_{\text{IEMMN}}(\mathbf{X}^{(t)})=\Phi_{\text{IEMMN}}(\mathbf{Q}\mathbf{X}^{(t)})\)._ **Proposition 4.3**.: _IEMMN is translation equivariant, i.e., for any translation vector \(\mathbf{g}\in\mathbb{R}^{3}\), at each layer \(t\) we have: \(\mathbf{X}^{(t+1)}+\mathbf{g}=\Phi_{\text{IEMMN}}(\mathbf{X}^{(t)})+\mathbf{g}=\Phi_{\text{IEMMN}}(\mathbf{X}^{(t)}+\mathbf{g})\)._ **Proposition 4.4**.: _IEMMN is permutation equivariant, i.e., for any permutation operator \(\sigma\) on the order of \(\mathbf{X}\), at each layer \(t\) we have: \(\sigma(\mathbf{X}^{(t+1)})=\sigma(\Phi_{\text{IEMMN}}(\mathbf{X}^{(t)}))=\Phi_{\text{IEMMN}}(\sigma(\mathbf{X}^{(t)}))\)._ Therefore, the node feature embeddings \(\mathbf{H}\) are invariant by Proposition 4.1, and the node coordinate embeddings \(\mathbf{Z}\) are \(SE(3)\)-equivariant and permutation equivariant by Propositions 4.2 and 4.3 and Proposition 4.4, respectively.
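These properties can also be verified numerically. The following test sketch (our own, in PyTorch) draws a random rotation and translation and checks \(SE(3)\)-equivariance of a coordinate map; `phi` is a stand-in for any encoder:

```python
import torch

def random_rotation():
    """Random 3x3 rotation via QR decomposition of a Gaussian matrix."""
    Q, R = torch.linalg.qr(torch.randn(3, 3))
    Q = Q * torch.sign(torch.diagonal(R))    # fix column signs
    if torch.linalg.det(Q) < 0:
        Q = Q.clone()
        Q[:, 0] = -Q[:, 0]                   # ensure det(Q) = +1
    return Q

def check_se3_equivariance(phi, X, atol=1e-5):
    """phi maps (3, n) coordinates to (3, n); test Q phi(X) + g = phi(Q X + g)."""
    Q, g = random_rotation(), torch.randn(3, 1)
    return torch.allclose(Q @ phi(X) + g, phi(Q @ X + g), atol=atol)

X = torch.randn(3, 10)
assert check_se3_equivariance(lambda x: x, X)           # identity map passes
assert not check_se3_equivariance(lambda x: x ** 2, X)  # non-equivariant map fails
```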
Estimating Pairwise Transformation and Confidence Score. For each pair of proteins, the goal of pairwise protein docking is to retrieve the optimal \(\hat{\mathbf{R}}_{k,l}\) and \(\hat{\mathbf{t}}_{k,l}\): \[\hat{\mathbf{R}}_{k,l},\hat{\mathbf{t}}_{k,l}=\operatorname*{arg\,min}_{\mathbf{R}_{k,l},\mathbf{t}_{k,l}}\sum_{s=1}^{N_{\text{BP}}}\left\|\mathbf{R}_{k,l}\mathbf{p}_{s}+\mathbf{t}_{k,l}-\varphi\left(\mathbf{p}_{s},\mathbf{q}_{s}\right)\right\|^{2} \tag{8}\] where \(N_{\text{BP}}\) denotes the number of binding pockets between the source and target proteins, and \(\varphi(\mathbf{p}_{s},\mathbf{q}_{s})\) is a correspondence function that maps the pocket points \(\{\mathbf{p}_{s}\}_{s=1}^{N_{\text{BP}}}\) to their corresponding binding points \(\{\mathbf{q}_{s}\}_{s=1}^{N_{\text{BP}}}\) in the target protein. In our setting, the binding pockets of two interacting proteins are matched one to one, so \(\varphi\) can be thought of as an identity correspondence. In the following, we show how to adapt the optimal transport loss and the Kabsch algorithm of [44] for pairwise transformation and confidence score prediction. Pairwise Transformation Prediction. Specifically, we adopt the multi-head attention mechanism to set up \(M\) key-point queries for each protein of the input pair \(\{\mathcal{G}_{k},\mathcal{G}_{l}\}\). The set of binding keypoint predictions can be formulated as \(\mathbf{y}_{m}^{k}:=\sum_{i\in\mathcal{V}_{k}}\alpha_{i}^{(m)}\mathbf{z}_{i}^{k}\) for all \(m\in[M]\), where \(\mathbf{z}_{i}^{k}\in\mathbb{R}^{3}\) denotes the \(i\)-th column of the matrix \(\mathbf{Z}_{k}\), and \(\alpha_{i}^{(m)}=\mathrm{softmax}_{i}\left(\frac{1}{\sqrt{d}}\mathbf{h}_{i}^{k\top}\mathbf{W}_{m}^{\prime}\mu\left(\varphi\left(\mathbf{H}_{l}\right)\right)\right)\) are attention scores, with \(\mathbf{W}_{m}^{\prime}\in\mathbb{R}^{d\times d}\) a parametric matrix (different for each attention head), \(\varphi\) a linear layer plus a LeakyReLU non-linearity, and \(\mu(\cdot)\) the mean vector. We train the predicted keypoints to approximate (superimpose onto) the ground-truth binding pockets of the respective protein pair. We drive this process with a _many-to-many_ matching loss (i.e., an optimal transport loss), which can be formulated as \(\mathcal{L}_{\mathrm{OT}}=\min_{\mathbf{T}\in\mathcal{U}(S,M)}\langle\mathbf{T},\mathbf{C}\rangle\), where \(\mathbf{C}_{m,s}=\left\|\mathbf{y}_{km}-\mathbf{p}_{ks}\right\|^{2}+\left\|\mathbf{y}_{lm}-\mathbf{p}_{ls}\right\|^{2}\), \(\mathcal{U}(S,M)\) is the set of \(S\times M\) transport plans with uniform marginals, and \(\mathbf{p}_{k}\) denotes the binding points of protein \(k\). Next, given the pocket point predictions \(\mathbf{Y}_{k},\mathbf{Y}_{l}\), the relative transformation from the source protein to the target protein can easily be calculated by a differentiable ordered-point-set registration algorithm (e.g., the Kabsch algorithm): \(\mathbf{R}_{k,l},\mathbf{t}_{k,l}=\mathcal{F}_{reg}(\mathbf{Y}_{k},\mathbf{Y}_{l})\). Pairwise Confidence Estimation. Synchronization is more effective when weights are provided for each of the pairwise estimates. We weight each pairwise transformation, formulate the estimation of \(c_{k,l}\) as a binary classification task, and define the confidence estimation function as \[c_{k,l}\leftarrow\Phi_{con}\left(\mathbf{x}_{k}\otimes\mathbf{T}_{k,l},\mathbf{x}_{l},\mathbf{h}_{k},\mathbf{h}_{l}\right) \tag{9}\] where the input consists of (i) the coordinates of the transformed source protein and the target protein, with \(\otimes\) the affine transformation function that aligns the source and target proteins, and (ii) the feature/latent vectors of the proteins, obtained with a mean pooling operator. These features are combined and fed into a confidence estimation network with three fully connected layers (64-64-32-1) and a sigmoid activation function. The output of this network is a score \(c_{k,l}\) between 0 and 1. The corresponding ground truth \(c_{k,l}^{\mathrm{GT}}\) is obtained directly from the training data. To avoid redundant computations, we only estimate pairwise pose predictions and scores for pairs \((k,l)\) with \(k<l\). For pairs with \(k>l\), we set \(c_{l,k}=c_{k,l}\), \(\mathbf{R}_{l,k}=\mathbf{R}_{k,l}^{-1}\), and \(\mathbf{t}_{l,k}=-\mathbf{R}_{k,l}^{-1}\mathbf{t}_{k,l}\). The training of the pairwise protein docking module is directly supervised by the combination of the relative transformation loss \(\mathcal{L}_{\mathrm{REL}}\) and the confidence loss \(\mathcal{L}_{\mathrm{CON}}\), where \(\mathcal{L}_{\mathrm{CON}}\) denotes the binary cross entropy loss, and \[\mathcal{L}_{\mathrm{REL}}=\sum_{(k,l)\in\mathcal{S}}\left(\left\|\mathbf{R}_{k,l}-\mathbf{R}_{k,l}^{\mathrm{GT}}\right\|_{2}+\left\|\mathbf{t}_{k,l}-\mathbf{t}_{k,l}^{\mathrm{GT}}\right\|_{2}\right). \tag{10}\]
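For concreteness, a minimal differentiable sketch of the registration step \(\mathcal{F}_{reg}\) (the standard Kabsch algorithm in PyTorch; shapes and names are our own):

```python
import torch

def kabsch(Y_src, Y_tgt):
    """R, t minimizing ||R @ Y_src + t - Y_tgt||_F for matched (3, M) keypoints."""
    mu_s = Y_src.mean(dim=1, keepdim=True)
    mu_t = Y_tgt.mean(dim=1, keepdim=True)
    H = (Y_src - mu_s) @ (Y_tgt - mu_t).T          # 3x3 cross-covariance
    U, _, Vh = torch.linalg.svd(H)
    V = Vh.transpose(-2, -1)
    d = torch.sign(torch.linalg.det(V @ U.T))      # guard against reflections
    V = torch.cat([V[:, :2], V[:, 2:] * d], dim=1)
    R = V @ U.T
    return R, mu_t - R @ mu_s
```

Every step (SVD included) is differentiable, so gradients can flow from the synchronized complex back into the keypoint predictions.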
The key insight in this line of research [46, 47, 48, 49] is that the absolute transformations can be extracted from the pairwise transformation matrix by means of eigendecomposition. The global transformation parameters can be determined either through joint calculation (also known as transformation synchronization) or by breaking the problem into two parts: rotation synchronization and translation synchronization. In this paper, we opt for the latter approach, dividing the problem into rotational and translational synchronization, which admits a differentiable closed-form solution under spectral relaxation. Rotation Synchronization. Our approach to rotation synchronization is based on a Laplacian rotation synchronization formulation that has been proposed previously in the literature [48]: \(\mathbf{R}_{k}^{*}=\arg\min_{\mathbf{R}_{k}\in SO(3)}\sum_{(k,l)\in\mathcal{E}}c_{k,l}\|\mathbf{R}_{k,l}-\mathbf{R}_{k}\mathbf{R}_{l}^{\top}\|_{F}^{2}\). More precisely, consider a symmetric matrix \(\mathbf{L}\in\mathbb{R}^{3N\times 3N}\), which resembles a block Laplacian matrix, defined as: \[\mathbf{L}=\begin{bmatrix}\mathbf{I}_{3}\sum_{l}c_{1,l}&-c_{1,2}\mathbf{R}_{1,2}&\cdots&-c_{1,N}\mathbf{R}_{1,N}\\ -c_{2,1}\mathbf{R}_{2,1}&\mathbf{I}_{3}\sum_{l}c_{2,l}&\cdots&-c_{2,N}\mathbf{R}_{2,N}\\ \vdots&\vdots&\ddots&\vdots\\ -c_{N,1}\mathbf{R}_{N,1}&-c_{N,2}\mathbf{R}_{N,2}&\cdots&\mathbf{I}_{3}\sum_{l}c_{N,l}\end{bmatrix},\] where \(c_{k,l}\) represents the pairwise confidence score, \(\mathbf{R}_{k,l}\) is the estimated relative rotation, and \(N\) denotes the number of input protein chains. Consequently, we collect the three eigenvectors \(\mathbf{U}=\left(\mathbf{U}_{1}^{\top},\cdots,\mathbf{U}_{N}^{\top}\right)^{\top}\in\mathbb{R}^{3N\times 3}\) that correspond to the three smallest eigenvalues of \(\mathbf{L}\). To avoid reflections, we choose the sign of each eigenvector such that \(\sum_{k=1}^{N}\det\left(\mathbf{U}_{k}\right)>0\). The least squares estimates of the global rotation matrices \(\mathbf{R}_{k}\) are then given, under relaxed orthogonality and determinant constraints, by first performing singular value decomposition (SVD) on each \(\mathbf{U}_{k}=\mathbf{V}_{k}\mathbf{\Sigma}_{k}\mathbf{W}_{k}^{\top}\) and then outputting the corresponding absolute rotation estimate as: \[\mathbf{R}_{k}=\mathbf{V}_{k}\mathbf{W}_{k}^{\top}\,,\quad\text{for all }k=1,2,\ldots,N. \tag{11}\] Translation Synchronization. We can retrieve global translation vectors \(\{\mathbf{t}_{k}\}\) that minimize the following least squares problem: \[\mathbf{t}_{k}=\operatorname*{arg\,min}_{\mathbf{t}_{k}}\sum_{(k,l)\in\mathcal{E}}c_{k,l}\left\|\hat{\mathbf{R}}_{k,l}\mathbf{t}_{k}+\hat{\mathbf{t}}_{k,l}-\mathbf{t}_{l}\right\|^{2} \tag{12}\] The closed-form solution follows directly: \(\mathbf{t}^{*}=\mathbf{L}^{+}\mathbf{b}\), where \(\mathbf{t}^{*}=[\mathbf{t}_{1}^{*\top},\ldots,\mathbf{t}_{N}^{*\top}]^{\top}\in\mathbb{R}^{3N}\) and \(\mathbf{b}=[\mathbf{b}_{1}^{\top},\ldots,\mathbf{b}_{N}^{\top}]^{\top}\in\mathbb{R}^{3N}\) with \(\mathbf{b}_{k}:=-\sum_{l\in\mathcal{N}(k)}c_{k,l}\hat{\mathbf{R}}_{k,l}^{\top}\hat{\mathbf{t}}_{k,l}\). Iterative Refinement of \(N\)-body Docking. The above formulation allows an implementation in an iterative scheme, with the ability to refine the predictions step by step, starting from a coarse level.
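The following is a simplified NumPy sketch of the spectral rotation synchronization above. It assumes `R_pair[k][l]` holds \(\mathbf{R}_{k,l}\) and `C[k, l]` the confidence (with zero diagonal), and it applies a single global sign flip rather than a per-eigenvector sign choice:

```python
import numpy as np

def rotation_sync(R_pair, C):
    """Spectral rotation synchronization: build the block Laplacian L,
    take the eigenvectors of its 3 smallest eigenvalues, and project
    each 3x3 block back onto SO(3)."""
    N = C.shape[0]
    L = np.zeros((3 * N, 3 * N))
    for k in range(N):
        L[3*k:3*k+3, 3*k:3*k+3] = np.eye(3) * C[k].sum()
        for l in range(N):
            if l != k:
                L[3*k:3*k+3, 3*l:3*l+3] = -C[k, l] * R_pair[k][l]
    _, V = np.linalg.eigh(L)                       # eigenvalues in ascending order
    U = V[:, :3]
    if sum(np.linalg.det(U[3*k:3*k+3]) for k in range(N)) < 0:
        U = -U                                     # crude reflection fix
    R = []
    for k in range(N):
        Vk, _, Wkt = np.linalg.svd(U[3*k:3*k+3])
        D = np.diag([1.0, 1.0, np.linalg.det(Vk @ Wkt)])  # enforce det = +1
        R.append(Vk @ D @ Wkt)
    return R
```

Translation synchronization is solved analogously in closed form via the pseudo-inverse \(\mathbf{t}^{*}=\mathbf{L}^{+}\mathbf{b}\), e.g. with `np.linalg.pinv`.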
To implement this refinement, each subsequent iteration starts by using the synchronized estimates \(\mathbf{T}_{k}\) from the previous iteration to transform each protein, setting \(\mathbf{X}_{k}^{*}=\mathbf{X}_{k}\otimes\mathbf{T}_{k}\). In this paper, we iterate the refinement 4 times for each sample and calculate the loss with the final prediction. End-to-End Training Algorithm. We supervise the network training using the following losses: \(\mathcal{L}_{\mathrm{REL}}\), \(\mathcal{L}_{\mathrm{CON}}\), \(\mathcal{L}_{\mathrm{OT}}\), as well as the global _Synchronization Loss_: \(\mathcal{L}_{\mathrm{SYNC}}=\frac{1}{N}\sum_{k=1}^{N}\left\|\mathbf{X}_{k}\otimes\mathbf{T}_{k}-\mathbf{X}_{k}^{\mathrm{GT}}\right\|_{2}\), where \(\otimes\) is the affine transformation. The total loss can thus be defined as \[\mathcal{L}_{\mathrm{OVERALL}}=\mathcal{L}_{\mathrm{OT}}+\mathcal{L}_{\mathrm{REL}}+\mathcal{L}_{\mathrm{CON}}+\mathcal{L}_{\mathrm{SYNC}}. \tag{13}\] ## 5 Experiments Dataset. We evaluated our method on the DIPS dataset [51], which is currently the largest protein complex structure dataset mined from the Protein Data Bank and tailored to the rigid body docking assumption. We adopted the same split as defined in EquiDock [44], where the train/validation/test splits are based on the protein family. After excluding a few structures with more than 10K atoms, we obtain 14,225 PDB structures for the training set, 368 for the validation set, and 393 for the test set. To generate subsets of N-body protein complexes, we extracted connected subgraphs of \(N\) chains and removed complexes with more than ten chains. The statistics of the curated dataset are given in Table 1. We also performed a binary docking evaluation in Table 3, using the default settings provided by [44], which included 100 randomly selected test pairs from the DIPS dataset. We refer the readers to Appendix B for more detailed information on the mentioned datasets as well as the curation implementation.
\begin{table} \begin{tabular}{c c c c} \hline \hline N-Body & Train & Valid & Test \\ \hline 2 & 30,299 & 748 & 887 \\ 3 & 19,664 & 441 & 593 \\ 4 & 16,415 & 303 & 429 \\ 5 & 14,105 & 251 & 257 \\ 6 & 10,689 & 281 & 114 \\ 7 & 5,881 & 220 & 30 \\ 8 & 2,143 & 90 & 5 \\ 9 & 545 & 20 & 0 \\ 10 & 53 & 2 & 0 \\ \hline \hline \end{tabular} \end{table} Table 1: **Curated dataset statistics.**
Figure 4: **Complex Prediction Visualization.** Qualitative comparisons between our approach and baseline approaches on the trimeric protein samples.
Evaluation Metrics. We use two metrics to measure the quality of the docking predictions: Complex Root Mean Square Deviation (C-RMSD) and Interface Root Mean Square Deviation (I-RMSD). These two metrics are defined below: given the ground truth \(\mathbf{P}^{*}\in\mathbb{R}^{3\times\sum_{k=1}^{N}n_{k}}\) and the predicted complex structure \(\mathbf{P}\in\mathbb{R}^{3\times\sum_{k=1}^{N}n_{k}}\), we first superimpose them using a rigid registration algorithm (i.e., Kabsch) and then compute the C-RMSD as \(\sqrt{\frac{1}{\sum_{k=1}^{N}n_{k}}\left\|\mathbf{P}^{*}-\mathbf{P}\right\|_{F}^{2}}\). The I-RMSD is calculated similarly, but uses only the coordinates of the residues that are within 8 Å of a residue in the other protein. For a fair comparison between the baselines, we use only the \(\alpha\)-carbon coordinates for the calculation of both metrics.
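The sketch below shows how these two metrics can be computed, reusing the `kabsch` sketch given earlier; the exact handling of the interface selection is our assumption, hedged in the comments:

```python
import torch

def c_rmsd(P_pred: torch.Tensor, P_gt: torch.Tensor) -> torch.Tensor:
    """C-RMSD sketch: superimpose the predicted complex onto the ground
    truth (Kabsch), then take the RMSD. P_pred, P_gt: (n, 3) alpha-carbons."""
    R, t = kabsch(P_pred, P_gt)                    # align prediction onto truth
    aligned = P_pred @ R.T + t
    return ((aligned - P_gt) ** 2).sum(-1).mean().sqrt()

def i_rmsd(P_pred, P_gt, iface_mask):
    """I-RMSD sketch: same computation restricted to interface residues,
    e.g. iface_mask marks residues within 8 A of another chain (assumed)."""
    return c_rmsd(P_pred[iface_mask], P_gt[iface_mask])
```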
Experimental Configurations. We train our model using the AdamW optimizer with a learning rate of \(10^{-4}\) and a weight decay of \(10^{-3}\), for 50 epochs with a batch size of 6 using a fixed learning rate schedule. Model selection is based on performance on the validation set; the best validation model is then tested on the test set. Unless otherwise noted, we train and test separately on the different N-body data subsets. For data augmentation, we randomly permute the order of the inputs during training and testing, and we also randomly rotate and translate all inputs in space. Our implementation is based on the PyTorch toolkit [53], with extensive use of the ROMA [52] package. We will make our code available to the public. Multimeric Protein Docking. We compare SyNDock to the widely used multimeric protein docking method Multi-LZerD [13]. Note that there are very few open-source, user-friendly multimeric docking programs available for comparison, and some of them (CombDock [16], RL-MLZerD [14]) consistently crashed. For the different N-body subsets, we randomly select a certain number of test samples from the test set for comparison (60 samples for trimer, 30 samples for tetramer). From Table 2 we see that SyNDock significantly outperforms Multi-LZerD, especially in the tetramer docking subset (\(N=4\)), where the former reduces the mean C-RMSD by up to 4.5 Å compared to the latter. This observation holds true for the I-RMSD metric as well, demonstrating that our method also has a clear advantage in binding site identification. Furthermore, our method exhibits a speed improvement of several orders of magnitude compared to the baseline method (see Figure 5). Footnote 2: [http://bioinfo3d.cs.tau.ac.il/CombDock/download/](http://bioinfo3d.cs.tau.ac.il/CombDock/download/) Footnote 3: [https://github.com/kiharalab/RL-MLZerD](https://github.com/kiharalab/RL-MLZerD) Binary Protein Docking. SyNDock can be scaled down to predict binary protein docking. We compare SyNDock to the popular binary protein docking baselines: Attract [7], HDock [41], ClusPro [9], PatchDock [54], and the recently proposed EquiDock [44]. As shown in Table 3, SyNDock is competitive and often outperforms the baselines, demonstrating that our method also generalizes to the binary setting.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{Trimer (N=3)} & \multicolumn{6}{c}{Tetramer (N=4)} \\ & \multicolumn{3}{c}{Complex-RMSD \(\downarrow\)} & \multicolumn{3}{c}{Interface-RMSD \(\downarrow\)} & \multicolumn{3}{c}{Complex-RMSD \(\downarrow\)} & \multicolumn{3}{c}{Interface-RMSD \(\downarrow\)} \\ \hline Methods & Median & Mean & Std & Median & Mean & Std & Median & Mean & Std & Median & Mean & Std \\ \hline Multi-LZerD & 22.17 & 24.06 & 6.08 & 23.16 & 24.44 & 6.95 & 28.51 & 29.02 & 5.83 & 26.75 & 29.23 & 8.71 \\ SyNDock & 21.02 & 22.01 & 7.69 & 17.80 & 18.35 & 6.34 & 23.92 & 24.56 & 7.12 & 19.75 & 20.37 & 5.85 \\ \hline \hline \end{tabular} \end{table} Table 2: **Multimeric Complex Prediction Result**. Comparison between Multi-LZerD [13] and SyNDock in two performance metrics on the two curated subsets (N=3, 4). Lower is better for all values. SyNDock significantly outperforms Multi-LZerD in terms of docking accuracy, and also has a clear advantage in the identification of docking sites.
Computation Efficiency. Figure 5 compares the inference efficiency of the different approaches. We record the running time of all complex structure prediction algorithms on the trimer and tetramer test sets.
Note that SyNDock predicts the global transformations in a single shot, resulting in speeds several orders of magnitude faster than the baseline method. This is particularly advantageous for intensive screening applications that need to search over large spaces, such as drug discovery. In addition, as the number of input proteins increases, the speed advantage of SyNDock becomes increasingly apparent. Ablation Study. We conduct an ablation study on the trimer test set to assess the significance of the core modules in SyNDock. As a baseline, we implement a heuristic method called EquiDock-_Seq_, which sequentially docks proteins using a binary docking model (EquiDock). As shown in Table 4, EquiDock-_Seq_ fails to effectively address the challenges of multiple protein docking, resulting in C-RMSD and I-RMSD scores of 25.7 and 26.2, respectively. To evaluate the impact of the learnable synchronization module, we introduce the synchronization module into the pipeline, enabling a learning-centric solution (SyNDock _wo IEMNN_). This integration significantly improves the performance, achieving C-RMSD and I-RMSD scores of 22.9 and 19.8, respectively. Next, we investigate the effectiveness of the proposed IEMNN backbone by replacing the backbone used in the previous experiments. For the baseline approach (SyNDock _wo sync_), we observe a slight improvement in I-RMSD to 20.4, while the C-RMSD remains nearly unchanged. The best performance is achieved by the standard SyNDock, with C-RMSD and I-RMSD scores of 22.0 and 18.4, respectively, demonstrating the effectiveness of the IEMNN backbone in aggregating and propagating information across multiple graphs. Overall, the ablation study highlights the importance and effectiveness of the proposed modules in improving multimeric protein docking performance. Visualization. In Figure 4, we show a number of successful examples where SyNDock significantly outperforms the baselines on the trimeric protein subset. ## 6 Conclusions In this paper, we propose a novel multimeric protein docking model, SyNDock, which allows effective learning to assemble the complex in a globally consistent manner, opening a new perspective on multimeric complex docking. It enables the automatic assembly of accurate multimeric complexes within a few seconds, with performance superior or comparable to recent advanced approaches. Extensive experiments on our curated dataset show that SyNDock outperforms advanced methods in both accuracy and speed. Regarding limitations, we acknowledge two main aspects. First, constrained by the scarcity of resolved multimeric protein structures, we expanded the dataset by extracting substructures. However, this may introduce biases and deviate from the true distribution of protein complexes in nature. This limitation is inherent to the data generation process and can potentially impact the performance and generalizability of the trained models. Second, our approach relies on the rigid-body assumption, which restricts its applicability to flexible protein docking scenarios. For future work, we would like to incorporate more domain knowledge and extend the current framework to more applications.
\begin{table} \begin{tabular}{l c c c c} \hline \hline Method & IEMNN & Sync & C-RMSD & I-RMSD \\ \hline EquiDock-_Seq_ & \(\times\) & \(\times\) & 25.7 & 26.2 \\ \hline SyNDock & \(\checkmark\) & \(\times\) & 25.5 & 20.4 \\ SyNDock & \(\times\) & \(\checkmark\) & 22.9 & 19.8 \\ SyNDock & \(\checkmark\) & \(\checkmark\) & 22.0 & 18.4 \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation study** (Trimer test set) Figure 5: **Inference time distribution.** Both methods are tested on the same hardware. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Complex-RMSD} & \multicolumn{3}{c}{Interface-RMSD} \\ \hline Methods & Median & Mean & Std & Median & Mean & Std \\ \hline Attract & 17.17 & 14.93 & 10.39 & 12.41 & 14.02 & 11.81 \\ HDock & 6.23 & 10.77 & 11.39 & 3.90 & 8.88 & 10.95 \\ ClusPro & 15.76 & 14.47 & 10.24 & 12.54 & 13.62 & 11.11 \\ PatchDock & 15.24 & 13.58 & 10.30 & 11.44 & 12.15 & 10.50 \\ EquiDock & 13.29 & 14.52 & 7.13 & 10.18 & 11.92 & 7.01 \\ \hline SyNDock & 12.95 & 14.27 & 6.85 & 9.97 & 11.93 & 5.84 \\ \hline \hline \end{tabular} \end{table} Table 3: **Binary Complex Prediction Results.** Comparison of different binary docking methods.
2302.11397
Measurement of telescope transmission using a Collimated Beam Projector
With the increasingly large number of Type Ia supernovae being detected by current-generation survey telescopes, and even more expected with the upcoming Rubin Observatory Legacy Survey of Space and Time, the precision of cosmological measurements will become limited by systematic uncertainties in flux calibration rather than statistical noise. One major source of systematic error in determining SNe Ia color evolution (needed for distance estimation) is uncertainty in telescope transmission, both within and between surveys. We introduce here the Collimated Beam Projector (CBP), which is meant to measure a telescope's transmission with collimated light. The collimated beam more closely mimics a stellar wavefront as compared to flat-field based instruments, allowing for more precise handling of systematic errors such as those from ghosting and filter angle-of-incidence dependence. As a proof of concept, we present CBP measurements of the StarDICE prototype telescope, achieving a standard (1 sigma) uncertainty of 3 % on average over the full wavelength range, measured with a single beam illumination.
Nicholas Mondrik, Michael Coughlin, Marc Betoule, Sébastien Bongard, Joseph P. Rice, Ping-Shine Shaw, Christopher W. Stubbs, John T. Woodward, LSST Dark Energy Science Collaboration
2023-02-22T14:32:55Z
http://arxiv.org/abs/2302.11397v1
# Measurement of telescope transmission using a Collimated Beam Projector ###### Abstract With the increasingly large number of Type Ia supernovae being detected by current-generation survey telescopes, and even more expected with the upcoming Rubin Observatory Legacy Survey of Space and Time, the precision of cosmological measurements will become limited by systematic uncertainties in flux calibration rather than statistical noise. One major source of systematic error in determining SNe Ia color evolution (needed for distance estimation) is uncertainty in telescope transmission, both within and between surveys. We introduce here the Collimated Beam Projector (CBP), which is meant to measure a telescope's transmission with collimated light. The collimated beam more closely mimics a stellar wavefront as compared to flat-field based instruments, allowing for more precise handling of systematic errors such as those from ghosting and filter angle-of-incidence dependence. As a proof of concept, we present CBP measurements of the StarDICE prototype telescope, achieving a standard (1\(\sigma\)) uncertainty of \(3\) % on average over the full wavelength range, measured with a single beam illumination. Calibration, photometry, detectors, telescopes. ## 1 Introduction Multi-band photometry permits measurements of much fainter sources than spectroscopy while still preserving low spectral resolution components of the observed SED (spectral energy distribution). In general, there is a large number of photometric measurements that are useful to astronomers, including, but not limited to: top-of-atmosphere (TOA) flux, flux corrected for Galactic extinction, and flux corrected for both Galactic and host-galaxy extinction, in particular for supernova (SNe) cosmology [1, 2]. Only instrumental magnitudes are directly measured, and these include contributions from telescope transmission idiosyncrasies and atmospheric transmission variations that serve to obscure astrophysically interesting features [3]. Knowledge of instrumental passbands is particularly useful when attempting to determine magnitudes of sources having SEDs that are dissimilar to standard flux calibration stars, as in the case of SNe Ia cosmology. With the advent of large-scale transient surveys such as the Legacy Survey of Space and Time (LSST) [4, 5] undertaken by the Rubin Observatory, the number of Type Ia supernovae will be large enough that systematic calibration uncertainty will become the limiting factor in the determination of cosmological parameters [6]. Additionally, state-of-the-art survey calibration schemes, such as the Forward Global Calibration Method [7], can take passband measurements as inputs, thereby increasing the accuracy and precision of survey measurements by accounting for the variations in the spectra of field stars. The goal of photometric calibration is to arrive at a measure of brightness that is an accurate measurement of the source SED integrated over a given bandpass and that accounts for temporal variations in the atmospheric and optical transmission functions. The material presented here is focused on the problem of determining the optical transmission function of the telescope, but an overview of factors impacting the photometric calibration of surveys can be found in Ref. [8]. By optical transmission function, we mean the fraction of photons that enter the telescope, are converted to photo-electrons, and are subsequently read out of the CCD (charge-coupled device).
The simplest method of measuring this quantity would then be to send a known number of photons down the telescope's optics, and compare the number measured on the CCD to the number originally emitted. The usual way to accomplish this flux calibration in astronomy is by tying back to almost pure-hydrogen white dwarf stars, for which we believe we are able to estimate the flux based on spectral line measurements and radiative transfer calculations [9]. Tying to standards that can actually be built on Earth has also been tried, either by creating a proxy of a black body that can be directly observed by the telescope [10], or by creating a calibrated stable light source using a calibrated detector, usually a photodiode provided by the National Institute of Standards and Technology (NIST) [3]. While this latter approach relies on a metrology chain with more intermediate steps, it capitalizes on the fact that a NIST photodiode's quantum efficiency (QE) can be calibrated with an uncertainty on the order of \(0.1\%\) over the full optical range (400 nm to 1000 nm). This is the approach we choose for the Collimated Beam Projector: by using a NIST-calibrated photodiode to normalize away the variations in the calibration light source, we seek to transfer a known flux scale, such as the one defined by the Primary Optical Watt Radiometer [11] (POWR) at NIST, onto a telescope CCD, which can then transfer that calibration to an astrophysical source. Technically speaking, POWR provides an optical-watt _power_ scale, which is distinct from a _flux_ scale by a factor of area (Flux \(\equiv\) Power/Area). Because we are interested here only in chromatic variations, and not absolute calibration, we may treat the two as equivalent. Although an _absolute_ flux scale is desirable (the transmission of the system is then known exactly at all wavelengths), in many practical cases a _relative_ flux scale is sufficient. By relative flux scale, we mean that the transmission ratio \(T(\lambda)/T(\lambda^{\prime})\) is known between any two generic wavelengths, but the overall grayscale normalization is unknown. Said more simply, we need to ensure that there are no temporal or spatial (over the detector) variations in observed ratios of fluxes (colors), and that these ratios are consistent with the true flux ratios. ### Extant transmission measurement devices Astronomers have wrangled with the challenges of calibrating CCD-based observations since the advent of the devices in the late 20\({}^{\rm th}\) century. Bias frames, flat field exposures, star flats, and many other types of data are frequently used by astronomers to contend with the temporal and spatial non-uniformity of telescope optical systems. The flat field is of particular interest to the challenge of determining instrumental passbands because it is an attempt to standardize the response of each pixel in a telescope system. A typical flat field system uses a white-light source to illuminate a lambertian reflecting screen, usually far out-of-focus, which is observed by the telescope, with the end result being a "flat" (constant surface brightness) image on the detector. The flat field obtained, which has different scattered and stray light behavior than a star-field science beam, generally tends to homogenize variability arising from the screen construction, and is used to normalize away pixel-to-pixel variations in images of astrophysical sources. However, as has been pointed out (e.g., Refs.
[12, 13]), naive application of flat fields may introduce systematic errors that limit the ultimate uncertainty of the measurement. A flat field may not perfectly mimic the response to astronomical sources due to several factors, such as the difficulty of distinguishing variations in pixel size, caused by departures from a rectilinear grid (due, for example, to lateral electric fields within the sensor that distort pixel gridlines), from true variations in pixel QE. One of the fundamental issues with conflating these two processes is that variations in pixel size are flux-conserving (although a given photo-electron may end up in a neighboring pixel, it is still present and can be counted), while variations in QE are not (the photo-electron is never generated in the first place). The sheer ubiquity of flat field screens at observatories does, however, make them a tantalizing component to leverage in the challenge of measuring telescope throughputs. It remains, then, to devise a scheme by which variations in pixel size can be decoupled from variations in QE. One method is to use tunable, narrowband light to illuminate a flat field screen, generating a data cube of flats at each wavelength over the spectral range of interest. By looking at the signal variation of each pixel with wavelength, one can remain agnostic to the _average_ size of the pixel. We stress _average_, because the method is still sensitive to the wavelength-dependent component of pixel size variations. As an example, in the presence of a static lateral electric field (for example, from impurity gradients), a photo-electron from a blue photon, which converts near the back surface of the CCD, will experience greater deflection than one from a red photon, which converts deeper in the device. This would make the pixel in question appear less responsive in the blue than in the red, and again the flux-conserving pixel size variations can be mistaken for QE variation. Overall, this type of method reduces confusion between pixel size and QE, becoming instead limited by the wavelength-derivative of pixel size variations. More appropriately, such methods might be said to be limited by the _differential_ size-derivative of pixels, since leakage of photo-electrons out of a pixel can be compensated for by leakage into the pixel from its neighbors. Several experiments and devices have been constructed along these lines: for example, the method proposed in Ref. [3] and implemented in Ref. [14] used a photodiode-monitored, monochromatic, tunable laser source reflected off of a flat field screen to measure the transmission function of the PanSTARRS telescope. The DECal system [15, 16] on the Cerro Tololo Inter-American Observatory's Blanco telescope similarly illuminates a flat field screen with a combination of LEDs and a white-light powered monochromator, which is monitored by a photodiode. Challenges remain for flat-field based systems, however. In particular, there are systematic differences in scattered light paths between flat field and stellar illumination patterns. These differences would result in systematic errors when deriving transmission measurements for point sources. It would be ideal, then, to illuminate the telescope with a wavefront similar to that of a star: i.e., full-pupil and planar. This would effectively side-step the challenging task of deriving corrections for point-source images from surface-brightness based flats.
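For concreteness, here is a minimal sketch of the wavelength-scanned flat-cube analysis described above; the function and variable names are hypothetical, and real pipelines add dark subtraction, masking, and screen-illumination corrections:

```python
import numpy as np

def relative_pixel_response(flat_cube, monitor_charge):
    """flat_cube: (n_wavelengths, ny, nx) narrowband flat-field frames;
    monitor_charge: (n_wavelengths,) photodiode charge per exposure.
    Normalizing each pixel's wavelength curve by its own mean keeps the
    result agnostic to the *average* pixel size, as discussed in the text."""
    resp = flat_cube / monitor_charge[:, None, None]   # remove source power variations
    return resp / resp.mean(axis=0, keepdims=True)     # per-pixel relative response vs wavelength
```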
The SCALA system [17] on the University of Hawaii 88-inch Telescope uses a white-light powered monochromator as its illumination source, which is fed into a series of integrating spheres attached to a frame mounted on the interior of the telescope dome. Apertures on the outputs of the integrating spheres are collimated by mirrors and re-imaged onto the focal plane of the Super-Nova Integral Field Spectrograph (SNIFS). Two of the collimated beams are monitored by a Cooled Large Area Photodiode [18], which provides normalization. In this case, full-pupil illumination is traded off against using a collimated beam, with the re-imaging system projecting large (\(1^{\circ}\)) spots onto the focal plane. Tying the calibration of SNIFS to the NIST definition of the optical watt in turn permits the SNfactory to provide, through repeated observations of standard stars over more than a decade, a well-calibrated star network covering a large fraction of the sky observable from Hawaii [19]. The NIST Stars project uses a spectrograph to observe a NIST-calibrated light source and standard stars alternately, thereby transferring the light source calibration to the standard stars1. When the light source is placed sufficiently far away, the wavefront is effectively parallel when entering the telescope, thus generating a full-pupil collimated beam. Footnote 1: [https://www.nist.gov/programs-projects/nist-stars](https://www.nist.gov/programs-projects/nist-stars) The StarDICE project is of the same flavor, but uses a stable, calibrated, poly-chromatic source made of LEDs, placed \(\sim 200\) m from a \(\sim 1\) m focal length telescope, as an artificial star. It aims at providing a network of stars of magnitude between 10 and 13, calibrated with traceability to the SI (Système International d'Unités) through the NIST POWR scale, by observing in turn the artificial star and the CALSPEC stars. This procedure makes the measurement of the transmission of the StarDICE telescope mandatory. In this paper, we present the Collimated Beam Projector (CBP) instrument, a telescope throughput measurement system [20, 21, 22]. In Sec. 2, we explore design considerations in general, and in Sec. 3 we describe the design and components of our CBP system in particular. Section 4 outlines the method for tying CBP measurements back to the flux scale established at NIST, Sec. 5 describes the StarDICE experiment and CBP setup, and Sec. 6 describes the data reduction procedure. Section 7 presents transmission curves for the StarDICE prototype telescope, taken at the Laboratoire de Physique Nucléaire et des Hautes Énergies (LPNHE), and presents lessons learned during this phase. Finally, Sec. 8 reviews planned upgrades and revisions to the system. ## 2 Design Considerations for Optical Transmission Measurement To design an instrument to measure the optical throughput of an imaging system, we must first understand both the optical properties and measurement goals of the system.
Using \(t\) for time, \(\lambda\) for wavelength, \(\mathbf{x}\) for the vector position, \(\boldsymbol{\gamma}\) for a given altitude and azimuth, and \(\mathbf{r}\) for the location on the primary mirror, the flux arriving from a given source on the focal plane, \(\phi(t,\mathbf{x},\boldsymbol{\gamma})\), can be expressed as \[\phi(t,\mathbf{x},\boldsymbol{\gamma})=C(t)\int A(t,\lambda,\boldsymbol{\gamma})\,T(t,\lambda,\mathbf{x},\mathbf{r},\boldsymbol{\alpha})\,S(t,\lambda,\boldsymbol{\alpha})\,d\lambda\,d^{2}\boldsymbol{\alpha}\,d^{2}\mathbf{r}, \tag{1}\] where \(C(t)\) is a constant with respect to focal plane position, telescope pointing, and wavelength (e.g., collecting area, electronic gain), \(A(t,\lambda,\boldsymbol{\gamma})\) is the instantaneous atmospheric transmission function, \(T(t,\lambda,\mathbf{x},\mathbf{r},\boldsymbol{\alpha})\) is the instrumental transmission function, and \(S(t,\lambda,\boldsymbol{\alpha})\) is the total TOA photon flux incident on the primary from sources at angle \(\boldsymbol{\alpha}\) relative to the pointing of the telescope. The integrals are over all wavelengths, relative angles, and the entirety of the primary, respectively. To measure the transmission of a telescope, we must contend with all of the terms above, with a few modifications. Relative to standard astronomical measurements, \(A\) is much suppressed due to shorter path-lengths, and \(S\) is instead the spectrum of the calibration light. Our ultimate goal is to estimate \(T\), the transmission function of the telescope imposed on astrophysical point-source wavefronts. Armed with this expression, we can begin to explore the design requirements for our transmission measurement systems. ### Motivation for using collimated beams A planar wavefront incident on the primary at a defined angle \(\boldsymbol{\alpha}\) has a fixed ghosting pattern as a function of wavelength, set by the geometry of a given optical system. When a telescope is illuminated with a non-planar wavefront, ghosts and other scattered light are superimposed on the target region, resulting in a subtly different transmission function than the one experienced by an astrophysical source. For this reason, flat fields, which illuminate the telescope at all angles, are not ideal calibrators for point sources. Beyond the angle of incidence, the location of a photon's impact on the telescope primary, \(\mathbf{r}\), can play a major role in transmission measurements, particularly for interference filter-based systems that are not designed with variable multi-layer coating thicknesses to account for geometrically-related shifts in bandpasses. For these systems, the incidence angle \(\theta\) at which the light passes through the filter changes the effective transmission of the filter. An approximation of the shift in transmission for a filter at a given incidence angle \(\theta\) and with index of refraction \(n\) is [23] \[T(\lambda,\,\theta)=T\Big{(}\lambda\,(1-\frac{\sin^{2}\theta}{n^{2}})^{-\frac{1}{2}},\,\theta=0\Big{)}, \tag{2}\] i.e., the transmission curve at incidence angle \(\theta\) is essentially a blueshifted version of the normal-incidence (\(\theta=0\)) curve. If the wavefront injected into the telescope is non-planar, there will be a difference between the measured filter transmission and the transmission function experienced by a star, the scope of which is dependent on telescope geometry (in particular, the measured filter passband will be broadened according to the above equation, weighted by the relative photon flux density at each angle \(\theta\)).
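A minimal numerical form of Eq. (2) is sketched below; the value \(n=1.8\) is an assumed effective filter index for illustration only, not a measured property of any filter discussed here:

```python
import numpy as np

def shifted_transmission(T0, wavelengths, theta, n=1.8):
    """Evaluate a filter's transmission at incidence angle theta (radians)
    from its sampled normal-incidence curve T0(lambda), following Eq. (2):
    T(lambda, theta) = T0(lambda * (1 - sin^2(theta)/n^2)^(-1/2))."""
    scale = (1.0 - np.sin(theta) ** 2 / n ** 2) ** -0.5
    return np.interp(wavelengths * scale, wavelengths, T0)
```

Averaging this function over the distribution of incidence angles in a converging or non-planar beam reproduces the passband broadening described above.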
The location of filter edges is of particular concern, so it is worth understanding their impact on transmission measurement devices. Photons with a wavelength located within an edge are highly likely to experience internal reflection within the filter because the transmission significantly deviates from both unity and zero, by definition. These internal reflections can escape the filter and land on the detector, contributing significantly to scattered light. For a collimated system, there is some hope that these scattering patterns approximate (to a degree) those of a stellar wavefront. For a flat field system, whose photon phase-space distribution is different from that of a plane-wave illumination (with a single phase and direction), there is additional systematic uncertainty. Together, these two concerns motivate the use of collimated light, rather than flat fields. It is important to note that here we have assumed that the goal of the survey is to measure point sources - if measurements of surface brightness are required, then projecting a planar wavefront onto the telescope is no longer a requirement, and flat-field based methods are more appropriate. ### Light source requirements A first requirement is based on the tautology that the light source used to measure the transmission function must be capable of emitting light over the entire wavelength range of interest (roughly 300 nm - 1100 nm for optical systems). Assuming negligible attenuation from atmospheric effects (\(A\) is small in setups where the light source is close to the telescope), the relevant wavelength-dependent transmission variations to be measured are those in \(T\). This motivates an additional property of the light source: the optical bandwidth should be small compared to variations expected in \(T\). For interference filter based systems, this scale is set naturally by the filter edges, which transition from opaque to transparent over roughly a 20 nm span. This results in a filter transmission change ratio of around 1 % per \(0.2\) nm. Properly sampling smoothly but rapidly varying filter edges at the percent level therefore suggests optical bandwidths of order 1 nm, which is readily achievable by devices such as monochromators and tunable lasers. In addition to the wavelength requirement, the output flux of the light source, \(S\), must be known at the sub-percent level. ## 3 Design of the Collimated Beam Projector (CBP) We present here an overview of the CBP system; see Refs. [20, 21] for additional details on CBP design. The CBP consists of two components: an imaging system and a light source. A tunable laser (with associated coupling optics) provides high-power, narrowband light for the CBP imaging system. The imaging system is composed of a collimating optic, a focal plane mask, and an integrating sphere, with a fiber optic cable connecting the output of the laser to the integrating sphere. This system is then attached to an alt-az mount that allows the CBP to be pointed remotely. By coordinating the alt-az pointing of the CBP and the telescope, it is possible to point the CBP beam at a different section of the primary while leaving the position on the detector fixed. This process, which we dub "pupil stitching", allows the CBP to scan the full primary mirror (and, by extension, the different paths taken by photons en route to the same detector pixels). This in theory enables a synthetic full-pupil measurement with a collimated beam (essentially emulating a stellar wavefront).
Given hard time constraints, due in particular to the limited availability of the laser at LPNHE, we defer the full pupil measurement to a forthcoming paper (Souverin et al., in prep.). Taking the constraints imposed above and re-writing Eqn. 1 for the input photon distribution provided by a single CBP pointing, we obtain \[\phi_{\mathrm{CBP}}(t,\mathbf{x},\lambda)=C(t)\int_{\mathrm{CBP}}T(t,\lambda,\mathbf{x},\mathbf{r},\boldsymbol{\alpha})\,S(t,\lambda,\boldsymbol{\alpha})\,d^{2}\mathbf{r}, \tag{3}\] where the integral is over the CBP's footprint on the primary, and we have assumed the CBP's output flux \(S(t,\lambda,\boldsymbol{\alpha})\) can be described as \(S(t,\lambda^{\prime},\boldsymbol{\alpha}^{\prime})\,\delta(\lambda^{\prime}-\lambda)\,\delta(\boldsymbol{\alpha}^{\prime}-\boldsymbol{\alpha})\), i.e., the CBP output is monochromatic with a well-defined angle relative to the telescope (indicated by the Dirac delta functions \(\delta(\dots)\)), and that atmospheric attenuation is negligible. In the case where the CBP's footprint is small relative to the primary, one can additionally multiply by another factor of \(\delta(\mathbf{r}^{\prime}-\mathbf{r})\) and assume that the measurement is a point-like sampling of the primary. Alternatively, one might assert that the system's optical transmission is not a strong function of the input beam's location on the primary, and thereby absorb the (now constant) integral into \(C(t)\). In all generality, this latter assertion is untrue, since different input beam locations on the primary usually result in different angles of the converging beam passing through an interference filter before hitting the detector, though the level of deviation is dependent on telescope geometry and is generally smaller for slower optical systems, which operate at smaller angles. We also leave to later work (Souverin et al., in prep.) the discussion of the fact that different areas on a primary mirror can have very different reflectivity properties. The actual measurement of this variability will be part of the strategy developed in forthcoming papers (StarDICE collaboration, 2023-2024) in order to reconstruct the full pupil transmission from CBP patch measurements. ### Tunable Laser The light source for the CBP is an EKSPLA NT-242 tunable laser2, which outputs 3 ns to 6 ns laser pulses at a 1 kHz repetition rate, with a total output power of 0.5 W at roughly 450 nm. The tunable laser uses a non-linear optical process (spontaneous parametric down-conversion, SPDC) inside a crystal placed in an optical parametric oscillator (OPO) to convert an incident photon of frequency \(\nu_{1}\) (the OPO pump beam) into two photons such that \(\nu_{1}=\nu_{2}+\nu_{3}\), with energy and momentum conserved. In the case of the NT-242, these two photons are cross-polarized (formally, this means the process is type-II SPDC), which allows for separation of the "signal" and "idler" beams (the high- and low-frequency output beams, respectively) via a Rochon prism within the laser. The pump beam is provided by a Nd:YAG laser at 1064 nm. This pump beam is tripled in frequency to 355 nm before reaching the OPO, meaning the OPO itself is pumped by a 355 nm beam, which allows access to wavelengths above 355 nm. In order to reach wavelengths below the OPO pump wavelength, the beam is sent through a second harmonic generator (SHG), which doubles the frequency of the incoming light.
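As a quick check of the energy-conservation relation \(\nu_{1}=\nu_{2}+\nu_{3}\) above, the signal and idler wavelengths for a 355 nm pump can be related as follows (a worked example, not part of the instrument software):

```python
def idler_wavelength(pump_nm: float, signal_nm: float) -> float:
    """Energy conservation in SPDC (nu_pump = nu_signal + nu_idler)
    written in wavelengths: 1/lambda_i = 1/lambda_p - 1/lambda_s."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

# A 500 nm signal photon pairs with a ~1224 nm idler; signal and idler
# coincide at the degeneracy point, 2 x 355 nm = 710 nm (see below).
print(idler_wavelength(355.0, 500.0))   # ~1224.1
```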
The SHG allows access to wavelengths below 355 nm, at the cost of much-reduced efficiency. In the end, the NT-242 laser is tunable from below 300 nm to over 2 \(\mu\)m, which is well matched to the sensitivity range of CCDs. The tunable laser provides high flux in a narrow bandpass, \(\lesssim 5\) cm\({}^{-1}\), which corresponds to approximately \(\delta\lambda=0.13\) nm at 500 nm and \(\delta\lambda=0.5\) nm at 1 \(\mu\)m. These are upper limits quoted by the manufacturer, and measurements taken using these systems show bandwidths of 0.08 nm to 0.48 nm between 350 nm and 1100 nm [24]. As these bandpasses are small compared to the accuracy achieved at this stage, we treat the output of the laser as monochromatic. Footnote 2: Identification of commercial equipment to adequately specify an experimental problem does not imply recommendation or endorsement by NIST, nor does it imply that the equipment identified is necessarily the best available for the purpose. This applies to all other commercial products named in this publication. There are additional concerns that must be addressed when using tunable lasers, and we present here the major challenges; for an overview of tunable lasers in optical calibration applications, see Ref. [24]. There are three primary obstacles to overcome: excessive brightness in some regions, low efficiency near the degeneracy point of the system, and incomplete separation of the signal and idler3 beams in the degeneracy region (around 710 nm for the EKSPLA NT-242). To address brightness concerns, we added a reflective neutral density (ND) filter to the fiber coupling system so that the beam can be attenuated. The degeneracy region occurs when \(\nu_{\rm signal}\simeq\nu_{\rm idler}\simeq\nu_{\rm pump}/2\), and is characterized by low output power and poor separation of the signal and idler beams. Low efficiency in the degeneracy region can be overcome either by increasing integration times, or by using a different light source. For example, an OPO pumped at 532 nm would have its degeneracy point at 1064 nm, in a part of the spectrum farther from filter edges. Enhanced separation of the signal and idler beams can be achieved in several ways, for example, with the addition of rotating polarizers or short-/long-pass filters with cut-off/-on wavelengths equal to the OPO pump beam wavelength. Such optical elements are not present in our current fiber coupling scheme, but will be installed for our next iteration. Footnote 3: We remind the reader that the _idler beam_ refers to the second beam obtained after wavelength splitting by the OPO, the _signal beam_ denoting the beam at the desired wavelength. The light first passes through a flip mirror, which optionally steers the beam into a beam dump, shuttering the laser. Afterwards, the beam passes through a variable ND filter mounted on a rotation stage. This filter allows for attenuation of the beam in regions where the laser is so bright as to require sub-second exposure times. The beam is then coupled into a reflective fiber collimator, which is connected to the integrating sphere via an optical fiber. Figure 2 summarizes this setup. ### CBP imaging system The CBP imaging system comprises a Sonnar® CFE Superachromat 5.6/250 mm lens, which images a mask held by a Finger Lakes Instrumentation (FLI) CL1-10 filter wheel backlit by a Labsphere integrating sphere. The lens' focus is not strongly chromatic, which should allow us to forgo re-focusing the CBP optics at each wavelength.
The FLI filter wheel contains two internal wheels; one holds the masks to be re-imaged, and another holds an f-stop aperture, which prevents injection of light at angles too extreme to be focused by the lens. The CBP currently holds 3 imaging masks: a 20 \(\mu\)m pinhole, a 5x5 grid of 20 \(\mu\)m pinholes at 200 \(\mu\)m spacing, and a large 500 \(\mu\)m pinhole. The single 20 \(\mu\)m pinhole allows for precise determination of ghost locations without confusion from nearby pinholes, while the pinhole grid allows for multiplexing of transmission measurements as a function of sensor location. Using the CBP with no pinhole allows for coarse alignment of the CBP and telescope, while the 500 \(\mu\)m pinhole is useful in achieving fine alignment and for providing a resolved, locally flat region on the detector. The 500 \(\mu\)m pinhole is also required for calibrating the CBP, as the smaller pinholes do not provide sufficient brightness. It should be noted that such small pinholes tend to have large variations in their diameters; the manufacturer of the masks used here, Lenox Laser, quotes a \(\pm 10\) % tolerance on the diameter, which implies pinhole-to-pinhole variations in area of up to 50 % in the worst-case scenario (see the check after this section). In addition to the output port and light injection port, there are two devices attached to the integrating sphere. The first is the monitor photodiode, a Thorlabs SM1PD2A, which normalizes away temporal variations in laser power. The photodiode is connected to a Keithley 6514 charge-integrating electrometer, which measures the amount of charge collected by the photodiode during an exposure. Because there is a saturation limit to the collection capacity of the electrometer's measurement capacitor, an aperture of 1 mm is placed in front of the photodiode, allowing for use of integration times commensurate with those needed for telescope measurements. The second device connected to the integrating sphere is an Ocean Optics QE65000 fiber-fed USB spectrograph, which monitors the wavelength emitted by the laser. The integrating sphere itself is necessary to ensure that all pinholes are illuminated homogeneously and achromatically with respect to the spectrograph and the monitoring photodiode, since we cannot monitor the flux emitted by each individual pinhole. If the surface brightness seen by the pinhole grid varied with wavelength across the grid, it would imprint itself as a focal plane transmission gradient on the telescope. The integrating sphere also ensures that the light seen by the monitoring equipment (photodiode, spectrograph) has the same surface brightness as the light illuminating the pinhole grid. It has to be noted that these desirable features come at the price of decreasing the surface brightness of the pinholes in direct proportion to the size of the integrating sphere used. In the current design, the integrating sphere diameter (6 inches) has been fixed by the desire to multiplex the transmission measurement using a grid of multiple pinholes. This in turn resulted in a large flux dilution that required a light source as powerful as the laser.
Figure 1: An image of the CBP installed in the StarDICE lab at LPNHE.
Figure 2: Schematic of the CBP light injection. The telescope being measured is placed in front of the CBP optics.
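The worst-case 50 % area variation quoted above follows from the quadratic dependence of area on diameter, as this quick check shows:

```python
# A +/-10 % spread in diameter translates quadratically into area, so two
# pinholes at opposite extremes of the tolerance can differ in area by ~50 %.
worst_ratio = (1.10 / 0.90) ** 2      # largest / smallest allowed area
print(f"{(worst_ratio - 1) * 100:.0f} % worst-case area difference")  # ~49 %
```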
## 4 Establishing the CBP flux scale In order to transfer the detector-based flux scale established at NIST to the CBP, the CBP optics were sent to NIST to be calibrated against one of the trap detectors used to hold the POWR optical watt scale. This step is necessary because the CBP input flux is monitored inside the integrating sphere, but the light must pass through an additional strongly chromatic element (the collimating lens) prior to entering the telescope. It is therefore necessary to have an additional calibration, at each wavelength of interest, between the integrating sphere flux measured by the CBP monitoring photodiode and the actual light emitted by the system. This calibration took place in three steps; an overview of the calibration process is shown in Fig. 3. The CBP was not calibrated directly against the calibrated photodiode because that photodiode alone is too small to sample the entire CBP output beam, and it had poor signal-to-noise when only subsampling the beam. A different calibration scheme, using larger solar cells tied to the NIST optical watt definition [25], is currently being implemented and will be presented in a forthcoming publication (Souverin et al., in prep.). During all steps, all optics and photodiodes were in a light-tight box, and the laser was coupled into the box through a fiber-optic cable. During each step of measurement, the laser was tuned automatically across the spectral range for calibration. Because the laser delivers light pulses of \(\sim 10\) ns at a frequency of 1 kHz, we chose to operate the Keithley 6514 in charge accumulation mode. For this mode of operation, a computer-controlled shutter at the laser output limits the exposure time (typically 1 s) of the laser to the fiber. The two photodiodes (the CBP monitor photodiode and the calibrated photodiode, see below) were each connected to a separate charge accumulation electrometer, and both electrometers started charge accumulation just before the opening of the laser shutter and stopped accumulation just after the closing of the laser shutter. Subsequently, the total amount of accumulated charge, in coulombs, was read out from each electrometer. In step 1, the CBP illuminated the entrance pupil of a 100 mm diameter refracting telescope with a NIST-calibrated photodiode (referred to as the "calibrated photodiode" in the following and in steps 1 and 2 of Fig. 3) at its focus. The focused beam underfilled the calibrated photodiode, ensuring the full collection of the incoming beam. The charge \(Q_{\mathrm{CBP}}\) on the CBP monitor photodiode and the charge \(Q_{\mathrm{Telescope}}\) on the calibrated photodiode were read out after one laser exposure cycle at each wavelength. The ratio \(Q_{\mathrm{CBP}}/Q_{\mathrm{Telescope}}\) then references the CBP photodiode to the calibrated photodiode. Data for this step, plotted as \(Q_{\mathrm{Telescope}}/Q_{\mathrm{CBP}}\), are shown in Fig. 4a. The next two steps reference the calibrated photodiode to the POWR-calibrated trap detector through an intermediate monitor photodiode. The trap detector used here is a 3-element arrangement of photodiodes such that a plane-parallel beam of incoming light must make 5 reflections before escaping, boosting the effective quantum efficiency of the trap relative to a single photodiode.
These two steps use a different fiber optic cable than the first step, and the fiber optic transmitting the laser light was passed through a speckle reducer that modulated a section of the fiber at high frequency to mix the modes. Within the light-tight box, the fiber output port was placed at the focus of a 100 mm, f/2.8 refractive collimator. The resulting collimated beam was passed first through an aperture to reduce its diameter to 5 mm, then through a beamsplitter with an intermediate monitor photodiode (10 mm \(\times\) 10 mm, called the monitor diode in Fig. 3) set to capture the reflected beam. For step 2, the beam transmitted through the beamsplitter illuminated the refracting telescope with the calibrated photodiode at its focus. The charge \(Q_{\mathrm{Telescope}}\) on the calibrated photodiode and the charge \(Q_{\rm Monitor}\) on the intermediate monitor photodiode were read out at each wavelength, so the ratio \(Q_{\rm Telescope}/Q_{\rm Monitor}\) references the calibrated photodiode to the intermediate monitor. For step 3, the beam transmitted through the beamsplitter illuminated and completely underfilled the trap detector. The charge \(Q_{\rm Monitor}\) on the intermediate monitor photodiode and the charge \(Q_{\rm Trap}\) on the trap detector were read out at each wavelength, so the ratio \(Q_{\rm Monitor}/Q_{\rm Trap}\) references the intermediate monitor to the trap. Data for the combined results of these two steps are plotted as \(Q_{\rm Telescope}/Q_{\rm Trap}\) in Fig. 4b. We therefore measure, at each wavelength, three charge ratios: \(Q_{\rm CBP}/Q_{\rm Telescope}\), \(Q_{\rm Telescope}/Q_{\rm Monitor}\), and \(Q_{\rm Monitor}/Q_{\rm Trap}\), each ratio corresponding to one of the three measurement steps. The product of these gives \(Q_{\rm CBP}/Q_{\rm Trap}\), \[\frac{Q_{\rm CBP}}{Q_{\rm Trap}}=\frac{Q_{\rm CBP}}{Q_{\rm Telescope}}\frac{Q_{\rm Telescope}}{Q_{\rm Monitor}}\frac{Q_{\rm Monitor}}{Q_{\rm Trap}} \tag{4}\] The CBP calibration factor \(T_{\rm CBP}(\lambda)\), defined at each wavelength as the number of photons out of the CBP per photoelectron measured by the CBP monitor photodiode, is thus given by: \[T_{\rm CBP}(\lambda)=\frac{Q_{\rm Trap}}{Q_{\rm CBP}}\frac{1}{EQE_{\rm Trap}} \tag{5}\] where \(EQE_{\rm Trap}\) is the external quantum efficiency of the trap detector at each wavelength, as calibrated by POWR. The resulting calibration factor is plotted in Fig. 4c. Since this paper aims at demonstrating the ability of a CBP to measure the chromatic variations of telescope and filter transmissions, we do not keep track of the absolute grey scale and determine \(T_{\rm CBP}\) only up to an arbitrary grey factor, without propagating the ratios of the photodiodes' quantum efficiencies.
Figure 3: A schematic illustrating the three-step process used to transfer the POWR-traceable calibration of a NIST trap detector to the CBP. Step 1 measures the ratio \(Q_{\rm CBP}/Q_{\rm Telescope}\), measured respectively by the CBP monitor photodiode and the NIST-calibrated photodiode. Likewise, the ratios \(Q_{\rm Telescope}/Q_{\rm Monitor}\) and \(Q_{\rm Monitor}/Q_{\rm Trap}\) for steps 2 and 3 are obtained by comparing the measurements of the monitor diode with the NIST-calibrated photodiode and the calibrated trap detector, respectively. The product of the ratios, multiplied by the responsivity of the trap detector, gives the responsivity of the CBP in units of [A W\({}^{-1}\)]. The inverse of the CBP responsivity is defined to be the CBP calibration factor. The calibrated photodiode referred to in steps 1 and 2 is a NIST-calibrated photodiode with respect to POWR.
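Combining Eqs. (4) and (5) is a one-line computation once the three ratio curves are in hand; the sketch below assumes all inputs are interpolated onto a common wavelength grid:

```python
def cbp_calibration_factor(r1, r2, r3, eqe_trap):
    """Sketch of Eqs. (4)-(5). r1 = Q_CBP/Q_Telescope (step 1),
    r2 = Q_Telescope/Q_Monitor (step 2), r3 = Q_Monitor/Q_Trap (step 3),
    each an array on a common wavelength grid; eqe_trap is the trap EQE.
    Returns T_CBP, photons out of the CBP per monitor photoelectron,
    known only up to the arbitrary grey scale discussed above."""
    q_cbp_over_trap = r1 * r2 * r3                 # Eq. (4)
    return 1.0 / (q_cbp_over_trap * eqe_trap)      # Eq. (5)
```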
The NIST trap responsivity is known with a standard uncertainty of about 0.1 %. Statistical standard uncertainties of the charge ratio measurements in each of the three steps of CBP calibration described above were 0.1 %, 0.5 %, and 0.5 %, respectively, in the region between 400 nm and 700 nm, and 0.1 %, 1 %, and 1 %, respectively, in the region \(\lambda>\) 700 nm. Uncertainty from systematic effects such as CBP scattered light, telescope uniformity and scattered light, spectral purity, and wavelength calibration has not been estimated carefully, but could add another few percent. Another potential source of systematic error is that this procedure uses the 500 \(\mu\)m pinhole, and thus relies on the assumption that changing from the large, 500 \(\mu\)m pinhole in the CBP to a different pinhole setup with different sizes is achromatic. We list these effects for completeness but do not add them to the quantitative estimate of systematics in the later telescope transmission measurements with this CBP. We believe the periodic structure seen in the telescope-trap ratio for \(\lambda<400\) nm and \(\lambda>700\) nm to be due to interference in the AR (anti-reflection) coating of the telescope lenses. This structure is not seen in the first step of measurements because the collimated laser beam used in the second and third steps is significantly smaller (5 mm) than the exit pupil diameter of the CBP (approximately 45 mm); the larger illumination region in the first step averages the signal over enough different de-phased regions that the periodic structure is undetectable. To reduce the effect of this systematic on our analysis, we apply a low-pass filter to the data in the affected regions, with the result shown in red in Fig. 4c. The sharp cutoff for \(\lambda<400\) nm likely arises due to the AR coating of the CBP collimating lens. As we will show, the uncertainty in the CBP's calibration becomes the dominant term for wavelengths longer than approximately 800 nm, and represents an area of significant potential improvement, where we expect our new scheme using solar cells to yield significantly better results. ## 5 CBP measurements of the StarDICE telescope transmission ### The StarDICE experiment The StarDICE experiment aims at anchoring fluxes of currently used spectrophotometric standard stars (SPSS) to the scale defined by POWR. In order to reconstruct fluxes that are free of atmospheric effects, the experiment envisions a long-duration follow-up of SPSS ground-level fluxes with a dedicated telescope, whose absolute full-pupil transmission is monitored for the duration by observations of a calibrated artificial star with NIST traceability. The strategy is to average out the atmospheric transmission variations, given a long enough lever arm in time, and to use slitless spectro-photometry to constrain the absorption spectrum of the atmosphere. The details of this procedure will be left to a set of forthcoming papers (StarDICE collaboration, 2023-2024), but are expected to build upon the successes achieved by cosmological supernova surveys, as described for example in [7]. A simple approach for the artificial star is to achieve quasi-perfect plane-wave illumination of the telescope by observing spherical waves emitted by small light sources at long (but finite) distances. In practice, the exercise is easier to achieve for small apertures, for which the long range remains within 200 m.
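A back-of-the-envelope check shows why small apertures make this work: the sagitta of a spherical wavefront across an aperture of diameter \(D\) from a source at distance \(d\) is approximately \(D^{2}/(8d)\). The numbers below (a 25 cm aperture, matching the 10" telescope described next, and a 200 m source) are illustrative:

```python
# Departure of a spherical wavefront from a plane across the aperture:
# sag ~= D**2 / (8 * d), the standard small-angle sagitta approximation.
D, d = 0.25, 200.0                  # aperture diameter [m], source distance [m]
print(f"wavefront sag: {D**2 / (8 * d) * 1e6:.0f} um")   # ~39 um
```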
**Fig 4**: Results from the calibration chain described in Sec. 4. (a): Ratio between calibrated photodiode response and CBP monitor photodiode response. (b): Ratio between calibrated photodiode response and trap detector response. (c): The calibration factor of the CBP. The red curve has been low-pass filtered to suppress the interference fringe signal from the calibration telescope-trap measurement. (d): The relative difference between the raw and low-pass filtered CBP system throughputs, with dashed red lines drawn at \(+/-1\) %. The sharp cutoff at approximately 400 nm seen in (a) and (c) is attributed to the AR coating on the CBP lens. The residuals in (d) are used to estimate the systematic CBP calibration uncertainty due to fringing. A test of the proposed concept was performed in 2018 at the Observatoire de Haute Provence (OHP) with a 10" f/4 Newtonian telescope. The focal plane of the telescope was equipped with an SBIG ST-7XME camera behind a 5-slot filter wheel with \(bvRI\) filters and 1 empty slot. The \(b\) and \(v\) filters were interference filters, while the \(R\) and \(I\) filters were colored glass. The camera sensor is a grade 1 front-illuminated Kodak KAF-0402ME CCD, cooled to \(\sim-10\)\({}^{\circ}\)C with a Peltier junction. The \(6.91\) mm\(\times 4.6\) mm active area is divided into \(765\times 510\) pixels of \(9\)\(\mu\)m\(\times 9\)\(\mu\)m. A set of 18 narrow-spectrum LEDs, attached 113.4 m away from the primary mirror vertex, was observed at the beginning and end of each observation night to monitor the evolution of the instrument throughput around 18 different wavelengths, the rest of the night being devoted to the photometric follow-up of SPSS in the 4 \(bvRI\) filters. The test demonstrated sub-percent precision of the nightly calibration by the point-like artificial star, while collecting photometry for a handful of SPSS with \(\sim 0.01\) mag photon statistics on individual images. The test was brought to a halt by a sudden and rapid change in the apparent instrument throughput. The subsequent analysis of calibration data demonstrated that the evolution was entirely attributable to a change of the sensor readout electronic gain, with no apparent change of the detector quantum efficiency according to the artificial star observations. We present here a measurement of the monochromatic throughput of this test instrument, which is necessary to interpret the broadband stellar and LED measurements. The measurement occurred a posteriori, using the impaired sensor. The sensor readout gain appears to have settled at a different but stable value, so that the most adverse effect is an increased readout noise, of around 16 e\({}^{-}\). We stress for clarity that none of the measurements done at OHP are discussed here. We only report the procedure done a posteriori at the lab to characterize the telescope and filters used in the StarDICE demonstration program. ### Experimental setup The CBP system was set up in the StarDICE lab at LPNHE, with the CBP collimator approximately 1.5 m away from the StarDICE primary (see Fig. 1). Images were taken in dark/light pairs: for "dark" images, the StarDICE camera shutter was opened, but the laser shutter remained closed. Subtraction of these image pairs then corrects for scattered light from contaminating sources. For charge measurement, the CBP electrometer was configured to take 10 readings before and after closing the laser shutter, which permits measurement of background photodiode charge levels.
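The pair subtraction and the electrometer baseline handling just described can be sketched as follows; this is a minimal illustration under the stated data layout (2-D arrays for the frames, short arrays of pre/post shutter readings), with names that are ours rather than the pipeline's.

```python
import numpy as np

def reduce_exposure(light_img, dark_img, q_pre, q_post):
    """Difference a dark/light image pair and estimate the net photodiode
    charge for one CBP exposure (illustrative names and data layout).

    light_img, dark_img : CCD frames taken with the laser shutter open/closed
    q_pre, q_post       : the ~10 electrometer readings taken before opening
                          and after closing the laser shutter
    """
    # Pair subtraction removes scattered/ambient light common to both frames
    clean_img = light_img.astype(float) - dark_img.astype(float)
    # Net charge accumulated while the laser shutter was open; the pre-shutter
    # readings set the baseline, the post-shutter readings the final level
    q_signal = np.median(q_post) - np.median(q_pre)
    return clean_img, q_signal
```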
During exposures, the monitor spectrograph was continually read out and saved at a fixed rate, typically 1 Hz or 10 Hz, depending on total exposure time. The telescope CCD's exposure time includes an additional \(t_{\rm buffer}=10\) seconds relative to the requested exposure time from the CBP (\(t_{\rm exp}\)), in order to allow for communication between the telescope and CBP systems. Upon completing the exposure, the photodiode and spectrograph time-series are stored alongside the image. The CBP scan procedure can be summarized as:

1. Open the StarDICE camera shutter; the laser shutter remains closed. Expose for time \(t_{\rm exp}+t_{\rm buffer}\). Close the camera shutter. This frame is used as the background subtraction image for the following exposure.
2. Open the StarDICE camera shutter. Reset the electrometer charge and take 10 charge measurements. Begin taking spectrometer measurements.
3. Open the laser shutter and expose for time \(t_{\rm exp}\). The exposure time is chosen in order to yield enough signal in the camera detector without saturating the photodiode. As will be discussed later on, this did not ensure that the spectrograph signal was unsaturated; this issue has been taken care of in the new setup.
4. Close the laser shutter, take the final 10 readings on the electrometer, and stop the spectrometer integration.
5. Close the StarDICE shutter, move to the next wavelength, and repeat.

In this setup the grid of pinholes is used, in order to test the multiplexing of the transmission measurement at many different locations on the camera detector. ## 6 Data reduction ### Photometry To measure the amount of photons collected by the CCD, we perform aperture photometry on the grid of points imaged onto the focal plane. The magnification ratio between the telescope and the CBP collimating optic is roughly 4; this means the 20 \(\mu\)m holes in the CBP mask become 80 \(\mu\)m diameter spots on the StarDICE focal plane, which translates to about 9 pixels. Because there are refractive optical elements in the telescope beam (e.g., filters, dewar window), the focus of the system will change slightly with wavelength, though the amount of defocus is not large relative to the size of the spots. Beyond simple defocus, during the telescope's trip back to LPNHE from its first calibration run at OHP, we believe that the telescope was knocked slightly out of collimation, leading to moderate aberrations in the images generated by the CBP. These aberrations are not prohibitively large, and can be accommodated by slightly increasing the aperture size used for photometric measurements. Fig. 5 shows an example of a bias-corrected, dark-frame-subtracted image, with the individual aperture and background subtraction regions marked. In the analyses that follow, only spots whose photometry and sky background apertures lie entirely within the frame are used. We use an aperture of 45 pixels, with a background annulus of inner radius 50 pixels and a width of 10 pixels. When measuring regions of very low transmission, forced aperture photometry is performed at the last known location of the pinholes. On occasion, we have noticed changes of a few pixels in the positions of the pinhole grid across the focal plane. These arise in particular when changing filters, though on occasion they seem to arise from presumed slippage in the alt/az motors driving the CBP pointing. We compensate for this by allowing the apertures to move as a fixed grid, with the new locations being determined via peak finding after convolving the image with a 2-D Gaussian with a standard deviation of 10 pixels in x and y. Only in the case where all pinholes move uniformly is the aperture grid allowed to move; this prevents wandering apertures in cases where the transmission is effectively zero.
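The photometry and grid-recentering steps described above can be sketched as follows. The paper does not say which software performed these steps; the sketch below uses photutils and SciPy as one possible implementation, with the aperture radii and smoothing width taken from the text, while the 20-pixel search box around each spot is our own assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from photutils.aperture import (CircularAperture, CircularAnnulus,
                                aperture_photometry)

def spot_photometry(image, positions, r_ap=45, r_in=50, r_out=60):
    """Aperture photometry of the pinhole spots with annulus background
    subtraction, using the aperture parameters quoted in the text."""
    apertures = CircularAperture(positions, r=r_ap)
    annuli = CircularAnnulus(positions, r_in=r_in, r_out=r_out)
    flux = aperture_photometry(image, apertures)["aperture_sum"]
    bkg = aperture_photometry(image, annuli)["aperture_sum"]
    # Scale the annulus background to the aperture area before subtracting
    return flux - bkg * (apertures.area / annuli.area)

def recenter_grid(image, positions, sigma=10.0, half_box=20):
    """Shift the whole aperture grid by the median displacement of the
    smoothed-image peaks; only a uniform shift of the grid is allowed."""
    smooth = gaussian_filter(image, sigma)
    shifts = []
    for x, y in positions:
        x0, y0 = int(round(x)), int(round(y))
        # Local maximum in a small box around the previous spot position
        box = smooth[y0 - half_box:y0 + half_box + 1,
                     x0 - half_box:x0 + half_box + 1]
        dy, dx = np.unravel_index(np.argmax(box), box.shape)
        shifts.append((dx - half_box, dy - half_box))
    return np.asarray(positions) + np.median(shifts, axis=0)
```

Applying a single median shift to all apertures mirrors the requirement that the grid may only move when all pinholes move uniformly.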
### Charge measurement Photocurrent generated in the monitor photodiode is integrated by the electrometer, and is afterwards used to normalize away variations in the input laser light intensity. We integrate charge rather than measure current in order to better account for the laser's pulse-to-pulse instability, which can be of order 10 % or more. Measurements are made with the electrometer range fixed to 2 \(\mu\)C to avoid spurious jumps in charge levels that arise when the measurement range is changed. Typical charge levels reached by the CBP during an integration are (0.1 to 2) \(\times 10^{-7}\) C. The bias current for the Keithley 6514 is specified to be \(<4\) fA, which is negligible given our signal levels and integration times (\(\leq 300\) s). Figure 5: An image of the 5x5 20 \(\mu\)m pinhole mask as seen by the StarDICE CCD. The image has been bias corrected and differenced with a dark frame. Note that a very aggressive stretch is applied in order to see the wings of the spots, and that their size in terms of their full-width at half-max (FWHM) is much smaller. The black shaded regions indicate the apertures used for photometry, while the red annuli indicate regions used for background subtraction. White numbers label each spot, and the wavelength of the laser is shown in the lower right. Although the specification published in the 6514 datasheet provides only measurement accuracy (not precision), the manufacturer indicates that the value is intended to be understood as total measurement uncertainty, including both precision and accuracy.4 Footnote 4: Keithley, private communication. We therefore use the datasheet's uncertainty formula for the 2 \(\mu\)C range, given by \[\sigma_{\rm PD}=0.01Q_{\rm PD}+500\ {\rm pC} \tag{6}\] as the uncertainty on the photodiode charge measurement. We keep the bias part of the formula in order to account for its variation throughout the experiment time span. We decided at this level to be conservative and defer the estimate of the nonlinearities and stability of the response to the next paper. ### Spectroscopic Data and Calibration The laser's output wavelength is monitored by an Ocean Optics QE65000 spectrograph. The spectrograph is attached to the CBP integrating sphere by a 600 \(\mu\)m core diameter optical fiber, which also serves to define the entrance slit of the spectrograph. We read the device out at a rate of either 10 Hz or 1 Hz, depending on the brightness of the laser at the specified wavelength. The original choice of the fiber diameter was made in order to maximize the signal-to-noise ratio of the spectrograph readings over the full wavelength range. To calibrate the spectrograph, an Ocean Optics HG-1 wavelength calibration lamp was shone into the CBP's integrating sphere via an optical fiber, using the same integrating sphere entrance port as the laser input fiber. The spectrograph calibration data were analysed at the end of our data taking, after returning the laser to NIST. It was then noticed that the 600 \(\mu\)m diameter fiber was too broad to properly resolve the lines generated by the calibration source.
Furthermore, we found that there was a systematic, wavelength-dependent shift in the PSF location between the small and large fibers, preventing an a posteriori recalibration. Since it was impossible to redo a full telescope scan with a smaller spectrograph fiber at that time, we were forced to rely on the stability of the wavelength provided by the laser for a fixed requested wavelength. Further tests on a later CBP implementation with a similar but more recent laser showed that the relationship between the requested wavelength and the wavelength provided by the laser is extremely stable. Yet, for this paper we have no quantitative assessment of a potential shift of the wavelength calibration. In summary, the wavelength calibration we use for the current analysis is as follows:

1. Spectrograph calibration using spectral lamps.
2. Calibration of the relation between the requested wavelength and the wavelength provided by the laser, using the spectrograph with the 100 \(\mu\)m fiber.
3. Use of this relation to transform requested wavelength into actual wavelength throughout our analysis of CBP data.

#### 6.3.1 Spectrograph Calibration Procedure We calibrate the spectrograph by directly linking it to an Ocean Optics HG-1 lamp with a 100 \(\mu\)m diameter fiber. We fit the calibration lamp spectra in pixel space using a model that is the sum of 20 Gaussians (one per emission line) and a 0\({}^{\rm th}\) order polynomial for background estimation; the results of the fit are shown in Fig. 6. We also note that there is a distinct lack of calibrated emission lines in the HG-1 lamp for \(\lambda\gtrsim 950\) nm, thus the wavelength solution should not be trusted much beyond these values. Because our CBP calibration data set does not presently extend beyond 1000 nm, we mask the data for which \(\lambda>1000\) nm (estimated _a priori_ from the spectrograph's built-in wavelength solution). To determine a wavelength-pixel mapping for the spectrograph, we fit the centers of the Gaussians against the known calibration lines using a 3\({}^{\rm rd}\) order Chebyshev polynomial. The per-line residuals of this fit are shown in the lower panel, and the standard deviation of the lines about 0 indicates that the wavelength solution is good to approximately 0.3 nm. #### 6.3.2 Calibration of the requested wavelength to calibrated wavelength relationship for the laser We transfer this wavelength calibration to the laser by taking a series of laser output spectra, changing the laser wavelength by 20 nm between each spectrum. We fit a Gaussian to the laser line and then use a 3\({}^{\rm rd}\) order Chebyshev polynomial to generate a mapping between the requested laser wavelength and the spectrograph-measured wavelength. The results of this fitting process are shown in Fig. 7. After applying the polynomial relationship between requested and measured wavelength, the residuals improve to approximately the 0.1 nm level. Figure 6: Calibration of the CBP monitor spectrograph with an Ocean Optics HG-1 calibration lamp. On the upper figure, data (black points) are plotted as a function of fitted wavelength and the red curve shows the fitted model. Pale blue dashed lines denote the true wavelengths of the calibration lines used in the fit. Data for \(\lambda>1000\) nm are masked. The lower figure shows the difference between true and best-fit wavelengths for the calibration lines. Red dashed lines are drawn at \(+/-1\sigma\) (= 0.32 nm) from the mean (dashed black line). Figure 7: Upper panel: raw data (black dots) and fitted Chebyshev polynomial (blue line) showing the spectrograph-measured wavelength as a function of the wavelength requested from the laser. Lower panel: residuals from the fit (black points) as well as the original difference between requested and measured wavelength (red triangles). The fit shows residuals at around the 0.1 nm level for most of the wavelength range of interest. Residuals that lie outside the bounds of the lower plot are denoted with arrows. The black and red hatched arrow indicates that the residual at 1100 nm lies outside the plot for both the corrected and uncorrected case. For the rest of the data presented here, we use this polynomial calibration to infer the true output wavelength of the laser for a given requested wavelength.
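The requested-to-measured wavelength mapping can be reproduced with NumPy's Chebyshev utilities. The sketch below is illustrative: the measured line centers are replaced by synthetic data with a smooth chromatic offset, since the real fitted centers are not tabulated in the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Requested laser wavelengths (20 nm steps) and, as a stand-in for the real
# data, synthetic "measured" line centers with a smooth chromatic offset
lambda_req = np.arange(400.0, 1101.0, 20.0)
lambda_meas = lambda_req + 0.3 * np.sin(lambda_req / 150.0)  # synthetic

# 3rd-order Chebyshev mapping from requested to measured wavelength
coeffs = C.chebfit(lambda_req, lambda_meas, deg=3)

def true_wavelength(requested):
    """Infer the actual laser output wavelength from the requested one."""
    return C.chebval(requested, coeffs)

residuals = lambda_meas - true_wavelength(lambda_req)
print(f"rms residual: {residuals.std():.3f} nm")  # ~0.1 nm in the real data
```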
We note that the wavelength calibration uncertainty quoted here is only an estimate, limited by the amount of time and data available for this report. A more detailed propagation of the wavelength calibration errors will be included in the next version of the experiment, which will be reported in the forthcoming paper (Souverin et al. in prep.). ### Throughput Calculation Mathematically, \(T_{i}(t,\lambda,\textbf{x})=Q_{\rm CCD,i}(t,\lambda,\textbf{x})/(Q_{\rm PD}(\lambda)T_{\rm CBP}(\lambda))\), where \(T_{i}\) is the (unnormalized) telescope transmission as seen by pinhole \(i\), \(Q_{\rm CCD,i}\) is the charge collected by the CCD for pinhole \(i\), \(Q_{\rm PD}(\lambda)\) is the charge collected by the CBP monitor photodiode, and \(T_{\rm CBP}(\lambda)\) is the transmission correction for the CBP system as outlined in Section 4. Because the true quantum efficiency of the CBP monitor photodiode is rolled into our CBP calibration term, there is no correction factor for the CBP monitor photodiode. The different pinholes allow for a degree of focal plane multiplexing by sampling different sections of the system within a single exposure. This potentially provides the ability to measure grey and chromatic variations of the full detector response. To show the derivation for this form of \(T_{i}\), we can first use Eqn. 3 to write \[Q_{\rm CCD,i}=\sum_{\rm pix,i}\phi_{\rm CBP}(t,\textbf{x},\lambda)\Delta\,t \tag{7}\] where the sum is over the pixels belonging to the image of pinhole \(i\), which serves to localize the measurement to the region around point **x** on the focal plane, and \(\Delta t\) is the integration time. We then arrive at the expression \[Q_{\mathrm{CCD,}i}(t,\lambda,\textbf{x})=\sum_{\mathrm{pixels,}i}C(t)T(t,\lambda,\textbf{x})S_{\mathrm{pixel}}(t,\lambda,\textbf{x})\Delta\,t \tag{8}\] where \(S_{\mathrm{pixel}}\) is the photon flux seen by a given pixel in aperture \(i\), such that \(\sum_{\mathrm{pixels}}S_{\mathrm{pixel}}(t,\lambda,\textbf{x})=S(t,\lambda)\), and we have invoked the approximation that the CBP is sampling a representative weighting of the StarDICE primary (allowing us to drop the **r** dependence), enabled by the coincidence that the CBP output beam size is roughly the size of the StarDICE primary annulus (we define the primary annulus as the primary disc minus the central region shadowed by the secondary). The calibration factor of the CBP monitor photodiode \(T_{\rm CBP}\) has been transformed up to an arbitrary grey term, which we set to one for simplicity, asserting that each photoelectron corresponds to a single unique detected photon.
The total number of photons entering the StarDICE telescope for a given pinhole, written as \(S(t,\lambda)\Delta\,t\,K_{i}\), is also given by \(Q_{\mathrm{PD}}T_{\mathrm{CBP}}\), where \(K_{i}\) is an unknown constant reflecting our ignorance of the true size of the emitting pinhole. By assuming that the transmission of the telescope system is reasonably flat over the size of the pinhole image on the detector, we can write the ratio \(Q_{\mathrm{CCD,i}}/(Q_{\mathrm{PD}}T_{\mathrm{CBP}})\) as \[\frac{Q_{\mathrm{CCD,i}}}{Q_{\mathrm{PD}}T_{\mathrm{CBP}}}=\frac{C(t)T(t,\lambda,\textbf{x}_{\mathrm{center}})\sum_{\mathrm{pixels,i}}S_{\mathrm{pixel}}(t,\lambda)\Delta\,t}{K_{i}S(t,\lambda)\Delta\,t}=\frac{C(t)}{K_{i}}T(t,\lambda,\textbf{x}_{\mathrm{center}}) \tag{9}\] where the term \(\frac{C(t)}{K_{i}}\) is a constant with respect to wavelength. Under the assumption that the temporal variability of \(C(t)\) (which is mostly related to variables such as electronic gain and grey extinction due to, e.g., dust on the primary) is slow compared to the time needed to measure the transmission, we can treat the ratio as a constant, \(C\). This constant is the same for all measurements of a given pinhole \(i\) at location **x** for a given scan, and we can normalize it away by dividing by the transmission at a fiducial wavelength or by the average transmission over all wavelengths. This gives us a measurement of system transmission at time \(t\) and focal plane position **x** relative to another wavelength, thus reaching our goal of measuring the relative throughput of the system. In summary, the process of measuring throughputs with a CBP consists of performing aperture photometry on each of the pinhole images, dividing by the photodiode charge measurement and the CBP calibration factor at that wavelength, and then normalizing by, for example, the average of that quantity over all wavelengths. We then obtain a set of relative transmissions, one per pinhole \(i\), covering the entire detector, allowing us to trace both the average transmission for a given CBP partial illumination of the primary mirror, and potential chromatic transmission variations over the field of view. ### Throughput uncertainty calculation Using the standard method for uncertainty propagation, the uncertainty on the throughput measurement \(T=Q_{\mathrm{CCD}}/(Q_{\mathrm{PD}}T_{\mathrm{CBP}})\) is given by \[\sigma_{T}=\big{(}(\frac{\partial T}{\partial Q_{\mathrm{CCD}}}\sigma_{\mathrm{CCD}})^{2}+(\frac{\partial T}{\partial Q_{\mathrm{PD}}}\sigma_{\mathrm{PD}})^{2}+(\frac{\partial T}{\partial T_{\mathrm{CBP}}}\sigma_{\mathrm{CBP}})^{2}\big{)}^{\frac{1}{2}} \tag{10}\] where \(\partial T/\partial Q_{\rm CCD}=1/(Q_{\rm PD}T_{\rm CBP})\), \(\partial T/\partial Q_{\rm PD}=-Q_{\rm CCD}/(Q_{\rm PD}^{2}T_{\rm CBP})\), and \(\partial T/\partial T_{\rm CBP}=-Q_{\rm CCD}/(Q_{\rm PD}T_{\rm CBP}^{2})\). The uncertainty budget for the CBP's measurement of the StarDICE telescope's no-filter throughput is shown in Fig. 8. For wavelengths below approximately 800 nm, the uncertainty is dominated by uncertainty in the charge measurement. From the definitions given above (equations 6 and 10), it is easy to show that the minimum uncertainty contribution from the photodiode term is 1 %, in the limit that \(50\,{\rm nC}\ll Q_{\rm PD}\). Once this limit is reached, the photodiode uncertainty is not a function of total charge, and therefore decouples from our ability to reduce it by exposing longer or using a brighter source; it can be reduced only by making more measurements.
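The throughput calculation of Sec. 6.4 and the error propagation of Eq. (10) reduce to a few lines of array arithmetic. The following minimal sketch assumes per-wavelength arrays for one pinhole and uses the algebraically equivalent relative form of Eq. (10), with the photodiode term taken from Eq. (6); the normalization by the mean over wavelengths absorbs the grey constant \(C/K_{i}\).

```python
import numpy as np

def throughput_with_uncertainty(q_ccd, sigma_ccd, q_pd, t_cbp, sigma_cbp):
    """Relative throughput T = Q_CCD / (Q_PD * T_CBP) and its standard
    uncertainty per Eq. (10), for one pinhole, as arrays over wavelength."""
    sigma_pd = 0.01 * q_pd + 500e-12           # Eq. (6), charges in coulombs
    T = q_ccd / (q_pd * t_cbp)
    # Relative (fractional) form of Eq. (10): the three terms add in quadrature
    sigma_T = T * np.sqrt((sigma_ccd / q_ccd) ** 2
                          + (sigma_pd / q_pd) ** 2
                          + (sigma_cbp / t_cbp) ** 2)
    # Normalize away the grey constant C/K_i by the mean over wavelengths
    norm = T.mean()
    return T / norm, sigma_T / norm
```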
At wavelengths longer than \(\sim 790\) nm, the CBP's flux calibration (Section 4) is the limiting factor, in particular the uncertainty ascribed to the interference fringing in the calibration transfer telescope. Figure 8: The uncertainty budget for the CBP measurement of StarDICE throughput. Contributions to the total uncertainty budget (red) are shown for the 3 main components: aperture photometry (blue), photodiode charge measurement (orange), and CBP calibration (green). The spike seen near 700 nm is due to the degeneracy region of the laser, and the spike near 400 nm is due to low laser brightness combined with low CBP optical throughput. The gradual rise from approx. 850 nm onward is believed to be due to the interference fringing effect discussed in Sec. 4. ## 7 Results and Discussion ### StarDICE telescope transmission Fig. 9 shows the transmission of the StarDICE telescope as measured by the average over all pinholes within a single CBP pointing. In order to examine the relative throughput of the system, we normalize each pinhole by its peak value with no filter in the beam, thereby allowing us to directly compare transmission variations across pinholes. The periodic variations seen in the transmission curves in the blue (\(\lambda\lesssim 600\) nm) are believed to be real, and due to interference fringing in the microlens array mounted on the StarDICE CCD. Although not visible in the plotted results, we mention for completeness that there is also one point (810 nm, \(I\) band) that is excised due to laser issues during the exposure. The data end at 1 \(\mu\)m, as this is the cutoff wavelength for the trap detector calibration we currently have in hand. As the goal of this paper is to outline the performance of the CBP, we do not extrapolate the curve farther, nor implement an interpolation scheme for the degeneracy region, and instead leave these elements of the StarDICE bandpass analysis to the forthcoming StarDICE experiment paper (StarDICE collaboration 2023-2024). In general the filters are well-behaved, with one exception being a small leak in the \(v\) filter around 790 nm. The sharp features around 710 nm are due to the degeneracy region of the laser, and are not true features of the StarDICE bandpass. ### Evidence for variation across StarDICE focal plane In addition to the average transmission measurement of Fig. 9, we report a chromatic variation of roughly 5 % for the CBP open transmission measurement (no filter) on a per-pinhole basis. This is shown in Fig. 10, where the crosses represent CBP measurements of the StarDICE telescope transmission over the full optical range, colored by pinhole number. They have been integrated over narrow (\(\sim 30\) nm) bands for later comparison with LED flat-field measurements of the camera transmission. We see that, relative to the measurement for pinhole 0, every other pinhole displays a chromatic evolution of the open transmission of about 5 % from 400 nm to 950 nm. Figure 9: A multi-pinhole CBP scan of the StarDICE telescope. Each color represents a different filter placed in the beam of the telescope. The transparent curves show the measured transmissions for each of the pinholes in the grid, while the solid curves are the means across all pinholes. Each pinhole is normalized relative to its own peak value in the no-filter scan. The sharp features seen around 710 nm are due to extremely low SNR (signal-to-noise ratio), on account of the degeneracy region of the laser.
We also observe a systematic difference between pinholes, with pinholes 4-6 being more transmissive than the others at 400 nm and less transmissive at 950 nm. This corresponds to a spatial evolution over the field of view of about 5 %. Part of these effects can be accounted for by the camera response, as shown by the round coloured dots on the same figure. Those points represent the open transmission of the camera measured by flat-field illumination using LEDs with narrow (\(\sim 30\) nm) spectra. The flat fields obtained are integrated over spatial regions corresponding to each pinhole and displayed in the same color coding as the CBP measurements. We see that both the chromatic trend of all pinholes and the spatial trend between pinholes are partially accounted for by the response of the camera. A more quantitative assessment of the detected spatial variation would need a full range of new measurements, both with the CBP and with the flat-field illumination system, which is beyond the scope of this paper. It nonetheless shows that a CBP is a promising tool to measure the response of the full telescope system down to the camera, even at the level of variations within the field of view. ### Reproducibility In addition to the uncertainty calculations given in Section 6.5, we also obtained repeated measurements of the transmission function of the telescope at a fixed subset of wavelengths to test the robustness of the CBP results. Fig. 11 shows the results of 10 measurements of the StarDICE system throughput taken at 13 different wavelengths (50 nm intervals from 400 nm to 1000 nm) on a per-pinhole basis. Note that the pinholes and focal plane locations are not the same as those in Fig. 9. The measurements were taken as 10 sets of 13 measurements (i.e., the wavelength was changed between each measurement), which temporally separates the measurements at each wavelength by several minutes. Each pinhole-wavelength combination is normalized to its mean value, and horizontal colored lines are drawn at \(+/-1\sigma\). Black horizontal lines denote the calibration uncertainty goal of 1 % (standard uncertainty), which is generally met between approximately 450 nm and 850 nm. This is in line with the theoretical prediction that the uncertainty should decrease as \(1/\sqrt{N}\): for \(N=10\) measurements, we would expect 1 % precision for measurements with individual uncertainties of \(\sim 3~{}\%\), which corresponds roughly to the 450 nm to 800 nm region in Fig. 8 (a numerical sketch of this check is given below). This is because the uncertainty of the CBP photodiode charge measurements is statistics-dominated in that spectral region. Where the systematic error due to the CBP transmission uncertainty dominates, the uncertainty does not decrease with repeated measurements. The grey shaded regions are visualizations of this uncertainty for each wavelength. The underestimation of the observed scatter by the predicted uncertainty is likely due to the very sharp nature of the AR coating cutoff around 400 nm, and the calibration consequences thereof. Figure 10: Normalized transmissions for each of the pinholes from the CBP scan of the StarDICE telescope. Xs denote CBP measurements integrated over the wavelength width of the LEDs used in the flat-field study of the CCD. Circles signify LED flat-field illumination measurements of the camera, integrated over 10 x 10 pixel squares at the positions of the CBP pinholes. The grey strip denotes the degeneracy region, where CBP measurements are less reliable.
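The \(1/\sqrt{N}\) consistency check referenced above can be written as a few lines of NumPy; the array shapes and the synthetic example data below are our own illustrative choices, not the actual measurement files.

```python
import numpy as np

def repeatability_check(measurements, sigma_single):
    """Compare the scatter of repeated throughput measurements with the
    1/sqrt(N) expectation. `measurements` has shape [N_repeats, N_wavelengths];
    `sigma_single` is the predicted per-measurement relative uncertainty at
    each wavelength (e.g., the total budget of Fig. 8)."""
    n = measurements.shape[0]
    # Normalize each wavelength column to its mean, as in Fig. 11
    normed = measurements / measurements.mean(axis=0)
    scatter = normed.std(axis=0, ddof=1)    # compare with sigma_single
    mean_precision = scatter / np.sqrt(n)   # compare with the 1 % goal
    return scatter, mean_precision

# Example: 3 % individual scatter and N = 10 gives ~1 % precision on the mean
rng = np.random.default_rng(0)
fake = 1.0 + 0.03 * rng.standard_normal((10, 13))
scatter, precision = repeatability_check(fake, sigma_single=np.full(13, 0.03))
```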
Some evidence for spatial variability in the transmission function is seen, particularly in the blue end of the spectrum. Figure 11: Repeatability test of the telescope transmission measurement from 400 nm to 1000 nm at 50 nm intervals. Each plot represents an individual pinhole on the StarDICE focal plane. The horizontal colored bars at each wavelength represent +/-1\(\sigma\) from the mean of the measurements. Each data point is normalized by the mean of the measurements at that wavelength for that pinhole spot. For some of the measurements, the +/-1\(\sigma\) lines fall slightly outside the limits of the graph. The dashed grey lines denote +/-1 % deviation from the mean. Solid grey regions denote the predicted uncertainty using the total uncertainty budget from Fig. 8. ### Additional Sources of Systematic Error We have attempted, where practical, to maintain an accurate uncertainty budget for the CBP system. Here we discuss other possible sources of systematic uncertainty which might affect our measurements, and offer some possible future avenues of estimation or suppression. One assumption we have made in the design of the CBP is the achromaticity of the pinhole grid illumination by the integrating sphere. If the back illumination of the grid is chromatic across the grid, that would translate directly into spurious chromatic evolution across the detector of the StarDICE system. Examining this would involve rotating or translating the detector under examination with respect to the CBP grid. By comparing data taken with the same spot at different areas of the focal plane (or different spots at the same area of the focal plane), we can place constraints on the achromaticity of the CBP pinhole illumination. Alternatively, if one wishes to quash this source of uncertainty at the expense of multiplexing, a single on-axis pinhole can be used, which will not suffer from this effect. We have plans to continue to explore this potential systematic on the StarDICE system in the future. Another assumption is that the CBP calibration factor, which was measured at NIST with a single on-axis 500 \(\mu\)m pinhole (for SNR reasons), is directly transferable to the pinhole grid, with pinholes of significantly smaller size and with most (if not all) of them at least slightly off axis. Measuring the transfer achromaticity directly is challenging because the 500 \(\mu\)m pinhole has an area, and therefore a signal, that is 25x larger than the total pinhole area of the 5x5 20 \(\mu\)m pinhole grid (and 625x larger than any individual pinhole). Repeating the NIST calibration process with this much less flux is a challenging proposition, and out of the scope of this paper. The assumption of calibration invariability with pinhole size will be a source of systematics investigated in detail in the forthcoming paper (Souverin et al. in prep.). Electronic gain variation represents another axis of systematic uncertainty. Gain drifts might occur in either the CCD readout electronics or in the electrometer; in either case, if the gain varies systematically over the course of a CBP scan, it would show up as a change in system response with wavelength. There is a method to counter this, which is to consistently return to a reference configuration (in both pointing and wavelength) during the scan. By taking data in the exact same configuration, any temporal drift in the electronics during the scan can be monitored and regressed away.
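The drift-regression idea closing this section can be sketched as follows; a linear drift model is assumed here purely for illustration, and the function and variable names are ours.

```python
import numpy as np

def detrend_with_reference(times, ratios, ref_times, ref_ratios):
    """Remove a slow electronic-gain drift using repeated visits to a
    reference configuration (fixed pointing and wavelength).

    times, ratios         : epoch and CCD/photodiode ratio of the science points
    ref_times, ref_ratios : the same quantities for the reference visits
    """
    # Fit the drift of the reference ratio versus time (linear model assumed)
    slope, intercept = np.polyfit(ref_times, ref_ratios, deg=1)
    drift = slope * np.asarray(times) + intercept
    # Divide out the relative drift, anchored to the mean reference level
    return np.asarray(ratios) * np.mean(ref_ratios) / drift
```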
## 8 Conclusion We have presented here the Collimated Beam Projector (CBP), a telescope transmission calibration device. The CBP uses collimated light to illuminate a telescope, mimicking a stellar wavefront over a subsection of the primary. By calibrating the output of the CBP relative to a NIST trap detector, the CBP can propagate the state-of-the-art POWR calibration to a telescope detector. To demonstrate this, we have measured the throughput of the StarDICE telescope, with uncertainties between 3 % (\(400\) nm \(\lesssim\lambda\lesssim 800\) nm) and 14 % (\(800\) nm \(\lesssim\lambda\lesssim 1000\) nm). Over the full wavelength range, the wavelength calibration achieved, relying on the stability of the laser, is estimated to be of the order of 0.1 nm. In its present form, the precision of the CBP is limited by different factors in each of the regimes mentioned. In the short wavelength regime, the accuracy of the electrometer used to measure the charge deposited on the monitor photodiode limits single-exposure uncertainties to 1 %, even at high flux. In the long wavelength region, we are limited by systematics in the calibration of the CBP, which arise due to interference fringing in the transfer telescope used in the calibration process. To address the former limitation, two possibilities exist: make \(N\) measurements and average, or procure a different, higher-precision electrometer. As an example, the Keithley 6517B electrometer has a per-measurement precision of 0.4 % + 50 pC for the 2 \(\mu\)C measurement range, which is a significant improvement over the 6514 used in this work. Making \(N\) measurements to improve the SNR is one of the improvements that have been implemented with success in further work. In the long wavelength regime, we are considering two avenues of approach: recalibration at NIST with a different optical setup, using a reflective instead of a refractive telescope to avoid fringing issues, or re-designing the collimation scheme of the CBP such that a monitor photodiode can be inserted directly into the output beam (rather than into the integrating sphere). The latter scheme also has the advantage that it requires only the calibration of the monitor photodiode relative to POWR, and not the calibration of the full CBP instrument, and it is the path we have selected for our further developments. The main issue, namely an output flux so diluted at the output of the CBP that the signal is too low to be accurately measured, has been overcome by the use of a large-surface solar cell that collects the entire output beam. The calibration of such a solar cell is described in [25] and its use will be discussed in a forthcoming paper (Souverin et al. in prep.). In general, this implementation of a CBP in front of a telescope in a controlled laboratory setting has revealed many useful avenues for improvement. Those have been implemented and will be presented in the forthcoming paper (Souverin et al. in prep.). A calibration system that can be taken to different telescopes is of critical importance to supernova cosmology in particular, as photometric calibration has been and remains a major factor in determining cosmological parameters [26, 27, 2, 6]. In support of this mission, our group intends to undertake a series of measurements on a representative group of telescopes that have played significant roles in supernova measurements. In addition to our CBP, the Rubin Observatory has also procured a CBP.
Measurements and techniques learned here will be directly translated to support observations taken by the Rubin Observatory, and ultimately support the transfer of the NIST optical flux scale to astrophysical objects. ### Acknowledgments This paper has undergone internal review in the LSST Dark Energy Science Collaboration. We thank P. Antilogus, J. Neveu and N. Regnault for their thorough job. The authors declare no conflicts of interest. C. W. Stubbs performed the initial conceptual design of the CBP, and was engaged in the implementation and data analysis. M. W. Coughlin designed and fabricated the original version of the CBP, as well as supported measurements and paper writing. N. Mondrik conducted the installation of the CBP in the lab, led the data taking and analysis, and wrote the main part of the paper. M. Betoule contributed to the StarDICE telescope control system and data acquisition software, and took part in the hardware assembly, data taking and analysis. S. Bongard participated in the hardware assembly, data taking and analysis, and took charge of the editorial needs of the paper. P. S. Shaw and J. T. Woodward assembled the equipment at NIST and performed the measurements and data analysis. J. P. Rice wrote the NIST calibration section and provided editorial support. The DESC acknowledges ongoing support from the Institut National de Physique Nucleaire et de Physique des Particules in France; the Science & Technology Facilities Council in the United Kingdom; and the Department of Energy, the National Science Foundation, and the LSST Corporation in the United States. DESC uses resources of the IN2P3 Computing Center (CC-IN2P3-Lyon/Villeurbanne - France) funded by the Centre National de la Recherche Scientifique; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231; STFC DiRAC HPC Facilities, funded by UK BEIS National E-infrastructure capital grants; and the UK particle physics grid, supported by the GridPP Collaboration. This work was performed in part under DOE Contract DE-AC02-76SF00515. We thank the US Department of Energy and the Gordon and Betty Moore Foundation for their support of our LSST precision calibration efforts, under DOE grant DE-SC0007881 and award GBMF7432 respectively. NM is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1745303. MC acknowledges support from the National Science Foundation with grant numbers PHY-2010970 and OAC-2117997. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. Identification of commercial equipment to specify adequately an experimental problem does not imply recommendation or endorsement by NIST, nor does it imply that the equipment identified is necessarily the best available for the purpose. This research made use of Astropy,5 a community-developed core Python package for astronomy [28, 29]. Footnote 5: [http://www.astropy.org](http://www.astropy.org) The data used for this paper can be retrieved by direct inquiry to the corresponding author. Given the demonstrative nature of the work undertaken in this paper, we do not plan more extensive online publication of the data.
2306.03264
shs-nlp at RadSum23: Domain-Adaptive Pre-training of Instruction-tuned LLMs for Radiology Report Impression Generation
Instruction-tuned generative Large language models (LLMs) like ChatGPT and Bloomz possess excellent generalization abilities, but they face limitations in understanding radiology reports, particularly in the task of generating the IMPRESSIONS section from the FINDINGS section. They tend to generate either verbose or incomplete IMPRESSIONS, mainly due to insufficient exposure to medical text data during training. We present a system which leverages large-scale medical text data for domain-adaptive pre-training of instruction-tuned LLMs to enhance its medical knowledge and performance on specific medical tasks. We show that this system performs better in a zero-shot setting than a number of pretrain-and-finetune adaptation methods on the IMPRESSIONS generation task, and ranks 1st among participating systems in Task 1B: Radiology Report Summarization at the BioNLP 2023 workshop.
Sanjeev Kumar Karn, Rikhiya Ghosh, Kusuma P, Oladimeji Farri
2023-06-05T21:33:04Z
http://arxiv.org/abs/2306.03264v1
# shs-nlp at RadSum23: Domain-Adaptive Pre-training of Instruction-tuned LLMs for Radiology Report Impression Generation ###### Abstract Instruction-tuned generative Large language models (LLMs) like ChatGPT and Bloomz possess excellent generalization abilities, but they face limitations in understanding radiology reports, particularly in the task of generating the impressions section from the findings section. They tend to generate either verbose or incomplete impressions, mainly due to insufficient exposure to medical text data during training. We present a system which leverages large-scale medical text data for domain-adaptive pre-training of instruction-tuned LLMs to enhance its medical knowledge and performance on specific medical tasks. We show that this system performs better in a zero-shot setting than a number of _pretrain-and-finetune_ adaptation methods on the impressions generation task, and ranks 1st among participating systems in Task 1B: Radiology Report Summarization at the BioNLP 2023 workshop. ## 1 Introduction A radiology report is the primary means by which the radiologist communicates his/her interpretations of medical images (e.g., X-Rays) and resulting conclusions to the ordering physician. A radiology report typically includes several sections (Kahn Jr et al., 2009), among which the most important ones are the findings and impressions sections. findings includes qualitative and quantitative descriptions of abnormalities if present, along with the radiologist's diagnosis or differential diagnosis regarding the observations. impressions summarises the findings section, and the radiologist notes major abnormalities and their recommendations in this subsection; a sample report with findings and impressions is shown in Table 1. \begin{table} \begin{tabular}{l} \hline \hline \multicolumn{1}{c}{findings} \\ \hline \hline bifrontal hemorrhagic contusions are once again noted, stable compared to most recent prior,... subarachnoid hemorrhage is once again noted within the left... subdural hematoma is noted overlying the left temporal lobe and to the left... there is no shift of normally midline structures. the ventricles appear unremarkable. 
a left temporal lobe hemorrhagic contusion remains stable in size... the visualized paranasal sinuses are clear. there is no evidence of acute fracture. \\ \hline \hline \multicolumn{1}{c}{impressions} \\ \hline \hline 1. bifrontal hemorrhagic contusion appears stable compared to most recent prior with slightly increased vasogenic... \\ 2. subdural hematoma is noted layering over the left temporal lobe and within the left falx. \\ 3. subarachnoid hemorrhage is noted within the left frontal region. \\ 4. no shift of normally midline structures. \\ \hline \hline \end{tabular} \end{table} Table 1: findings (top) and impressions (bottom) sections of a radiologist’s report from MIMIC-III. There have been various efforts to automatically generate impressions from findings, such as the reinforcement learning-based system of Karn et al. (2022) and the BERT-based system of Delbrouck et al. (2023). Pretrained language models (PLMs) are trained on enormous, heterogeneous corpora that include everything from news articles and literary works to web content and encyclopedia articles, which allows them to capture a broad range of linguistic patterns and features (Gururangan et al., 2020). However, PLMs also have limitations and biases, particularly in their training data, which can affect their performance in certain out-of-domain tasks (Bommasani et al., 2021). To advance the state of the art, several domain-specific PLMs are available, specifically in the field of biomedical and clinical NLP, such as BioBERT (Lee et al., 2020) and RadBERT (Yan et al., 2022). These PLMs do, however, have additional constraints. For instance, the datasets used to train RadBERT were rather small and only covered a small number of anatomical specialties. The _pretrain-and-finetune_ paradigm for PLMs has become the dominant approach for addressing downstream tasks with significant training data scarcity (Karn et al., 2021). Recent studies (e.g., Gururangan et al. (2020)) suggest that additional pretraining on in-domain text, referred to as specialist pretraining, proves more successful for improving downstream performance. Currently another approach has emerged, where instead of finetuning PLMs to perform downstream tasks, the objectives of downstream tasks are reconstructed using textual prompts similar to the original pre-training objectives (Liu et al., 2023). This _pretrain-and-prompt-tune_ paradigm is commonly referred to as prompt-tuning. Multitask prompted finetuning (also known as instruction tuning) is a type of large-scale _pretrain-and-prompt-tune_ paradigm where finetuning of large PLMs (also referred to as LLMs) is performed with datasets representing various NLP tasks defined by instructions as natural language prompts (Scao et al., 2022). Using this approach, Scao et al. (2022) equipped their LLM Bloom with the skill to perform multilingual zero-shot instruction-based tasks and tagged it Bloomz. We propose an extension of the domain adaptation paradigm beyond the typical method of _pretrain-and-finetune_ or instruction-tuned LLMs for domain-specific tasks. We posit that an LLM that has already gone through large-scale _pretrain-and-prompt-tune_ training adapts better and more easily when additionally pretrained on in-domain text. We refer to our approach as _general-pretrain-prompt-tune-and-special-pretrain_. With this approach, the model is trained using the same initial LM objective in each of the three training stages (i.e., general pretraining, prompt-tuning and domain-specialized pretraining), which is a significant advantage. Furthermore, since the instruction-tuned LLM is familiar with a variety of prompts and has tackled numerous tasks, the domain pretraining saturates rapidly, resulting in lower training costs. We continued the pretraining of the instruction-tuned Bloomz on MIMIC-IV to form RadBloomz, and evaluated this adaptation paradigm on the radiology report summarization task. The proposed system in a zero-shot setting exhibits better performance than _pretrain-and-finetune_ methods, and ranks 1st among participating systems in Task 1B: Radiology Report Summarization at the BioNLP 2023 workshop. Overall, our contributions are as follows: * We extend the domain adaptation paradigm by introducing _general-pretrain-prompt-tune-and-special-pretrain_, which further pretrains instruction-tuned LLMs like Bloomz on domain-specific text. * We show that the new adaptation paradigm of an instruction-tuned LLM for radiology yields better performance in a zero-shot setting than _pretrain-and-finetune_ methods. ## 2 Datasets ### Pretraining Datasets for Radiology Domain Adaptation. We have performed domain-adaptive pretraining using the recently published MIMIC-IV radiology reports dataset (Johnson et al., 2000). It contains over 2.3 million radiology reports from 237k patients, which amounts to over 616 million tokens with the Bloomz tokenizer (Muennighoff et al.,
2022). After preprocessing, we only used 1.4 million reports with 190 million tokens. Table 2 provides further details on the statistics of the datasets. ### Finetuning Datasets for Impression Generation We utilize the datasets that were shared for Task 1B: Radiology Report Summarization at the BioNLP 2023 workshop for our finetuning task. The task involves three datasets: MIMIC-III (Johnson et al., 2016), MIMIC-CXR (Johnson et al., 2019) and CheXpert (Irvin et al., 2019), pre-split into findings and impressions sections. For MIMIC-III, there are 59320 reports in the training dataset, 7413 in the validation set, 6526 in the test set and 6531 in the hidden test set. Most reports (91.4%) pertain to CT imaging, and the most represented anatomy is the head (52.8%). Although the task related to the MIMIC-CXR/CheXpert datasets is multimodal, we only used radiology reports for finetuning and inference. The MIMIC-CXR training dataset consists of 125,417 radiology reports, with 991 reports in the validation set and 1624 in the test set. The hidden test dataset is a CheXpert dataset and consists of 1k reports. ## 3 Methods Our method consists of preprocessing, domain-adaptive pretraining, finetuning and inference. \begin{table} \begin{tabular}{|l|c|c|} \hline Dataset & findings & impressions \\ \hline MIMIC-IV & 113.46 (139.06) & 33.04 (36.52) \\ \hline MIMIC-III & 118.19 (59.7) & 49.48 (35.12) \\ \hline MIMIC-CXR & 54.52 (24.67) & 16.37 (15.79) \\ \hline \end{tabular} \end{table} Table 2: Number of words per report, with standard deviations in parentheses, for the various datasets. ### Preprocessing The preprocessing step uses Regex-based cleaning and normalization to remove spurious characters and administrative or repetitive texts unrelated to the report. We have added special tokens for de-identified text in the reports. In addition, we also identified the different sections of the report, namely findings and impressions. We selected the reports with fewer than 512 tokens that have both these sections. ### Domain adaptive pretraining (DAPT) We select GPT-powered Bloom (Scao et al., 2022) as the base LLM for our study. Bloom is a multilingual LLM modified from Megatron-LM GPT2 (Shoeybi et al., 2019), trained auto-regressively with an objective function of cross entropy with mean reduction. There are multiple versions of Bloom based on the number of parameters. The largest Bloom model consists of 176 billion parameters, with 70 layers, 112 attention heads and 14336-dimensional hidden layers. Bloomz (Muennighoff et al., 2022) is a massive multitask instruction-tuned version of Bloom. For our domain adaptation study, we use a variant of Bloomz (Bloomz-7b1) with 7 billion parameters, 30 layers and 4096-dimensional hidden layers. Following our proposed _general-pretrain-prompt-tune-and-special-pretrain_ paradigm, we continuously pretrain Bloomz-7b1 using cross-entropy loss on auto-regressively generated tokens from the findings and impressions sections of MIMIC-IV reports. ### Finetuning In this study, the domain-specific task for fine-tuning an LLM is Radiology Report Summarization. Following standard prompt-based finetuning, we use findings as the prompt and finetune Bloomz-7b1 using the cross-entropy loss of the auto-regressively generated summary tokens compared with the ground-truth impressions. We also appended TL;DR to the prompt (a sketch of this input format is given below). This technique keeps the final fine-tuning objective consistent with the pretraining and instruction-tuning objectives of the base Bloom and the intermediate Bloomz.
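The prompt layout can be sketched as follows. This is an illustrative reconstruction in Python: the paper specifies only that the findings text is used as the prompt with a TL;DR cue appended, so the exact spacing and template below are assumptions.

```python
def build_prompt(findings: str) -> str:
    """Summarization prompt used for finetuning and zero-shot inference:
    the findings text followed by a TL;DR cue (illustrative layout; the
    exact template is not specified in the paper)."""
    return f"{findings.strip()}\nTL;DR:"

def build_training_example(findings: str, impressions: str) -> str:
    # During finetuning the model is trained auto-regressively, so the
    # target impressions simply continue the prompt text
    return build_prompt(findings) + " " + impressions.strip()
```

Keeping the target as a plain continuation of the prompt is what makes the finetuning objective identical in form to the causal pretraining objective.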
In order to avoid catastrophic forgetting, we have reduced the number of trainable parameters of Bloomz-7b1 by freezing all but the last layer of the model. ### Inference The inference pipeline leverages the trained model to generate impressions, given the findings. The metrics used to evaluate the generated results are Rouge scores (Lin, 2004), F1RadGraph (Delbrouck et al., 2022), Bertscore (Zhang et al., 2019) and, for the MIMIC-CXR/CheXpert datasets, F1CheXbert (Xie et al., 2023). ## 4 Experiments We propose two experimental runs for the summarization task. 1. **Radiology Domain Adaptive Pretraining (RadBloomz) with MIMIC-IV and zero-shot inference.** The GPT-powered Bloomz-7b1 model is further trained using a causal language objective on MIMIC-IV radiology reports to form RadBloomz. We set the sequence length to 512, the training batch size to 64, the validation batch size to 32, the learning rate to 3e-5, and AdamW as the optimizer (Loshchilov and Hutter, 2017). The best zero-shot inference results are obtained at 24k steps. 2. **RadBloomz finetuned with MIMIC-III.** We follow the _pretrain-and-finetune_ paradigm and finetune RadBloomz further with the MIMIC-III dataset on the radiology report summarization task. We use the same hyperparameters and training configuration as above. The best finetuning results are obtained at 2697 steps. All the experiments were run on the same infrastructure.1 We use a sampling-based technique to generate the summary from the model output distribution given the findings. We set sampling hyperparameters such as the maximum number of tokens to 128, \(top\_k\) to 50 and \(top\_p\) to 0.7. Footnote 1: Eight Tesla A100 SXM4 GPUs (with 80 GB memory per GPU) using the Deepspeed ZeRO-3 configuration (Rasley et al., 2020) with BF16 enabled ## 5 Results and Discussion We compared results from RadBloomz to various systems using n-gram-overlap ROUGE and facts-overlap F1RadGraph evaluations. Table 3 highlights the performance of RadBloomz (with team name shs-nlp) on the MIMIC-CXR and MIMIC-III hidden test datasets. The hidden MIMIC-III test set includes only reports, while MIMIC-CXR includes reports and images. Our system is text-based only, and thus MIMIC-III is a more appropriate evaluation. Our system is the top performer on the MIMIC-III hidden test set among all the submitted systems. Additionally, on the MIMIC-CXR hidden test set our text-based system ranks fourth among all the submitted systems, further showing the strength of the proposed domain adaptation technique on a multimodal task. In Table 4, we compare the performance of the standard _pretrain-and-finetune_ paradigm and our proposed _pretrain-prompt-tune-pretrain-and-zero-shot_ paradigm on the radiology report summarization test data of the Task 1B challenge.2 We note that although finetuning with MIMIC-III improves the Rouge-L and Bertscore metrics on the MIMIC-III test dataset, the BLEU4 and F1-RadGraph scores on MIMIC-III, as well as all metrics on MIMIC-CXR, are lower for the finetuned model. This shows that the domain adaptation paradigm is sufficient for achieving higher performance and doesn't require task-specific finetuning. Footnote 2: Ground-truths are only available for the open test data for any additional evaluation. **Error Analysis.** A detailed error analysis on the open test datasets reveals that many of the generated impressions get a low score for both Rouge and F1-RadGraph in cases where the radiology report has no abnormalities mentioned.
For example, the generated impression "normal mri of the cervical spine." and the ground-truth impression "negative study" are semantically similar, but these n-gram-overlap-based scores fail to recognize the semantic relatedness. Similarly, we noticed that similar findings sometimes generate different impressions. For example, impressions can be as detailed as: "near complete opacification of the ethmoid air cells and sphenoid sinuses. moderate air-fluid level with mucosal thickening of the right maxillary sinus and moderate mucosal thickening of the left maxillary sinus.", while similar findings in another report would be summarised as "pansinusitis, as described above." In addition, we have noticed problems with missed facts and hallucinations, as seen in 7. ## 6 Conclusion In this work, we introduce a new domain adaptation paradigm of _general-pretrain-prompt-tune-and-special-pretrain_, where we further pretrain an instruction-tuned LLM (Bloomz) on radiology domain text. We use radiology report summarization as the domain-specific task and demonstrate that the LLM adapted with the new paradigm performs better than the standard _pretrain-and-finetune_ method, even in a zero-shot setting. The system ranks 1st among participating systems in the hidden-test category of Task 1B: Radiology Report Summarization at the BioNLP 2023 workshop. \begin{table} \begin{tabular}{l|c|c c c c c} **Models** & **open test-set** & **BLEU4** & **ROUGE-L** & **BertScore** & **F1-cheXbert** & **F1-RadGraph** \\ \hline RBz-0shot & **MIMIC-III** & **17.33** & 33.93 & 55.49 & N/A & **34.93** \\ RBz-ft & & 16.49 & **35.25** & **57.29** & N/A & 31.12 \\ \hline RBz-0shot & **MIMIC-CXR** & **25.32** & **47.48** & **63.61** & **74.34** & **49.00** \\ RBz-ft & & 16.16 & 26.16 & 52.22 & 53.1 & 31.07 \\ \end{tabular} \end{table} Table 4: Results for the different domain adaptation paradigms on the open test splits of shared task 1B. The experimental setup is the same for all methods, i.e., the same train/validation/test split of the medical reports was used. RBz-0shot: RadBloomz zero-shot, RBz-ft: RadBloomz finetuned. \begin{table} \begin{tabular}{l|c|c c c c c} **Team** & **hidden testset** & **BLEU4** & **ROUGE-L** & **BertScore** & **F1-cheXbert** & **F1-RadGraph** \\ \hline **shs-nlp** & & **18.36** & **35.32** & **57.26** & N/A & **36.94** \\ utsa-nlp & & 16.05 & 34.41 & 57.08 & N/A & 36.31 \\ aimi & MIMIC-III & 16.61 & 33.43 & 55.54 & N/A & 35.12 \\ sinai & & 17.38 & 32.32 & 55.04 & N/A & 33.96 \\ knowlab & & 13.23 & 32.02 & 55.64 & N/A & 33.39 \\ \hline dmis-msra & & **18.62** & **34.57** & **55.90** & **72.36** & **43.20** \\ utsa-nlp & & 16.33 & 34.97 & 55.54 & 69.41 & 42.86 \\ knowlab & MIMIC-CXR & 14.41 & 33.63 & 54.72 & 67.20 & 39.98 \\ **shs-nlp** & & 14.59 & 32.43 & 53.99 & 68.99 & 38.40 \\ aimi & & 5.15 & 31.84 & 47.83 & 64.18 & 32.05 \\ \end{tabular} \end{table} Table 3: Performance of the top-5 submitted systems on the two categories of hidden test data of shared task 1B at BioNLP 2023. shs-nlp is our RadBloomz system. The hidden MIMIC-III test set includes only reports, while MIMIC-CXR includes reports and images. Our system is text-based, and thus MIMIC-III is the more appropriate evaluation; there our system ranks 1st among participating systems. ### Limitations There are a few limitations pertaining to the training data we used. Some of them are listed below. 1. Our domain adaptation of LLMs was performed on English reports only, and therefore may not work out of the box in a multilingual setting.
2. There is data imbalance with respect to the imaging modalities and anatomies covered by our training data. For example, regions like extremities, neck, spine and shoulder are underrepresented in the dataset, and report summarization related to those regions needs to be thoroughly evaluated.

3. There needs to be a study on the diversity of the patients represented in the data, and how it impacts the performance of the model for underrepresented communities.

4. Different radiologists (and radiology departments) have different preferences and styles of writing reports. In addition, clinical referrals sometimes dictate to what extent some details are documented in the report. There was no study on the consistency, uncertainty or information richness of the reports.

Aside from the training data, the model's memory footprint and inference latency may make it unsuitable for on-premise and/or at-the-edge applications. This aspect offers an opportunity for further work on how best to quantize and deploy RadBloomz (and similar LLMs) within the clinical workflow towards improved efficiency for radiologists.

## Ethics Statement

The research performed in this paper adheres to the Association for Computing Machinery (ACM) Code of Ethics and Professional Conduct3 adopted by the Association for Computational Linguistics (ACL). To prevent any harm caused by errors in our model-generated outputs, our models are meant to be deployed in a human-in-the-loop setting where the key information extracted by our models is reviewed by radiologists and physicians.

Footnote 3: https://www.acm.org/code-of-ethics

## Disclaimer

The concepts and information presented in this paper/presentation are based on research results that are not commercially available. Future commercial availability cannot be guaranteed.
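For readers who want to reproduce the decoding behaviour described in Section 4, the following is a minimal sketch, assuming the HuggingFace `transformers` API; the public `bigscience/bloomz-7b1` checkpoint is used as a stand-in for RadBloomz (whose adapted weights are not distributed here), and the prompt wording is an illustrative placeholder rather than the exact instruction used in the experiments.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint: the base instruction-tuned model, not RadBloomz itself.
model_name = "bigscience/bloomz-7b1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# During domain-adaptive pretraining, all parameters except the last
# transformer block are frozen to limit catastrophic forgetting.
for param in model.parameters():
    param.requires_grad = False
for param in model.transformer.h[-1].parameters():
    param.requires_grad = True

# Zero-shot impression generation with the sampling hyperparameters of
# Section 4: top_k=50, top_p=0.7, at most 128 new tokens.
findings = "Mild mucosal thickening of the maxillary sinuses. No acute fracture."
prompt = f"Summarize the radiology findings into an impression.\nFindings: {findings}\nImpression:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(
        **inputs,
        do_sample=True,        # sampling-based decoding
        top_k=50,
        top_p=0.7,
        max_new_tokens=128,
        pad_token_id=tokenizer.eos_token_id,
    )
impression = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(impression)
```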
2303.09197
Integrating Temporality and Causality into Acyclic Argumentation Frameworks using a Transition System
In the context of abstract argumentation, we present the benefits of considering temporality, i.e. the order in which arguments are enunciated, as well as causality. We propose a formal method to rewrite the concepts of acyclic abstract argumentation frameworks into an action language, that allows us to model the evolution of the world, and to establish causal relationships between the enunciation of arguments and their consequences, whether direct or indirect. An Answer Set Programming implementation is also proposed, as well as perspectives towards explanations.
Y. Munro, C. Sarmiento, I. Bloch, G. Bourgne, M. -J. Lesot
2023-03-16T10:13:47Z
http://arxiv.org/abs/2303.09197v2
# Temporality and Causality in Abstract Argumentation

Yann Munro\({}^{*}\) Camilo Sarmiento\({}^{*}\) Isabelle Bloch Gauvain Bourgne Marie-Jeanne Lesot

Sorbonne Université, CNRS, LIP6, Paris, France

{firstname.surname}@lip6.fr

\({}^{*}\)These authors contributed equally to this work.

###### Abstract

In the context of abstract argumentation, we present the benefits of considering temporality, i.e. the order in which arguments are enunciated, as well as causality. We propose a formal method to rewrite the concepts of acyclic abstract argumentation frameworks into an action language, that allows us to model the evolution of the world, and to establish causal relationships between the enunciation of arguments and their consequences, whether direct or indirect. An Answer Set Programming implementation is also proposed, as well as perspectives towards explanations.

## 1 Introduction

The abstract argumentation framework (AAF), first introduced in [1], provides a suitable framework for representing and reasoning about contradictory information. It makes it possible to find sets of arguments that can be accepted together and provides explanations of why such sets have been accepted or not. Thus, AAF provides convenient tools to model and reason about debates. However, it is a static framework which does not include the notion of temporality that seems crucial for modelling dialogues. To address this issue, considering an AAF for each time step could be an option, but it might be expensive to create and compare AAFs at every time step. On the other hand, action languages offer tools to reason about action and change and have been naturally conceived to include the notion of time. The action language introduced by [1] has been designed to determine the evolution of the world given a set of actions corresponding to deliberate choices of the agent, the occurrence of which can trigger a chain reaction through external events. We choose this action language for three reasons. First, it allows for concurrency of events. Other languages also offering this advantage, such as \(\mathcal{C}\) [1] or PDDL+ [15], are adapted to non-deterministic or durative actions, which increases complexity and is not useful in our framework. Secondly, there exists a definition of actual causality that is suitable for this action language. Finally, [1] propose a sound and complete translation into ASP. We propose to take advantage of these properties to study the causal relations in a dialogue, paving the way for the search for explanations.

This paper is structured as follows. Section 2 briefly recalls the principles of abstract argumentation. Section 3 describes the chosen action language and the actual causality definition suitable for it. Section 4 proposes the main contributions of this paper: a formalisation of acyclic abstract argumentation graphs into an action language, with its corresponding implementation in ASP. Section 5 establishes its formal properties, including its soundness and completeness, as well as the relevance of the temporality inclusion. Section 6 illustrates the exploitation of the proposed formalisation to get enriched information, such as graphical representations and causal relations. Section 7 concludes the paper.

## 2 Abstract Argumentation Framework, AAF

This section briefly recalls the basics of [1]'s AAF.
**Definition 1**.: _An abstract argumentation framework (AAF) is a couple \(AF=(A,R)\), where \(A\) is a finite set of arguments and \(R\) is a set of attacks corresponding to a binary relation on \(A\times A\). An argument \(x\in A\) attacks \(y\in A\) if \((x,y)\in R\)._

As \(R\) is a binary relation with a finite support, an AAF can be represented using a graph.

**Example 1**.: _To illustrate these notions, we introduce an argumentative scenario modelling the interaction between a requesting physician, D, and a radiologist, R, concerning an examination of an \(n\)-month-old baby for pathology Z._

_D: Can you do an X-ray scanner (CT) for this baby? (a)_
_R: It is better for a baby to avoid ionising radiation. (b) I can suggest an MRI (magnetic resonance imaging) in two days' time. (c)_
_D: Can Z be seen on an MRI? (d)_
_R: Yes, of course. If you want confirmation, look at the guide to good radiology practice. (e)_
_D: But a baby might move, and so you might not be able to get the information you are looking for, because the image may be artefacted. (f)_
_R: Do not worry, I am used to doing MRIs for babies. (g)_
_D: Does it not cost the hospital a lot more to do an MRI? (h) I also have to check with the patient's family because it might cost them more. (i)_
_R: No problem here. The high cost includes the experience gained by my team, so that in the future this kind of delicate examination can be performed without me. (j)_
_D: I have just spoken to the family, no problem with the MRI, the exam is refunded. (k) However, the family is not comfortable with the idea of having to wait two days; could you not do the exam before? (l)_
_R: No, my schedule for today is already full. My next slot is in two days, as I told you. (m)_

_After this discussion, the decision is made to schedule an MRI in two days' time. But later that day, the doctor receives a call from the family saying that the baby is really not well and insisting on the urgency of the examination. Therefore, the doctor contacts the radiologist to add a final argument._

_D: It is very urgent for the baby, we need a place today! (n)_

_From this dialogue, we can manually extract arguments and their relations to create the AAF represented in Figure 1, with the following arguments: {**a**: Scanner, \(b\): Ionising radiation, **c**: MRI in two days, \(d\): Z not visible by MRI, \(e\): Z visible by MRI, \(f\): Difficult conditions, \(g\): High experience, \(h\): High cost for the hospital, \(i\): High cost for the patient, \(j\): Not problematic for the hospital, \(k\): The family is covered for an MRI, **l**: MRI today, \(m\): No availability today, \(n\): It is an emergency}. Arguments \(a,c,l\) are called the decision variables, their acceptance being the criterion triggering a decision: CT, MRI in two days, or MRI today. The obtained argumentation system is one possible graph that can be extracted from this dialogue. The extraction process can be done automatically using so-called argument mining methods [11]. Note that this is a static representation of the dialogue from which any notion of temporality has been erased. Thus, if the arguments had been stated in a different order, it would not change the graph. This will be important to address causality in Section 6.2._

Once an argumentation graph has been constructed, it is possible to reason on it to determine sets of arguments that can be considered as accepted. The _set of direct attackers_ of \(x\in A\) is denoted by \(Att_{x}=\{y\in A\mid(y,x)\in R\}\).
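Before turning to acceptability, these basic notions are easy to make concrete. The following minimal Python sketch encodes Definition 1 and the set of direct attackers; the three-argument graph is a made-up illustration, not the graph of Example 1 (whose attack relation is only given in Figure 1).

```python
# An AAF as a pair (A, R): a finite set of arguments and a binary attack relation.
A = {"x", "y", "z"}
R = {("y", "x"), ("z", "y")}  # y attacks x; z attacks y

def direct_attackers(arg, R):
    """Att_arg = {y in A | (y, arg) in R}."""
    return {y for (y, attacked) in R if attacked == arg}

print(direct_attackers("x", R))  # {'y'}
print(direct_attackers("z", R))  # set(): z is unattacked
```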
A set \(S\) is _conflict-free_ if \(\forall(x,y)\in S^{2}\), \((x,y)\notin R\). An argument \(x\in A\) is _acceptable_ by \(S\) if \(\forall y\in Att_{x},\exists z\in S\cap Att_{y}\). Then an _admissible set_\(S\) is defined as a conflict-free set whose elements are all acceptable by \(S\) itself. For an acyclic graph, this is the only extension-based semantics as all others coincide with it and are therefore not discussed here [1]. **Example 1**.: _(continued) - The argument graph is acyclic. To determine the set of acceptable arguments, it is sufficient to start from the non-attacked arguments, here \(b,g,j,k,n\). They are accepted by default. Then, an argument attacked by at least one accepted argument cannot be accepted. By applying this rule, we obtain that argument \(l\) is accepted, in contrast to \(a\) and \(c\). Therefore, the final decision is to perform an emergency MRI today._ ## 3 Action Language and Causality This section introduces the formal aspects of the action language proposed by [1] and then briefly describes what is considered to be an actual cause in this formalism. For more details refer to [1]. ### Syntax and Semantics The purpose of the action language introduced by [1] is to determine the evolution of the world given a set of actions corresponding to deliberate choices of the agent. These actions might trigger some chain reaction through external events. Therefore, in order to have a complete knowledge of the evolution of the world, [1] keep track of both: the evolution of the states of the world and the occurrence of events. Hence, we denote by \(\mathbb{F}\) the set of variables describing the state of the world, more precisely _ground fluents_ representing time-varying properties, and by \(\mathbb{E}\) the set of variables describing transitions, more precisely _ground events_ that modify fluents. A _fluent literal_ is either a fluent \(f\in\mathbb{F}\), or its negation \(\neg f\). The set of fluent literals in \(\mathbb{F}\) is denoted by \(\mathit{Lit}_{\mathbb{F}}\), i.e. \(\mathit{Lit}_{\mathbb{F}}=\mathbb{F}\cup\{\neg f\mid f\in\mathbb{F}\}\). The complement of a fluent literal \(l\) is defined as \(\overline{l}=\neg f\) if \(l=f\) or \(\overline{l}=f\) if \(l=\neg f\). **Definition 2** (state).: _A set \(L\subseteq\mathit{Lit}_{\mathbb{F}}\) is a state if it is:_ * _Coherent:_ \(\forall l\in L,\overline{l}\not\in L\)_;_ * _Complete:_ \(\forall f\in\mathbb{F},f\in L\) _or_ \(\neg f\in L\)_._ A state thus gives the value of each of the fluents describing the world. Time is modelled linearly and in a discrete way to associate a state \(S(t)\) to each time point \(t\) of a set \(\mathbb{T}=\{-1,0,\ldots,N\}\). \(S(0)\) is the _initial state_. Using a bounded past formalisation, all states before \(t=0\) are gathered in a state \(S(-1)=\mathbb{F}\setminus S(0)\). An event \(e\in\mathbb{E}\) is an atomic formula. Each event is characterised by three elements: preconditions and triggering conditions give conditions that must be satisfied by a state \(S\) for the event to be triggered (their difference is detailed later on); effects indicate the changes to the state that are expected if the event occurs. Note the deliberate use of the term 'expected' as an event may have less effects than those formalised. The preconditions and effects are represented as formulas of the languages \(\mathcal{P}:=l|\psi_{1}\wedge\psi_{2}|\psi_{1}\vee\psi_{2}\) and \(\mathcal{E}:=l|\varphi_{1}\wedge\varphi_{2}\), respectively. 
The functions which associate preconditions, triggering conditions and effects with each event are respectively defined as: \(pre:\mathbb{E}\rightarrow\mathcal{P}\), \(tri:\mathbb{E}\rightarrow\mathcal{P}\), and \(\mathit{eff}:\mathbb{E}\rightarrow\mathcal{E}\).

Figure 1: Argumentation graph associated with Example 1.

\(\mathbb{E}\) is partitioned into two disjoint sets: \(\mathbb{A}\) contains the actions carried out by an agent and thus subjected to their volition; \(\mathbb{U}\) contains the exogenous events which are triggered as soon as all the \(pre\) conditions are fulfilled, therefore without the need for an agent to perform them. Thus, for exogenous events \(pre\) and \(tri\) are the same. By contrast, for actions, \(tri\) conditions necessarily include \(pre\) conditions but those are not sufficient: the \(tri\) conditions of an action also include the volition of the agent or some kind of manipulation by another agent. The set of all events which occur at time point \(t\) is denoted by \(E(t)\). Allowing concurrency of events (meaning that more than one event can occur at each time point) is one of the main advantages of this action language. These definitions lead to a classical transition system: \(E(t)\) generates the transition between the states \(S(t)\) and \(S(t+1)\). Thus, the states follow one another as events occur, simulating the evolution of the world. With a bounded past formalisation, events that occurred before \(t=0\) must be represented in order to obtain causal results that are consistent with the philosophical conception of causality. Thus, for each fluent literal \(l\in S(0)\) an event \(ini_{l}\in\mathbb{E}\) is introduced, such that \(\mathit{eff}(ini_{l})=l\). Then, \(E(-1)=\{ini_{l}\mid l\in S(0)\}\), which satisfies \(\mathit{eff}(E(-1))=S(0)\). To solve potential conflicts or to prioritise between events, a strict partial order \(\succ_{\mathbb{E}}\) is introduced, which ensures the triggering primacy of one event over another.

**Definition 3** (context \(\kappa\)).: _The context, denoted as \(\kappa\), is the octuple \((\mathbb{E},\mathbb{F},pre,tri,\mathit{eff},S(0),\succ_{\mathbb{E}},\mathbb{T})\), where \(\mathbb{E}\), \(\mathbb{F}\), \(pre\), \(tri\), \(\mathit{eff}\), \(S(0)\), \(\succ_{\mathbb{E}}\), and \(\mathbb{T}\) are as defined above._

**Definition 4** (valid execution).: _An execution is a sequence \(E(-1),S(0),E(0),\ldots,E(N),S(N+1)\). Such an execution is valid given \(\kappa\) if \(\forall t\in\mathbb{T}\):_

1. _\(S(t)\subseteq\mathit{Lit}_{\mathbb{F}}\) is a state according to Definition 2._
2. _\(E(t)\subseteq\mathbb{E}\) verifies:_
   a. _\(\forall e\in E(t)\), \(S(t)\models pre(e)\);_
   b. _\(\nexists(e,e^{\prime})\in E(t)^{2}\), \(e\succ_{\mathbb{E}}e^{\prime}\);_
   c. _\(\forall e\in\mathbb{E}\) such that \(S(t)\models tri(e)\), \(e\in E(t)\) or \(\exists e^{\prime}\in E(t)\), \(e^{\prime}\succ_{\mathbb{E}}e\)._
3. _\(S(t+1)=\{l\in S(t)\mid\forall e\in E(t),\ \overline{l}\notin\mathit{eff}(e)\}\cup\{l\in\mathit{Lit}_{\mathbb{F}}\mid\exists e\in E(t),\ l\in\mathit{eff}(e)\}\)._

There is potentially more than one valid execution for a given context \(\kappa\). Adding a set of timed actions \(\sigma\subseteq\mathbb{A}\times\mathbb{T}\) as an input, called _scenario_, leads to a unique valid execution [10]. From this unique execution, the event trace and the state trace we are interested in, denoted by \(\tau^{e}_{\sigma,\kappa}\) and \(\tau^{s}_{\sigma,\kappa}\) respectively, can be extracted.
**Definition 5** (traces \(\tau^{e}_{\sigma,\kappa}\) and \(\tau^{s}_{\sigma,\kappa}\)).: _Given a scenario \(\sigma\) and a context \(\kappa\), the event trace \(\tau^{e}_{\sigma,\kappa}\) is the sequence of events \(E(-1),E(0),\ldots,E(N)\) from the execution which is valid given \(\kappa\), such that: \(\forall t\in\mathbb{T},\forall e\in E(t),e\in\mathbb{A}\Leftrightarrow(e,t)\in\sigma\). The state trace \(\tau^{s}_{\sigma,\kappa}\) is the sequence of states \(S(0),S(1),\ldots,S(N+1)\) corresponding to \(\tau^{e}_{\sigma,\kappa}\)._ ### Actual Causality The actual causation definition proposed by [10] is an action language suitable formalisation of Wright's NESS test. Introduced by [11], this test states that: 'A particular condition was a cause of a specific consequence if and only if it was a necessary element of a set of antecedent actual conditions that was sufficient for the occurrence of the consequence.' A causal relation links a cause to an effect. Since action languages represent the evolution of the world as a succession of states produced by the occurrence of events, states are introduced between events. Therefore, in addition to the actual causality relation that links two occurrences of events, as commonly accepted by philosophers, it is necessary to define causal relations where causes are occurrences of events and effects are formulas of the language \(\mathcal{P}\) that are true at a given time. These intermediate relations are established on the basis of Wright's NESS test of causation. In order to give an actual causality definition suitable for action languages, three causal relations are introduced by [10]: (i) _Direct NESS-causes_ give essential information about causal relations by looking at the effects that the occurrence of an event has actually had, which are not necessarily the same as those expected. Direct NESS-causes relate occurrences of events and formulas of \(\mathcal{P}\) being true at a specific time point. However, the set of direct NESS-causes of a formula of \(\mathcal{P}\) may include exogenous events that are not necessarily relevant. It is therefore essential to establish a causal chain by going back in time in order to find the set of actions that led to the formula truthfulness. (ii) _NESS-causes_ allow for such a causal chain to be found. If we denote by \(\psi\in\mathcal{P}\) the formula true at \(t_{\psi}\) we are interested in, and \(C\) the set of direct NESS-causes of \((\psi,t_{\psi})\), finding the NESS-causes means finding what causes \((tri(C),t)\) necessarily, where \(t<t_{\psi}\). Note that direct NESS-causes are by definition a special case of NESS-causes. (iii) The occurrence of a first event \(e\) is considered an _actual cause_ of the occurrence of a second event \(e^{\prime}\) if and only if the occurrence of \(e\) is a NESS-cause of the triggering of \(e^{\prime}\). From this we can deduce that, if the occurrence \((e^{\prime},t_{2})\) is a direct NESS-cause of \((\psi,t_{3})\) and the occurrence \((e,t_{1})\) is an actual cause of \((e^{\prime},t_{2})\), with \(t_{1}<t_{2}<t_{3}\), then the occurrence \((e,t_{1})\) is a NESS-cause of \((\psi,t_{3})\). These three causal relations are illustrated using Example 1 in Section 6.2. ## 4 From AAF to Action Languages This section presents our first contribution: a formalisation of acyclic AAF into the action language introduced above. 
Section 4.1 presents the definition of the argumentative context \(\kappa\), Section 4.2 provides the modified definitions of the action language semantics, and Section 4.3 briefly sketches the structure of the ASP implementation. In contrast to AAF, we propose to take into account the order of enunciation of arguments. Instead of having only a couple \((A,R)\), the input is a couple \((\Delta,R)\), where \(\Delta\) is a dialogue, i.e. a sequence of statements in natural language: **Definition 6** (dialogue \(\Delta\)).: _A dialogue is \(\Delta=\{(a,o)\mid a\in A,o\in\mathbb{N}\}\), where each argument \(a\) is associated to its order of enunciation, \(o\)._ ### Instantiating the Context In order to formalise an AAF in the action language described in Section 3, let us first define the variables necessary to describe the world, i.e. the AAF. These variables correspond to the fluents \(\mathbb{F}\). As introduced in Section 2, there are two elements to consider: the arguments and the attack relation. First, to describe an argument \(x\), we create two fluents: \(p_{x}\in\mathbb{F}\) and \(a_{x}\in\mathbb{F}\) expressing whether the argument is present in the graph and whether it is acceptable. Regarding \(R\), we use the fluent \(cA_{y,x}\in\mathbb{F}\) to model that \(y\) can attack argument \(x\). As we only deal with acyclic AAF, \(\nexists(x_{1},\ldots,x_{n})\in A\) such that \(\big{(}cA_{x_{1},x_{2}},\ldots,cA_{x_{n-1},x_{n}},cA_{x_{n},x_{1}}\big{)}\in \mathbb{F}\). We call this property acyclicity of the fluents \(cA\). In an AAF, the only deliberate action is to enunciate an argument, which leads to \(\mathbb{A}=\{enunciate_{x}\mid x\in A\}\). For this action to be possible, argument \(x\) must not have already been said. \(x\) then becomes present and acceptable by default. This choice is justified by the fact that its acceptability is evaluated in the next state before it has an impact on the rest of the graph. Formally: \[pre(enunciate_{x})\equiv\neg p_{x}\] \[\mathit{eff}(enunciate_{x})\equiv p_{x}\wedge a_{x}\] Before enunciating the next argument, we choose to update the acceptability of all other arguments present after the enunciation of a new argument. This defines a state which we call _argumentative state_. **Definition 7** (argumentative state).: _A state \(S(t)\) is an argumentative state if: i) \(\forall x,y\), \([S(t)\models a_{x}\wedge p_{y}\wedge cA_{y,x}\Rightarrow S(t)\models\neg a_{y}]\); ii) \(\forall x,\Big{[}S(t)\models p_{x}\land\Big{(}\bigwedge_{y}\neg a_{y}\lor \neg cA_{y,x}\Big{)}\Rightarrow S(t)\models a_{x}\Big{]}\)._ After an argument is enunciated, we want updates to be triggered automatically. We represent them with two exogenous events: \(makesUnacc_{y,x}\in\mathbb{U}\) and \(makesAcc_{x}\in\mathbb{U}\). An argument is acceptable only if it is unattacked or attacked only by unacceptable arguments. Hence, it is enough for one of the attackers to be acceptable to make the attacked argument unacceptable. The two cases are considered: _Acceptability update:_ Suppose that an argument \(y\) just enunciated can attack argument \(x\), and that \(x\) and \(y\) are acceptable. Then, \(x\) being attacked by an acceptable argument \(y\), it becomes unacceptable. 
Formally, the exogenous event \(makesUnacc_{y,x}\) can be written as: \[\begin{split} tri(makesUnacc_{y,x})\equiv& a_{x} \wedge a_{y}\wedge cA_{y,x}\\ \mathit{eff}(makesUnacc_{y,x})\equiv&\neg a_{x} \end{split}\] This definition also allows dealing with cases where a new argument \(z\) makes an attacker \(y\) of \(x\) acceptable again. In this case, \(x\) becomes unacceptable. _Non-acceptability update:_ Suppose that argument \(x\) is not acceptable and that an argument \(z\) has just been enunciated. This argument has no direct link with \(x\) but may impact the acceptability of some attackers of \(x\). We therefore check whether all the arguments that can attack \(x\) are acceptable or not. If none of them are indeed acceptable, then \(x\) becomes acceptable again. In the action language, this is expressed by the exogenous event \(makesAcc_{x}\): \[\begin{split} tri(makesAcc_{x})\equiv& p_{x}\land\neg a _{x}\land\left(\bigwedge_{y}\neg cA_{y,x}\lor\neg a_{y}\right)\\ \mathit{eff}(makesAcc_{x})\equiv& a_{x}\end{split}\] Finally, when an argument \(x\) is enunciated, it must be checked that it has not become unacceptable because of an argument \(y\) already present before it makes other arguments unacceptable. This is reflected in the following priority rule: \[makesUnacc_{y,x}\succ_{\mathbb{E}}makesUnacc_{x,z}\] Note that adding an argument to the graph can only directly impact the other arguments by making them unacceptable. For this reason, it is not necessary to establish a priority rule of the form \(makesUnacc_{y,x}\succ_{\mathbb{E}}makesAcc_{z}\) as this situation is already addressed by the previous rule. **Remark -** In the above transformation, we do not distinguish between the notions of potential and real attack, because such a difference disappears in the equations. Indeed, let us consider a fluent \(att_{y,x}\in\mathbb{F}\) translating the fact that argument \(y\) actually attacks argument \(x\). Let us define the exogenous event \(isAttacking_{y,x}\in\mathbb{U}\) as: \[\begin{split} tri(isAttacking_{y,x})\equiv& p_{x}\wedge p_{y}\wedge cA_{y,x}\\ \mathit{eff}(isAttacking_{y,x})\equiv& att_{y,x}\end{split}\] From this definition, an argument \(y\) attacks an argument \(x\) if both are present and \(y\) can attack \(x\). However, for this attack to be taken into account, the attacker \(y\) must be acceptable. We obtain conditions of the form \(a_{y}\wedge att_{y,x}\), i.e. \(a_{y}\wedge p_{y}\wedge cA_{y,x}\). However, an argument cannot be acceptable without being present, i.e. \(a_{y}\wedge p_{y}\equiv a_{y}\). Thus, taking this new fluent into account, we would have: \(tri(makesUnacc_{y,x})\equiv a_{y}\wedge a_{x}\wedge att_{y,x}=a_{y}\wedge a_{x} \wedge cA_{y,x}\). The same precondition applies as without the introduction of \(isAttacking\) and \(att\). Therefore, we use only \(cA\). ### Semantics Adapted to AAF Having an adapted \(\kappa\) for the argumentative framework, we propose to modify the action language semantics to produce traces that are representative of the reality. For this purpose, arguments will be stated from argumentative states step by step in the order determined by the dialogue \(\Delta\). The current form of scenario \(\sigma\) is not ideal for this task. Indeed, it implies that we need to know in advance how many steps each chain of admissibility update events will take to plan at which time the next argument should be stated. 
To solve this issue we introduce a set of ranked actions \(\varsigma\subseteq\mathbb{A}\times\mathbb{N}\) which is called _sequence_. The input to obtain unique traces will no longer be the scenario \(\sigma\) but the sequence \(\varsigma\). These modifications require changes to Definitions 4 and 5. **Definition 8** (argumentative setting \(\chi\)).: _The argumentative setting of the action language, denoted by \(\chi\), is the couple \((\varsigma,\kappa)\) with \(\varsigma\) a sequence and \(\kappa\) a context._ Definition 9 is the result obtained after modifying Definition 4. Conditions 2.d and 2.e are added and \(\forall e\in\mathbb{E}\) is replaced by \(\forall e\in\mathbb{U}\) in condition 2.c. These modifications respectively express that an action in the sequence can be triggered only if no exogenous event is triggered at the same time point, and that event sets in the event trace cannot be empty. Conditions 1, 2.a, 2.b, and 3 remain unchanged. So the triggering of exogenous events remains unchanged. **Definition 9** (valid execution in an argumentative context).: _Given an argumentative context \(\kappa\), a sequence \(E(-1),S(0),E(0),\ldots,E(N),S(N+1)\) is a valid execution w.r.t. \(\kappa\) if, in addition to conditions 1, 2.a, 2.b, and 3 of Definition 4, the following conditions are satisfied \(\forall t\in\mathbb{T}\):_ * \(E(t)\subseteq\mathbb{E}\) _satisfies:_ * \(\forall e\in\mathbb{U}\) _such that_ \(S(t)\models tri(e)\)_,_ \(e\in E(t)\) _or_ \(\exists e^{\prime}\in E(t),\ e^{\prime}\succ_{\mathbb{E}}e\)_;_ * _If_ \(\exists e\in E(t)\cap\mathbb{A}\)_, then_ \(\forall e^{\prime}\in\mathbb{U}\)_,_ \(S(t)\not\models tri(e^{\prime})\)_;_ * \(E(t)\neq\varnothing\)_._ In Definition 5 traces were defined as extracts of a valid execution given \(\kappa\) and additional conditions related to \(\sigma\). Instead of defining directly traces, Definition 10 corresponds to a valid execution given \(\chi=(\varsigma,\kappa)\). Traces are simply extracts from such valid executions. **Definition 10** (valid execution given \(\chi\)).: _Given an argumentative setting \(\chi=(\varsigma,\kappa)\), a valid execution w.r.t. \(\kappa\), is valid w.r.t. to \(\chi\) if:_ * \(\forall t\in\mathbb{T}\)_,_ \(E(t)\subset(\{a,\exists o\in\mathbb{N},(a,o)\in\varsigma\}\cup\mathbb{U})\)_;_ * \(\forall\left((e,o)\,,(e^{\prime},o^{\prime})\right)\in\varsigma^{2}\) _such that_ \(o<o^{\prime}\)_,_ \(\exists t,t^{\prime}\) _such that_ \(e\in E(t)\) _and_ \(e^{\prime}\in E(t^{\prime})\) _and_ \(t<t^{\prime}\)_;_ * \(\forall\left((e,o)\,,(e^{\prime},o^{\prime})\right)\in\varsigma^{2}\) _such that_ \(o=o^{\prime}\)_,_ \(\exists t\) _such that_ \((e,e^{\prime})\in E(t)^{2}\)_._ Given a valid execution given \(\chi\), its _event trace_\(\tau^{e}_{\chi}\) is its sequence of events \(E(-1),E(0),\ldots,E(N)\), its _state trace_\(\tau^{s}_{\chi}\) is its sequence of states \(S(0),S(1),\ldots,S(N+1)\). ### ASP Implementation We propose an adapted implementation in ASP based on the sound and complete one described in [1]. The ASP program \(\pi_{con}(\kappa)\) and \(\pi_{seq}(\varsigma)\) are obtained by the translation of the context \(\kappa\) and the sequence \(\varsigma\) respectively. \(\pi_{\mathbb{A}}\) is obtained by the translation of the action language semantics introduced in Section 3.1 and modified in Section 4.2. \(\pi_{\mathbb{C}}\) is obtained by the translation of the causal relations definitions introduced by [1]. 
The entire program \(\Pi(\chi)=\pi_{sec}(\varsigma)\cup\pi_{con}(\kappa)\cup\pi_{\mathbb{A}}\cup \pi_{\mathbb{C}}\) is available1. Footnote 1: [https://gitlab.lip6.fr/sarmiento/kr_2023.git](https://gitlab.lip6.fr/sarmiento/kr_2023.git) ## 5 Formal Properties This section establishes formal properties of the proposed transformation. First, we prove that a notion of temporality is captured by the transformation. Then we prove its soundness and completeness. Finally, we introduce the propositions that pave the way to the discussion of Section 6. ### Preliminary Property on a Valid Execution We start by showing that, although valid executions given \(\kappa\) are not unique, valid executions given \(\chi\) are, and thus are the corresponding traces \(\tau^{e}_{\chi}\) and \(\tau^{s}_{\chi}\). **Proposition 1**.: _Given an argumentative setting \(\chi=(\varsigma,\kappa)\), the traces \(\tau^{e}_{\chi}\) and \(\tau^{s}_{\chi}\) are unique._ Proof.: Let us prove by contradiction the unicity of valid executions given \(\chi\). Let \(\chi=(\varsigma,\kappa)\) be the argumentative setting and \(\epsilon\), \(\epsilon^{\prime}\) two valid executions given \(\chi\). By way of a reductio ad absurdum, we suppose that \(\epsilon\neq\epsilon^{\prime}\). According to Definition 4, \(S(t+1)\) is derived from \(S(t)\) and the events in \(E(t)\), and similarly for \(S^{\prime}\). Hence, given that \(\kappa\) is common to \(\epsilon\) and \(\epsilon^{\prime}\), \(E(-1)=E^{\prime}(-1)\) and \(S(0)=S^{\prime}(0)\), the first discrepancy between \(\epsilon\), and \(\epsilon^{\prime}\) is not to be found in a set of states, but in a set of events, which is not empty as the executions are valid. Let \(t_{0}\) be the minimal date at which a difference between \(\epsilon\) and \(\epsilon^{\prime}\) is observed. We have \(E(t_{0})\neq E^{\prime}(t_{0})\), \(\forall t<t_{0}\), \(E(t)=E^{\prime}(t)\), and \(\forall t\leq t_{0}\), \(S(t)=S^{\prime}(t)\). Thus, \(\forall e\in\mathbb{E}\), \(S(t_{0})\models pre(e)\Leftrightarrow S^{\prime}(t_{0})\models pre(e)\). Without loss of generality, let us consider an event \(e_{0}\) such that \(e_{0}\not\in E(t_{0})\) and \(e_{0}\in E^{\prime}(t_{0})\). Two cases can occur, \(e_{0}\in\mathbb{U}\) or \(e_{0}\in\mathbb{A}\). i) Let us first show by contradiction that \(e_{0}\not\in\mathbb{U}\). Let us suppose \(e_{0}\in\mathbb{U}\). As \(e_{0}\in E^{\prime}(t_{0})\) and \(S(t_{0})=S^{\prime}(t_{0})\), \(S(t_{0})\models tri(e_{0})\). Then from 2.c in Definition 9, \(e_{0}\not\in E(t_{0})\) implies \(\exists e\in E(t_{0})\) such that \(e\succ_{\mathbb{E}}e_{0}\). Then from 2.b applied to \(E^{\prime}(t_{0})\), we get \(e\not\in E^{\prime}(t_{0})\). Now, either \(e\in\mathbb{A}\) or \(e\in\mathbb{U}\). If \(e\in\mathbb{A}\), as \(e\in E(t_{0})\), condition 2.d would imply that \(S^{\prime}(t_{0})\not\models tri(e_{0})\) which contradicts our assumption. In the second case, if \(e\in\mathbb{U}\), then \(e\in E(t_{0})\) and \(e\not\in E^{\prime}(t_{0})\) because of the same reasons behind \(e_{0}\not\in E(t_{0})\) and \(e_{0}\in E^{\prime}(t_{0})\). If we apply the same reasoning to \(e\), we get \(\exists e^{\prime}\in\mathbb{U}\) such that \(e^{\prime}\in E(t_{0})\), \(e^{\prime}\not\in E^{\prime}(t_{0})\), and \(e^{\prime}\succ_{\mathbb{E}}e\). This reasoning can be repeated again on \(e^{\prime}\) and so on. As \(\mathbb{U}\) is finite, either the chain will be broken, or an event will be used a second time. 
The first case means \(\exists\tilde{e}\in\mathbb{U}\) such that \(\tilde{e}\not\in E(t_{0})\) and \(\tilde{e}\in E^{\prime}(t_{0})\) is false, which makes all the chain false. In the second case, by transitivity we get \(\tilde{e}\succ_{\mathbb{E}}\tilde{e}\). This leads to a contradiction as \(\succ_{\mathbb{E}}\) is a strict partial order. Thus, \(e_{0}\in\mathbb{U}\) is not possible. ii) As from (i) \(e_{0}\in\mathbb{A}\), by condition 1. in Definition 10, \(e_{0}\in E^{\prime}(t_{0})\) implies \(e_{0}\in\{a,\exists o\in\mathbb{N},(a,o)\in\varsigma\}\). \(\varsigma\) being the same for \(\epsilon\) and \(\epsilon^{\prime}\), the rank \(o_{0}\in\mathbb{N}\) associated to \(e_{0}\) is the same. Hence, given that \(\forall t<t_{0}\), \(e(t)=\epsilon^{\prime}(t)\) and \(e_{0}\in E^{\prime}(t_{0})\), \(e_{0}\not\in E^{\prime}(t)\) implies \(e_{0}\not\in E(t)\). The only possibility left is the procrastination of actions. In the case where \(\ tively. Thus, the set of all events which actually occurred at time point \(t\) is \(E^{\mathrm{\chi}}(t)=\tau_{\mathrm{\chi}}^{\varepsilon}(t)\). Following the same reasoning, the actual state at time point \(t\) is \(S^{\mathrm{\chi}}(t)=\tau_{\mathrm{\chi}}^{\varepsilon}(t)\). ### Soundness and Completeness In this section, we establish the soundness and completeness of our transformation. For that, we first introduce the notion of associated graph as follows: **Definition 11**.: _Given a state \(S^{\mathrm{\chi}}(t)\), \(AE^{\prime}=(A^{\prime},R^{\prime})\), where \(A^{\prime}=\{x\mid S^{\mathrm{\chi}}(t)\models p_{x}\}\) and \(R^{\prime}=\{(y,x)\mid S^{\mathrm{\chi}}(t)\models cA_{y,x}\}\), is called the associated graph of \(S^{\mathrm{\chi}}(t)\)._ From the acyclicity property of the fluent \(cA\), the associated graph is acyclic. Now, we focus on the notion of acceptability. We first characterise argumentative states using \(tri\). **Lemma 1**.: _Let \(S^{\mathrm{\chi}}(t)\) be a state. The two following propositions are equivalent:_ * \(\forall e\in\mathbb{U}\)_,_ \(S^{\mathrm{\chi}}(t)\not\models tri(e)\)__ * \(S^{\mathrm{\chi}}(t)\) _is an argumentative state as defined in Def._ 7_._ Proof.: In our context, \(\mathbb{U}=\{makesAcc,makesUnacc\}\). We prove that \(S^{\mathrm{\chi}}(t)\not\models tri(makesAcc_{x})\) is equivalent to (ii) of Definition 7 and \(S^{\mathrm{\chi}}(t)\not\models tri(makesUnacc_{y,x})\) is equivalent to (i) of Definition 7: for any \(x,y\) * \(-tri(makesAcc_{x})=\neg(p_{x}\wedge\neg a_{x}\wedge(\bigwedge_{y}\neg a_{y} \vee\neg cA_{y,x}))\)__ * \(-(p_{x}\wedge(\bigwedge_{y}\neg a_{y}\vee\neg cA_{y,x}))\lor a_{x}\)__ * \(p_{x}\wedge(\bigwedge_{y}\neg a_{y}\vee\neg cA_{y,x})\Rightarrow a_{x}\), which leads to the desired equivalence with (ii) in Definition 7. * \(-tri(makesUnacc_{y,x})=-(a_{x}\wedge a_{y}\wedge cA_{y,x})\)__ * \(-(a_{x}\wedge p_{y}\wedge a_{y}\wedge cA_{y,x})\)__as_ \(a_{y}\) _implies_ \(p_{y}\)__ * \(-(a_{x}\wedge p_{y}\wedge cA_{y,x})\vee\neg a_{y}\)__ * \(a_{x}\wedge p_{y}\wedge cA_{y,x}\Rightarrow\neg a_{y}\)__, which leads to the desired equivalence with (i) in Definition 7. An argumentative state can therefore be seen as a state where nothing happens until a voluntary action is made. Now we prove that it is always possible to reach such a state from an argumentative state in which an \(x\) is enunciated. 
**Proposition 2**.: _Given an argumentative state \(S^{\mathrm{\chi}}(t)\) and \(x\in A\), if \(enunciate_{x}\in E^{\mathrm{\chi}}(t)\), then \(\exists t^{\prime}\in\mathbb{T}\), \(t<t^{\prime}\) such that \(S^{\mathrm{\chi}}(t^{\prime})\) is an argumentative state._ Proof.: Given an argumentative state \(S^{\mathrm{\chi}}(t)\) and \(x\in A\) such that \(enunciate_{x}\in E(t)\), let us prove that \(\exists t^{\prime}\in\mathbb{T}\), \(t<t^{\prime}\) such that no trigger is a logical consequence of \(S^{\mathrm{\chi}}(t^{\prime})\), which leads to the desired result using Lemma 1. As \(\mathbb{U}\) and \(\{S^{\mathrm{\chi}}(t)\models cA_{y,x}\mid(x,y)\in A^{2}\}\) are finite sets, there is a finite number of possible triggering for \(E^{\mathrm{\chi}}(t)\). Moreover, as there is a finite number of arguments, there is a finite number of paths in the associated graph. The graph being acyclic, each path length is finite. Therefore, there is a finite maximum number of sets of events (\(M\)) and so \(\exists t^{\prime}\leq(t+M+1)\) such that \(\forall e\in\mathbb{U},S^{\mathrm{\chi}}(t^{\prime})\not\models tri(e)\). Finally, this proposition allows us to prove that an acceptable argument in the argumentative state is acceptable in the associated graph and vice-versa, as the triggering rules have been made in order to model how acceptability is computed. We first prove a useful lemma. **Lemma 2**.: _Given an argumentative state \(S^{\mathrm{\chi}}(t)\), for any \(x\), \(S^{\mathrm{\chi}}(t)\models p_{x}\wedge\neg a_{x}\Leftrightarrow S^{\mathrm{ \chi}}(t)\models\exists y,p_{x}\wedge a_{y}\wedge cA_{y,x}\)._ Proof.: \([\Rightarrow]:\) For any \(x\) such that \(S^{\mathrm{\chi}}(t)\models p_{x}\wedge\neg a_{x}\), (ii) of Definition 7 implies that \(S^{\mathrm{\chi}}(t)\models\neg p_{x}\vee(\bigvee_{y}a_{y}\wedge cA_{y,x})\). Therefore, as \(S^{\mathrm{\chi}}(t)\models p_{x},S^{\mathrm{\chi}}(t)\models\exists y,p_{x} \wedge a_{y}\wedge cA_{y,x}\). \([\Leftarrow]:\) Let us prove it by contradiction: let \(x_{0}\) be such that \(S^{\mathrm{\chi}}(t)\models\neg p_{x_{0}}\lor a_{x_{0}}\). If \(S^{\mathrm{\chi}}(t)\models\neg p_{x_{0}}\), then \(x_{0}\) is such that \(S^{\mathrm{\chi}}(t)\models\forall y,\neg p_{x_{0}}\vee\neg a_{y}\vee\neg cA_{y,x_{0}}\), which ends the proof. Otherwise, \(S^{\mathrm{\chi}}(t)\models p_{x_{0}}\wedge a_{x_{0}}\). If \(S^{\mathrm{\chi}}(t)\models p_{x_{0}}\wedge(\exists y,a_{y}\wedge cA_{y,x_{0}})\) then \(S^{\mathrm{\chi}}(t)=tri(makesUnacc_{y,x_{0}})\), which is not possible as \(S^{\mathrm{\chi}}(t)\) is argumentative. Thus both cases lead to a contradiction. The next proposition establishes the correspondence between acceptability in argumentative and argumentative states. **Proposition 3**.: _Given an argumentative state \(S^{\mathrm{\chi}}(t)\) and its associated graph \(AF=(A,R)\) according to Definition 11, then for any \(x\), \(x\in A\) acceptable by \(A\iff S^{\mathrm{\chi}}(t)\models a_{x}\)._ Proof.: \([\Rightarrow]:\) Let \(x_{0}\in A\) such that \(x_{0}\) is acceptable by A, let us prove that \(S^{\mathrm{\chi}}(t)\models a_{x_{0}}\). Let us suppose that \(S^{\mathrm{\chi}}(t)\models\neg a_{x_{0}}\). Moreover, by construction of AF \(S^{\mathrm{\chi}}(t)\models p_{x_{0}}\). \(S^{\mathrm{\chi}}(t)\) is an argumentative state so according to Lemma 2\(S^{\mathrm{\chi}}(t)\models p_{x_{0}}\wedge\neg a_{x_{0}}\Leftrightarrow S^{ \mathrm{\chi}}(t)\models\exists y,p_{x_{0}}\wedge a_{y}\wedge cA_{y,x_{0}}\). 
(i) of Definition 7 applied to \(a_{y}\) says that \(\forall z,S^{\mathrm{\chi}}(t)\models a_{y}\wedge p_{z}\wedge cA_{z,y}\Rightarrow S ^{\mathrm{\chi}}(t)\models\neg a_{z}\). As there is a finite number of arguments, it is possible to repeat the process we applied for \(x_{0}\) on \(z\) until one of the two scenarios: * \(S^{\mathrm{\chi}}(t)\models\nexists y,p_{z}\wedge a_{y}\wedge cA_{y,z}\). This leads to trigger the exogenous event \(makesAcc_{z}\) which is not possible as \(S^{\mathrm{\chi}}(t)\) is an argumentative state according to Lemma 1. * \(\forall z,S^{\mathrm{\chi}}(t)\models a_{y}\wedge p_{z}\wedge cA_{z,y}\Rightarrow S ^{\mathrm{\chi}}(t)\models\neg a_{z}\) where \(p_{z}\wedge cA_{z,y}\) is false. Then in \(AF\), \(Att_{y}=\emptyset\). Therefore, \(y\) is acceptable which contradicts that \(x_{0}\) is acceptable. So, \(S^{\mathrm{\chi}}(t)\models a_{x_{0}}\). \([\Leftarrow]:\) Let \(x_{0}\in A\) such that \(S^{\mathrm{\chi}}(t)\models a_{x_{0}}\). Let us prove that \(x_{0}\) is acceptable by \(A\). As \(S^{\mathrm{\chi}}(t)\models a_{x_{0}}\) and \(S^{\mathrm{\chi}}(t)\) is argumentative, we have that \(\forall y,S^{\mathrm{\chi}}(t)\models a_{x_{0}}\wedge p_{y}\wedge cA_{y,x_{0}} \Rightarrow S^{\mathrm{\chi}}(t)\models\neg a_{y}\). Then, for any \(y\) satisfying the premise, by definition of \(AF\), \((x_{0},y)\in A^{2}\) and \((y,x_{0})\in R\). If such a \(y\) is acceptable by \(A\), then according to \([\Rightarrow]\), \(S^{\mathrm{\chi}}(t)\models a_{y}\). In that case, \(S^{\mathrm{\chi}}(t)\models tri(makesUnacc_{y,x_{0}})\) which contradicts the fact that \(S^{\mathrm{\chi}}(t)\) is an argumentative state. So, as **Theorem 1** (Soundness and Completeness).: _Given a dialogue \(\Delta\) and a set of attack \(R\), given the argumentative setting \(\chi\), the associated argumentative graph \(AF^{\prime}\) of the final argumentative state \(S^{\chi}(t)\), and \(AF=(A,R)\) obtained from \((\Delta,R)\), it holds that \(AF^{\prime}=AF\)._ Proof.: As \(AF^{\prime}\) is associated to a final argumentative state, \(\forall x\in A\), \(\exists t^{\prime}\in\mathbb{T}\) such that \(t^{\prime}<t\) and \(enunciate_{x}\in E^{\chi}(t^{\prime})\). Now, \(eff(enunciate_{x})=p_{x}\wedge a_{x}\). So, \(A^{\prime}=A\). Moreover, by construction of \(cA_{y,x}\) and \(R^{\prime}\), \(R=R^{\prime}\). So \(AF=AF^{\prime}\). Finally, from Proposition 3 as \(S^{\chi}(t)\) is argumentative, \(\forall x\in A=A^{\prime},S^{\chi}(t)\models a_{x}\Leftrightarrow x\) is acceptable by \(A\). ### On Temporality and Causality The preliminary Proposition 1 highlights the fact that temporality is captured by the proposed transformation. Indeed, given an order of enunciation, as expressed by sequence \(\varsigma\), there exists a unique trace of states corresponding to a unique way to traverse a graph. When only given a context \(\kappa\), this unicity property does not hold. This section shows that this temporality does not impact the final argumentative state but impacts the causal relations. **Proposition 4**.: _Let \(\varsigma\) and \(\varsigma^{\prime}\) be sequences such that \(\varsigma^{\prime}\) is a permutation of the ranks of \(\varsigma\). 
Given the final argumentative states \(S^{\varsigma,\kappa}(t)\), \(S^{\varsigma^{\prime},\kappa}(t^{\prime})\), belonging to \(\tau^{s}_{\varsigma,\kappa}\) and \(\tau^{s}_{\varsigma^{\prime},\kappa}\), with \((t,t^{\prime})\in\mathbb{T}\times\mathbb{T}^{\prime}\), \(S^{\varsigma,\kappa}(t)=S^{\varsigma^{\prime},\kappa}(t^{\prime})\)._ Proof.: Let us call \(AF\) and \(AF^{\prime}\), the associated graphs of the final argumentative states \(S^{\varsigma,\kappa}(t)\) and \(S^{\varsigma^{\prime},\kappa}(t^{\prime})\), respectively. Given that they have the same actions in the sequence, then \(A=A^{\prime}\). They also share the same context so \(R=R^{\prime}\). Therefore \(AF=AF^{\prime}\). Now, according to Proposition 3, \(x\in A\) acceptable by \(A\Leftrightarrow S^{\varsigma,\kappa}(t)\models a_{x}\). So \(\forall x,S^{\varsigma,\kappa}(t)\models a_{x}\Leftrightarrow\forall x,S^{ \varsigma^{\prime},\kappa}(t^{\prime})\models a_{x}\). This property implies that the final argumentative state does not depend on \(\varsigma\), but only on the set of arguments it contains: no matter the order in which arguments are enunciated, the final argumentative state is always the same. This immediately leads to the following unicity corollary: **Corollary 1**.: _Given an \(AF=(A,R)\), \(\exists S^{\chi}(t)\) final argumentative state which associated argumentative graph is \(AF=(A,R)\)._ Proposition 4 and its corollary are in accordance with AAF. The relevance of temporality integration comes from the intermediate states, as illustrated in the next section, and from the causal relations that can be derived from it: **Proposition 5**.: _Causal relations depend on the sequence \(\varsigma\)._ Proof.: This proposition is proved by example, commented in details in the next section that illustrated the effect of considering Example 1, and a modification thereof in Example 2. Let \(\Pi(\chi)\) be the program obtained given \(\kappa,\varsigma\) of Example 1, as described in Section 4.3, and let \(\Pi(\chi^{\prime})\) be the program similarly obtained given \(\kappa,\varsigma^{\prime}\) of Example 2. Given the NESS-cause definition in [1], \(\Pi(\chi)\models ness(o(enunciate_{d},4),h(neg(a_{c}),31))\), where occurrence of events \((e,t)\in\mathbb{E}\times\mathbb{T}\) are represented by the predicate \(o(e,t)\) and the truthfulness of \(\mathcal{P}\) formulas \((\psi,t)\in\mathbb{F}\times\mathbb{T}\) by the predicate \(h(\psi,t)\), but \(\nexists t,t^{\prime}\in\mathbb{T}^{2}\), \(\Pi(\chi^{\prime})\models ness(o(enunciate_{d},t),h(neg(a_{c}),t^{\prime}))\). ## 6 Application to the Example and Discussion This section proposes to illustrate the proposed transformation from AAF to action language for Example 1, highlighting its exploitation to get enriched information about the modelled dialogue, more precisely for providing visual representations justifying the acceptance or rejection of arguments. It considers successively two classes of argumentation explanations in [10]'s taxonomy: it first shows how it can lead to graphical representations of the processes of accepting/rejecting arguments, it then discusses the case of causal explanations. ### Graphical Representation and Explanation According to [10], argumentation explanations can consist in extracting argumentative subgraphs to justify the acceptance or rejection of an argument for a given AAF semantics, producing a graphical representation of the underlying process. 
The transformation proposed in the previous section makes it possible to derive graphical representations of the argumentative process. Indeed, the traces of events and states can be used to obtain a narrative of the interaction that can be represented graphically. The visualisation we propose is illustrated in Figure 2, in a simplified form, for Example 1. It is enriched in the next section using causality relations. Given an event trace \(\tau^{e}_{\chi}\) and a state trace \(\tau^{s}_{\chi}\), we propose to display the consecutive states, showing fluents as hexagons and the triggered events as rectangles. Since the acceptability of arguments is what mainly matters, we propose to represent only the fluents \(a_{x}\), using the argument names for the sake of readability. Moreover, we do not show fluents when their negation is true in the state, except when the occurrence of a represented event results in the negation of the fluent. In this case, the negation is represented by a lighter shade. The events \(enunciate_{x}\), \(makesUnacc_{y,x}\), and \(makesAcc_{x}\) are shortened as \(enu_{x}\), \(una_{y,x}\), and \(acc_{x}\), respectively.

Figure 2: Partial graphical representation associated with Example 1. Hexagons represent fluents and rectangles events.

**Example 1**.: _(continued) - Figure 2 shows a partial representation of the state trace obtained for Example 1 using the ASP implementation described in Section 4.3._

_The first represented state corresponds to \(S(6)\), an argumentative state in the sense of Def. 7, which allows for the enunciation of the next argument: since all arguments preceding \(e\) have already been enunciated, the action \(enunciate_{e}\) can be performed. The occurrence of this event is the transition to the next state \(S(7)\) where, as shown in Figure 2, argument \(e\) is acceptable. Unlike \(S(6)\), \(S(7)\) is not an argumentative state: condition (i) of Def. 7 is not satisfied because \((a_{d}\wedge cA_{e,d})\) and \(a_{e}\in S(7)\). Therefore, the next argument cannot be enunciated. However, since the triggering conditions of \(makesUnacc_{e,d}\) are satisfied, this exogenous event is triggered, leading to a new state transition. Since argument \(d\) is no longer acceptable in \(S(8)\), condition (i) of Def. 7 is now satisfied. Still, condition (ii) is not satisfied by \(S(8)\), preventing the next argument from being enunciated. Instead, \(makesAcc_{c}\) is triggered, leading to the following state \(S(9)\). Here, as shown in Figure 2, argument \(c\) is acceptable. As this new state is argumentative, the next argument, \(f\), can be stated. The dialogue continues step by step and ends at state \(S(31)\)._

A second, more compact, tabular visualisation is proposed, illustrated in Table 1: the arguments are represented in the first column, the order of the performed actions in the first row. For the sake of readability, \(enunciate_{x}\) is shortened as \(x\). In each table cell, \(\bullet\) means that the argument is acceptable while \(\circ\) means that it is not. If an argument has not been enunciated yet, its acceptability cannot be evaluated, which is represented by the shaded boxes. In contrast to the previous representation, where the updating stages are shown, this second form has the advantage of being more compact and allows displaying the whole dialogue. It also makes it possible to see quickly the direct and indirect impacts of the argument enunciation on the other arguments' acceptability.
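The dynamics behind both visualisations can also be emulated outside ASP. The following Python sketch replays the enunciate/makesUnacc/makesAcc loop of Section 4 on a small made-up dialogue (it is not Example 1, whose attack relation is only specified by Figure 1): after each enunciation, acceptability is recomputed until an argumentative state in the sense of Definition 7 is reached, producing the kind of row-by-row history shown in Table 1.

```python
def update_until_stable(present, can_attack):
    """Replay makesUnacc/makesAcc updates until conditions (i) and (ii)
    of Definition 7 hold; terminates because the attack graph is acyclic."""
    acceptable = set(present)  # a freshly enunciated argument starts acceptable
    changed = True
    while changed:
        changed = False
        for x in present:
            has_acc_attacker = any(
                y in acceptable for (y, z) in can_attack if z == x and y in present
            )
            if has_acc_attacker and x in acceptable:            # makesUnacc_{y,x}
                acceptable.discard(x); changed = True
            elif not has_acc_attacker and x not in acceptable:  # makesAcc_x
                acceptable.add(x); changed = True
    return acceptable

# Hypothetical dialogue over an attack chain d -> c -> b -> a (the cA fluents).
can_attack = {("b", "a"), ("c", "b"), ("d", "c")}
present = []
for arg in ["a", "b", "c", "d"]:    # action enunciate_arg, in dialogue order
    present.append(arg)
    acc = update_until_stable(present, can_attack)
    print(arg, {x: x in acc for x in present})
# a {'a': True}
# b {'a': False, 'b': True}
# c {'a': True, 'b': False, 'c': True}
# d {'a': False, 'b': True, 'c': False, 'd': True}
```

On an acyclic graph this fixpoint coincides with the grounded extension of the sub-framework of enunciated arguments, which is why, as Proposition 4 states, the final state is order-independent even though the intermediate rows are not.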
In particular, the enunciation order effect can be observed, as illustrated by the graphical comparison of Example 1 and its modification given in Example 2.

**Example 2**.: _Let us consider the same dialogue as in Example 1, starting with the enunciation of arguments \(a,b,c\), but considering that the physician then directly asks if it is possible to do the MRI today (\(l\)). The radiologist replies that he can only do it in two days at the earliest (\(m\)). The physician then specifies that it is an emergency (\(n\)). The remaining arguments are then enunciated in the same order as in the initial example._

_Table 2 displays the proposed compact visualisation for the evolution of the acceptability of the decision variable \(c\), starting from its enunciation. Even if the final state of the argumentation graph is identical, as expected according to Proposition 4 established in the previous section, with \(c\) being rejected, the display makes it easy to observe the very important impact that the order of the actions can have on the intermediate stages that lead to it: in the new scenario, \(c\) is not accepted from the \(6^{th}\) action, i.e. the enunciation of \(n\), onwards, with no modification until the end._

This visualisation thus also illustrates the relevance of the temporality integration in the argumentation framework: the differences between the two scenarios cannot be captured by classical AAFs.

### On Causality and Explanation

Beyond the graphical representation of the acceptance/rejection process, the proposed formalisation of AAF into action models provides tools for richer explanations, allowing us to transfer the notion of actual causality recalled in Section 3.2 to the argumentation framework. Indeed, the extraction of causal chains has been shown to be an important property for explanations Miller (2019). In the taxonomy proposed in the case of argumentation in Cyras et al. (2021), such a causal explanation can be related to the identification of arguments that must be removed from an argumentation graph to make a non-acceptable argument acceptable Fan and Toni (2015). In causal terminology, this corresponds to the search for a _but-for_ cause of the non-acceptability of an argument. However, this test does not solve cases where the occurrence of one of two events would have been sufficient to cause an effect in the absence of the other, called over-determination Menzies and Beebee (2020). Among others, the definition of causality underlying the NESS test, as briefly recalled in Section 3.2 and implemented for the considered action language, makes it possible to solve this issue.
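The over-determination point can be made concrete with the same machinery as the earlier sketch (reusing `update_until_stable`; the two-attacker graph is again a made-up illustration): with two acceptable attackers of \(x\), removing either one alone leaves \(x\) rejected, so neither passes the but-for test, yet each is a necessary element of a set sufficient for the rejection, i.e. a NESS-cause.

```python
# Two independent attackers of x: a textbook over-determination case.
can_attack = {("y1", "x"), ("y2", "x")}

print("x" in update_until_stable(["x", "y1", "y2"], can_attack))  # False: x rejected
# The but-for test fails for each attacker taken individually:
print("x" in update_until_stable(["x", "y2"], can_attack))        # False, even without y1
print("x" in update_until_stable(["x", "y1"], can_attack))        # False, even without y2
```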
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
 & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\),\(i\) & \(j\) & \(k\) & \(l\) & \(m\) & \(n\) \\ \hline
\(a\) & \(\bullet\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) \\ \hline
\(b\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) \\ \hline
\(c\) & \(\bullet\) & \(\circ\) & \(\bullet\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) \\ \hline
\(d\) & & \(\bullet\) & \(\circ\) & \(\bullet\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) \\ \hline
\(e\) & & & \(\bullet\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) \\ \hline
\(f\) & & & & \(\bullet\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) \\ \hline
\(g\) & & & & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) \\ \hline
\(h\) & & & & & \(\bullet\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) \\ \hline
\(i\) & & & & & & \(\bullet\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) \\ \hline
\(j\) & & & & & & & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\bullet\) \\ \hline
\(k\) & & & & & & & & & & & & \\ \hline
\(l\) & & & & & & & & & & & & \\ \hline
\(m\) & & & & & & & & & & & & \\ \hline
\end{tabular}
\end{table}
Table 1: Tabular representation of the entire interaction.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\(\varsigma_{1}\) & **c** & **d** & **e** & **f** & **g** & **h,i** & **j** & **k** & **l** & **m** & **n** \\ \hline
**c** & \(\bullet\) & \(\circ\) & \(\bullet\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\bullet\) & \(\bullet\) & \(\bullet\) & \(\circ\) \\ \hline \hline
\(\varsigma_{2}\) & **c** & **l** & **m** & **n** & **d** & **e** & **f** & **g** & **h,i** & **j** & **k** \\ \hline
**c** & \(\bullet\) & \(\bullet\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) \\ \hline
\end{tabular}
\end{table}
Table 2: Impact of the order in which arguments are enunciated (lines 1, 3) on the acceptability of the arguments (lines 2, 4).

Structural equations Halpern and Pearl (2005) constitute another formal model of causality that addresses the over-determination issue and can be exploited in the argumentation framework, using the transformation of acyclic abstract argumentation graphs to that formalism proposed in [11]. The main differences are as follows: from a philosophical point of view, the definition of causality underlying the NESS test belongs to the family of regularity approaches [1], whereas Halpern's definitions belong to the family of counterfactual approaches [10].
Secondly, the use of action languages makes it possible to model and take into account temporality and the dynamics of the dialogue, which is a crucial component. From a mathematical point of view, according to [1], Halpern's definition of causality can be described as 'Contrastive actual weak sufficiency', whereas the one used here would be 'Minimal actual strong sufficiency': in a nutshell, whereas the former emphasises that a cause must be necessary for an effect, hence the contrastive aspect, the latter emphasises sufficiency and subordinates necessity to it. From a practical point of view, the advantage of the causal approach used here is that it does not require counterfactual reasoning or interventionism, mechanisms that are computationally onerous and criticised for introducing subjectivity into causal enquiry [13, 14].

These causal relations, which may later lead to causal explanations, can be represented graphically, enriching the proposed visualisation illustrated in Figure 2 with different types of causes. This principle is illustrated in Figure 3 and commented below.

**Example 1**.: _(continued) - Figure 3 graphically displays the last four states in the trace of Example 1, corresponding to the enunciation of argument \(n\) and the subsequent update mechanisms. Argument \(n\), which states the urgency of the examination, is the one that closes the debate. As represented in Figure 3, its enunciation in state \(S(28)\) is a direct NESS-cause (dNc) of its acceptability in the following states, a relation we denote by \((enunciate_{n},28)\) dNc \((a_{n},29-31)\). Similarly, we have \((makesUnacc_{n,c},29)\) dNc \((\neg a_{c},30-31)\), \((makesUnacc_{n,m},29)\) dNc \((\neg a_{m},30-31)\), and \((makesAcc_{l},30)\) dNc \((a_{l},31)\). As these examples show, this first relationship is the basic building block of causality, which is concerned with causal relationships given the actual effects of the occurrence of an event. Yet this relationship is not enough. If we want to know why argument \(l\) is acceptable at the end of the dialogue (i.e. why the decision to have an MRI on the same day is made), it is not satisfactory to simply say that it is because of the event \((makesAcc_{l},30)\)._

_To find out why the latter happens, we need to look at the NESS-causes and the actual causes to construct the causal chain that leads to it. By transitivity we get that \((makesUnacc_{n,m},29)\) is a cause of the fact that \(makesAcc_{l}\) was triggered, and therefore of the effects that this triggering may have had. Going back even further and looking for the causes for which the occurrence \((makesUnacc_{n,m},29)\) took place, we find that \((enunciate_{n},28)\) is an actual cause of \((makesUnacc_{n,m},29)\), and therefore \((enunciate_{n},28)\) is a NESS-cause of \((\neg a_{m},30-31)\). By transitivity we can derive that \((enunciate_{n},28)\) is a NESS-cause of \((a_{l},31)\). This new relation allows us to say that the physician enunciating that it is an emergency is one of the causes of the final decision, an answer that already seems more satisfactory and can be included in an explanation. The same reasoning can be applied to find the causes of \((\neg a_{c},31)\), the other decision variable._

## 7 Conclusion

This paper has proposed a formalisation of acyclic abstract argumentation systems in the action language of [13], establishing its formal properties: it first allows increasing the expressiveness of these models, through the integration of temporality, making it possible to examine the effect of the order of the argument enunciation.
Moreover, it allows us to exploit the notion of causality associated with the action language, offering the possibility of giving rich information about argument acceptance or rejection and justifications for the latter. The paper has proposed two types of graphical representations of the argumentation process that can be used as visual support, opening the way for new forms of argumentation explanations. Future work will aim at developing such explanations, applying the principles developed in the context of eXplainable Artificial Intelligence (XAI), e.g. as detailed in [13]: causal chains are established as essential for explanations, but they must also be short. The question of which relations to emphasise remains open, as well as the way in which they can be used to define contrastive explanations, which requires the ability to reason about counterfactual scenarios. Acknowledgements The authors would like to thank Professor Catherine Adamsbaum, pediatric radiologist, for the enlightening discussions about the examples. This work was partly supported by the 3rd author's chair in Artificial Intelligence (Sorbonne Université and SCAI). Figure 3: Enriched graphical representation of Example 1, with extracts of the causal relations: direct NESS-causes (\(-\) \(-\)), NESS-causes (\(\cdots\)), and actual causes (¶).
2310.00935
Resolving Knowledge Conflicts in Large Language Models
Large language models (LLMs) often encounter knowledge conflicts, scenarios where discrepancy arises between the internal parametric knowledge of LLMs and non-parametric information provided in the prompt context. In this work we ask what are the desiderata for LLMs when a knowledge conflict arises and whether existing LLMs fulfill them. We posit that LLMs should 1) identify knowledge conflicts, 2) pinpoint conflicting information segments, and 3) provide distinct answers or viewpoints in conflicting scenarios. To this end, we introduce KNOWLEDGE CONFLICT, an evaluation framework for simulating contextual knowledge conflicts and quantitatively evaluating to what extent LLMs achieve these goals. KNOWLEDGE CONFLICT includes diverse and complex situations of knowledge conflict, knowledge from diverse entities and domains, two synthetic conflict creation methods, and settings with progressively increasing difficulty to reflect realistic knowledge conflicts. Extensive experiments with the KNOWLEDGE CONFLICT framework reveal that while LLMs perform well in identifying the existence of knowledge conflicts, they struggle to determine the specific conflicting knowledge and produce a response with distinct answers amidst conflicting information. To address these challenges, we propose new instruction-based approaches that augment LLMs to better achieve the three goals. Further analysis shows that abilities to tackle knowledge conflicts are greatly impacted by factors such as knowledge domain and prompt text, while generating robust responses to knowledge conflict scenarios remains an open research question.
Yike Wang, Shangbin Feng, Heng Wang, Weijia Shi, Vidhisha Balachandran, Tianxing He, Yulia Tsvetkov
2023-10-02T06:57:45Z
http://arxiv.org/abs/2310.00935v2
# Resolving Knowledge Conflicts in Large Language Models ###### Abstract Large language models (LLMs) often encounter _knowledge conflicts_, scenarios where discrepancy arises between the internal parametric knowledge of LLMs and non-parametric information provided in the prompt context. In this work we ask _what are the desiderata for LLMs when a knowledge conflict arises and whether existing LLMs fulfill them_. We posit that LLMs should 1) _identify knowledge conflicts_, 2) _pinpoint conflicting information segments_, and 3) _provide distinct answers or viewpoints in conflicting scenarios_. To this end, we introduce Knowledge Conflict, an evaluation framework for simulating contextual knowledge conflicts and quantitatively evaluating to what extent LLMs achieve these goals. Knowledge Conflict includes diverse and complex situations of knowledge conflict, knowledge from diverse entities and domains, two synthetic conflict creation methods, and settings with progressively increasing difficulty to reflect realistic knowledge conflicts. Extensive experiments with the Knowledge Conflict framework reveal that while LLMs perform well in identifying the existence of knowledge conflicts, they struggle to determine the specific conflicting knowledge and produce a response with distinct answers amidst conflicting information. To address these challenges, we propose new instruction-based approaches that augment LLMs to better achieve the three goals. Further analysis shows that abilities to tackle knowledge conflicts are greatly impacted by factors such as knowledge domain and prompt text, while generating robust responses to knowledge conflict scenarios remains an open research question. Code and data are publicly available at github.com/yikee/Knowledge_Conflict. ## 1 Introduction Large language models (LLMs) have demonstrated remarkable capabilities to encode world knowledge (Peters et al., 2018; Petroni et al., 2019) and solve knowledge-intensive tasks (Roberts et al., 2020; Brown et al., 2020). Nevertheless, their knowledge abilities are far from perfect (Sun et al., 2023; Hernandez et al., 2023; Muhlgay et al., 2023), leading to the emergence of knowledge augmentation approaches: using external sources (e.g., retrieval corpora (Fisch et al., 2019; Guu et al., 2020; Shi et al., 2023b; Wu et al., 2023), search engines (Press et al., 2022; Nakano et al., 2021), and other LMs (Feng et al., 2023; Luo et al., 2023)) to provide relevant information in the prompt context. However, due to issues such as misinformation, varying perspectives, time-sensitive information, or knowledge updates, **knowledge conflicts** might arise, meaning that there is a discrepancy between _parametric knowledge_ (the internal knowledge stored in LLM parameters) and _non-parametric knowledge_ (the knowledge fetched from external sources (Chen et al., 2022; Xie et al., 2023)). Prior research conducted preliminary studies by probing LLMs with knowledge conflicts and examined their behaviors in response (Chen et al., 2022). The key findings are that LLMs' choices between knowledge sources, parametric or non-parametric, depend on factors including the coherence of the external knowledge (Xie et al., 2023) and model size (Longpre et al., 2021). This work extends these prior works by seeking a deeper understanding of whether LLMs can acknowledge knowledge conflicts and how they should respond. 
Specifically, we ask: _What should be the desirable behaviors of LLMs when knowledge conflicts arise?_ and _Are LLMs currently exhibiting those desirable behaviors?_ We argue that LLMs should not rely solely on either parametric or non-parametric information, but grant LLM users the agency to make informed decisions based on distinct answers (Floridi, 2023). In line with this goal, we hypothesize that LLMs should 1) identify the existence of knowledge conflicts, 2) pinpoint the specific information segments where knowledge conflicts occur, and 3) generate distinct responses based on all conflicting information. Achieving these desiderata, as shown in Figure 1, enables LLMs to not only acknowledge the existence of knowledge conflicts but also navigate them skillfully, resulting in responses that are more accurate, comprehensive, and, ultimately, trustworthy. To this end, we introduce Knowledge Conflict, a framework to simulate contextual knowledge conflicts and evaluate whether LLMs' behavior aligns with the three desiderata. Specifically, we first curate a list of 10k entities covering 20 distinct domains and 200 subject areas, while employing two techniques to generate synthetic knowledge conflicts tailored to a specific context. We establish three distinct tasks with increasing complexity to reflect the three goals: 1) _Contextual Knowledge Conflict Detection_: identify the presence of knowledge conflicts; 2) _QA-Span Knowledge Conflict Detection_: determine whether there is a knowledge conflict specifically in a span that is relevant to the question; and 3) _Distinct Answers Generation_: provide distinct answers by leveraging all pieces of conflicting information. These three tasks focus on different aspects of conflict-handling abilities and together serve as a comprehensive evaluation protocol. We conduct extensive experiments with the Knowledge Conflict framework, revealing that while LLMs perform well above random in Task 1, identifying the existence of knowledge conflicts within contextual information, they encounter notable challenges when it comes to Tasks 2 and 3, which require LLMs to precisely pinpoint these conflicts and provide distinct answers given conflicting context. To address these challenges, we further propose new instruction-based approaches that reflect a wide array of reasoning properties, such as decomposing tasks, breaking down context passages, localization, and more. Through these approaches, we successfully enhance the performance of gpt-3.5-turbo in Task 1 and Task 3, improving LLMs' abilities to acknowledge knowledge conflicts and generate distinct answers amidst conflicting information. Further analyses demonstrate that factors such as knowledge domain and prompt text greatly impact LLMs' ability to tackle knowledge conflicts, while robust handling of knowledge conflicts remains an open research question. ## 2 The Knowledge Conflict Framework We present the Knowledge Conflict framework, which leverages a wide range of knowledge sources, various conflict creation methods, and progressively challenging settings to reflect real-world knowledge conflicts and assess LLMs' capacity to recognize and address them. We illustrate the Knowledge Conflict framework in Figure 2. 
Figure 1: We expect LLMs to 1) acknowledge knowledge conflicts, 2) point out the specific conflicting segments, and 3) generate different answers based on conflicting pieces of information.

### Knowledge Scope We generate an entity list as the starting point in Knowledge Conflict by prompting LLMs in a zero-shot manner: we first instruct the LLM to return 20 distinct domains such as Computer Science, accompanied by 10 fields within each domain such as Artificial Intelligence and Human-Computer Interaction, and then 50 entities specific to each field such as Neural networks and User Interface. As a result, we obtain 9,083 unique entities after filtering out duplicates, covering diverse knowledge areas across various domains. We utilize the generated entity list instead of other publicly accessible entity lists (Pellissier Tanon et al., 2020; Heist & Paulheim, 2020), so it is highly likely that LLMs are familiar with these entities and would contain knowledge and information about them. Note that the Knowledge Conflict framework is independent of the entity list, thus our approach could be easily extended to other domains, subject areas, and more. ### Knowledge Conflict Generation For each entity, we create two pieces of information by first eliciting the LLM's parametric knowledge about the entity, and then factually modifying it to construct conflicting knowledge to later put into the prompt context, such that there is a knowledge conflict between these two contexts. We detail the methodology below. Parametric Knowledge Elicitation. We instruct LLMs to produce contextual information about each entity under a closed-book setting with the prompt _"Give me some context about {entity} in 50 words."_ In this case, LLMs rely solely on their internal parametric knowledge, devoid of external evidence, to generate the requested context. As a result, we adopt the generated context as its parametric knowledge. Conflicting Knowledge Creation. We employ two approaches to generate synthetic knowledge conflicts. * _In-domain Named Entity Substitution_: Inspired by previous works that effectively utilize the entity substitution method (Longpre et al., 2021; Xie et al., 2023), we employ NER models (Honnibal et al., 2020; Liu et al., 2019) to identify named entities categorized as "ordinal", "cardinal", "date", "person", "organization", and "location". We randomly select an identified entity and perform substitution: all occurrences of the selected entity are substituted with another entity of the same type drawn from an in-domain corpus, i.e., an entity of the type "person" will be substituted with another entity of type "person" found in the knowledge contexts generated in parametric knowledge elicitation from the same domain. * _In-domain Entity Shuffling_: We shuffle the main entities - the entities in Section 2.1 that we used to generate these contexts. Concretely, we replace all occurrences of the main entity in the context with another main entity in a context from the same domain. As a result, both strategies yield a passage that conflicts with the parametric knowledge. We further verify this in a human evaluation presented in Section 2.4. ### Tasks After obtaining pairs of passages that are in conflict with each other, we create three tasks to examine LLMs' ability to 1) recognize the existence of knowledge conflicts, 2) pinpoint conflicting information segments, and 3) provide different answers to each of the conflicting passages. 
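To make the in-domain named entity substitution step described above concrete, here is a minimal sketch of how it could be implemented with spaCy; the function name, the `en_core_web_sm` model, and the mapping of the paper's entity categories onto spaCy's labels (e.g., "organization" to ORG, "location" to GPE/LOC) are our own illustrative assumptions, not details taken from the paper.

```python
import random
import spacy

# Minimal sketch of in-domain named entity substitution (illustrative, not the authors' code).
# Assumes the small English model is installed: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Assumed mapping of the paper's entity categories onto spaCy NER labels.
TARGET_LABELS = {"ORDINAL", "CARDINAL", "DATE", "PERSON", "ORG", "GPE", "LOC"}

def substitute_entity(context, in_domain_entities):
    """Replace all occurrences of one randomly chosen named entity with an
    in-domain entity of the same type. `in_domain_entities` maps a label
    (e.g. "PERSON") to entities of that type harvested from other
    parametric-knowledge contexts in the same domain."""
    doc = nlp(context)
    candidates = [e for e in doc.ents
                  if e.label_ in TARGET_LABELS and in_domain_entities.get(e.label_)]
    if not candidates:
        return context  # nothing substitutable in this context
    chosen = random.choice(candidates)
    replacement = random.choice(in_domain_entities[chosen.label_])
    return context.replace(chosen.text, replacement)
```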
Figure 2: We introduce the Knowledge Conflict framework to comprehensively analyze and improve LLMs' handling of knowledge conflicts. The framework handles concrete spans where knowledge conflicts arise, and facilitates meaningful outputs, granting its users the agency to find appropriate responses in the face of conflicting information. Task 1: Contextual Knowledge Conflict Detection. We set up a binary classification task in which a single piece of context, either its parametric knowledge or the conflicting knowledge, and the instruction _"Does the given context conflict with what you know? Yes/No"_ are given in the prompt. The answer _"Yes"_ is expected when the conflicting knowledge is given and _"No"_ in the case of the parametric knowledge. We use Precision, Recall, and F1-score as evaluation metrics. Task 2: QA-Span Knowledge Conflict Detection. It is often the case that not all pieces of information within a passage are in conflict between parametric and conflicting knowledge sources. As a result, in addition to detecting overall contextual knowledge conflict (Task 1), it is crucial for LLMs to pinpoint the specific piece of information where these conflicts arise. We provide text-davinci-003 (Ouyang et al., 2022) with the conflicting context and the prompt _"Given the context, generate a question to which the only single answer is the word {entity} (the question should not contain the word {entity})"_, where _{entity}_ is the entity substituted or shuffled in the conflict generation step, to generate, in a zero-shot manner, a question asking about the conflicting segments of the conflicting context. The prompt _"Given the context, generate a question unrelated to {entity}"_ is employed to generate a question asking about the non-conflicting segments of the conflicting context. We prompt the LLM with a single context (either parametric knowledge or conflicting knowledge) and a single question (either the question about the conflicting segments or the question about the non-conflicting segments) with the instruction _"Does the given context conflict with what you know regarding the answer to the question? Yes/No"_ for a binary classification. The positive answer is only expected when the conflicting knowledge and the question about the conflicting segments are given. Though we could assess LLMs directly by letting them identify conflicting segments, we opt for this QA-based method, which aligns better with real-world scenarios where users ask questions that might not rely on the conflicting segments. Again, we employ Precision, Recall, and F1-score for evaluation. Task 3: Distinct Answers Generation. While previous studies (Longpre et al., 2021; Xie et al., 2023; Mallen et al., 2023) explored various factors that impact how LLMs choose between their parametric knowledge and external sources, we believe that it is important to defer agency to the users, i.e., in the face of ambiguity and knowledge conflicts, LLMs should return multiple answers and let users make the choices. Therefore, we test LLMs' ability to generate different answers given conflicting contexts in this task. Specifically, the LLM is given the conflicting text and the question about the conflicting segments of text along with the prompt _"Answer the question based on the given context and your own knowledge respectively."_ The ground truths would be the answer based on the conflicting passage and the answer generated by LLMs when only the question is presented in a zero-shot manner. 
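The instruction strings quoted above can be collected into simple templates. The sketch below is only an illustrative way of assembling the prompts for the three tasks, reusing the exact wordings given in this section; the function and constant names are our own assumptions.

```python
# Illustrative templates assembled from the task instructions quoted above.
TASK1_PROMPT = "{context}\n\nDoes the given context conflict with what you know? Yes/No"

TASK2_PROMPT = ("{context}\n\nQuestion: {question}\n\nDoes the given context conflict "
                "with what you know regarding the answer to the question? Yes/No")

TASK3_PROMPT = ("{context}\n\nQuestion: {question}\n\nAnswer the question based on the "
                "given context and your own knowledge respectively.")

def build_prompt(task, context, question=None):
    """Fill in the template for one of the three Knowledge Conflict tasks."""
    if task == 1:
        return TASK1_PROMPT.format(context=context)
    template = TASK2_PROMPT if task == 2 else TASK3_PROMPT
    return template.format(context=context, question=question)
```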
We evaluate the accuracy of parametric-based answers, the accuracy of conflicting-knowledge-based answers, and the accuracy of simultaneously providing both answers. ### Dataset Analysis The dataset we construct through our framework comprises 9,083 distinct entities that are approximately evenly distributed across 20 different domains. Around one-third of the instances of knowledge conflicts stem from named entity substitution, while the remaining two-thirds result from entity shuffling. A detailed breakdown of the dataset by domains and conflict generation methods can be found in Appendix D. It is worth noting that our conflict generation methods may fail in situations where the context is not unique to a specific entity, for example, when there are multiple individuals holding the title of "chief scientist." To further validate the effectiveness of our conflict generation techniques, we conduct human evaluations for Task 1 and Task 2. Results show that 96% of Task 1 problems and 83.5% of Task 2 problems contain perfectly clear knowledge conflicts. The Fleiss' Kappa (Fleiss, 1971) among the five annotators is 0.51, indicating moderate agreement. ## 3 Experiment Settings ### Baselines We evaluate prominent LLM prompting approaches with the Knowledge Conflict framework. Zero-shot prompting (Liu et al., 2021) presents LLMs with a problem statement and asks for a direct answer, without any exemplars or intermediate reasoning steps. Few-shot prompting (Brown et al., 2020b) leverages a few exemplars, pairs of problems and answers, to prompt in-context learning in LLMs. Chain-of-Thought prompting (CoT) (Wei et al., 2022) includes a reasoning path in in-context exemplars and guides LLMs to follow similar reasoning steps to reach an answer. In Task 1, we guide LLMs to deconstruct the given context into atomic facts and check if the number of inconsistencies is greater than zero. In Tasks 2 and 3, we lead LLMs to generate the answers based on parametric knowledge and the answers based on the given context separately before the final response. Generated Knowledge Prompting (GKP) (Liu et al., 2021a) involves extracting knowledge from LLMs and providing it as an additional input when answering a question. We elicit LLMs' parametric knowledge about the main entity in the given context as the supplementary input. Self-ask prompting (Press et al., 2022) requires LLMs to explicitly formulate the next follow-up question they should ask before answering it. We employ this approach to generate self-ask questions on parametric knowledge and the context provided. Break-down prompting guides LLMs to solve problems or answer questions at the sentence level, and then integrates all responses at the end. We instruct LLMs to perform classification on a sentence-by-sentence basis and then consolidate these individual responses into a coherent answer. Self-Consistency (SC) (Wang et al., 2023b) is a decoding strategy that samples a diverse set of reasoning paths and selects the most consistent answer by marginalizing out the sampled reasoning paths, leveraging the idea that a complex reasoning problem typically admits multiple different ways of thinking leading to its unique correct answer. In our experiments, Self-Consistency is used in conjunction with CoT and GKP. ### Models and Settings We use ChatGPT (Ouyang et al., 2022) (gpt-3.5-turbo) as the main LLM in the experiments unless otherwise stated. 
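Operationally, Self-Consistency as described above amounts to sampling several completions and majority-voting over the final answers. A minimal sketch follows, where `sample_fn` is a stand-in we introduce for one temperature-sampled call to the model; the concrete sampling parameters used in the experiments are given next.

```python
from collections import Counter

# Minimal sketch of Self-Consistency aggregation (illustrative; `sample_fn` is a
# placeholder for one sampled completion, e.g. from gpt-3.5-turbo at temperature 0.7).
def self_consistency(sample_fn, prompt, n=5):
    answers = [sample_fn(prompt) for _ in range(n)]  # n independent reasoning paths
    return Counter(answers).most_common(1)[0][0]     # keep the most frequent answer
```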
For Self-Consistency, which requires multiple samples per problem, we sample 5 responses with temperature \(\tau=0.7\) following (Wang et al., 2023b); for all the other experiments, we use temperature \(\tau=0\). For few-shot prompting approaches, the input prompt includes four in-context exemplars and their solutions before the problem of interest under Task 1 and Task 2, and two such pairs under Task 3. The in-context exemplars are drawn from different primary fields and different conflict generation methods, and balanced between positive and negative samples. ## 4 Results Task 1: Contextual Knowledge Conflict Detection. Table 1 shows that on Task 1, LLMs display a tendency to declare _"no conflict"_, which results in a nearly perfect precision but a lower recall. This inclination toward asserting the absence of conflicts can raise doubts about negative predictions, as it doesn't necessarily indicate a genuine assessment of the absence of conflicts. However, these concerns are alleviated when considering the results obtained using CoT, where providing reasons is obligatory. In this scenario, both GKP and Self-ask, methods that rely on explicit classification, do not yield strong performance. This indicates that accurately identifying contextual knowledge conflicts through explicit means is not a trivial task. Overall, the LLM demonstrates an above-random ability in contextual knowledge conflict identification. \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **Prec.** & **Rec.** & **F1** \\ \hline Zero-shot & **0.999** & 0.144 & 0.251 \\ Few-shot & **0.999** & 0.351 & 0.520 \\ CoT & 0.998 & 0.639 & 0.779 \\ CoT + SC & 0.998 & 0.644 & 0.783 \\ GKP + SC & **0.999** & 0.475 & 0.644 \\ Self-ask & 0.995 & 0.486 & 0.653 \\ Break-down & 0.863 & 0.693 & 0.768 \\ \hline **Ours** & 0.893 & **0.728** & **0.802** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance on Task 1: Contextual Knowledge Conflict Detection. The best results are **bold-faced** and the second-best ones are underlined. Our proposed approach outperforms all baselines on F1-score. Task 2: QA-Span Knowledge Conflict Detection. When it comes to the precise identification of knowledge conflicts, the LLM's performance experiences a considerable decline. Among all the baseline methods, the self-ask prompting approach stands out as the most effective, and we find that the generated intermediate answers help to narrow down the scope of knowledge conflicts and encourage localization. Also, we observe a consistent pattern where precision exceeds recall. This pattern revalidates LLMs' tendency to assert "_no conflict_". Overall, LLMs struggle to precisely pinpoint the exact piece of information where these conflicts arise. Task 3: Distinct Answers Generation. Table 3 shows the results under Task 3, where LLMs are directed to provide answers based on the non-parametric context and its parametric knowledge concurrently. Across all the prompting methods, except for zero-shot where only a single answer is returned in most cases, the accuracy of answers based on the conflicting knowledge surpasses that of parametric-based answers. Break-down prompting is not applicable in this task, and we have omitted Self-Consistency due to the limited improvements it offers in the first two tasks and cost considerations. Overall, the LLM struggles to provide distinct answers simultaneously, with the accuracy of getting both correct hovering around 50%, requiring further research and exploration. 
## 5 Proposed Approach Recent studies [23, 24] have shown that instruction-based methods work well to induce new abilities in LLMs. We additionally explore and propose a new set of instruction-based approaches for the three tasks to investigate whether instructions tailored to the context of knowledge conflicts would improve upon generic approaches. Full prompt text is presented in Appendix G. Task 1: Contextual Knowledge Conflict Detection. We propose to employ a four-step approach: 1) elicit knowledge about the main entity, 2) break down the entire context into individual sentences, which draws LLMs' attention to every single detail, and identify sentences that can be verified by the knowledge elicited in step 1, 3) classify whether these sentences conflict with the knowledge elicited, in a sentence-by-sentence manner, and 4) classify whether the remaining sentences conflict with its parametric knowledge (using all the internal knowledge in addition to the knowledge elicited in step 1). For steps 2), 3), and 4), a localization procedure is included, which means that apart from returning the answer, the LLMs also need to return their reasoning steps. The main idea is that we promote fine-grained analysis at the sentence level so that the LLM can better classify and attend to those parts, leaving all the _vague_ sentences to the final step. Table 1 shows that our proposed method exhibits a higher F1 score compared to all the baseline methods, albeit with a slight reduction in precision. The efficacy of both the Break-down baseline and our proposed approach underscores that the capacity to discern contextual knowledge conflicts is contingent upon the context's length. \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **Prec.** & **Rec.** & **F1** \\ \hline Zero-shot & 0.615 & 0.151 & 0.242 \\ Few-shot & 0.395 & **0.860** & 0.541 \\ CoT & 0.843 & 0.375 & 0.519 \\ CoT + SC & 0.875 & 0.367 & 0.517 \\ GKP + SC & 0.508 & 0.499 & 0.504 \\ Self-ask & **0.898** & 0.474 & **0.621** \\ Break-down & 0.614 & 0.413 & 0.494 \\ \hline **Ours** & 0.718 & 0.426 & 0.535 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance on Task 2: QA-Span Knowledge Conflict Detection. The best results are **bold-faced** and the second-best ones are underlined. Self-ask prompting stands out as the strongest baseline method. \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **Para.** & **Conflicting.** & **Both** \\ \hline Zero-shot & 0.400 & 0.350 & 0.031 \\ Few-shot & 0.372 & 0.765 & 0.285 \\ CoT & 0.575 & 0.782 & 0.473 \\ GKP & 0.643 & 0.814 & 0.551 \\ Self-ask & 0.611 & 0.735 & 0.464 \\ \hline **Ours** & **0.658** & **0.815** & **0.569** \\ \hline \hline \end{tabular} \end{table} Table 3: Performance on Task 3: Distinct Answers Generation. The best results are **bold-faced** and the second-best ones are underlined. Our approach enables LLMs to generate distinct answers supported by different knowledge sources respectively. Task 2: QA-Span Knowledge Conflict Detection. Similarly, we propose to break down the task and fine-grain the context into sentences that can be used to answer the given question. Specifically, the LLMs are instructed to 1) disregard the given context and answer the given question solely based on their own beliefs, 2) find sentences that can be used to answer the given question: if such sentences 
exist, extract answers from the sentences and determine whether these answers conflict with the answers generated in step 1; if no such sentences exist, report that there is no conflict. As shown in Table 2, our approach unfortunately falls short of outperforming all the baseline methods in this setting, despite extensive exploration, indicating that instruction-based approaches might be limited in this scenario. This opens up opportunities for future research to enhance the capability of LLMs in pinpointing instances of knowledge conflicts. Task 3: Distinct Answers Generation. In order to get more accurate parametric-knowledge-based answers and conflicting-knowledge-based answers, we propose to include keywords such as "_solely_" and "_disregard_" to keep the two knowledge sources separate. Also, after generating the response based on one knowledge source, we instruct the LLMs to repeat the question, as LLMs have exhibited limited capability in retaining information across extended contextual spans (Chen et al., 2023; Liu et al., 2023). Table 3 shows that our proposed method attains superior performance across all three metrics. ## 6 Analysis Breakdown by Factors. To examine the factors that may influence the ability of LLMs to identify contextual knowledge conflicts, pinpoint knowledge conflicts, and offer distinct answers when confronted with conflicting information sources, we delve deeper into our results by categorizing them into various domains and conflict generation methods. We also put forward hypotheses regarding the potential reasons for the effects these factors may have. * _Knowledge Domain_: As shown in Figure 4, LLMs exhibit a higher proficiency in recognizing (Task 1) and pinpointing (Task 2) contextual knowledge conflicts within the domains of History and Health Sciences. Regarding Task 3, providing distinct answers, LLMs excel in the domain of Biology. These results demonstrate that the ability to tackle knowledge conflicts varies across knowledge domains: we hypothesize that it has to do with the quantity of conflicting information present in the pre-training data. Essentially, if LLMs encounter a substantial amount of conflicting information during their pre-training within a specific domain, they tend to perform better in that particular domain. We leave the results for the other 10 knowledge domains to Appendix F. Figure 4: Model performance across knowledge domains on three different tasks. The factor of knowledge domain has a substantial effect on LLM performance on all three tasks, while certain domains (e.g. art history and anthropology) pose greater challenges. Figure 3: Performance on Tasks 1 and 2 when two conflict creation strategies, named entity substitution and entity shuffling, are separately employed. * _Conflict Generation Method_: Figure 3 demonstrates that when we dissect the results according to the two synthetic methods used to generate conflicts (i.e., In-domain Named Entity Substitution and In-domain Entity Shuffling), it becomes evident that LLMs exhibit enhanced performance in scenarios where conflicts are induced through entity shuffling. This outcome aligns with intuition, since LLMs find it more straightforward to identify and specify knowledge conflicts when the conflict pertains to the main entity. Results for Task 3 can be found in Appendix F. Prompt Consistency Check. LLMs might be sensitive to subtle changes in the prompt text (Zhu et al., 2023). 
To further examine the potential impact of instruction phrasing on LLMs' performance in the given tasks, we introduce three additional formulations of instructions for each task. Table 5 reveals that, on the whole, performance remains stable across the various phrasings. The results also indicate that the CoT prompting method enhances prompt robustness, as evidenced by significantly lower standard deviations in comparison to the Few-shot prompting method. Detailed prompt consistency check results for Task 3 can be found in Appendix F. Number of In-context Exemplars. We explore whether the quantity of in-context exemplars provided affects the ability to tackle knowledge conflicts. Specifically, we change the number of in-context exemplars on a subset of the dataset across the three tasks. As shown in Figure 5, we generally observe that the F1 score plateaus when there are as few as two in-context exemplars. However, including additional exemplars proves beneficial in the case of CoT prompting for Task 2 and Few-shot prompting for Task 3. To sum up, increasing the number of exemplars may improve LLMs' capability of handling knowledge conflicts, but its effect is limited and far from inducing perfectly robust approaches; we do, however, observe a reduction in the model's tendency to assert negativity in the context of acknowledging knowledge conflicts. More Capable LLMs. We also investigate the competence of other LLMs in tackling knowledge conflicts, including GPT-4 (Bubeck et al., 2023). Due to budget constraints, we only assess its performance on the most challenging Task 3. As an LLM trained on an unprecedented scale of compute and data, GPT-4 showcases increased accuracy in generating both parametric-based answers and conflicting-knowledge-based answers. However, its performance has yet to reach an optimal level, indicating that mere scaling does not solve the challenge of knowledge conflicts. ## 7 Related Work Understanding and expanding the knowledge abilities of LLMs. Previous works have demonstrated that LLMs have incorporated factual knowledge within their parameters and exhibit considerable potential in recalling factual information (Peters et al., 2018; Petroni et al., 2019; Yu et al., 2023; Mrthyunjaya et al., 2023). However, existing research also reveals that their inherent knowledge is not without flaws (Wu et al., 2022; Pan et al., 2023): outdated knowledge (Hernandez et al., 2023; Yu & Ji, 2023; Padmanabhan et al., 2023), factuality issues (Lee et al., 2022; Feng et al., 2023b), hallucination (Ji et al., 2023; Zhang et al., 2023), and more are common challenges in LLM knowledge abilities. In response, researchers have made concerted efforts to enhance these capabilities through approaches such as retrieval augmentation (Lewis et al., 2020; Guu et al., 2020; Borgeaud et al., 2022; Shi et al., 2023b; Jiang et al., 2023), search engine integration (Nakano et al., 2021; Press et al., 2022; Feng et al., 2023a), and incorporating other neural LMs (Feng et al., 2023d; Luo et al., 2023; Du et al., 2023). In this work, we specifically focus on the issue of _knowledge conflict_, when there is a conflict between internal parametric knowledge and external non-parametric knowledge. Without a thorough understanding of how LLMs react to and manage knowledge conflicts, the reliability of their responses may come into question. Knowledge Conflict in LLMs. Previous work on knowledge conflicts primarily focuses on factors impacting models' choice between parametric knowledge and non-parametric knowledge under QA settings. 
Mallen et al. (2023) finds that conflicting memories are effective for less popular facts; Longpre et al. (2021) explores the effects of model size and retrieval quality by identifying QA instances with named entity answers and substituting mentions of the entity in the gold document with an alternate entity, thus changing the answer; Xie et al. (2023) reveals that when both supportive and contradictory evidence to their parametric memory are present, LLMs show a strong confirmation bias and tend to cling to their parametric memory. Nevertheless, an intriguing and underexplored aspect is to rethink the desiderata for LLMs when confronted with knowledge conflicts, and whether their current responses align with such objectives. To this end, we argue that LLMs should 1) _identify knowledge conflicts_, 2) _pinpoint conflicting information segments_, and 3) _provide distinct answers in conflicting scenarios_. We propose the Knowledge Conflict framework and conduct extensive experiments to evaluate and improve LLMs' ability to tackle knowledge conflicts. ## 8 Conclusion We introduce Knowledge Conflict, an evaluation framework to simulate contextual knowledge conflicts and quantitatively evaluate LLMs' ability to 1) identify contextual knowledge conflicts, 2) pinpoint conflicting knowledge segments, and 3) provide distinct answers or viewpoints amidst conflicts. Extensive experiments demonstrate that LLMs excel at simply identifying knowledge conflicts, but struggle with fine-grained analysis and providing distinct responses. We propose instruction-based approaches that successfully improve performance on Task 1 and Task 3, while challenges persist in all tasks, especially in Task 2 of pinpointing conflicts. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**Para.**} & \multicolumn{2}{c}{**Conflicting.**} & \multicolumn{2}{c}{**Both**} \\ \cline{2-7} & **GPT-4** & **change w/ turbo** & **GPT-4** & **change w/ turbo** & **GPT-4** & **change w/ turbo** \\ \hline Zero-shot & 0.206 & -0.194 & 0.628 & +0.278 & 0.057 & +0.026 \\ Few-shot & 0.638 & +0.266 & 0.923 & +0.158 & 0.604 & +0.319 \\ CoT & 0.691 & +0.116 & 0.877 & +0.094 & 0.625 & +0.153 \\ GKP & 0.723 & +0.080 & 0.865 & +0.051 & 0.656 & +0.105 \\ Self-ask & 0.684 & +0.073 & 0.880 & +0.145 & 0.619 & +0.155 \\ \hline \hline \end{tabular} \end{table} Table 6: Performance with GPT-4 as the base model and changes with respect to GPT-3.5-turbo. GPT-4 exhibits improvements across all baselines, while it still falls short of optimal. Further analyses show that factors like prompts, exemplars, domains, and conflict simulation methods greatly impact LLMs' ability to tackle knowledge conflicts. Our framework and in-depth study offer a comprehensive understanding of whether existing LLMs can generate desirable responses amid knowledge conflicts and provide quantitative avenues for evaluating and improving the ability to tackle knowledge conflicts.
2307.14357
On Boolean reliability algebra
In this paper we consider systems which consist of binary components with known reliabilities. We discuss their algebraic properties and define the corresponding algebraic structure, which we call the reliability algebra. We prove that the reliability algebra is a Boolean algebra. The reliability algebra seems an appropriate context for defining a fuzziness measure, i.e., a membership function, with reliability values.
Branislav Boričić, Mirjana Ilić, Jelena Stanojević
2023-07-24T10:33:08Z
http://arxiv.org/abs/2307.14357v1
# On Boolean reliability algebra ###### Abstract In this paper we consider systems which consist of binary components with known reliabilities. We discuss their algebraic properties and define the corresponding algebraic structure, which we call the reliability algebra. We prove that the reliability algebra is a Boolean algebra. The reliability algebra seems an appropriate context for defining a fuzziness measure, i.e., a membership function, with reliability values. Key words: Boolean algebra; reliability algebra; algebra of block diagrams. AMS 2020: 03B52 03G05 90B25 62N05 60K10 ## 1 Introduction The framework of the algebra of random events, as usually used in probability theory, is not always good enough for modeling uncertain, unreliable, vague or imprecise real-life phenomena. After the explosion of Zadeh's fuzzy systems theory based on the notion of a fuzzy set (see [19]) and Zadeh's fuzzy logic concept (see [20]), among other significant alternatives developed for dealing with uncertainties we mention Shafer's evidence theory (see [18] or [11]), Pawlak's rough set theory (see [16] or [21]) and Molodtsov's soft set theory (see [14] or [15]). Each Heyting-valued function and, consequently, each Boolean-valued function, potentially presents a possibility for the fuzzification of any structure of crisp sets (see [4]). On the other side, the reliability of a system, as a characteristic property, can by its nature be considered a measure of vagueness, uncertainty or even of logical inconsistency of the system. In this paper we prove that reliabilities, defined in an appropriate way, have a Boolean structure, qualifying them as a fuzziness measure, i.e., as values of a membership function, in a fuzzy system. In particular, we herewith prepare the ground for a formalism enabling one to express 'reliability of a statement' and 'reliability of a proof'. Namely, once we prove that a system consisting of reliabilities \(r\) has a Boolean structure, we sense that it is possible to formalize a sentence '\(r\) is the reliability of a statement \(A\)', denoted by \(A^{r}\), enabling one to work with more complex entities such as \(A^{r},B^{s}\vdash C^{t}\), meaning that 'conclusion \(C\), with reliability \(t\), can be derived from hypotheses \(A\) and \(B\), with reliabilities \(r\) and \(s\), respectively' (see [3], [5]). The fact that a system of reliabilities presents a Boolean algebra, which is proved in this paper, presents the first and basic step in building a structure that makes it possible to define the reliability \(r\) as the fuzziness measure, i.e., the value of the membership function, of a statement \(A\), formally working with expressions of the form \(A^{r}\). The reliability of a system, possibly consisting of many components, is the probability that the system will function. It is a significant notion in statistics and information theory whenever it is necessary to define the sureness of a piece of information or of a system. The reliability function is then usually introduced by means of the probability distribution function of the corresponding random variable, and the problem of the reliability of a complex system is considered via the probabilistic characteristics of the constituent elements (see [1], [8], [17]). The same function also appears in models of actuarial mathematics, under the name of the survival function (see [7]). Also, it is important to say that one applicable part of reliability mathematics is, for example, shock models. 
In the literature so far, two basic types of such models have been studied: cumulative shock models and extreme shock models; their details are out of the scope of this paper. But it is worth pointing out that the shock magnitude is measurable and the system lifetime of the above models is defined in its terms. A new approach, the \(\delta\)-shock model, was proposed in 1999 (see [12] and their previous work) and has been developed until now (see [13]). In that model, a preset parameter \(\delta\) determines a critical interval \(C(\delta)\), and the system fails when the interval between two successive shocks falls into that interval \(C(\delta)\). These \(\delta\)-shock models find their applications in many scientific fields, such as earthquake modeling, insurance mathematics, electrical systems, inventory theory and many other reliability areas. The above facts can be a good reason for further and deeper study of the topic of this paper, in particular because of its importance and applications from different points of view. In particular, the correspondence between the basic propositional connectives and binary block diagram connectives is well known: e.g., conjunction corresponds to combining two blocks in a series (or cascade) structure, disjunction to combining two blocks in parallel, and implication to combining two blocks in a feedback control system (see [6]). In this paper we consider this relationship with respect to the reliabilities of the binary block diagram components (not just with respect to the information whether they are functioning or not). The central point is to justify that the reliability algebra defined here has a Boolean structure (see [9]). This gives the formal background for reliability calculations, due to the fact that Boolean techniques are usually used for that purpose. An immediate motive to study the basic algebraic properties of reliabilities arises from our investigation of a reliability logic (see [3], [5]), where a more precise definition of a reliability algebra is required. Our present result also gives the needed verification of that subject. ## 2 Algebra of block diagrams is a Boolean algebra A _Boolean algebra_ is an algebraic structure \({\cal B}=\langle B,\wedge^{b},\vee^{b},^{\prime},1,0\rangle\), where \(B\) is a non-empty set with at least two distinct elements \(0\) and \(1\), two binary operations \(\wedge^{b}\) and \(\vee^{b}\) and one unary operation \({}^{\prime}\), such that \(\wedge^{b}\) and \(\vee^{b}\) satisfy the commutative and the associative laws, as well as the distributive law of \(\wedge^{b}\) with respect to \(\vee^{b}\), and vice versa, together with the neutral element laws: \(x\wedge^{b}1=x\) and \(x\vee^{b}0=x\), and the complement laws (or the inverse laws): \(x\vee^{b}x^{\prime}=1\) and \(x\wedge^{b}x^{\prime}=0\), where \(x^{\prime}\) is called the complement of \(x\) (see [9]). Boolean terms are defined inductively, as usual. Two Boolean terms \(t_{1}\) and \(t_{2}\) are _equal_, denoted by \(t_{1}=t_{2}\), iff we can transform one of them into the other one, using only the axioms of the Boolean algebra and the basic properties of equality. The simplest Boolean algebra is the two-element Boolean algebra \({\cal B}_{2}=\langle\{0,1\},\wedge^{b},\vee^{b},{}^{\prime},1,0\rangle\). Variables of this algebra can take only the values \(0\) or \(1\). This dichotomy applies also to Boolean terms, such that whether a Boolean term takes the value \(0\) or \(1\) is determined completely by the values of its variables. 
It is worth mentioning that when the elements \(0\) and \(1\) of a two-element Boolean algebra are the whole numbers, then the operations of the Boolean algebra can be understood as: \(x\wedge^{b}y=\min(x,y)\), \(x\vee^{b}y=\max(x,y)\) and \(x^{\prime}=1-x\). Our two-element Boolean algebra is then \({\bf 2}=\langle\{0,1\},\min,\max,1-\cdot\,,1,0\rangle\). One well-known example of a \({\cal B}_{2}\) algebra is the algebra of binary block diagrams. A binary block diagram (or simply, a diagram) is a structure whose every component is either in the functioning or in the failure state and which itself can also be either functioning or failed. We use \({\bf 1}\) to denote a functioning diagram and \({\bf 0}\) to denote a diagram which has failed. In order to define an algebra of diagrams we choose the following three operations, two of them binary (we choose them among the \(2^{2^{2}}\) binary operations which can be defined over the set \(\{{\bf 1},{\bf 0}\}\)), denoted by \(\odot\) and \(\oplus\), and one of them unary (this is one of the \(2^{2^{1}}\) unary operations which can be defined over the set \(\{{\bf 1},{\bf 0}\}\)), denoted by \(\overline{\ \ }\), defined as follows: \[\begin{array}{c|cc}\odot&{\bf 1}&{\bf 0}\\ \hline{\bf 1}&{\bf 1}&{\bf 0}\\ {\bf 0}&{\bf 0}&{\bf 0}\end{array}\qquad\begin{array}{c|cc}\oplus&{\bf 1}&{\bf 0}\\ \hline{\bf 1}&{\bf 1}&{\bf 1}\\ {\bf 0}&{\bf 1}&{\bf 0}\end{array}\qquad\begin{array}{c|c}D&\overline{D}\\ \hline{\bf 1}&{\bf 0}\\ {\bf 0}&{\bf 1}\end{array}\] which we call _series_, _parallel_ and _complement_, respectively. Then an algebra of diagrams is defined as follows. **Definition.** An _algebra of diagrams_ is an algebraic structure \[{\cal D}=\langle\{{\bf 0},{\bf 1}\},\odot,\oplus,\overline{\ \ },{\bf 1},{\bf 0}\rangle.\] We use \(A,B,C,\ldots\), with or without subscripts, to denote binary components and we define diagrams, inductively, as follows: **Definition.** (i) Binary components \(A_{1},\ldots,A_{n}\) are _diagrams_. We shall call them _elementary diagrams_. (ii) If \(D\), \(D_{1}\) and \(D_{2}\) are diagrams then \((D_{1}\odot D_{2})\), \((D_{1}\oplus D_{2})\) and \(\overline{D}\) are also _diagrams_. (iii) _Diagrams_ can be obtained by applying only (i) and (ii), finitely many times. We shall use \(D\), with or without subscripts, to denote diagrams and we shall omit, as usual, the outer pair of parentheses. By the definition of our connectives, it is clear that the diagram \(D_{1}\odot D_{2}\) is in the functioning state iff both \(D_{1}\) and \(D_{2}\) are functioning, the diagram \(D_{1}\oplus D_{2}\) is in the functioning state iff at least one of \(D_{1}\) or \(D_{2}\) is functioning, and, finally, the diagram \(\overline{D}\) is in the functioning state iff \(D\) has failed. Moreover, our connective \(\odot\) is associative, in the following sense. 
If \(D_{1},D_{2},D_{3}\) are diagrams, then the diagram \((D_{1}\odot D_{2})\odot D_{3}\) is in the functioning state iff all of \(D_{1},D_{2},D_{3}\) are functioning; similarly, the diagram \(D_{1}\odot(D_{2}\odot D_{3})\) is in the functioning state iff all of \(D_{1},D_{2},D_{3}\) are functioning; therefore \((D_{1}\odot D_{2})\odot D_{3}\) is functioning iff \(D_{1}\odot(D_{2}\odot D_{3})\) is functioning, and we write this fact as: \[(D_{1}\odot D_{2})\odot D_{3}=D_{1}\odot(D_{2}\odot D_{3}).\] Analogously we can verify that \(\odot\), \(\oplus\), \(\overline{\ \ }\), \(\mathbf{0}\) and \(\mathbf{1}\) satisfy all the axioms of a Boolean algebra, so we have: **Theorem**.: \(\mathcal{D}\) _is a Boolean algebra._ Consequently, diagrams can be understood as Boolean terms, and it is natural to define the equality between two diagrams as follows. **Definition**.: Two diagrams \(D_{i}\) and \(D_{j}\) are _equal_, denoted by \(D_{i}=D_{j}\), iff they are equal as Boolean terms. In order to determine whether an arbitrary diagram \(D\) is in the functioning or in the failure state, usually the _structure function_ \(\Phi\) is defined, as follows: \[\Phi(\mathbf{0})=0,\qquad\Phi(\mathbf{1})=1,\qquad\Phi(\overline{D})=1-\Phi(D),\] \[\Phi(D_{1}\odot D_{2})=\min(\Phi(D_{1}),\Phi(D_{2})),\] \[\Phi(D_{1}\oplus D_{2})=\max(\Phi(D_{1}),\Phi(D_{2})).\] Then, \(D\) is in the functioning state if \(\Phi(D)=1\), and it is in the failure state otherwise, i.e. when \(\Phi(D)=0\). Immediately, we have: **Theorem**.: _The structure function \(\Phi\) is an isomorphism between the algebras \(\mathcal{D}\) and \(\mathbf{2}\)._ This is a kind of a 'semantical' point of view. Namely, any diagram is interpreted as either functioning or not, i.e. as a single value in the set \(\{0,1\}\). However, in the sequel we shall be interested in another interpretation of binary diagrams. Namely, we shall interpret diagrams in a set of _reliability terms_, and that will take us closer to a kind of a 'syntactical' point of view. ## 3 Reliability algebra is a Boolean algebra It is well-known that the _reliability of a diagram_ \(D\), denoted by \(r(D)\), is the probability that the diagram \(D\) is in the functioning state, i.e. that \(r(D)\) is a real number between \(0\) and \(1\). However, instead of over a set of real numbers, we shall define a reliability algebra of diagrams over a set of _reliability terms_. They should be understood as terms built-up upon some finite set of letters, which will be called _reliability constants_. We shall consider only diagrams which are built-up upon a set of \(n\geq 1\) mutually nonequal elementary diagrams. Accordingly, we introduce the following notions: **Definition.** The set \(\mathbf{A_{n}}=\{A_{1},\ldots,A_{n}\}\) will be called the _generating set_ when \(A_{1},\ldots,A_{n}\) are mutually nonequal elementary diagrams. We say that a diagram \(D\) _is built-up upon_ \(\mathbf{A_{n}}\) iff every component of \(D\) is in \(\mathbf{A_{n}}\). It is clear that there are exactly \(2^{2^{n}}\) mutually nonequal diagrams built-up upon \(\mathbf{A_{n}}\). Recall that this is the same as in propositional logic, where the number \(2^{2^{n}}\) is exactly the number of mutually nonequivalent formulas built-up upon the set of \(n\) propositional letters. 
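This correspondence with propositional evaluation can be made computational. The following minimal Python sketch - our own illustration, not part of the paper - encodes diagrams as nested tuples and evaluates the structure function \(\Phi\) by min, max and \(1-x\), exactly as one evaluates a propositional formula over \(\{0,1\}\):

```python
# Minimal illustrative evaluator for the structure function Φ (our own sketch).
# A diagram is a component name (str) or a tuple:
#   ("series", D1, D2), ("parallel", D1, D2), ("not", D)
def phi(diagram, state):
    """Return 1 if `diagram` functions under `state` (component -> 0/1), else 0."""
    if isinstance(diagram, str):
        return state[diagram]
    op = diagram[0]
    if op == "series":    # D1 ⊙ D2
        return min(phi(diagram[1], state), phi(diagram[2], state))
    if op == "parallel":  # D1 ⊕ D2
        return max(phi(diagram[1], state), phi(diagram[2], state))
    return 1 - phi(diagram[1], state)  # complement

# Example: (A ⊙ B) ⊕ C functions when C functions even though B has failed.
d = ("parallel", ("series", "A", "B"), "C")
assert phi(d, {"A": 1, "B": 0, "C": 1}) == 1
```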
Moreover, if \(\mathbf{For_{n}}\) is the set of all mutually nonequivalent formulas built-up upon a set of \(n\) propositional letters and \(\mathbf{Dia_{n}}\) is the set of all mutually nonequal diagrams built-up upon a generating set of \(n\) elementary diagrams, then we have: **Theorem.**_The algebraic structure \(For=\langle\mathbf{For_{n}},\wedge,\vee,\neg,\top,\bot\rangle\) is isomorphic to the algebraic structure \(Dia=\langle\mathbf{Dia_{n}},\odot,\oplus,\overline{\ \ },\mathbf{1},\mathbf{0}\rangle\)._ Let us note that the proposition 'a diagram is built-up upon a generating set' is equivalent to the condition that the diagram is built-up upon _independent components_, which is, as is well known, the crucial condition for expressing the reliability of the system as a function of the component reliabilities (see [17]). Now, a _reliability algebra_ can be defined as follows: **Definition.** Let \(\mathbf{A_{n}}\) be a generating set and let \(R=\{r_{1},\ldots,r_{n}\}\) be such that \(r_{i}\) is a _reliability constant_ assigned to the elementary diagram \(A_{i}\), for every \(1\leq i\leq n\). Then \(\mathcal{R}=\langle\mathbf{R},\circ,\ddagger,{}^{-1},i,o\rangle\) is called a _reliability algebra_ of diagrams built-up upon \(\mathbf{A_{n}}\), where \(\mathbf{R}\) is the algebraic closure of the set \(R\) with respect to the binary operations \(\circ\) and \(\ddagger\) and the unary operation \({}^{-1}\), where \(\circ\) and \(\ddagger\) satisfy the commutative and associative laws, as well as the distributive law of \(\circ\) with respect to \(\ddagger\) and the distributive law of \(\ddagger\) with respect to \(\circ\), and where \(i\) and \(o\) are such that, for every \(x\in\mathbf{R}\), it holds that \(x\ddagger x^{-1}=i\) and \(x\circ x^{-1}=o\), and \(i\) and \(o\) are neutrals for \(\circ\) and \(\ddagger\), respectively, i.e. they satisfy the neutral element laws: \(x\ddagger o=x\) and \(x\circ i=x\), for every \(x\in\mathbf{R}\). As an immediate consequence we have: **Theorem.**_The reliability algebra of diagrams built-up upon \(\mathbf{A_{n}}\) is a Boolean algebra_. It is also clear that the cardinality of \(\mathbf{R}\) is exactly the same as the cardinality of the set of diagrams built-up upon \(\mathbf{A_{n}}\). Thus, to each of the \(2^{2^{n}}\) mutually nonequal diagrams built-up upon \(\mathbf{A_{n}}\), a unique term, called the _reliability term_, is assigned. Another, practical, question is how we can effectively calculate the reliability of an arbitrary diagram \(D\) built-up upon \(\mathbf{A_{n}}\). The question is in fact how to identify the diagram equal to \(D\) whose reliability term is in \(\mathbf{R}\). But this has been studied in the literature (see [2], [10]) and is not the subject of this paper. ## 4 Concluding remarks An approach to modeling reality with undetermined, vague and stochastic elements may be founded on reliability as its basic notion. Such models include statistical, technical, logical, etc. treatments of complex systems built up upon elements with given estimated times of their functioning life. The reliability function appears as the basic notion. A traditional probabilistic reliability analysis of systems is usually a part of textbooks in probability theory (see [17]). Our interest in an abstract algebraic structure whose set contains elements with given reliabilities is inspired by the problem of how to define the reliability of a complex sentence consisting of atoms with known reliabilities (see [3], [5]). 
Namely, the structure of any sentence, made by means of connectives, points to an analogy with a diagram. This point of view leads us to consider sentences as diagrams, as well as a sentence's reliability as a diagram's reliability. Our main conclusion is that the space, or the algebra, consisting of reliabilities as its basic elements is structured as a Boolean algebra (see [9]). This fact will be of great interest in further investigations of a reliability logic, enabling one to express and calculate the reliability of a sentence by working formally with propositions of the form \(A^{r}\), with the intended meaning that '\(r\) is the reliability of the sentence \(A\)', in the framework of a logical reasoning system.
2305.05423
High-throughput Cotton Phenotyping Big Data Pipeline Lambda Architecture Computer Vision Deep Neural Networks
In this study, we propose a big data pipeline for cotton bloom detection using a Lambda architecture, which enables real-time and batch processing of data. Our proposed approach leverages Azure resources such as Data Factory, Event Grids, REST APIs, and Databricks. This work is the first to develop and demonstrate the implementation of such a pipeline for plant phenotyping through Azure's cloud computing service. The proposed pipeline consists of data preprocessing, object detection using a YOLOv5 neural network model trained through Azure AutoML, and visualization of object detection bounding boxes on output images. The trained model achieves a mean Average Precision (mAP) score of 0.96, demonstrating its high performance for cotton bloom classification. We evaluate our Lambda architecture pipeline using 9000 images, yielding an optimized runtime of 34 minutes. The results illustrate the scalability of the proposed pipeline as a solution for deep learning object detection, with the potential for further expansion through additional Azure processing cores. This work advances the scientific research field by providing a new method for cotton bloom detection on a large dataset and demonstrates the potential of utilizing cloud computing resources, specifically Azure, for efficient and accurate big data processing in precision agriculture.
Amanda Issac, Alireza Ebrahimi, Javad Mohammadpour Velni, Glen Rains
2023-05-09T13:15:19Z
http://arxiv.org/abs/2305.05423v1
Development and Deployment of a Big Data Pipeline for Field-based High-throughput Cotton Phenotyping Data ###### Abstract In this study, we propose a big data pipeline for cotton bloom detection using a Lambda architecture, which enables real-time and batch processing of data. Our proposed approach leverages Azure resources such as Data Factory, Event Grids, REST APIs, and Databricks. This work is the first to develop and demonstrate the implementation of such a pipeline for plant phenotyping through Azure's cloud computing service. The proposed pipeline consists of data preprocessing, object detection using a YOLOv5 neural network model trained through Azure AutoML, and visualization of object detection bounding boxes on output images. The trained model achieves a mean Average Precision (mAP) score of 0.96, demonstrating its high performance for cotton bloom classification. We evaluate our Lambda architecture pipeline using 9,000 images, yielding an optimized runtime of 34 minutes. The results illustrate the scalability of the proposed pipeline as a solution for deep learning object detection, with the potential for further expansion through additional Azure processing cores. This work advances the scientific research field by providing a new method for cotton bloom detection on a large dataset and demonstrates the potential of utilizing cloud computing resources, specifically Azure, for efficient and accurate big data processing in precision agriculture. ## 1 Introduction The demand for sustainable agriculture has put significant pressure on the agriculture sector due to the rapid growth of the global population. Precision farming techniques enabled by Computer Vision (CV) and Machine Learning (ML) have emerged as promising solutions, whereby crop health, soil properties, and yield can be monitored, leading to efficient decision-making for agricultural sustainability. Data is gathered through heterogeneous sensors and devices across the field, such as moisture sensors and cameras on rovers. However, the huge number of objects in farms connected to the Internet leads to the production of an immense volume of unstructured and structured data that must be stored, processed, and made available in a continuous and easy-to-analyze manner (Gilbertson and Van Niekerk, 2017). Such acquired data possesses the characteristics of high volume, value, variety, velocity, and veracity, which are all characteristics of big data. In order to leverage the data for informed decisions, a big data pipeline is needed. One area of agriculture that faces particular challenges with regard to yield prediction is cotton production. The operation of cotton production is faced with numerous challenges, a major one being the timely harvesting of high-quality cotton fiber. Delayed harvesting can lead to the degradation of cotton fiber quality due to exposure to unfavorable environmental conditions. Therefore, to avoid degradation, it is vital to harvest cotton when at least 60% to 75% of the bolls are fully opened, but prior to the 50-day benchmark when bolls begin to degrade in quality (UGA, 2019). In addition, cotton harvesting is costly, as the machines used for its processing can weigh over 33 tons and can also cause soil compaction, hence reducing land productivity (Antille et al., 2016). Finally, a lack of skilled labor and external factors such as climate change, decreasing arable land, and shrinking water resources hinder sustainable agricultural production (FAO, 2009). 
In this context, heterogeneous and large-volume data is collected using various static and moving sensors. Therefore, it is imperative to develop a platform that can handle real-time streams and manage large datasets for High-Throughput Phenotyping (HTP) applications. However, most conventional storage frameworks adopted in previous studies support only batch query processing and on-premise servers for data processing. Rather than implementing on-premise processing, the adoption of cloud computing can help prevent over- or under-provisioning of computing resources, reducing costly waste in infrastructure for farmers, as shown by Kiran et al. (2015), who introduced a cost-optimized architecture for data processing through AWS cloud computing resources. Therefore, leveraging cloud computing could be a viable option for developing an efficient and scalable platform for HTP applications. In this paper, we aim to implement batch and real-time processing using cloud computing. For that, we propose a big data pipeline with a Lambda architecture through Azure, which allows for the coexistence of both batch and real-time data processing at a large scale. This two-layer architecture allows for flexible scaling, automated high availability, and agility, as it reacts in real time to changing needs and market scenarios. For testing this pipeline, we train and integrate a YOLOv5 model to detect cotton blooms using the gathered dataset. ### _Lambda Architecture_ Lambda architecture, first proposed in (Marz and Warren, 2013), is a data processing architecture that addresses the problem of handling both batch and real-time data processing by using a combination of a batch layer and a speed layer. In the context of agriculture, various research studies have implemented Lambda architecture pipelines to process and analyze large amounts of sensor data, such as weather data and crop yields, in order to improve crop forecasting and precision agriculture. Recent work (Roukha et al., 2020; Ouafiq et al., 2022) has demonstrated the feasibility of using a Lambda architecture framework in smart farming. ### _Cloud Computing_ Previous research on big data pipelines has employed on-premise servers for data processing, while the use of cloud computing can substantially reduce the cost for farmers. Cloud providers, such as Microsoft Azure, offer various data centers to ensure availability and provide better security compared to on-premise servers. We propose the adoption of Microsoft Azure Big Data resources to implement a Lambda architecture pipeline in the agriculture industry. Azure Big Data Pipeline is a cloud-based processing service offered by Microsoft that can be utilized for analyzing, processing, and implementing predictive analysis and machine learning-based decisions for agricultural operations. #### 1.2.1 Azure Data Factory Azure Data Factory (ADF) allows for the creation of end-to-end complex ETL data workflows while ensuring security requirements. This environment enables the creation and scheduling of data-driven workflows and the ingestion of data from various data stores. It can integrate additional computing services such as HDInsight, Hadoop, Spark, and Azure Machine Learning. ADF is a serverless service, meaning that billing is based on the duration of data movement and the number of activities executed. 
The service allows for cloud-scale processing, enabling the addition of nodes to handle data in parallel at scales ranging from terabytes to petabytes. Moreover, one common challenge with cloud applications is the need for secure authentication. ADF addresses this issue by supporting Azure Key Vault, a service that stores security credentials (Rawar and Narain, 2018). Overall, the use of ADF in our pipeline allows for efficient and secure data processing at scale. ### _Related Work_ Previous studies have utilized traditional pixel-based CV methods, such as OpenCV, to identify cotton bolls based on their white pixel coloring (Kadeghe et al., 2018). Another study has explored the use of YOLOv4 in order to detect cotton blooms and bolls (Thesma et al., 2022). Moreover, the integration of big data architecture has been suggested in previous research to optimize agricultural operations (Wolfert et al., 2017). Parallel studies have explored the use of Lambda architecture pipelines as a viable approach to process and analyze large amounts of sensor data, such as weather data and crop yield, in order to improve forecasting for specific crops. For instance, Roukha et al. (2020) present a cloud-based solution, named WALLeSMART, aimed at mitigating the big data challenges facing smart farming operations. The proposed system employs a server-based Lambda architecture on the data collected from 30 dairy farms and 45 weather stations. Similarly, Ouafiq et al. (2022) integrate a big data pipeline inspired by Lambda architecture for smart farming for the purposes of predicting drought status, crop distributions, and machine breakdowns. The study suggests the benefits of flexibility and agility when utilizing a big data architecture. Furthermore, cloud-based solutions have become increasingly popular in agriculture due to their scalability and cost-effectiveness. Another study employs big data in the cloud related to weather (climate) and yield data (Chen et al., 2014). ### _Summary of Contributions and Organization of the Paper_ This paper focuses on the use of Microsoft Azure resources to implement and validate a Lambda architecture High-Throughput Phenotyping big data pipeline for real-time and batch cotton bloom detection, counting, and visualization. We develop data reduction and processing steps to transfer only useful data, and we separately train a YOLOv5 object detection model and integrate it into our big data pipeline. The pipeline was thoroughly tested and demonstrated through the analysis of a set of 9,000 images. Despite existing research work on the use of Lambda architecture and its benefits, there is still a lack of studies that elaborate on the development process and tools to construct this architecture. Moreover, there has been limited research on the application of Lambda architecture utilizing cloud computing resources, as most are server-based. **To the best of our knowledge, there is no previous study that elaborates on the implementation of a big data Lambda architecture pipeline utilizing cloud computing resources, specifically Azure, while integrating advanced machine learning models for plant phenotyping applications**. While big data analytics and cloud computing have become increasingly popular in precision agriculture, the integration of these technologies with Lambda architecture for plant phenotyping (in our case, cotton) remains an open research area. 
Our approach demonstrates the efficacy of utilizing cloud-based resources for the efficient and accurate analysis of large-scale agricultural datasets. This paper makes several contributions to the research field, which are listed as follows: 1. Introducing a Lambda architecture pipeline that accounts for both batch and real-time processing, providing an efficient and scalable solution for data analysis. 2. Utilizing cloud computing resources, specifically Microsoft Azure, to improve the performance and reliability of the proposed pipeline. 3. Demonstrating the actual implementation tools and processes used to build the proposed pipeline, enabling other researchers to replicate and build upon our work. 4. Integrating a big data pipeline for cotton plant phenotyping, which enables the efficient analysis of large volumes of data and provides new insights into the growth and development of cotton plants. 5. Contributing a new cotton field dataset to the research community, which is currently limited, enabling other researchers to validate and build upon our findings. The remainder of the paper is organized as follows: Section 2 describes data retrieval; Section 3 summarizes the implemented Lambda architecture pipeline; Section 4 provides a summary of offline AI-based object detection model training; Section 5 discusses fine-tuning methods to optimize the continuous pipeline run-time and showcases final results; and lastly, Sections 6 and 7 discuss areas for future work and concluding remarks. ## 2 Dataset In this study, we employed our own cotton field dataset to evaluate the proposed pipeline for phenotyping analysis. The cotton field dataset will be further elaborated in subsequent sections, including data collection procedures and data preprocessing steps. ### Cotton Research Farm The cotton data was collected using a stereo camera that was installed on an autonomous ground vehicle deployed in a research farm at the University of Georgia's Tifton campus in Tifton, GA. Figure 1 illustrates an aerial view of the farm. The treatments described in Figure 1 are 4 rows wide, but we collected data on the inner 2 rows for post-analysis, as discussed later. ### Cotton Field Data Collection In our data collection efforts, we employed a rover developed by West Texas Lee Corp. (Lubbock, Texas). As described in (Fue et al., 2020), this rover is a four-wheel hydrostatic machine with a length of 340 cm, front and rear axles 91 cm from the center, and a ground clearance of 91 cm. It was powered by a Predator 3500 Inverter generator and equipped with the Nvidia Jetson Xavier for remote control and vision and navigation systems. With a top speed of approximately 2 kilometers per hour, the rover was able to efficiently traverse the study area. To power its electronics, the rover utilized two 12-Volt car batteries. It also carried a ZED RGB stereo camera. The ZED stereo camera, with left and right sensors 120 cm apart and mounted 220 cm above the ground facing downward (Fue et al., 2020), was chosen for its ability to perform effectively in outdoor environments and provide real-time depth data. It captured 4-5 frames per second and recorded a video stream of each 4-row treatment from June 2021 to October 2021, 2-3 days per week, as a ROS bag file. ### Dataset Creation In this study, a camera equipped with two lenses was utilized to capture images of cotton plants. 
The camera captured both left and right views of the plants, with a total of 765 image frames extracted from sixteen 4-row treatments on each data collection day between July 14, 2021 and August 6, 2021, when blooms began to appear. These frames were labeled in ascending numerical order to ensure proper correspondence with the video stream and prevent any overlapping. The 765 image frames were subsequently divided into separate sets for the left and right lens views, resulting in a total of 1,530 frames. An example of the image frames captured by the left and right lens can be seen in Figure 3. Previous research has shown that the small proportion of blooms relative to the background in cotton field images can make it difficult for neural network models to accurately detect the blooms (Thesma et al., 2022). To address this issue, we pre-processed the images by dividing them into five equal slices. The treatments described in Figure 1 are 4 rows wide, but we collected data on the inner 2 rows for analysis when slicing. An example of the resulting images, used as input for the subsequent analysis pipeline, is shown in Figure 4. Figure 1: Aerial view of our cotton farm in Tifton, GA displaying 40 rows of cotton plants, treatments of two planting populations (2 and 4 seeds per foot), HD (Hildrop), and single planted cotton seed. Two row spacings of 35 inches and 40 inches were also used as treatments. Each treatment was 4 rows wide and 30 feet long. There were three repetitions per treatment. Figure 2: Front view of the rover with the robotic arm, vacuum, and sensors mounted on the rover (see (Fue et al., 2020) for details) that was used to collect video streams of cotton plants in Tifton, GA. We selected a dataset consisting of sliced images from 10 specific days in 2021: July 8, July 14, July 16, July 19, July 23, July 26, July 29, August 4, August 6, and September 9. This resulted in a total of 9,018 images with 3 color channels (RGB) and dimensions of 530 \(\times\) 144 for testing batch processing and creating the offline object detection model. The dataset in this study comprised diverse cotton plant data, locations, and treatments, as the video streams were collected from various rows on different days. ## 3 Development of Lambda Architecture Pipeline In this work, we propose a Lambda architecture to enable real-time analytics through a distributed storage framework, which traditionally is only capable of batch processing. The proposed architecture consists of three main layers: batch, speed, and serving. The batch layer is responsible for processing large amounts of historical data on a schedule, while the speed layer handles real-time streams of data. The serving layer serves the processed data to clients, applications, or users. This approach allows for the efficient handling of both historical and real-time data, enabling a wide range of analytical capabilities. We illustrate the Lambda architecture using Azure resources in Figure 5. In order to achieve real-time ingestion, we utilize the Azure Data Factory's event-based trigger, which sends an event when an image is uploaded to the storage account. This event is handled by Azure's Event Grid for real-time streams. In comparison, batch ingestion is triggered by a scheduled event. Once ingested into the Azure Data Factory, the pipeline connects to Databricks for preprocessing of the image data. 
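As a concrete illustration of the slicing step described above, the following is a minimal sketch. The file names are hypothetical, and we assume slicing into horizontal strips, consistent with the 530 \(\times\) 144 dimensions of the resulting slices:

```python
# Minimal sketch: split one extracted frame into five equal horizontal
# slices so that blooms occupy a larger fraction of each input image.
from PIL import Image

def slice_image(path: str, n_slices: int = 5) -> list:
    img = Image.open(path)
    w, h = img.size
    slice_h = h // n_slices
    return [img.crop((0, i * slice_h, w, (i + 1) * slice_h))
            for i in range(n_slices)]

for i, s in enumerate(slice_image("frame_0001.png")):
    s.save(f"frame_0001_slice{i}.png")
```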
The processed data is then forwarded to a deployed AI object detection model, which is running on a Kubernetes cluster, to retrieve the designated bounding box coordinates for the image. Finally, Databricks draws the bounding boxes and outputs the image. The development process is elaborated further below. To initiate our analysis, we established an Azure Data Factory workspace. The Azure Data Factory portal allows for monitoring the pipelines' status in real time. In order to use the Data Factory, we had to create a resource group, a container for holding related resources for our Azure solution. For this work, we opted to ingest binary unstructured data from Azure Blob storage into Azure Data Lake. This allowed us to efficiently process and store large volumes of data for subsequent analysis. Azure Blob storage is a highly scalable unstructured data object storage service. To use Blob storage and create an Azure Data Lake, we first had to initialize a storage account. Azure Storage is a Microsoft-managed service that provides cloud storage and a REST API for CRUD operations. For this project, we configured the storage account to use locally redundant storage (LRS) for data replication, as it is the least expensive option. We also set the blob access tier to 'hot' to optimize for frequently accessed and updated data. The storage account's data protection, advanced, and tags settings were left as their default values. Overall, the use of Azure Blob storage and the creation of an Azure Data Lake allowed us to efficiently store and process large volumes of unstructured data for our analysis. Microsoft Azure Data Lake is a highly scalable data store for unstructured, semi-structured, and structured data (Rawar and Narain, 2018). It is compatible with Azure services and a variety of additional tools, making it capable of performing data transformation and handling large volumes of data for analytics tasks. To separate the stream and batch processing in our pipeline, we created two separate blob containers labeled 'batch' and 'stream'. Files ingested into the 'batch' folder are processed by a scheduled trigger designed for batch processing, while files ingested into the 'stream' folder trigger real-time processing. This allows us to efficiently handle both historical and real-time data in our analysis. Figure 4: Example of cotton field pipeline input image after preprocessing, prior to data ingestion into the pipeline. Figure 5: Illustration of the proposed pipeline utilizing Azure resources. Figure 3: Example of cotton field dataset image after image extraction from original bag files. ### Speed Layer The stream layer of the Lambda architecture is designed for real-time analysis of incoming data streams. It is generally not used for training machine learning models, but rather for applying pre-trained models to classify or predict outcomes for the incoming data. This allows the system to provide real-time insights, which are crucial when timely action is required. For example, real-time analysis of cotton bloom location and density can enable farmers to take immediate action. Another benefit of the stream layer is its ability to handle high-volume data streams with low latency, which can be a challenge for traditional batch processing systems that may suffer from delays in the availability of insights. #### 3.1.1 Ingestion To enable real-time processing in our pipeline, we implemented a file storage trigger in the stream layer. 
This trigger initiates the pipeline in real time whenever a new image is added to the blob storage. This approach allows us to automate the data processing and analysis pipelines, hence reducing the need for manual intervention. Additionally, the file storage trigger is compatible with other services such as Azure IoT Hub, enabling us to process data ingested from IoT devices for scalability. This approach allows us to efficiently and effectively analyze data as it is generated in near real time. The creation of a real-time trigger in Azure Data Factory also generates an event grid in Azure. Event Grid is a messaging service in Azure that enables the creation of event-driven architectures. It can be used to trigger actions such as running a pipeline. In our case, the event grid listens for events in the input source (blob storage) and, upon detecting a new event, sends a message to the Data Factory service to trigger the execution of the pipeline. This allows for the automation of the pipeline process. For the transfer of data from blob storage into the data lake, we must create a connection between the Data Factory and the Data Lake. We used a Copy Activity in a Data Factory pipeline to copy data from a Data Lake store to a different store or data sink. In our pipeline, we use two separate folders as input sources, each with its own trigger (batch and stream). To facilitate this configuration, we parameterized the input file name to accommodate the separate cases of the stream and batch layers. By making the data folder input a dynamic parameter, we were able to alter the folder used as the input source without modifying the pipeline itself. This approach allows us to flexibly configure the input sources for our pipeline without the need for additional maintenance or modification. ### Batch Layer The batch layer of our pipeline serves as the primary repository for the master dataset and allows us to view a batch view of the data prior to computation. The layer plays a crucial role in managing and organizing the dataset, enabling efficient analysis and processing. We can divide this batch data into smaller batches to train machine learning models on large datasets more quickly, independently, in parallel, and with fewer computational resources. It also helps with the scalability of a machine learning system, as the system is able to handle larger datasets, optimize the training process, and improve the performance of the resulting model. #### 3.2.1 Ingestion For our experiments, we selected a dataset consisting of sliced images from 10 specific days in 2021, which resulted in a total of over 9,000 images. The dataset is stored in Azure Blob Storage, a scalable cloud-based object storage service that is capable of storing and serving large amounts of data. This scalability, compatible with terabytes of data, makes it well-suited for use in data-intensive applications such as ours. To accommodate larger volumes of data, Blob Storage is engineered to scale horizontally by automatically distributing data across multiple storage nodes. This allows it to handle increases in data volume and access requests without additional manual provisioning or configuration. In our pipeline, we integrated a batch trigger in addition to the stream layer trigger. This trigger is of the batch type, allowing us to specify a predetermined schedule for execution. 
The schedule can be fixed, such as running every day at a specific time, or dynamic through a CRON expression, a scheduling syntax supported by Azure. For the purposes of our experimentation, the trigger is calibrated to run every 3 minutes. However, the flexibility of the batch trigger schedule allows for the customization of the frequency of execution to meet our specific data collection and processing needs. For example, the trigger can be executed on a weekly or hourly basis when collecting data on-site. The use of a batch trigger in Azure Data Factory allows us to scalably process large volumes of data. We can ingest data into the batch layer at a rate that meets our specific needs, and schedule the trigger to execute at appropriate intervals to ensure that the data is processed and analyzed in a timely manner. The ability to adjust the schedule of the batch trigger allows us to fine-tune the performance of our pipeline and ensure that it is able to handle the volume and velocity of our data effectively. Figure 6: Screenshot of the parameterization process for the stream and batch triggers to automate the pipeline for continuity. #### 3.2.2 Azure Data Factory Connection The batch layer follows the same process as the speed layer for the Azure Data Factory connection. If an image is ingested into the batch folder, the batch trigger sends the batch parameter, which is used in the remainder of the pipeline for organizing data. ### Pre-process/Analyze In the analysis of high-volume data, pre-processing is a vital step. Raw data from devices may contain inconsistencies and noise which can degrade the quality of results and decision-making insights. These issues are addressed through the cleansing, normalization, and reduction of data. Furthermore, the pre-processing step for images integrates various techniques such as noise reduction, image enhancement, and feature extraction. These methods assist with streamlining decision-making and interpretation. We decided to incorporate image compression into our pipeline, as it can significantly reduce the size of images, lowering storage requirements and the cost of processing large volumes of data. By integrating image compression methods to eliminate image data redundancy, it is possible to represent an image through fewer bits, resulting in a smaller file size. However, there are trade-offs in terms of image quality and compression ratios, thus it is imperative to select a compression setting that preserves sufficient image quality. These steps are crucial in the context of big data pipelines, where storage space is often a limiting factor. #### 3.3.1 Databricks Connection to Data Lake To facilitate pre-processing, we incorporate Databricks, a cloud-based platform that integrates Apache Spark, a powerful open-source data processing engine (Zaharia et al., 2012). Apache Spark is optimized to handle substantial amounts of data quickly and efficiently, making it ideal for real-time data processing applications. It boasts the capability for in-memory processing, rendering it significantly more efficient than disk-based systems, especially when working with vast amounts of data, resulting in reduced computation time. Moreover, Apache Spark supports parallel processing, permitting it to divide data into smaller chunks and process them simultaneously to enhance performance even further if needed. 
In the realm of pre-processing tasks, a popular alternative to Databricks is the open-source big data processing framework Hadoop. Hadoop utilizes the MapReduce programming model, which has been shown to be challenging to work with in comparison to the Spark engine utilized by Databricks (Gonzalez et al., 2014). Furthermore, Hadoop requires significant configuration and maintenance efforts to set up and run properly, whereas Databricks offers a user-friendly interface and requires less infrastructure (Zaharia et al., 2012). In addition, Databricks provides a range of additional tools and features, such as integration with data storage platforms like Amazon S3 and Azure Blob Storage, as well as the ability for data scientists and analysts to collaborate through notebooks and dashboards (Databricks, 2021), making it a more convenient platform for handling big data. For our experiments, we configured our Databricks cluster to use Databricks Runtime version 11.3 LTS. The worker and driver type is Standard DS3 v2, which contains 14 GB of memory and 4 cores. We set the number of workers to range between 2 and 8 and enabled autoscaling, where the cluster configures the appropriate number of workers based on the load. Once the data is ingested into the Data Lake, we compress each image by storing it as a JPEG file at 30% quality. This pre-processing stage is flexible and scalable, and we can also implement other pre-processing and data transformation techniques such as image slicing. Furthermore, we checked that the image dimensions are a valid input for our model. The next step is to configure the Databricks linked service connection. The Databricks linked service connection in Azure is a way to connect to a Databricks workspace from Azure. It allows users to easily access and integrate data stored in their Databricks workspace with Azure Data Factory. When configuring the Databricks linked service, we enter the Databricks workspace URL and authentication access token. We first selected the method of having a new job cluster created anytime there was an ingestion trigger. In order to improve the efficiency of the data processing pipeline, we decided to switch from creating a new job cluster for each ingestion trigger to using existing interactive clusters. This approach reduces the time required for the pipeline to start processing, as the interactive cluster is already active when new data is ingested. This saves, on average, the 3 minutes of restarting a new job cluster for each single trigger. Figure 7: Screenshot of Azure Data Factory when setting up the Databricks linked service connection to ADF. The credentials required are as follows: Databricks workspace URL, authentication type, and access token. Initially, we decided to create a new job cluster; however, based on the results, we shifted to an existing interactive cluster and hence input the existing cluster ID. However, when a file is first uploaded, there exists a delay while the inactive interactive cluster is first started. To minimize this delay, we calibrate the interactive cluster to terminate if no activity has been detected for a period of 20 minutes. This configuration can be easily adjusted to meet the needs of different use cases. As a result, the first image ingestion takes 3 minutes to start the cluster; however, subsequent image ingestions demonstrated a significant reduction in connection time to the cluster, with a duration of fewer than 10 seconds. 
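The compression step mentioned above, re-encoding each ingested image as a 30%-quality JPEG, can be sketched as follows; the paths are hypothetical placeholders for the mounted data lake locations used in our notebook:

```python
# Minimal sketch of the pre-processing compression step: re-encode an
# ingested image as a 30%-quality JPEG to shrink its file size.
from PIL import Image

def compress_image(src: str, dst: str, quality: int = 30) -> None:
    img = Image.open(src).convert("RGB")  # JPEG does not store alpha
    img.save(dst, format="JPEG", quality=quality)

compress_image("/mnt/cotton/stream/frame_0001.png",
               "/mnt/cotton/processed/frame_0001.jpg")
```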
To enable Databricks to access the Azure Data Lake, we mounted the Data Lake Storage Gen2 (ADLS Gen2) file system to the Databricks workspace. This allows us to use standard file system operations to read and write files in the ADLS Gen2 file system as if it were a local file system. Mounting the ADLS Gen2 file system to Databricks enables us to access data stored in ADLS Gen2 from Databricks notebooks and jobs, and facilitates integration between Databricks and other tools and systems that use ADLS Gen2 as a storage backend. Furthermore, the parameterization of the input folder (batch vs. stream) allows the Databricks notebook to use this dynamic input to operate on the correct data lake folder. ### AI Model/APIs In this section, we describe the process of deploying the trained AI model. #### 3.4.1 Deployment with Kubernetes To deploy the trained object detection model in the pipeline, we utilized Azure Kubernetes Service (AKS) (Corporation, 2021). Microsoft Azure's AKS simplifies the process of deploying and scaling containerized applications on the cloud platform through its managed Kubernetes service. By leveraging the benefits of the open-source Kubernetes container orchestration platform, AKS creates a consistent and predictable environment for managing these applications. The service achieves this by creating and managing clusters of virtual machines that run these applications, making the deployment and scaling process easier and more efficient. Kubernetes is highly scalable, and its platform allows for the management of applications across multiple nodes in a cluster, making it a versatile solution for managing containerized applications in the cloud (Corporation, 2021). It also includes several features that enhance the management of containerized applications: automatic bin packing allows Kubernetes to schedule containers to run on the most appropriate nodes in a cluster, maximizing cost efficiency; load balancing distributes incoming traffic across multiple replicas of an application to handle high traffic volumes; and secret and configuration management provides secure, encrypted storage for sensitive data such as passwords and API keys, improving application security. Our cluster uses a Standard D3 v2 virtual machine, which has 4 cores, 14 GB of RAM, and 200 GB of storage. We retrieve the scoring Python script from the best AutoML YOLOv5 run and use it to deploy the model as an AKS web service. In order to assess the performance of our machine learning model, we utilized a Python script. This script contains code for loading the trained model, reading in data, making predictions using the model, and calculating various performance metrics such as accuracy and precision. It also includes provisions for saving the predictions made by the model and the calculated performance metrics to a file or database for further analysis. By running our script on a separate dataset, known as the test dataset, we were able to obtain an unbiased estimate of the model's performance and assess its ability to generalize to new data. 
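The mounting step described at the beginning of this section can be sketched as a Databricks notebook cell; the storage account, container, and secret scope names below are placeholders rather than our actual configuration, and `dbutils` is only available inside Databricks:

```python
# Sketch: mount an ADLS Gen2 container into the Databricks workspace
# via OAuth with a service principal whose credentials are stored in a
# secret scope. All resource names are hypothetical.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id":
        dbutils.secrets.get("cotton-scope", "sp-client-id"),
    "fs.azure.account.oauth2.client.secret":
        dbutils.secrets.get("cotton-scope", "sp-client-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://data@cottonstorage.dfs.core.windows.net/",
    mount_point="/mnt/cotton",
    extra_configs=configs,
)
```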
To enhance the capability of our object detection model in handling elevated workloads, it was deployed with autoscaling enabled. This allows for dynamic adjustment of computing resources, such as CPU cores and memory, in response to incoming requests. The initial configuration was set to 1 CPU core and 7 GB of memory. To secure the model, an authentication key system was implemented, requiring the provision of a unique key with each request. This key system ensures that only authorized requests can access the model. Subsequent sections of this paper will elaborate on the training process of the object detection model and its integration into the workflow. #### 3.4.2 Azure Data Factory Connection In order to optimize the efficiency of our pipeline, we made the decision to include the AKS connection credentials within the initial Databricks notebook where the data is pre-processed. This approach was chosen as an alternative to utilizing Azure's ADF web service option for the REST API connection in the Azure Data Factory, which would have required the creation of another Databricks notebook to draw the bounding boxes from the output bounding box coordinates. By integrating the AKS connection credentials directly into the primary notebook, we were able to streamline the process while eliminating the need for an additional Databricks compute cluster and its connection time. This avoided the added overhead of creating an additional notebook in ADF, which would have slowed down the pipeline. Overall, we have one Databricks notebook which conducts both the pre-processing and post-processing of data. Figure 8 illustrates the tasks of Databricks in the pipeline. ### Output We developed an object detection model that is capable of identifying and counting cotton blooms in images. When the model is run on an input image, it returns the bounding box coordinates of any cotton blooms that it detects. To visualize the results of the model, we retrieve these bounding box coordinates and use them to create visual bounding boxes over the input image. This output image, which shows the detected cotton blooms overlaid on the original image, is then stored in a blob storage account. By using a blob storage REST API, we can easily send this output image to any other device for further processing or analysis. This approach allows us to scale the output of the model to meet our needs. ## 4 Offline YOLOv5 Model Training Object detection is a key task in computer vision, which involves identifying and locating objects of interest in images or video streams. One popular object detection model is YOLO (You Only Look Once), which was first introduced in 2015 (Redmon et al., 2015). Since then, the YOLO model has undergone several revisions, and one key difference between YOLOv5 and its predecessor, YOLOv4, is the training framework. While previous versions of YOLO, including YOLOv4, were trained using the Darknet framework, YOLOv5 is implemented in PyTorch. This allows YOLOv5 to benefit from the advanced optimization and acceleration techniques provided by PyTorch, which can improve the model's performance and speed. YOLOv5 also introduces several other improvements and new features compared to YOLOv4, including a more efficient network architecture and support for a wider range of input sizes (Bochkovskiy et al., 2020). ### Data Labeling We utilized AutoML and Azure Machine Learning Studio to train a YOLOv5 model for cotton bloom detection. 
AutoML automates the process of selecting and training the most suitable machine learning model for a given dataset. It allows users to easily train, evaluate, and deploy machine learning models without the need for extensive programming knowledge or machine learning expertise (Wachs and Kalyansundaram, 2021). To train the YOLOv5 model using AutoML, we first set up a connection between our data lake (which contained the images used for training) and Azure Machine Learning Studio. Azure Machine Learning Studio is a cloud-based platform that provides tools for developing, deploying, and managing machine learning models in Azure (Murphy, 2012). Once the connection was established, we were able to use AutoML and Azure Machine Learning Studio to train and evaluate the YOLOv5 model on our dataset. The platform provided a range of tools and resources for optimizing the model's performance, including the ability to tune hyperparameters, apply data augmentation techniques, and evaluate the model's performance using a variety of metrics, which will be further discussed. After creating the Machine Learning Studio workspace, we created a Datastore that connects to our Data Lake storage container. From the Datastore, we created a Data Asset. Datastores and data assets are resources in Azure Machine Learning Studio that allow us to store and access data for machine learning experiments. We created the Data Asset from 1,300 images saved in the Data Lake that had been compressed beforehand. In our case, we decided to reduce the quality of the images by compressing them prior to training the model. This way, our model provides better accuracy in the full pipeline, which compresses the images prior to sending them into the AI model. Using the Azure ML Studio Labeler tool, we annotated the 1,300 images with bounding boxes that identify the location and size of the cotton blooms in each image. The AutoML labeler tool is part of the Azure Machine Learning platform. After the annotations were complete, we exported them into AzureML Dataset format. Figure 9 is a screenshot of an example of annotating one image through Azure's Image Labeler tool. ### Model Hyperparameters and Training In this work, the model utilized 80 percent of the dataset for training and 20 percent for validation purposes. Furthermore, the YOLOv5 model was trained using a learning rate of 0.01, the 'large' model size, which contains 46.5 million trainable parameters, and a total of 70 epochs. However, the training process was terminated early when the mean average precision (mAP) metrics stopped improving. This resulted in the training process stopping early at 30 epochs in our experiment. The number of epochs used for training is important, as it determines the number of times that the model sees the training data and can influence the model's performance. Figure 10 shows results from our hyperparameter tuning. One key aspect of the training process was the use of the Intersection over Union (IOU) threshold, which is a measure of the overlap between the predicted bounding boxes and the ground truth bounding boxes (see Figure 11). The IOU threshold was set to 0.55 for both precision and recall, which means that a predicted bounding box was considered correct if the overlap with the ground truth bounding box was greater than or equal to 0.55. 
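For reference, a minimal sketch of this IoU computation for axis-aligned boxes is given below; the \((x_1,y_1,x_2,y_2)\) corner format is one common convention and is assumed here purely for illustration:

```python
# Minimal sketch of the IoU matching criterion. Boxes are given as
# (x1, y1, x2, y2) pixel corners with x1 < x2 and y1 < y2.
def iou(box_a, box_b) -> float:
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as correct when iou(pred, truth) >= 0.55.
```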
The use of the IOU threshold is important because it allows the model to be evaluated using a standard metric to compare the performance of different models. Figure 8: Illustration of the tasks within the Databricks notebook: compression (pre-processing), connection to the AI model, and creation of output with results (post-processing). Figure 9: Example of cotton bloom bounding box annotations for one cotton field sliced image. In addition to the IOU threshold, the training process also involved setting the batch size to 10, where the model parameters were updated for each batch of 10 images. This training was performed using a computing cluster with 6 cores, 1 GPU, 56 GB of RAM, and 360 GB of disk space. The overall training process took 1 hour and 10 minutes to complete. ## 5 Finetuning and Results ### YOLOv5 Model In this work, the trained YOLOv5 AutoML model achieved a mean average precision (mAP) score of 0.96. The mAP score is a metric that is commonly used to evaluate the performance of object detection models. It measures the average precision across all classes of objects in the dataset and takes into account the overall precision and recall of the model. Precision is a measure of the accuracy of the model's predictions and is defined as the number of correct predictions divided by the total number of predictions. In comparison, recall measures the model's ability to capture all relevant instances in its predictions. It can be determined by dividing the number of correct predictions by the total number of instances in the actual data (LeCun et al., 2015). In this case, the YOLOv5 model had a precision value of 0.84 and a recall score of 0.99 when using an IOU validation threshold of 0.55. The F1 score, which is the harmonic mean of precision and recall, was also calculated and found to be 0.904. The importance of precision, recall, and the F1 score lies in their ability to provide a comprehensive evaluation of the model's performance. High precision is essential for ensuring that the model does not produce false positives. A high recall is essential for ensuring that the model does not produce false negatives. The F1 score, which takes into account both precision and recall, provides a balanced evaluation of the model's performance (LeCun et al., 2015). Below are the formulas mentioned above, which consider the True Positives (_TP_), False Positives (_FP_), and False Negatives (_FN_): \[\text{precision}=\frac{TP}{TP+FP}\] (1) \[\text{recall}=\frac{TP}{TP+FN}\] (2) \[\text{F1 score}=\frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}\] (3) The model itself returns the bounding box coordinates. When integrating the model into the pipeline, we conduct post-processing to draw and visualize the bounding boxes on top of the input image. Figure 12 displays an example output image with detected cotton blooms from the AI model. ### Azure Data Factory In order to connect the trained AI model to the rest of the Azure Data Factory pipeline, we first created a Standard Kubernetes cluster. We then deployed the model into Kubernetes, which provides a REST API to interact with. #### 5.2.1 REST API vs. Blob Storage Ingestion Previously, we ingested data from Azure Blob Storage into Azure Data Lake to demonstrate the feasibility of ingesting data from external IoT devices. To assess the performance of the ingestion process, we conducted an experiment using image data and the REST API connection provided by the Data Lake. 
Initially, we utilized the popular API development and testing tool Postman to conduct a synchronous request and observed a substantial improvement in ingestion time. Postman is commonly used for testing and debugging API applications, and it can be used to make both synchronous and asynchronous requests (Fielding, 2000). The implementation of this reduced the stream ingestion time from 12 seconds to 150 ms. This not only highlights the applicability of the REST API connection but also its efficiency in speeding up the ingestion process. Figure 10: Table displaying results from tuning hyperparameters. The F1 score and mAP were the highest when utilizing the large YOLOv5 model with a threshold of 0.55. We also tuned the number of epochs, but AutoML would terminate after 30 epochs due to no significant improvement. Figure 11: Illustration of the definition of IOU, which takes into account the area of overlap and the area of union. The higher the area of overlap between the detected bounding box and the ground truth box, the higher the IOU. Figure 12: Example of pipeline output after post-processing and adding bounding boxes for cotton bloom detection visualization. While Postman is a useful tool for testing and debugging APIs, it is not the only option for making HTTP requests to devices. To scale up for batch processing, we adopted asynchronous Python code for the HTTP connection. The original ingestion from blob storage to the data lake took around 2 minutes for 9,000 images (157.6 MB). With the optimization of the REST API and asynchronous Python code, the batch ingestion process was completed in just 8.62 seconds, a marked improvement over the previous ingestion time. For future purposes, one can use the REST API and HTTP connection with other devices and systems (mobile devices or IoT devices). The pipeline is compatible with the integration of machine learning models into a wide range of applications and systems. #### 5.2.2 _Kubernetes_ We also optimized our Kubernetes configuration by increasing the number of nodes and node pools. When testing on a smaller batch of 100 images, integrating 5 nodes rather than 3 nodes in the Kubernetes cluster decreased the runtime from 32 minutes to 28 minutes. Increasing the number of node pools from 1 pool to 2 pools decreased the runtime from 28 minutes to 22 minutes. #### 5.2.3 _Asynchronous vs. Synchronous Processing_ Asynchronous programming allows the execution of multiple tasks to run concurrently without waiting for the completion of prior tasks. The asyncio library, a built-in library in Python, provides the infrastructure for writing asynchronous code with the help of the async and await keywords (Foundation, 2023). Additionally, the aiohttp library enables asynchronous support for HTTP requests, allowing for concurrent processing of multiple requests without waiting for responses (aio libs, 2023b). The aiofiles library, on the other hand, offers asynchronous support for file operations such as reading and writing to files. This can be useful in programs that need to perform numerous file operations simultaneously, such as our program that handles a significant number of images (aio libs, 2023a). Our pipeline runtime for processing 9,000 batch images was found to take approximately 3 hours and 50 minutes with synchronous code. After optimizing the pipeline with asynchronous code, the execution time was reduced to 34 minutes, which represents a substantial improvement. 
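A minimal sketch of this concurrent ingestion pattern, combining asyncio, aiohttp, and aiofiles, is shown below; the endpoint URL and key are hypothetical placeholders for the actual Data Lake/AKS REST endpoints and their credentials:

```python
# Sketch: upload many images concurrently instead of one at a time.
import asyncio
import aiohttp
import aiofiles

ENDPOINT = "https://example-endpoint/upload"  # placeholder URL
API_KEY = "<auth-key>"                        # placeholder key

async def upload(session: aiohttp.ClientSession,
                 sem: asyncio.Semaphore, path: str) -> int:
    async with sem:
        async with aiofiles.open(path, "rb") as f:
            data = await f.read()
        async with session.post(
            ENDPOINT, data=data,
            headers={"Authorization": f"Bearer {API_KEY}"},
        ) as resp:
            return resp.status

async def main(paths):
    sem = asyncio.Semaphore(50)  # cap the number of in-flight requests
    async with aiohttp.ClientSession() as session:
        statuses = await asyncio.gather(
            *(upload(session, sem, p) for p in paths))
        print(f"{statuses.count(200)}/{len(paths)} uploads succeeded")

# asyncio.run(main([f"frame_{i:04d}.jpg" for i in range(9000)]))
```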
This reduction from nearly four hours to 34 minutes demonstrates the potential benefits of implementing asynchronous processing in our pipeline. ### Cost Although Azure is not an open-source environment, the pay-as-you-go service ensures that only resources that are effectively used are charged. With Microsoft Azure, we can spin up a 100-node Apache Spark cluster in less than ten minutes and pay only for the time the job runs on the specific cluster (Rawar and Narain, 2018). We used a computing cluster with GPU infrastructure for the YOLOv5 training, which costs $1.14 per hour. The total time spent training was 1 hour and 6 minutes. The total cost is as follows: using the virtual machines led to a cost of about $3.56, storage cost $2.18, container costs were $1.85, utilizing a virtual network was $1.33, the Azure Databricks connection was $0.30, and Azure Data Factory led to an additional cost of $0.30. Furthermore, the Kubernetes cluster deployment was the most costly item, at roughly $70 monthly. ## 6 Future Directions The pipeline can be further optimized by updating computing clusters with higher computing power and incorporating GPU processing to reduce the total processing time. Moreover, the pipeline currently takes approximately three minutes to reactivate the terminated Databricks interactive cluster, which could be improved through the use of pools. A bottleneck encountered during the data processing was the Internet connection needed to send images to the Kubernetes cluster through the REST API. To address this issue, we can utilize Databricks MLflow by downloading the model within the Databricks environment itself rather than having to create a separate Internet connection. We refer back to Figure 8 to gain a better understanding of the bottleneck at step 3, where the cluster must create an Internet connection to the REST API URL. If we wanted to scale up with more nodes, the price of Kubernetes could increase to up to $1,000 monthly. This further suggests the benefit of utilizing Databricks MLflow and downloading the model itself rather than using Kubernetes' REST API for the AI model connection. Another bottleneck encountered is the limitation of OpenCV when drawing bounding boxes. Despite our efforts to optimize results through asynchronous Python code, OpenCV's drawing operations are CPU-bound rather than I/O-bound, so they gain nothing from asynchronous scheduling. As a result, drawing bounding boxes proceeds as a linear process, despite the rest of the code being optimized for concurrent execution. To overcome this issue, we can incorporate PySpark, a Python library for distributed data processing using Apache Spark. PySpark allows us to leverage the power of Spark, which is a distributed computing platform that enables fast and flexible data processing. This is compatible with our pipeline because our Databricks Runtime version 11.3 LTS includes Apache Spark 3.3.0 and Scala 2.12. With the use of PySpark, we can employ the parallel computing power of our Databricks cluster and enhance the speed and efficiency of our data processing operations; a sketch of such a distributed drawing step is given below. Overall, these optimization strategies can be used to scale up the pipeline and decrease the total processing time, making it more efficient and effective for handling much larger datasets. 
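As an illustration, the following is a sketch of how the drawing step might be distributed with PySpark. The paths, schema, and box format are hypothetical, `spark` is the session provided by the Databricks runtime, and we assume OpenCV is installed on the workers:

```python
# Sketch: distribute the OpenCV bounding-box drawing step across the
# cluster instead of running it as a single linear loop.
import json
import cv2

def draw_partition(rows):
    # Runs on each worker for its partition of images.
    for row in rows:
        img = cv2.imread(row.path)
        for x1, y1, x2, y2 in json.loads(row.boxes):
            cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)),
                          color=(0, 0, 255), thickness=2)
        cv2.imwrite(row.path.replace("/processed/", "/output/"), img)

# One row per image: (local path, JSON-encoded list of boxes).
df = spark.createDataFrame(
    [("/dbfs/mnt/cotton/processed/frame_0001.jpg", "[[10, 20, 80, 90]]")],
    ["path", "boxes"],
)
df.foreachPartition(draw_partition)
```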
## 7 Conclusion This study has presented a new big data pipeline for cotton bloom detection using a Lambda architecture and Microsoft Azure's cloud computing resources. The pipeline performs data preprocessing, object detection using a YOLOv5 neural network trained through Azure AutoML, and visualization of object detection bounding boxes. The results of the study demonstrate the high performance of the neural network, with a mean Average Precision (mAP) score of 0.96 and an optimized runtime of 34 minutes when evaluated on over 9,000 images. This work showcases the scalability of the presented pipeline as a solution for deep learning-based object detection and emphasizes the potential of employing cloud computing resources for big data processing in precision agriculture. This study advances the field by developing and demonstrating a big data pipeline implementation of a new method for cotton bloom detection from images collected on a cotton farm. The results obtained in this study suggest a scalable Lambda architecture that can be implemented for big data processing using Azure resources. ## Acknowledgement The authors would like to thank Canicius Mwitta for his assistance in setting up the experiments and data collection.
2307.13438
Local density of states above a disk -- geometrical vs. thermal boundary conditions
We analytically calculate the contribution to the local density of states due to thermal sources in a disk-like patch within the framework of fluctuational electrodynamics. We further introduce a wavevector cutoff method to approximate this contribution. We compare the results obtained with the source and cutoff methods with the numerically exact LDOS above a metal disk obtained from SCUFF-EM calculations. By this comparison we highlight the differences and resemblances of thermal and geometrical boundary conditions, which are both relevant for near-field scanning microscope measurements. Finally, we give an outlook on general lateral temperature profiles and compare them with surface profiles.
Svend-Age Biehs, Achim Kittel, Zhenghua An
2023-07-25T12:08:33Z
http://arxiv.org/abs/2307.13438v1
# Local density of states above a disk -- geometrical vs. thermal boundary conditions ###### Abstract We analytically calculate the contribution to the local density of states due to thermal sources in a disk-like patch within the framework of fluctuational electrodynamics. We further introduce a wavevector cutoff method to approximate this contribution. We compare the results obtained with the source and cutoff methods with the numerically exact LDOS above a metal disk obtained from SCUFF-EM calculations. By this comparison we highlight the differences and resemblances of thermal and geometrical boundary conditions, which are both relevant for near-field scanning microscope measurements. Finally, we give an outlook on general lateral temperature profiles and compare them with surface profiles. ## I Introduction In the last two decades, different kinds of scanning thermal microscopes have been developed which enable us to image the thermal near-field of solid interfaces in the infrared region. A first near-field scanning thermal microscope of this kind has been set up in the research group of Yannick De Wilde [1; 2; 3]. This so-called Thermal Radiation Scanning Tunneling Microscope (TRSTM) is in principle an s-SNOM which works without any external illumination. Instead, it scatters the thermal near-field of a heated sample at the apex of the sharp TRSTM tip into the far field. The far-field signal can be decomposed into its different frequency parts, so that the TRSTM makes it possible to measure spectra of the thermal near-field in the vicinity of a sample. In order to obtain signals which are large enough to be measurable, it is typically necessary to heat the samples by several hundreds of Kelvins. A similar AFM-based near-field scanning thermal microscope, the so-called Thermal Infrared Near-field Spectroscopy (TINS) setup, has been set up in the group of Markus Raschke [8; 9]. A difference between TINS and the TRSTM lies in the fact that for TINS the tip of the microscope can also be heated. The far-field signal emitted and scattered by the heated tip can again be decomposed into its frequency components using FTIR. Thus, TINS and TRSTM allow for measuring the spectral information of a given sample. A third near-field scanning thermal microscope, the so-called Scanning Noise Microscope (SNoiM), has been developed by Susumu Komiyama and has been advanced by the group of Zhenghua An [10; 11; 12]. As the TRSTM, the SNoiM is in principle an s-SNOM without external illumination. The important difference between the SNoiM and the TRSTM lies in the fact that the SNoiM has an ultra-sensitive single-photon detector [13; 14; 15] which works at a cryogenic temperature of 4.2 K. Due to this specific detector, even very weak far-field signals can be measured, such that for a SNoiM measurement it is not necessary to heat either the microscope tip or the sample, as it is for the TRSTM or TINS. Consequently, the SNoiM is used to measure signals of the microscope and the sample at room temperature. However, in contrast to the TRSTM or TINS, the actual SNoiM setup can only measure signals at a single wavelength, which is typically about 14.5 \(\mu\)m. Yet another near-field scanning thermal microscope is the Near-field Scanning Thermal Microscope (NSThM) designed by Achim Kittel [4; 5; 6; 7]. It is in principle an STM which has been augmented by a thermocouple at the foremost part of the probe. In contrast to the TRSTM, TINS and SNoiM, the NSThM measures the radiative heat flux between the probe and a cooled sample. 
The STM ability of the NSThM allows for precisely controlling the distance between the tip and the sample, so that heat fluxes down to 0.3 nm distance can be measured. As for the TRSTM, TINS and SNoiM, the tip of the NSThM is very sharp, having a tip radius of about 20 nm. Consequently, it is possible to acquire highly resolved images of the near-field thermal radiation. With the NSThM only spectrally integrated near-field heat fluxes can be measured, so that the NSThM has no access to the spectrum of the heat flux. However, the NSThM has the advantage over the other microscopes that with its STM ability also topographical information of the sample can be obtained with a high lateral and vertical resolution. The first theoretical models for the TRSTM, TINS, SNoiM and NSThM were based on the assumption that the foremost part of the probes of the microscopes can be regarded as small spheres which can be described as dipoles in the long-wavelength regime [16; 17; 18; 19; 20; 21; 22; 23]. Therefore, the signals should in lowest order be proportional to the photonic local density of states (LDOS) at the position of the microscope tip, and consequently the measured signals have been compared with the LDOS [2; 5; 8; 9; 11]. Improved theoretical descriptions have been brought forward recently by a discrete-dipole model of the tip and an exact boundary-element method [24; 25; 7; 26]. The aim of our work is to shed some light on the impact of geometrical and thermal boundary conditions on the LDOS, which are both highly relevant for the above mentioned near-field thermal imaging methods. To this end, we derive an analytical expression for the contribution to the LDOS above a semi-infinite medium stemming from thermal sources in a disk-shaped patch as sketched in Fig. 1 within the framework of fluctuational electrodynamics, as done in Ref. [27] for the van der Waals forces and the near-field radiative heat transfer. This LDOS would correspond to the signal of a SNoiM, for instance, for a planar sample which is heated in a disk-shaped area. We further numerically calculate the LDOS above a nanodisk as sketched in Fig. 2 by using SCUFF-EM [28; 29]. This LDOS would correspond to a measurement of the LDOS above a free-standing nanodisk as conducted recently with a SNoiM [30]. We compare the LDOS obtained with the two methods and discuss the similarities and differences of the impact of the thermal and geometrical boundary conditions. Furthermore, we introduce a simple wave-vector cutoff approximation as used in Refs. [30; 31] and discuss its ability to mimic the LDOS calculated with the source method. Finally, we discuss the relation between general lateral temperature and surface profiles. ## II Definition of the LDOS The electric and magnetic parts \(D_{e}\) and \(D_{m}\) of the LDOS at a position \(\mathbf{r}\) are defined via the electric and magnetic energy densities of the electromagnetic field generated by the fluctuational sources inside the semi-infinite body via the relations [32; 33; 34] \[u_{e}=\frac{\epsilon_{0}}{2}\langle E_{\alpha}(\mathbf{r})E_{\alpha}(\mathbf{r})\rangle=\int_{0}^{\infty}{\rm d}\omega\,\Theta(\omega,T)\,D_{e}(\omega,d), \tag{1}\] \[u_{m}=\frac{\mu_{0}}{2}\langle H_{\alpha}(\mathbf{r})H_{\alpha}(\mathbf{r})\rangle=\int_{0}^{\infty}{\rm d}\omega\,\Theta(\omega,T)\,D_{m}(\omega,d), \tag{2}\] where \[\Theta(\omega,T)=\frac{\hbar\omega}{{\rm e}^{\hbar\omega/k_{\rm B}T}-1} \tag{3}\] is the mean energy of a thermal photon at the temperature \(T\) of the sources. 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! In global equilibrium the LDOS is given by [34] \[D_{e}^{\rm ge}(\omega,d) =\frac{\omega}{\pi c^{2}}{\rm Im}{\rm Tr}\big{[}{\sf G}^{\rm EE}({ \bf r},{\bf r})\big{]} \tag{6}\] \[D_{m}^{\rm ge}(\omega,d) =\frac{\mu_{0}}{\epsilon_{0}}\frac{\omega}{\pi c^{2}}{\rm Im}{\rm Tr }\big{[}{\sf G}^{\rm HH}({\bf r},{\bf r})\big{]} \tag{7}\] where \({\sf G}^{\rm HH}\) is the magnetic Green's function. This global equilibrium quantity is typical considered as the LDOS and it coincides with the local equilibrium expressions for the contribution of the evanescent waves [34]. Therefore, in the near-field regime where the evanescent waves dominate both definitions of the LDOS are equivalent. Below, we will use the SCUFF-EM package to evaluate \(D_{e}^{\rm ge}(\omega,d)\) and \(D_{m}^{\rm ge}(\omega,d)\) and compare it to the local equilibrium LDOS \(D_{e}(\omega,d)\) and \(D_{m}(\omega,d)\). ## III LDOS of sources in a disk Now, it can be noted that by definition the local equilibrium LDOS has a very interesting feature. By fixing the volume containing the equilibrated sources at points \({\bf r}^{\prime}\) we can determine how different volume elements are contributing to the LDOS at the position \({\bf r}\). In the following, we consider a semi-infinite material with an interface in the x-y plane to vacuum. We want to determine the LDOS at the position \({\bf r}=(0,0,d)^{t}\) generated by the thermal sources contained in a volume of the bulk of the shape of a circular disk with radius \(R\) and thickness \(t\) (see Fig. 1). Then we can do this by simply introducing polar coordinates \[\int_{V}{\rm d}^{3}{\bf r}^{\prime}=\int_{0}^{R}{\rm d}\rho^{\prime}\,\rho^{ \prime}\int_{0}^{2\pi}{\rm d}\varphi^{\prime}\int_{-t}^{0}{\rm d}z^{\prime}. \tag{8}\] But before it is possible to carry out this volume integration, we need the corresponding expressions for the Green functions for point sources within the semi-infinite medium and observation points outside the medium. 
For a semi-infinite medium, these expressions are well known and can be stated in the Weyl representation as \[\mathsf{G}^{\mathrm{EE/HH}}(\mathbf{r},\mathbf{r}^{\prime})=\int\!\!\frac{\mathrm{d}^{2}\kappa}{(2\pi)^{2}}\,\mathrm{e}^{\mathrm{i}\boldsymbol{\kappa}\cdot(\mathbf{x}-\mathbf{x}^{\prime})}\,\mathsf{G}^{\mathrm{EE/HH}}(\boldsymbol{\kappa}) \tag{9}\] with \[\mathsf{G}^{\mathrm{EE}}(\boldsymbol{\kappa})=\frac{\mathrm{i}\,\mathrm{e}^{\mathrm{i}\gamma_{0}z}\mathrm{e}^{-\mathrm{i}\gamma_{1}z^{\prime}}}{2\gamma_{1}}\bigg(t_{\mathrm{s}}\mathbf{a}_{\mathrm{s}}(\mathbf{k}_{1})\otimes\mathbf{a}_{\mathrm{s}}(\mathbf{k}_{0})+t_{\mathrm{p}}\mathbf{a}_{\mathrm{p}}(\mathbf{k}_{1})\otimes\mathbf{a}_{\mathrm{p}}(\mathbf{k}_{0})\bigg) \tag{10}\] and \[\mathsf{G}^{\mathrm{HH}}(\boldsymbol{\kappa})=\frac{1}{c\mu_{0}}\frac{\mathrm{i}\,\mathrm{e}^{\mathrm{i}\gamma_{0}z}\mathrm{e}^{-\mathrm{i}\gamma_{1}z^{\prime}}}{2\gamma_{1}}\bigg(t_{\mathrm{p}}\mathbf{a}_{\mathrm{s}}(\mathbf{k}_{1})\otimes\mathbf{a}_{\mathrm{s}}(\mathbf{k}_{0})+t_{\mathrm{s}}\mathbf{a}_{\mathrm{p}}(\mathbf{k}_{1})\otimes\mathbf{a}_{\mathrm{p}}(\mathbf{k}_{0})\bigg). \tag{11}\] Here, we have introduced the wavevector parallel to the interface \(\boldsymbol{\kappa}=(k_{x},k_{y})^{t}\), and the wavevector components perpendicular to the interface in vacuum, \(\gamma_{0}=\sqrt{k_{0}^{2}-\kappa^{2}}\), and within the medium, \(\gamma_{1}=\sqrt{k_{0}^{2}\epsilon_{1}-\kappa^{2}}\), so that \(\mathbf{k}_{1}=(\boldsymbol{\kappa},\gamma_{1})\) and \(\mathbf{k}_{0}=(\boldsymbol{\kappa},\gamma_{0})\). Furthermore, we use the notation \(\mathbf{x}=(x,y)^{t}\), \(\mathbf{x}^{\prime}=(x^{\prime},y^{\prime})^{t}\), the polarization vectors for s- and p-polarization \(\mathbf{a}_{\mathrm{s}}(\mathbf{k})=\frac{1}{\kappa}(k_{y},-k_{x},0)^{t}\) and \(\mathbf{a}_{\mathrm{p}}(\mathbf{k})=\frac{1}{\kappa k}(-k_{x}\gamma,-k_{y}\gamma,\kappa^{2})^{t}\), and the amplitude transmission coefficients \[t_{\mathrm{s}}=\frac{2\gamma_{1}}{\gamma_{1}+\gamma_{0}}\quad\text{and}\quad t_{\mathrm{p}}=\frac{2\sqrt{\epsilon_{1}}\gamma_{1}}{\gamma_{0}\epsilon_{1}+\gamma_{1}}. \tag{12}\] Note that \(\mathsf{G}^{\mathrm{HH}}\) can be obtained from \(\mathsf{G}^{\mathrm{EE}}\) by interchanging \(t_{\mathrm{s}}\leftrightarrow t_{\mathrm{p}}\) and multiplying with the factor \(1/c\mu_{0}\). Thus, it suffices to determine \(D_{e}\), because \(D_{m}\) can then be determined from it.
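As a small numerical companion to Eqs. (9)-(12), the following minimal Python sketch evaluates \(\gamma_{0}\), \(\gamma_{1}\) and the amplitude transmission coefficients. It is our illustrative addition rather than part of the original derivation; the Au parameters are taken from the caption of Fig. 3 below, and all function names are our own.

```python
import numpy as np

# Illustrative parameters, taken from the caption of Fig. 3:
# Au at a wavelength of 14.5 um with permittivity eps1 = -8659 + 4524i.
lam = 14.5e-6                 # wavelength (m)
k0 = 2 * np.pi / lam          # vacuum wavenumber
eps1 = -8659 + 4524j          # permittivity of the Au half space

def gammas(kappa):
    """Wavevector components perpendicular to the interface:
    gamma_0 in vacuum and gamma_1 inside the medium (principal branch)."""
    g0 = np.sqrt(k0**2 - kappa**2 + 0j)
    g1 = np.sqrt(eps1 * k0**2 - kappa**2)
    return g0, g1

def t_s(kappa):
    """Amplitude transmission coefficient for s-polarization, Eq. (12)."""
    g0, g1 = gammas(kappa)
    return 2 * g1 / (g1 + g0)

def t_p(kappa):
    """Amplitude transmission coefficient for p-polarization, Eq. (12)."""
    g0, g1 = gammas(kappa)
    return 2 * np.sqrt(eps1) * g1 / (g0 * eps1 + g1)

# Example: an evanescent wave with kappa = 2 k0.
print(t_s(2 * k0), t_p(2 * k0))
```

In the same spirit, the ingredients of \(\mathsf{G}^{\mathrm{HH}}\) would follow by swapping \(t_{\mathrm{s}}\leftrightarrow t_{\mathrm{p}}\), as noted above.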
After a lengthy and tedious calculation, we get the final expressions for the local equilibrium LDOS by inserting the expressions for the Green's functions into the definition of the LDOS and carrying out the volume integral over the disk-like volume, \[D_{e/m}=\int_{0}^{\infty}\frac{\mathrm{d}\kappa}{2\pi}\,\kappa\int_{0}^{\infty}\frac{\mathrm{d}\kappa^{\prime}}{2\pi}\,\kappa^{\prime}\,\frac{\mathrm{e}^{\mathrm{i}(\gamma_{0}-\gamma_{0}^{\prime*})d}}{4\gamma_{1}\gamma_{1}^{\prime*}}\,\frac{\gamma_{1}^{2}-\gamma_{1}^{\prime*2}+\kappa^{2}-\kappa^{\prime 2}}{\gamma_{1}-\gamma_{1}^{\prime*}}\,\frac{\omega}{c^{2}}\bigg(1-\mathrm{e}^{\mathrm{i}(\gamma_{1}-\gamma_{1}^{\prime*})t}\bigg)\int_{0}^{R}\mathrm{d}\rho^{\prime}\,\rho^{\prime}\,I_{e/m}(\kappa,\kappa^{\prime},\rho^{\prime},\omega) \tag{13}\] with the integral kernels \[I_{e}(\kappa,\kappa^{\prime},\rho^{\prime},\omega)=\bigg(t_{\mathrm{s}}t_{\mathrm{s}}^{\prime*}+t_{\mathrm{p}}t_{\mathrm{p}}^{\prime*}\frac{\gamma_{0}\gamma_{1}\gamma_{0}^{\prime*}\gamma_{1}^{\prime*}}{|k_{1}|^{2}k_{0}^{2}}\bigg)C+t_{\mathrm{p}}t_{\mathrm{p}}^{\prime*}\bigg(\frac{\gamma_{1}\gamma_{1}^{\prime*}+\gamma_{0}\gamma_{0}^{\prime*}}{|k_{1}|^{2}k_{0}^{2}}\kappa\kappa^{\prime}B+\frac{\kappa^{2}\kappa^{\prime 2}}{|k_{1}|^{2}k_{0}^{2}}A\bigg) \tag{14}\] and \[I_{m}(\kappa,\kappa^{\prime},\rho^{\prime},\omega)=t_{\mathrm{s}}t_{\mathrm{s}}^{\prime*}\bigg(\frac{\gamma_{0}\gamma_{0}^{\prime*}}{k_{0}^{2}}C+\frac{\kappa\kappa^{\prime}}{k_{0}^{2}}B\bigg)+t_{\mathrm{p}}t_{\mathrm{p}}^{\prime*}\bigg(\frac{\gamma_{1}\gamma_{1}^{\prime*}}{|k_{1}|^{2}}A+\frac{\kappa\kappa^{\prime}}{|k_{1}|^{2}}B\bigg). \tag{15}\] Here, primed quantities are evaluated at \(\kappa^{\prime}\); note that \(\gamma_{1}^{2}-\gamma_{1}^{\prime*2}+\kappa^{2}-\kappa^{\prime 2}=2\mathrm{i}k_{0}^{2}\epsilon_{1}^{\prime\prime}\), so that \(D_{e/m}\) is manifestly proportional to the imaginary part of the permittivity. We have further introduced the abbreviations \[A=J_{0}(\rho^{\prime}\kappa)J_{0}(\rho^{\prime}\kappa^{\prime}), \tag{16}\] \[B=J_{1}(\rho^{\prime}\kappa)J_{1}(\rho^{\prime}\kappa^{\prime}), \tag{17}\] \[C=\frac{1}{2}\big(J_{0}(\rho^{\prime}\kappa)J_{0}(\rho^{\prime}\kappa^{\prime})+J_{2}(\rho^{\prime}\kappa)J_{2}(\rho^{\prime}\kappa^{\prime})\big), \tag{18}\] where \(J_{n}\) (\(n=0,1,2\)) are the cylindrical Bessel functions. Note that the integrals over \(\kappa\) and \(\kappa^{\prime}\) include the propagating waves with \(\kappa,\kappa^{\prime}\leq k_{0}\) and the evanescent waves for which \(\kappa,\kappa^{\prime}\geq k_{0}\). By taking the limits \(t\to\infty\) and \(R\to\infty\) and using the relation \[\int_{0}^{\infty}\mathrm{d}\rho^{\prime}\,\rho^{\prime}J_{n}(\rho^{\prime}\kappa)J_{n}(\rho^{\prime}\kappa^{\prime})=\frac{\delta(\kappa-\kappa^{\prime})}{\kappa}, \tag{19}\] we have verified that \(D_{e/m}\) converges to the well-known expression for the local equilibrium LDOS above a semi-infinite material [34], \[\begin{split} D^{\infty}(\omega,d)&=D_{e}^{\infty}(\omega,d)+D_{m}^{\infty}(\omega,d)\\ &=\frac{\omega}{4\pi^{2}c^{2}}\biggl\{\int_{0}^{k_{0}}\!\!\mathrm{d}\kappa\,\frac{\kappa}{\gamma_{0}}\biggl[(1-|r_{\mathrm{s}}|^{2})+(1-|r_{\mathrm{p}}|^{2})\biggr]\\ &\qquad+\int_{k_{0}}^{\infty}\!\!\mathrm{d}\kappa\,\frac{\kappa^{3}}{\gamma_{0}^{\prime\prime}k_{0}^{2}}\bigl[\mathrm{Im}(r_{\mathrm{s}})+\mathrm{Im}(r_{\mathrm{p}})\bigr]\mathrm{e}^{-2\gamma_{0}^{\prime\prime}d}\biggr\}\end{split} \tag{20}\] where \(r_{\mathrm{s/p}}\) are the well-known Fresnel reflection coefficients for s- and p-polarized waves.
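Equation (20) is straightforward to evaluate numerically. The sketch below is a minimal, hedged implementation of the half-space LDOS (propagating plus evanescent parts, without the global prefactor \(\omega/4\pi^{2}c^{2}\)); the integration tolerances and the upper cutoff are our own choices, and the Au parameters again follow the caption of Fig. 3.

```python
import numpy as np
from scipy.integrate import quad

lam = 14.5e-6                 # wavelength (m), cf. Fig. 3
k0 = 2 * np.pi / lam
eps1 = -8659 + 4524j          # Au permittivity at 14.5 um

def fresnel(kappa):
    """Fresnel reflection coefficients r_s, r_p of the half space."""
    g0 = np.sqrt(k0**2 - kappa**2 + 0j)
    g1 = np.sqrt(eps1 * k0**2 - kappa**2)
    rs = (g0 - g1) / (g0 + g1)
    rp = (eps1 * g0 - g1) / (eps1 * g0 + g1)
    return rs, rp

def ldos_halfspace(d):
    """Half-space LDOS of Eq. (20) without the prefactor omega/(4 pi^2 c^2)."""
    def propagating(kappa):
        rs, rp = fresnel(kappa)
        g0 = np.sqrt(k0**2 - kappa**2)
        return kappa / g0 * ((1 - abs(rs)**2) + (1 - abs(rp)**2))
    def evanescent(kappa):
        rs, rp = fresnel(kappa)
        g0pp = np.sqrt(kappa**2 - k0**2)     # gamma_0'' of the evanescent waves
        return (kappa**3 / (g0pp * k0**2)
                * (rs.imag + rp.imag) * np.exp(-2 * g0pp * d))
    # The 1/gamma_0 singularity at kappa = k0 is integrable; stay slightly off it.
    prop, _ = quad(propagating, 0.0, k0 * (1 - 1e-8))
    evan, _ = quad(evanescent, k0 * (1 + 1e-8), 100.0 / d)
    return prop + evan

print(ldos_halfspace(100e-9))   # half-space value used as normalization in Fig. 3
```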
## IV Wavevector-cutoff method Before comparing the local equilibrium LDOS of the thermal sources inside a disk-shaped volume with the exact LDOS above a finite disk, we want to introduce a simple approximation method for determining the LDOS of a disk-shaped structure. The idea behind this approximation is straightforward: only waves with a lateral wavevector \(\kappa\geq\pi/D\) can contribute to the LDOS of the evanescent field above a disk. This means the largest allowed wavelength along the interface of the disk is \(2D\), which corresponds to a dipolar mode in the radial direction of the disk. Then the LDOS due to the evanescent waves can be approximated by \[D(\omega,d)\approx\frac{\omega}{4\pi^{2}c^{2}}\int_{\pi/D}^{\infty}\!\!\mathrm{d}\kappa\,\frac{\kappa^{3}}{\gamma_{0}^{\prime\prime}k_{0}^{2}}\bigl[\mathrm{Im}(r_{\mathrm{s}})+\mathrm{Im}(r_{\mathrm{p}})\bigr]\mathrm{e}^{-2\gamma_{0}^{\prime\prime}d}. \tag{21}\] Since we have here only approximated the evanescent part, this approximation can only be used in the near-field regime where waves with \(\kappa>k_{0}\) give the dominant contribution to the LDOS. Furthermore, this implies that \(\pi/D>k_{0}\), or consequently \(D<\lambda/2\), must be fulfilled. Hence, this approximation is only useful in the near-field regime for subwavelength disks. In Fig. 3 we compare, as a function of the disk diameter \(D\), the LDOS above an infinitely extended Au half space with sources in an infinitely thick disk (i.e., \(t\to\infty\)) obtained with the source method leading to expression (13) and with the cutoff method in Eq. (21). It can be seen that the LDOS values evaluated with both methods are in good agreement for most values of \(D\). For \(D\to 0\) the LDOS goes to zero, and it converges to the half-space value for \(D\to\infty\). From the source method this behaviour can be understood by the fact that for \(D\to 0\) the volume of the sources which generate the LDOS vanishes. For \(D\gg d\) enough sources contribute to the LDOS at distance \(d\), so that the resulting LDOS coincides with the half-space value. On the other hand, the behaviour can also be understood from the cutoff method. Since the near-field contribution to the LDOS in Eq. (21) is, loosely speaking, dominated in the strong near-field regime (where \(\gamma_{0}^{\prime\prime}\approx\kappa\)) by waves with \(\kappa\approx 1/d\), we have for \(\pi/D\ll 1/d\), or \(D\gg\pi d\), no difference to the half-space value. For \(D\to 0\) fewer and fewer modes contribute to the \(\kappa\) integral in Eq. (21) and therefore to the LDOS, so that it must vanish in this limit. It is interesting that both methods give qualitatively the same result and that for approximately \(D>2d\) the LDOS values determined with both methods also agree quantitatively, even though the methods are quite different in their ansatz and their final expressions. Figure 3: LDOS at position \(\mathbf{r}=(0,0,d)^{t}\) for a Au disk with diameter \(D=2R\) at distances \(d=100\,\mathrm{nm},200\,\mathrm{nm}\), and \(300\,\mathrm{nm}\) using the expressions from Eq. (13) labelled as "source" and the cutoff expression from Eq. (21) labelled as "cutoff". The LDOS is evaluated at a wavelength of \(\lambda=14.5\,\mu\mathrm{m}\) using the permittivity \(\epsilon=-8659+4524\mathrm{i}\) for Au [36]. The LDOS is further normalized to the LDOS of an Au half space. ## V Geometrical vs. thermal boundary conditions As is clear from the ansatz of the source method, the LDOS in Eq. (13) is the LDOS due to the thermal sources in a disk-shaped part of a semi-infinite sample. It corresponds to the LDOS which can be found above a substrate when it is heated in such a disk-shaped part, as depicted in Fig. 1. Therefore, the "boundaries" set by the finite integration volume can be understood as thermal boundary conditions. When considering the LDOS above a real disk as sketched in Fig. 2, the geometrical boundaries due to the finite structure will also play an important role and affect the LDOS above the disk.
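Before turning to that comparison, we remark that the \(D\)-dependence discussed for Fig. 3 is easy to reproduce numerically. The following self-contained sketch (again our illustrative addition, with our own numerical cutoffs) evaluates the cutoff LDOS of Eq. (21) normalized to the evanescent half-space contribution of Eq. (20), which dominates at the distances considered here.

```python
import numpy as np
from scipy.integrate import quad

lam, d = 14.5e-6, 100e-9      # wavelength and observation distance (m)
k0 = 2 * np.pi / lam
eps1 = -8659 + 4524j          # Au permittivity at 14.5 um, cf. Fig. 3

def evan(kappa):
    """Evanescent integrand common to Eqs. (20) and (21)."""
    g0 = np.sqrt(k0**2 - kappa**2 + 0j)
    g1 = np.sqrt(eps1 * k0**2 - kappa**2)
    rs = (g0 - g1) / (g0 + g1)
    rp = (eps1 * g0 - g1) / (eps1 * g0 + g1)
    g0pp = np.sqrt(kappa**2 - k0**2)
    return kappa**3 / (g0pp * k0**2) * (rs.imag + rp.imag) * np.exp(-2 * g0pp * d)

# Half-space (no cutoff) evanescent value used for normalization.
norm, _ = quad(evan, k0 * (1 + 1e-8), 100.0 / d)

for D in [0.1e-6, 0.2e-6, 0.5e-6, 1.0e-6, 2.0e-6]:
    cut, _ = quad(evan, max(np.pi / D, k0 * (1 + 1e-8)), 100.0 / d)
    print(f"D = {D * 1e6:3.1f} um:  normalized cutoff LDOS = {cut / norm:.3f}")
```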
In Fig. 4, we compare the LDOS obtained with the source and cutoff methods with the LDOS as obtained by the exact numerical SCUFF-EM method [28; 29] for a disk with thickness \(t=300\,\)nm, so that the disk is much thicker than the skin depth. It can be seen that for very small and very large \(D\) the exact LDOS of a metal disk coincides with the LDOS from the source method or the cutoff method, respectively. It is interesting to note that for extremely small distances of only \(10\,\)nm the LDOS above the disk evaluated with SCUFF-EM coincides more or less with the cutoff method. There is only a small deviation for a diameter of \(10\,\)nm, which is due to an electric contribution to the LDOS which becomes larger than the magnetic one. The purely magnetic LDOS would coincide with the cutoff method for all diameters \(D\). Consequently, on the one hand, for extremely small distances the geometrical boundary conditions tend to make the LDOS smaller than the LDOS due to purely thermal boundary conditions. On the other hand, for intermediate distances and small diameters the source method can better approximate the LDOS of a real finite disk than the cutoff method. In the intermediate regime, as for example for \(D\approx 500\,\)nm, the LDOS above a real nanodisk is larger than above a substrate with a disk-shaped heated region. Hence, one can conclude that the geometrical boundary conditions are very important in this regime and have the tendency to increase the LDOS. As can be seen for a distance of \(250\,\)nm, this tendency is not very strict, because the exact LDOS oscillates around the LDOS calculated with the source method, so that in particular for \(d=250\,\mathrm{nm}\) the LDOS above a real disk can even be smaller than that above a disk-shaped heated region, as seen for \(D=2\,\mu\mathrm{m}\). Figure 4: LDOS at position \(\mathbf{r}=(0,0,d)^{t}\) for a Au disk with diameter \(D=2R\) at distances (a) \(d=10\,\)nm, (b) \(d=50\,\)nm, (c) \(d=100\,\)nm, (d) \(d=150\,\)nm, (e) \(d=200\,\)nm, (f) \(d=250\,\)nm using the expressions from Eq. (13) labelled as "source" and the cutoff expression from Eq. (21) labelled as "cutoff" as well as the numerically exact SCUFF-EM method. The LDOS is evaluated at a wavelength of \(\lambda=14.5\,\mu\)m using the permittivity \(\epsilon=-8659+4524\)i for Au [36]. Of course, for larger \(D\) the exact LDOS will converge to the value of the half space, as shown in Fig. 5. This \(D\)-dependence in the exact numerical result obtained with SCUFF-EM might be associated with surface plasmon cavity modes of the gold disk. ## VI General temperature profiles So far, we have seen that geometrical and thermal boundary conditions can, for a very simple example, result in similar values for the LDOS. This property can make it difficult in near-field scanning thermal microscopy to disentangle geometrical from thermal information. Here, we want to discuss in more general terms how lateral temperature profiles and surface geometries are related.
To keep the discussion simple, we focus on the electrical part of the LDOS or energy density only. Similar results can of course be obtained for the magnetic LDOS as well. Let us assume that the temperature \(T(\mathbf{x})\) of the semi-infinite material as depicted in Fig. 1 is a function of the coordinate \(\mathbf{x}=(x,y)^{t}\), which means that we allow for a lateral temperature variation in the x- and y-directions. Then, inserting the Weyl representation of the Green's function in Eq. (9) into the definition of the electric energy density in Eq. (1), we obtain for the spectral electric energy density \[u_{E}^{T}(\omega)=\frac{\epsilon_{0}}{2}\int\!\!\frac{\mathrm{d}^{2}\kappa}{(2\pi)^{2}}\int\!\!\frac{\mathrm{d}^{2}\kappa^{\prime}}{(2\pi)^{2}}\,\mathrm{e}^{\mathrm{i}(\boldsymbol{\kappa}-\boldsymbol{\kappa}^{\prime})\cdot\mathbf{x}}\,\tilde{\Theta}(\boldsymbol{\kappa}-\boldsymbol{\kappa}^{\prime})I_{E}(\boldsymbol{\kappa},\boldsymbol{\kappa}^{\prime}) \tag{22}\] where \[\tilde{\Theta}(\boldsymbol{\kappa}-\boldsymbol{\kappa}^{\prime})=\int\mathrm{d}^{2}x^{\prime}\,\Theta(T(\mathbf{x}^{\prime}))\,\mathrm{e}^{-\mathrm{i}(\boldsymbol{\kappa}-\boldsymbol{\kappa}^{\prime})\cdot\mathbf{x}^{\prime}} \tag{23}\] is the Fourier transform of the mean energy of a harmonic oscillator \(\Theta\) for a temperature profile \(T(\mathbf{x})\), and \(I_{E}(\boldsymbol{\kappa},\boldsymbol{\kappa}^{\prime})\) is given by \[I_{E}(\boldsymbol{\kappa},\boldsymbol{\kappa}^{\prime})=\frac{\omega}{c^{2}}\,\frac{2\mathrm{i}k_{0}^{2}\epsilon_{1}^{\prime\prime}}{\gamma_{1}-\gamma_{1}^{\prime*}}\,\frac{\mathrm{e}^{\mathrm{i}(\gamma_{0}-\gamma_{0}^{\prime*})z}}{4\gamma_{1}\gamma_{1}^{\prime*}}\sum_{j,j^{\prime}=\mathrm{s,p}}t_{j}t_{j^{\prime}}^{\prime*}\big(\mathbf{a}_{j}(\mathbf{k}_{0})\cdot\mathbf{a}_{j^{\prime}}^{*}(\mathbf{k}_{0}^{\prime})\big)\big(\mathbf{a}_{j}(\mathbf{k}_{1})\cdot\mathbf{a}_{j^{\prime}}^{*}(\mathbf{k}_{1}^{\prime})\big). \tag{24}\] ## VII Conclusion In this work, we have derived an analytical expression for the contribution to the LDOS above a semi-infinite medium stemming from thermal sources in a disk-shaped part of the structure. We have further introduced a simple cutoff method which gives very similar results as the source method. We found by numerical comparison that for a gold disk the results for both methods agree very well for disks with a diameter of \(D>2d\). Finally, we have compared the results of the source method with the exact LDOS above a metal disk using SCUFF-EM. This allows us to discuss the difference between thermal and geometrical boundary conditions. We have shown that due to the geometrical boundary conditions the exact LDOS of an Au disk typically overshoots the values of the LDOS obtained with the source method. For extremely small distances of 10 nm, the geometrical boundary conditions have the tendency to give an LDOS which is smaller than that obtained with the source method and which is very well described by the cutoff method.
For intermediate distances between \(100\,\mathrm{nm}\) and \(200\,\mathrm{nm}\), the geometrical boundary conditions have the tendency to enhance the LDOS compared to the LDOS obtained from the source method, even though for very small and large \(D\) the exact LDOS coincides with the LDOS obtained with the source method. We expect that similar results can be found for polar materials. However, since such materials have resonances in the infrared, a detailed study for a broad range of frequencies around these resonances is necessary, which is out of the scope of our work. Hence, we conclude that in an LDOS measurement with instruments like the NSThM and the SNoiM, thermal and geometrical inhomogeneities will lead to similar signals, so that a clear distinction between geometrically and thermally induced effects might not always be possible. This observation is particularly important in experiments studying samples that possess both nanostructures and inhomogeneous temperature distributions. As our theoretical results point out, for such experimental setups it is of utmost importance for a correct interpretation to use combined measurement methods or information channels to disentangle the geometrical and thermal information. Obviously, with near-field scanning thermal microscopes based on AFMs or STMs, like the NSThM and TINS, it is possible to obtain a thermosignal and a signal which contains the geometric information due to the AFM and STM capabilities. In this case, a comparison of theoretical results using the geometric information with the experimental thermosignal will allow one to disentangle both contributions to the thermosignal. For microscopes like the TRSTM and SNoiM one could use samples which are precharacterized with an AFM or STM. Another method could be to make measurements of the thermosignal at two different wavelengths \(\lambda_{1}\) and \(\lambda_{2}\) with \(|\lambda_{1}-\lambda_{2}|\) much smaller than the Planck window. The intensity contrast in this case only contains the signal due to variations of the geometry or material properties in the case of inhomogeneous samples. Finally, TRSTM and SNoiM measurements with two microscope probes of different materials could be a method to disentangle the geometric and thermal signals. Clearly, further theoretical and experimental work is necessary to quantify and pinpoint the impact of thermal and geometrical inhomogeneities. ###### Acknowledgements. S.-A. B. acknowledges support from the Heisenberg Programme of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under the project No. 461632548 and support from the QuantUM program of the University of Montpellier, and he thanks the University of Montpellier and the group Theory of Light-Matter and Quantum Phenomena of the Laboratoire Charles Coulomb for hospitality during his stay in Montpellier where part of this work has been done. Z.A. acknowledges support from the National Natural Science Foundation of China (NSFC) under the project No. 12027805 and the Shanghai Science and Technology Committee under project No. 20JC1414700. The authors further acknowledge support from the Sino-German Center for Research Promotion (No. M-0174). This work has received funding from the European Community through the Horizon 2020 research and innovation programs under grant agreement No. 766853 (EFINED).
2306.13711
Topological zero modes and edge symmetries of metastable Markovian bosonic systems
Tight bosonic analogs of free-fermionic symmetry-protected topological phases, and their associated edge-localized excitations, have long evaded the grasp of condensed-matter and AMO physics. In this work, building on our initial exploration [PRL 127, 245701 (2021)], we identify a broad class of quadratic bosonic systems subject to Markovian dissipation that realize tight bosonic analogs of the Majorana and Dirac edge modes characteristic of topological superconductors and insulators, respectively. To this end, we establish a general framework for topological metastability for these systems, by leveraging pseudospectral theory as the appropriate mathematical tool for capturing the non-normality of the Lindbladian generator. The resulting dynamical paradigm, which is characterized by both a sharp separation between transient and asymptotic dynamics and a nontrivial topological invariant, is shown to host edge-localized modes, which we dub Majorana and Dirac bosons. Generically, these consist of one conserved mode and a canonically conjugate generator of an approximate symmetry of the dynamics. The general theory is exemplified through several models exhibiting a range of exotic boundary physics that topologically metastable systems can engender. In particular, we explore the extent to which Noether's theorem is violated in this dissipative setting and the interplay between symmetries and these edge modes. We also demonstrate the possibility of anomalous parity dynamics for a bosonic cat state prepared in a topologically metastable system. Observable multitime signatures in the form of anomalously long-lived quantum correlations and divergent zero-frequency power spectral peaks are proposed and discussed in detail. Our results point to a new paradigm of genuine symmetry-protected topological physics in free bosons, embedded deeply in the long-lived transient regimes of metastable dynamics.
Vincent P. Flynn, Emilio Cobanera, Lorenza Viola
2023-06-23T18:00:03Z
http://arxiv.org/abs/2306.13711v2
# Topological zero modes and edge symmetries of metastable Markovian bosonic systems ###### Abstract Tight bosonic analogs of free-fermionic symmetry-protected topological phases, and their associated edge-localized excitations, have long evaded the grasp of condensed-matter and AMO physics. In this work, building on our initial exploration [Phys. Rev. Lett. **127**, 245701 (2021)], we identify a broad class of quadratic bosonic systems subject to Markovian dissipation that realize _faithful_ bosonic analogs of the Majorana and Dirac edge modes characteristic of topological superconductors and insulators, respectively. To this end, we establish a general framework for _topological metastability_ for these systems, by leveraging pseudospectral theory as the appropriate mathematical tool for capturing the non-normality of the Lindbladian generator. The resulting dynamical paradigm, which is characterized by both a sharp separation between transient and asymptotic dynamics and a non-trivial topological invariant, is shown to host edge-localized modes, which we dub Majorana and Dirac bosons. Generically, such modes consist of one conserved mode and a canonically conjugate generator of an approximate phase-space translation symmetry of the dynamics. The general theory is exemplified through several representative models exhibiting the full range of exotic boundary physics that topologically metastable systems can engender. In particular, we explore the extent to which Noether's theorem is violated in this dissipative setting and the way in which certain symmetries can non-trivially modify the edge modes. Notably, we also demonstrate the possibility of anomalous parity dynamics for a bosonic cat state prepared in a topologically metastable system, whereby an equal distribution between even and _odd_ parity sectors is sustained over a long transient. For both Majorana and Dirac bosons, observable multitime signatures in the form of anomalously long-lived quantum correlations and divergent zero-frequency power spectral peaks are proposed and discussed in detail. Our results provide evidence of genuine symmetry-protected topological physics in free bosons, embedded deeply in the long-lived transient regimes of metastable dynamics. ## I Introduction ### Context and motivation Indistinguishable quantum particles come in two flavors: fermions and bosons. While the distinction is kinematical and, as such, unrelated to any Hamiltonian specification, it can be explained most clearly when the particles are independent, or "free". Systems of free fermions (bosons) are described by Hamiltonians that are _quadratic_ in their respective canonical fermionic (bosonic) operators, and have long played a paradigmatic role as tractable - either genuinely non-interacting or mean-field - models for both equilibrium and non-equilibrium many-body physics [1]. For a quadratic fermionic Hamiltonian (QFH), there is always a state of lowest energy, the ground state, which captures the statistical behavior of the system in equilibrium at (and close to) zero temperature. A quantum phase transition is a phase transition at zero temperature that occurs as some parameter of the Hamiltonian is varied. Generically, the phases of free-fermion quantum matter are gapped, display no local order parameter, and the energy gap closes at a phase transition. Since there is no local order parameter, there is no general Landau theory of the quantum phases of QFHs.
Rather, the general theory of such phases is based on a different set of notions: that of protecting global symmetries, space dimension, and topological invariants. The phases of free fermions are examples of _symmetry-protected topological_ (SPT) phases of quantum matter [2]. In the absence of local order parameters, how can one tell apart the different SPT phases of free fermions? A compelling answer is provided by the bulk-boundary correspondence. This powerful principle states that the topological invariant that characterizes an SPT phase also mandates the emergence of _robust zero-energy boundary-localized modes_ [3; 4]. These zero modes (ZMs) are regarded as the main experimental manifestation of the underlying SPT phase. For example, the integer quantum Hall regimes in two (spatial) dimensions are SPT phases. In this case, the protecting symmetry is particle number and the topological invariant is the Chern number of the occupied single-particle energy bands. A measurement of the quantized Hall conductance probes directly the associated chiral surface modes. In one dimension, the Su-Schrieffer-Heeger model of polyacetylene also displays topologically-mandated edge modes [5]. The protecting symmetries are particle number, spin rotations, (spinful) time reversal, and a many-body particle-hole symmetry that exchanges fermionic creation and annihilation operators. Likewise, superconductors can exist in a variety of SPT phases. While particle number cannot be one of the protecting symmetries, the superconducting classes are protected by a combination of spin symmetry, time reversal, and many-body particle-hole. The celebrated Majorana chain of Kitaev provides a paradigmatic example for \(p\)-wave topological superconductivity and can display edge ZMs [6], which are protected against perturbations that do not increase the symmetry of the model. Altogether, SPT phases of free fermions are distinguished by the following key features: (i) The translationally invariant (bulk) system is gapped; (ii) The system displays certain combinations of protecting many-body symmetries; (iii) The ground state has an associated topological number, which can only change across a quantum phase transition as long as the protecting symmetries are preserved; (iv) When the topological number is non-trivial and the system is terminated, the ground energy
2306.05173
Bayesian Inference for $k$-Monotone Densities with Applications to Multiple Testing
Shape restriction, like monotonicity or convexity, imposed on a function of interest, such as a regression or density function, allows for its estimation without smoothness assumptions. The concept of $k$-monotonicity encompasses a family of shape restrictions, including decreasing and convex decreasing as special cases corresponding to $k=1$ and $k=2$. We consider Bayesian approaches to estimate a $k$-monotone density. By utilizing a kernel mixture representation and putting a Dirichlet process or a finite mixture prior on the mixing distribution, we show that the posterior contraction rate in the Hellinger distance is $(n/\log n)^{- k/(2k + 1)}$ for a $k$-monotone density, which is minimax optimal up to a polylogarithmic factor. When the true $k$-monotone density is a finite $J_0$-component mixture of the kernel, the contraction rate improves to the nearly parametric rate $\sqrt{(J_0 \log n)/n}$. Moreover, by putting a prior on $k$, we show that the same rates hold even when the best value of $k$ is unknown. A specific application in modeling the density of $p$-values in a large-scale multiple testing problem is considered. Simulation studies are conducted to evaluate the performance of the proposed method.
Kang Wang, Subhashis Ghosal
2023-06-08T13:11:38Z
http://arxiv.org/abs/2306.05173v1
# Bayesian Inference for \(k\)-Monotone Densities with Applications to Multiple Testing ###### Abstract Shape restriction, like monotonicity or convexity, imposed on a function of interest, such as a regression or density function, allows for its estimation without smoothness assumptions. The concept of \(k\)-monotonicity encompasses a family of shape restrictions, including decreasing and convex decreasing as special cases corresponding to \(k=1\) and \(k=2\). We consider Bayesian approaches to estimate a \(k\)-monotone density. By utilizing a kernel mixture representation and putting a Dirichlet process or a finite mixture prior on the mixing distribution, we show that the posterior contraction rate in the Hellinger distance is \((n/\log n)^{-k/(2k+1)}\) for a \(k\)-monotone density, which is minimax optimal up to a poly-logarithmic factor. When the true \(k\)-monotone density is a finite \(J_{0}\)-component mixture of the kernel, the contraction rate improves to the nearly parametric rate \(\sqrt{(J_{0}\log n)/n}\). Moreover, by putting a prior on \(k\), we show that the same rates hold even when the best value of \(k\) is unknown. A specific application in modeling the density of \(p\)-values in a large-scale multiple testing problem is considered. Simulation studies are conducted to evaluate the performance of the proposed method. **Keywords:** \(k\)-monotonicity; Shape restriction; Contraction rate; Mixture representation; Dirichlet process mixture; Adaptation. ## 1 Introduction A regression or density function is typically estimated under the assumption of smoothness. However, in certain cases, it is natural to impose specific shape restrictions like monotonicity or convexity. For instance, in survival analysis, the density of the failure time can often be assumed to be nonincreasing. In such scenarios, a natural and sensible estimator should comply with the given shape restriction. This allows for estimation without relying on smoothness assumptions. One advantage of estimation methods based on shape restrictions instead of smoothness is that there is no need to select bandwidths of kernels or degrees of splines. Nonincreasing or convex nonincreasing probability densities naturally arise in several contexts, particularly in inverse problems where data is indirectly observed. For example, consider Hampel's bird migration problem (see, for instance, Section 4.3 of [29]). This problem involves estimating the distribution of the time a population of birds spends in an oasis, based on the time elapsed between two exact captures. By employing a Poisson process model for the number of captures, the distribution function of the unobserved sojourn time can be expressed in terms of the density function of the observed time interval between the captures. As a result, the density of the time interval is proven to be convex and nonincreasing on the interval \((0,\infty)\). The study of monotone nonincreasing densities was pioneered by [27], who obtained the nonparametric maximum likelihood estimator, commonly known as the Grenander estimator. The pointwise asymptotic distribution of the Grenander estimator was subsequently derived by [46] and was revisited by [28] using a switch relation technique. The nonparametric maximum likelihood estimator of a nonincreasing density under random right-censoring was investigated by [32, 31].
Convergence rates and asymptotic distributions of the nonparametric maximum likelihood estimator under convexity have also been extensively studied, as shown in works such as [40, 30], among others. One notable advantage of function estimation under shape restrictions is that it eliminates the need for selecting tuning parameters. The global shape restriction itself serves as a regularization mechanism for the estimator and helps denoise the observed data. Both nonincreasing and convex nonincreasing density classes can be viewed as special cases of \(k\)-monotone density classes. Roughly speaking, the class consists of (a dense set of) functions whose odd-order derivatives are non-positive and even-order derivatives are non-negative, up to order \(k\). A formal definition will be given in the next section. From [57], a \(k\)-monotone density on \((0,\infty)\) can be represented as a scale mixture of scaled \(\text{Beta}(1,k)\) kernels. Moreover, a density on \((0,\infty)\) that is \(k\)-monotone for every \(k\) is infinitely smooth and can be expressed as a scale mixture of exponential densities (see [11], page 439). In [33], Jewell used a mixture of exponential distributions to model the lifetime distribution and studied the maximum likelihood estimator. The classes of \(k\)-monotone densities for intermediate values of \(k\) gradually transition from the class of monotone densities (the largest class) to that of completely monotone densities (the smallest class). The class of \(k\)-monotone densities for a given integer \(k\), as a potentially useful model in nonparametric shape-restricted inference, has received some attention. The study of the nonparametric maximum likelihood estimator encompasses various aspects, including estimator characterization, convergence rates, and asymptotic distributions; see [1, 2, 15]. It is worth noting that the notion of \(k\)-monotonicity need not be restricted only to the positive half-line. In some applications, such as modeling \(p\)-values in a multiple-testing framework, as discussed below, the unit interval is the domain where the density is defined. In such cases, the mixture representation changes, though. This article focuses on Bayesian approaches to \(k\)-monotone density estimation on the unit interval. By modifying the mixture representation introduced in [57], we can adopt the well-established technique of Bayesian nonparametrics in mixture models. As no restrictions are imposed on the mixing distribution, the Dirichlet process prior, introduced by [12], can be considered, leading to a Dirichlet process scale-mixture of scaled beta kernels prior for the resulting \(k\)-monotone density. Dirichlet process mixtures with various kernels have been explored for Bayesian density estimation; see [13, 38, 7, 10, 44], among others. Markov chain Monte Carlo computational techniques have been developed to compute posterior characteristics; see [10, 39, 56] and others. Recently developed faster computational methods such as variational algorithms (cf. [5]) make the methodology appealing for practical implementation with large datasets and even allow computation in real time. The posterior distribution based on a suitable Dirichlet process mixture prior concentrates near the true value of the density with high true probability under appropriate conditions, as shown by [16] for the normal kernel using the Schwartz theory of posterior consistency; see [25] for details.
An extension and a multivariate generalization were obtained respectively by [54] and [59]. Rates of contraction of the posterior distribution of the Dirichlet mixture of normal prior have been established under various settings by [26, 24, 50] using the general theory of posterior contraction rate [18, 25]. Other kernels were treated in the works of [17, 45, 36, 58, 48, 49]. Among shape-restricted inference problems, for the special case of monotone nonincreasing densities on the half-line, the density can be expressed as a mixture of the uniform distributions on \([0,\theta]\). This representation has been used for Bayesian estimation of a symmetric unimodal density by [7] and for a unimodal density by [6], using Dirichlet process mixture priors, but no convergence results were obtained. The posterior contraction rate for monotone nonincreasing densities on the half-line was obtained by [49], who considered the Dirichlet process mixture and finite mixture priors and obtained the optimal rate \(n^{-1/3}\) up to some logarithmic factors. A challenge in applying the general theory of posterior contraction in this context is that the prior concentration condition on the Kullback-Leibler neighborhood around the true density in [18] may not be satisfied since the support of the mixture kernel is only on a finite interval whereas the true density can spread out over the positive real half-line. To overcome this, the prior concentration condition was modified in [49] using a suitably truncated density class by dropping a negligible part. Another issue is that the metric entropy increases with the monotone density class's upper bound and support length. Salomond [49] also modified the metric entropy condition on a sieve with a growing upper bound to address the complication. A closely related work is by [42], who used empirical priors in a finite mixture model setting for the same problem. The optimal posterior contraction rate \(n^{-1/3}\) in Hellinger distance up to a logarithmic factor was derived based on the theory of the empirical Bayesian approach in [43]. Unlike [18, 25], the condition requires sufficient prior mass on a data-dependent Kullback-Leibler neighborhood centered at the (sieve) maximum likelihood estimator instead of the true density. Mariucci et al. [41] obtained the near minimax-optimal posterior contraction rate for a log-concave density using an exponentiated Dirichlet process mixture prior. Shape-restricted densities arise naturally in certain multiple testing applications, such as large-scale simultaneous hypothesis testing of DNA microarray data. To assess and control the error rate in simultaneous testing, inference on the proportion of the true null hypotheses is instrumental. Estimation procedures were developed based on the observed \(p\)-values for these tests in [51, 52, 37, 53]. These methods use the fact that the \(p\)-values from true null hypotheses are distributed as uniform over the unit interval, at least approximately, while those from the alternative have density highly concentrated near zero and decaying sharply from there. In [37], the density of the \(p\)-values is modeled as a monotone density which is \(k\)-monotone with \(k=1\) and \(2\). Tang et al. [53] used a mixture of beta densities with a singularity at zero, analogous to a completely monotone density (i.e. \(k\)-monotone with \(k=\infty\)), to model the density of \(p\)-values and put a Dirichlet process prior on the mixing distribution.
This application motivates the study of \(k\)-monotone densities on the unit interval with an additional uniform component. In this framework, using the mixture representation of a \(k\)-monotone density and putting either a finite mixture or a Dirichlet process mixture prior, we shall obtain the posterior contraction rates using the general theory of posterior contraction in [18] with respect to the Hellinger metric. The presence of the additional uniform component makes the densities bounded away from zero, which is instrumental in controlling the Hellinger distance and the Kullback-Leibler divergence. We obtain the minimax-optimal rate \(n^{-k/(2k+1)}\) up to a polylogarithmic factor for any given value of \(k\), thus providing a spectrum of rates corresponding to different regularity guided by the shape-hierarchy, analogous to the smoothness hierarchy. If the true density is a \(J_{0}\)-mixture of the kernel in the mixture representation of a \(k\)-monotone density, we also show that the same procedure achieves the nearly parametric rate \(\sqrt{(J_{0}\log n)/n}\), which is difficult to achieve using a non-Bayesian method. By putting a suitable prior on \(k\), the Bayesian approach can adapt to the optimal posterior contraction rate as if the best \(k\) were known. This is a significant merit of the proposed Bayesian procedure, as to the best of our knowledge, the existing methods are all for a given value of \(k\). A mathematical relationship in a statistical inverse problem may imply the \(k\)-monotone shape ([29, 1]), but in applications such as [37], the best \(k\) fitting the data may not be clear to the researcher. The proposed method can automatically select the best underlying \(k\) while maintaining the optimal convergence rate. The organization of this paper is as follows. In the next section, we introduce the notion of a \(k\)-monotone density, characterize it through a mixture representation, and present an important approximation result using finite mixtures. In Section 3, we introduce the prior and present results on the posterior contraction rates. In Section 4, we consider Bayesian estimation with unknown \(k\). The application to multiple testing is discussed in Section 5. We present a simulation study in Section 6. The main conclusions are summarized in Section 7. Proofs are postponed to Section 8 and the appendix. ## 2 Preliminaries We shall use the following notations throughout the paper. Let \(\mathbb{R}\) be the set of real numbers and \(\Delta_{J}\) be the unit \(J\)-simplex, \(J=1,2,\ldots\). For a set \(A\), let \(\mathbbm{1}_{A}\) stand for the indicator of \(A\). Let \(f_{+}=\max(f,0)\) denote the positive part of the function \(f\), and \(f(x-)\) (respectively, \(f(x+)\)) denote the left (respectively, right) limit of \(f\) at \(x\) when it exists. If \(\int|f|^{p}<\infty\), we use \(\left\|f\right\|_{p}\) to denote the \(\mathbb{L}_{p}\)-norm, and \(\left\|f\right\|_{\infty}\) to denote the essential supremum \(\operatorname{ess\,sup}|f|\) if \(f\) is measurable and bounded almost everywhere. For \(1\leq p\leq\infty\), the space of \(p\)-integrable functions on the domain \(A\) is denoted by \(\mathbb{L}_{p}(A)\). Additionally, we define \(d_{p}(f,\mathcal{S})=\inf\{\left\|f-s\right\|_{p}:s\in\mathcal{S}\}\) for \(f\in\mathbb{L}_{p}(A)\) and \(\mathcal{S}\subseteq\mathbb{L}_{p}(A)\). The Dirac delta measure at \(\theta\) will be denoted by \(\delta_{\theta}\). 
The notation \(\operatorname{Dir}(J;\omega_{1},\ldots,\omega_{J})\) stands for the Dirichlet distribution on the probabilistic \(J\)-simplex with parameters \(\omega_{1},\ldots,\omega_{J}\). For a probability measure \(P\), absolutely continuous with respect to the Lebesgue measure, we denote its density by the corresponding lowercase letter \(p\). The Hellinger distance between two densities \(p_{1}\) and \(p_{2}\) is defined by \(d_{H}(p_{1},p_{2})=\left\|\sqrt{p_{1}}-\sqrt{p_{2}}\right\|_{2}\). The Kullback-Leibler divergence and Kullback-Leibler variation are respectively given by \(K(p_{1},p_{2})=\int p_{1}\log(p_{1}/p_{2})\) and \(V(p_{1},p_{2})=\int p_{1}[\log(p_{1}/p_{2})]^{2}\). For a semimetric space \((\mathcal{T},d)\), the metric entropy refers to the logarithm of the covering number \(\mathcal{N}(\epsilon,\mathcal{T},d)\), while the bracketing entropy refers to the logarithm of the bracketing number \(\mathcal{N}_{[]}(\epsilon,\mathcal{T},d)\); see Section 2.1 of [55] for details. For two real positive sequences \(\{a_{n}:n\geq 1\}\) and \(\{b_{n}:n\geq 1\}\), the notation \(a_{n}\lesssim b_{n}\) (equivalently, \(b_{n}\gtrsim a_{n}\)) means that \(a_{n}\leq Cb_{n}\) for some constant \(C>0\). We say \(a_{n}\asymp b_{n}\) if \(a_{n}\gtrsim b_{n}\) and \(a_{n}\lesssim b_{n}\). **Definition 2.1** (\(k\)-monotonicity).: _Let \(I\) be a subinterval of \((0,\infty)\). A function \(f\) on \(I\) is said to be \(1\)-monotone on \(I\) if \(f\) is nonnegative and nonincreasing. For \(k\geq 2\), \(f\) is said to be \(k\)-monotone on \(I\) if \((-1)^{j}f^{(j)}\) is nonnegative, nonincreasing and convex on \(I\), for every \(j=0,\ldots,k-2\)._ Let the class of \(k\)-monotone functions on \(I\) be denoted by \(\mathcal{F}_{I}^{k}\). The class of \(k\)-monotone probability densities on \(I\) will be denoted by \(\mathcal{D}_{I}^{k}=\{g\in\mathcal{F}_{I}^{k}:\int g=1\}\). We shall be concerned with \(k\)-monotone functions on a bounded interval, which can be taken to be the unit interval \((0,1)\) without loss of generality. Since the domain is fixed at \((0,1)\) throughout, we shall drop \((0,1)\) from the notations \(\mathcal{F}_{(0,1)}^{k}\) and \(\mathcal{D}_{(0,1)}^{k}\), and simply write \(\mathcal{F}^{k}\) and \(\mathcal{D}^{k}\) respectively. A closely related concept is \(k\)-convexity, which is sometimes referred to as \(k\)-monotonicity by some authors in the approximation theory literature. There are multiple ways to characterize \(k\)-convex functions, and we present an equivalent definition in the following. **Definition 2.2** (\(k\)-convexity).: _A function \(f:(0,1)\to\mathbb{R}\) is said to be \(1\)-convex on \((0,1)\) if \(f\) is nondecreasing, while for \(k\geq 2\), \(f\) is said to be \(k\)-convex on \((0,1)\) if \(f^{(k-2)}\) exists and is convex on \((0,1)\). We shall write \(\mathcal{C}^{k}\) for the space of \(k\)-convex functions on \((0,1)\)._ Introduce a probability density function \[\psi_{k}(x,\theta)=\frac{k}{\theta}\left(1-\frac{x}{\theta}\right)_{+}^{k-1},\text{ for }x>0,\ \theta>0. \tag{1}\] Note that \(\psi_{k}(\cdot,1)\) is the probability density function of \(\operatorname{Beta}(1,k)\). The following result shows that \(k\)-monotone functions and densities on \((0,1)\) admit a useful mixture representation using the kernel \(\psi_{k}\).
**Lemma 2.1** (Characterization of \(k\)-monotone functions and densities on \((0,1)\)).: _A function \(f\in\mathcal{F}^{k}\) if and only if there exist a nondecreasing function \(\gamma(t)\) on \((0,1)\) and \(\alpha_{j}\geq 0\) for \(j=0,1,\ldots,k-1\), such that, for \(x\in(0,1)\),_ \[f(x)=\sum_{j=0}^{k-1}\alpha_{j}(1-x)^{j}+\int_{0}^{1}\psi_{k}(x,t)d\gamma(t). \tag{2}\] _A density \(g\in\mathcal{D}^{k}\) if and only if there exists a probability measure \(Q\) and \((\beta_{j}:0\leq j\leq k)\in\Delta_{k+1}\) such that, for every \(x\in(0,1)\),_ \[g(x)=\sum_{j=0}^{k-1}\beta_{j}\psi_{j+1}(x,1)+\beta_{k}\int_{0}^{1}\psi_{k}(x,\theta)dQ(\theta). \tag{3}\] The proof of this lemma is based on a Taylor expansion and further integration by parts of the integral-form remainder term. Similar results can be found in [14] for a \(k\)-monotone distribution function on a compact interval and in [57] for a \(k\)-monotone function on the positive half-line. We defer the proof of the lemma to the appendix. A crucial property of \(k\)-monotone functions for deriving posterior contraction rates is that they can be approximated effectively by \(k\)-monotone free-knot spline functions in the \(\mathbb{L}_{p}\)-metric, \(1\leq p<\infty\). This property is derived from Theorem 1.1 of [35] on a shape-preserving approximation of \(k\)-convex functions. Let \(\mathcal{S}_{N,k}\) denote the space of free-knot splines of degree \(k-1\) with \(N\) interior knots in \([0,1]\). To align with the \(k\)-convex functions, we introduce a reflection transformation of the argument. Let \(\tau(x)=1-x\) for \(x\in(0,1)\) and denote \(\check{\mathcal{F}}^{k}=\{f\circ\tau:f\in\mathcal{F}^{k}\}\). Then the shape-preserving approximation of \(\mathcal{F}^{k}\) is essentially the same problem as the shape-preserving approximation of \(\check{\mathcal{F}}^{k}\). By Definition 2.1, for \(k\geq 2\), \(f\in\check{\mathcal{F}}^{k}\) if and only if \(f^{(j)}\) is nonnegative, nondecreasing, and convex, for every \(j=0,1,\ldots,k-2\). It is then clear that \(\check{\mathcal{F}}^{k}\) is a subclass of \(\mathcal{C}^{k}\). Moreover, for \(h\in\mathcal{C}^{k}\) and \(k\geq 2\), let \(h^{(k-1)}\) denote the right derivative of \(h^{(k-2)}\), which is well defined since \(h^{(k-2)}\) is a convex function on \((0,1)\). We also know that \(h^{(k-1)}\) is right continuous. It is not hard to see that \(h\in\check{\mathcal{F}}^{k}\) as well if \(h^{(j)}(0+)\geq 0\), for \(j=0,\ldots,k-1\). Indeed, \(h^{(k-1)}(0+)\geq 0\) implies that \(h^{(k-2)}\) is nondecreasing, and furthermore, it follows that \(h^{(k-2)}\) is nonnegative, nondecreasing, and convex. Continuing in the same way, we know that \(h^{(j)}\) is nonnegative, nondecreasing, and convex for all \(j=0,\ldots,k-2\), that is, \(h\in\check{\mathcal{F}}^{k}\) by definition. In view of this point, for \(f\in\check{\mathcal{F}}^{k}\subset\mathcal{C}^{k}\), the shape-preserving approximation by a free-knot spline function \(s\in\mathcal{S}_{N,k}\cap\mathcal{C}^{k}\) considered in [35] is also a shape-preserving approximation in \(\check{\mathcal{F}}^{k}\) (i.e. \(s\in\mathcal{S}_{N,k}\cap\check{\mathcal{F}}^{k}\)) provided that \(s^{(j)}(0)\geq 0\) for all \(j=0,\ldots,k-1\). By close inspection of the construction of the approximating function in [35], this set of conditions is naturally satisfied. We leave the details of the argument to the appendix.
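Before proceeding, it may help to see the representation (3) in action. The following Python sketch is included purely for illustration; the values of \(k\), \(\mathbf{\beta}\) and the two-atom mixing distribution \(Q\) are arbitrary choices of ours, not quantities prescribed by the theory.

```python
import numpy as np

def psi(x, theta, k):
    """Kernel psi_k(x, theta) = (k / theta) * (1 - x / theta)_+^{k-1} of Eq. (1)."""
    return (k / theta) * np.clip(1.0 - x / theta, 0.0, None) ** (k - 1)

def k_monotone_density(x, beta, atoms, weights, k):
    """Mixture representation of Eq. (3):
    g(x) = sum_{j<k} beta_j psi_{j+1}(x, 1) + beta_k sum_l w_l psi_k(x, theta_l)."""
    g = sum(beta[j] * psi(x, 1.0, j + 1) for j in range(k))
    g += beta[k] * sum(w * psi(x, t, k) for w, t in zip(weights, atoms))
    return g

# Arbitrary illustration: k = 3 and a two-atom mixing distribution Q.
k = 3
beta = np.array([0.3, 0.1, 0.1, 0.5])        # (beta_0, ..., beta_k), in the simplex
atoms, weights = [0.4, 0.9], [0.6, 0.4]      # Q = 0.6 delta_{0.4} + 0.4 delta_{0.9}

x = np.linspace(1e-6, 1 - 1e-6, 5)
print(k_monotone_density(x, beta, atoms, weights, k))
# g integrates to one, since each psi_k(., theta) is itself a density.
```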
The main result in [35] states that the shape-preserving approximation by free-knot splines can be as good as the unconstrained free-knot spline approximation, regarding both the number of splines used to construct the approximating function and the approximation error. In fact, Theorem 1.1 of [35] presents a more general result. In what follows, we only use their result with the order of the free-knot splines fixed at \(k\) (i.e. approximation by piecewise polynomials of degree \(k-1\)), as this is the only case of interest in the current work. **Proposition 2.2** (Theorem 1.1 of [35]).: _For any \(1\leq p\leq\infty\) and any \(f\in\mathcal{C}^{k}\cap\mathbb{L}_{p}(0,1)\), there exist constants \(C_{k}>0\) and \(C_{k,p}>0\) such that_ \[d_{p}(f,\mathcal{S}_{C_{k}N,k}\cap\mathcal{C}^{k})\leq C_{k,p}d_{p}(f,\mathcal{S}_{N,k}).\] On the other hand, the approximation error of free-knot splines is well studied in approximation theory, as can be found in Chapter 12 of [9]. If the \((k-1)\)-th derivative of \(f\) is bounded, the right-hand side of the last display is bounded by \(N^{-k}\) up to some positive constant. Moreover, the shape-preserving approximation of a \(k\)-monotone function is easily adapted to a shape-preserving approximation of a \(k\)-monotone density. With the help of Lemma 2.1, the free-knot spline approximation of order \(k\) with \(N\) interior knots admits a representation as in (3), indicating that the mixing distribution \(Q\) is supported on a set of at most \(N\) points. To summarize, we obtain the following approximation result, whose proof is deferred to the appendix. **Lemma 2.3**.: _Let \(g\in\mathcal{D}^{k}\) be given by (3) such that \(|g^{(k-1)}(0+)|<\infty\). Then there exists a discrete probability measure \(Q_{N}\) with \(N\) support points in \((0,1)\) such that with_ \[g_{N}(x)=\sum_{j=0}^{k-1}\beta_{j}\psi_{j+1}(x,1)+\beta_{k}\int_{0}^{1}\psi_{k}(x,\theta)dQ_{N}(\theta)\in\mathcal{D}^{k}, \tag{4}\] _we have that \(\|g-g_{N}\|_{\infty}\leq CN^{-k}\) for some constant \(C>0\)._ ## 3 Posterior Contraction Rates Let \(\mathbf{X}_{n}=(X_{1},\ldots,X_{n})\) be independent and identically distributed (i.i.d.) samples from a \(k\)-monotone density \(g\) given by the representation (3) for a known \(k=1,2,\ldots\). To place a prior on \(g\), it is natural to consider independent priors for the coefficient vector \(\mathbf{\beta}=(\beta_{0},\ldots,\beta_{k})\) and the mixing distribution \(Q\). We put a Dirichlet distribution prior on \(\mathbf{\beta}\) with parameters \(0<a_{j}<\infty\), for all \(j=0,1,\ldots,k\). Independently of \(\mathbf{\beta}\), we assign either a Dirichlet process (DP) prior or a finite mixture (FM) prior on \(Q\): * (DP) \(Q\sim\text{DP}_{aH}\), where \(a>0\) is the precision parameter and \(H\) is the center measure supported on \((0,1)\); see [25] for definitions; * (FM) \(Q=\sum_{j=1}^{J}w_{j}\delta_{\theta_{j}}\), with \(\theta_{1},\ldots,\theta_{J}|J\stackrel{{ i.i.d.}}{{\sim}}H\), and \((w_{1},\ldots,w_{J})|J\sim\text{Dir}(J;\omega_{1J},\ldots,\omega_{JJ})\), independently, where \(J\) is given the prior \(\Pi(J)=(n^{c}-1)n^{-cJ}\) on the set of positive integers. In the above priors, \(a\), \(H\), \((\omega_{jJ}:1\leq j\leq J<\infty)\) and \(c\) are hyperparameters. We assume that * (C1) \(H\) admits a Lebesgue density \(p_{H}\) on \((0,1)\) such that in a small neighborhood of zero, \(p_{H}(\theta)\lesssim\theta^{t_{1}}\) for some \(t_{1}>0\); * (C2) for any interval \((u,v)\subset(0,1)\) and some \(t_{2}>0\), \(H((u,v))\gtrsim(v-u)^{t_{2}}\).
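To see what the (FM) prior amounts to in practice, here is a hedged sketch of a single prior draw; the choice \(H=\operatorname{Beta}(2,2)\), the uniform Dirichlet weights, and the hyperparameter values are our own illustrative assumptions rather than prescriptions of the theory above.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_fm_prior(n, k, c=2.0):
    """One draw of (beta, Q) from the (FM) prior described above.
    Pi(J) = (n^c - 1) n^{-cJ} is geometric on {1, 2, ...} with
    success probability 1 - n^{-c}."""
    J = int(rng.geometric(1.0 - n ** (-c)))
    atoms = rng.beta(2.0, 2.0, size=J)        # H = Beta(2, 2), illustrative choice
    weights = rng.dirichlet(np.ones(J))       # omega_{jJ} = 1, illustrative choice
    beta = rng.dirichlet(np.ones(k + 1))      # Dir(a_0, ..., a_k) with a_j = 1
    return beta, atoms, weights

beta, atoms, weights = draw_fm_prior(n=500, k=3)
print(len(atoms), beta.round(3))
```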
If \(g\in\mathcal{D}^{k}\) for \(k\geq 2\), it is assumed that \(g\) is differentiable only up to order \(k-2\). However, \((-1)^{k-2}g^{(k-2)}\) is convex and non-increasing on \((0,1)\). Hence, we can define \(g^{(k-1)}\) uniquely almost everywhere as either the left or right derivative of \(g^{(k-2)}\), which are equal except possibly on an at most countable set. **Theorem 3.1** (Contraction rate for Dirichlet process mixture prior).: _Let the data \(\mathbf{X}_{n}\) be generated from a \(k\)-monotone density \(g_{0}\) on \((0,1)\) given by_ \[g_{0}(x)=\sum_{j=0}^{k-1}\beta_{0,j}\psi_{j+1}(x,1)+\beta_{0,k}\int\psi_{k}(x,\theta)dQ_{0}(\theta), \tag{5}\] _where \(k\) is known. We assume \(g_{0}^{(k-1)}(0+)<\infty\) and \(\beta_{0,0}>0\). Let \(\mathbf{\beta}\) be given a Dirichlet prior with positive constant parameters \(a_{0},\ldots,a_{k}\), and independently, put a Dirichlet process prior on \(Q\) satisfying Conditions (C1) and (C2). Then the posterior distribution of \(g\) contracts at the rate \(\epsilon_{n}=(n/\log n)^{-k/(2k+1)}\) at \(g_{0}\) with respect to the Hellinger distance, i.e., \(\mathrm{E}_{0}[\Pi(d_{H}(g,g_{0})\geq M_{n}\epsilon_{n}|\mathbf{X}_{n})]\to 0\) for any \(M_{n}\to\infty\)._ The same posterior contraction rate can be obtained by using a finite mixture prior on the mixing distribution, as presented in the following theorem. **Theorem 3.2** (Contraction rate for finite mixture prior).: _Let the data \(\mathbf{X}_{n}\) be generated from a \(k\)-monotone density \(g_{0}\) on \((0,1)\) given by (5) with a known \(k\), satisfying \(g_{0}^{(k-1)}(0+)<\infty\) and \(\beta_{0,0}>0\). Let \(\mathbf{\beta}\) be given the Dirichlet prior with positive constant parameters \(a_{0},\ldots,a_{k}\), and independently, put a finite mixture prior for \(Q\) satisfying Conditions (C1) and (C2), with \(c>0\) chosen sufficiently large. Then the posterior distribution of \(g\) contracts at \(g_{0}\) at the rate \(\epsilon_{n}=(n/\log n)^{-k/(2k+1)}\) with respect to the Hellinger distance._ **Remark 3.1**.: In Theorem 3.2, the same posterior contraction rate can be derived if the prior on \(J\) is replaced by a fixed prior that satisfies the condition \(e^{-b_{1}j\log j}\leq\Pi(J=j)\leq e^{-b_{2}j\log j}\) for some constants \(b_{1}\geq b_{2}>0\). For instance, a Poisson prior truncated at \(0\) satisfies the required tail condition. The posterior contraction rate is substantially improved to a nearly parametric rate using the same prior if the mixing distribution \(Q_{0}\) is finitely supported on \(J_{0}\) points, i.e., \[g_{0}(x)=\sum_{j=0}^{k-1}\beta_{0,j}\psi_{j+1}(x,1)+\beta_{0,k}\sum_{l=1}^{J_{0}}w_{l}^{0}\psi_{k}(x,\theta_{l}^{0}). \tag{6}\] In the result below, both \(k\) and \(J_{0}\) are allowed to depend on \(n\) (and hence the resulting rate involves \(k\) and \(J_{0}\)), provided that \(\max(\log k,\log J_{0})\lesssim\log n\). **Theorem 3.3** (Finitely supported mixing).: _Let the true density \(g_{0}\) be as given in (6) with \(\beta_{0,0}>0\). Let \(\mathbf{\beta}\) be given the Dirichlet prior with positive constant parameters \(a_{0},\ldots,a_{k}\), and independently, let \(Q\) be given a finite mixture prior with \(c>0\) chosen sufficiently large and \(H\) satisfying the conditions that \(H((u,v))\gtrsim(v-u)^{t_{2}}\) for any interval \((u,v)\subset(n^{-2},1)\) and \(p_{H}\lesssim\exp\{-t_{3}/\theta\}\) for \(\theta\in(0,n^{-2})\), where \(t_{2},t_{3}>0\) are constants._
Then the posterior of \(g\) contracts at \(g_{0}\) at the rate \(\epsilon_{n}=\sqrt{\max(k,J_{0})(\log n)/n}\) with respect to the Hellinger metric._

It can be seen from the proof that the fixed prior in Remark 3.1 also attains the same rate if \(\log J_{0}\asymp\log n\).

## 4 Adaptation to \(k\)

In the last section, we studied posterior contraction rates assuming that the order of monotonicity \(k\) is known. Here, the parameter \(k\) serves as a regularity index controlling the complexity of the model, much like a smoothness index. Adapting the rates to different values of \(k\) is therefore a highly desirable objective. In the Bayesian framework, a natural approach is to treat \(k\) as a model index parameter and put a prior distribution on it. Consequently, the resulting prior becomes a mixture of the priors used for a fixed index value. Under similar situations in smoothness or sparsity settings, the corresponding posterior distribution often adapts to the optimal rate under fairly mild conditions; see [19, 25, 8], among others. In this section, we show that such an automatic adaptation strategy works in the \(k\)-monotone setting as well. This feature makes the Bayesian approach particularly attractive, since no parallel result is known in the non-Bayesian literature for the family of \(k\)-monotone densities indexed by \(k\). Noting that the models \(\mathcal{D}^{k}\) are nested as \(\mathcal{D}^{k+1}\subset\mathcal{D}^{k}\), we define the true value \(k_{0}\) as the largest value of \(k\) such that \(g_{0}\in\mathcal{D}^{k}\). We assume \(k_{0}\) is finite, which is the case of interest. It is not hard to see that the finiteness of \(k_{0}\) implies \(\beta_{k_{0}}>0\) in the characterization (3); otherwise, this \(k_{0}\)-monotone density would be a polynomial of degree at most \(k_{0}-1\), which would correspond to the case \(k_{0}=\infty\), contradicting the finiteness of \(k_{0}\). Let \(k\) be given a prior \(\Pi\) of one of the following two types:

* (K1) \(e^{-d_{1}k\log k}\leq\Pi(k)\leq e^{-d_{2}k\log k}\) for some \(d_{1}\geq d_{2}>0\);
* (K2) \(\Pi(k)=(n^{r}-1)n^{-rk}\), for some \(r>0\).

**Theorem 4.1** (Adaptive contraction rate).: _Let the monotonicity index \(k\) be unknown and endowed with a prior satisfying Condition (K1) or (K2). Given \(k\), let the prior for \(\mathbf{\beta}\) be \(\operatorname{Dir}(a_{0},\ldots,a_{k})\) for some \(a_{0},\ldots,a_{k}\) lying between two fixed positive numbers, and independently, let \(Q\) be given either the Dirichlet process prior or the finite mixture prior with a sufficiently large \(c>0\), satisfying Conditions (C1) and (C2). Let the true density \(g_{0}\) be given by (5) with \(k=k_{0}\), satisfying \(|g_{0}^{(k_{0}-1)}(0+)|<\infty\) and \(\beta_{0,0}>0\). Then the posterior distribution contracts at \(g_{0}\) at the rate \(\epsilon_{n}=(n/\log n)^{-k_{0}/(2k_{0}+1)}\) with respect to the Hellinger metric._

If the true \(k\)-monotone density has a finite representation as in (6) with \(k=k_{0}\), then it is still possible to obtain the nearly parametric posterior contraction rate stated in Theorem 3.3 without knowing the true value \(k_{0}\) of \(k\). As in Theorem 3.3, this holds when both \(k_{0}\) and \(J_{0}\) satisfy \(\max(\log J_{0},\log k_{0})\lesssim\log n\). It may be noted that, even though \(g_{0}\in\mathcal{D}^{\bar{k}}\) for any \(\bar{k}<k_{0}\) as well, the corresponding mixture representation with the kernel \(\psi_{\bar{k}}\) will not be supported on finitely many points in general.
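The prior (K2) is straightforward to simulate from: since \(\Pi(k)=(n^{r}-1)n^{-rk}=(1-n^{-r})(n^{-r})^{k-1}\) for \(k=1,2,\ldots\), it is a geometric distribution on the positive integers with success probability \(1-n^{-r}\), and the prior \(\Pi(J)=(n^{c}-1)n^{-cJ}\) on \(J\) has exactly the same form. A minimal sketch (variable names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, c = 500, 1.0, 2.0

# (K2): Pi(k) = (n^r - 1) n^{-rk} = (1 - n^{-r}) (n^{-r})^{k-1}, k = 1, 2, ...,
# i.e. geometric on {1, 2, ...} with success probability 1 - n^{-r}.
k_draws = rng.geometric(p=1.0 - n ** (-r), size=5)
j_draws = rng.geometric(p=1.0 - n ** (-c), size=5)  # prior on J, same family
print(k_draws, j_draws)  # heavily concentrated on small values for large n
```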
**Theorem 4.2** (Adaptive contraction for finite mixture).: _Let the monotonicity index \(k\) be unknown and endowed with a prior of the type (K2). Given \(k\), let the prior for \(\mathbf{\beta}\) be \(\operatorname{Dir}(a_{0},\ldots,a_{k})\) for some \(a_{0},\ldots,a_{k}\) lying between two fixed positive numbers. Independently, let \(Q\) be given the finite mixture prior with a sufficiently large \(c>0\), where the prior distribution \(H\) of a support point \(\theta\) satisfies, for some \(t_{2},t_{3}>0\), that \(H((u,v))\gtrsim(v-u)^{t_{2}}\) for any interval \((u,v)\subset(n^{-2},1)\), and the corresponding density \(p_{H}(\theta)\lesssim\exp\{-t_{3}/\theta\}\) for all \(\theta\in(0,n^{-2})\). If \(g_{0}\) is given by (6) for some finite \(J_{0}\) and \(k=k_{0}\) with \(\beta_{0,0}>0\), then the posterior contracts at \(g_{0}\) at the rate \(\sqrt{\max(k_{0},J_{0})(\log n)/n}\) with respect to the Hellinger metric._

It can be seen from the proof that a prior of the type (K1) for \(k\) and a fixed prior for \(J\) as in Remark 3.1 may also be used to derive the same rate, provided that \(\log k_{0}\asymp\log n\) and \(\log J_{0}\asymp\log n\).

## 5 Applications to Multiple Testing

In large-scale hypothesis testing, it is essential to assess the proportion of true null hypotheses when reporting scientific findings. The proportion of null hypotheses, denoted by \(\alpha\), plays a crucial role in the calculation of the positive false discovery rate [51]. Consider a problem of simultaneously testing \(n\) hypotheses. For each individual test, the data are summarized using a test statistic, and a \(p\)-value is computed based on an exact, approximate, or asymptotic null distribution of the test statistic and the scope of the alternative hypothesis. Furthermore, we assume that the test statistics corresponding to different hypotheses are (nearly) independent, resulting in (nearly) independent \(p\)-values. Under a simple null hypothesis, the \(p\)-value is calibrated; that is, it has a uniform distribution on \([0,1]\), provided that the test statistic follows a continuous null distribution. Even when the null hypothesis is composite, certain Bayesian \(p\)-values (e.g., the partial posterior predictive \(p\)-value of [3]) asymptotically follow a uniform distribution (cf. [47]) when the data are sampled using an i.i.d. scheme. The \(p\)-values from the alternative hypotheses usually concentrate near the origin and have a decreasing density on \([0,1]\). This feature, along with true null hypotheses outnumbering true alternative hypotheses in practice, is used to estimate the proportion of null hypotheses in Storey's procedure [51] for controlling the positive false discovery rate (pFDR). It is easy to see that the proportion of null hypotheses is identifiable if the \(p\)-value density under the alternative approaches \(0\) at \(1\) (Proposition 4 of [23]). This assumption is not always true, however; see the discussion in Section 2.2 of [23]. For example, in the two-sided t-test, the density of \(p\)-values does not vanish at \(1\), in which case we can only identify an upper bound for the proportion of null hypotheses. However, if the sample size is reasonably large, the height of the density under the alternative near \(1\) is very small, so the condition holds approximately. The \(p\)-value density under the alternative is explicitly modeled as a monotone decreasing density (\(k\)-monotone for \(k=1\)) in [37].
This assumption is extremely mild, as it can be seen to hold under the monotone likelihood ratio (MLR) property of the distribution of the test statistic for both one- and two-sided alternatives (Propositions 1 and 2 of [23]). However, simulation results demonstrate that the Grenander estimator exhibits unstable performance near \(1\), which significantly affects the quality of the estimator of the positive false discovery rate (pFDR). To enhance the performance, [37] recommends using a convex nonincreasing density to fit the density of the \(p\)-values. A model-based Bayesian approach to the estimation of the pFDR was adopted in [53] using certain mixtures of beta densities. The corresponding distribution function under a logarithmic transformation of the argument is completely monotone (Proposition 7 of [23]), which corresponds to \(k\)-monotonicity for all \(k\). Results in Section 3 of [23] show that the Bayesian procedure under a Dirichlet process prior on the mixing distribution gives a consistent posterior for the proportion of null hypotheses and the pFDR. Other model-based and Bayesian approaches to the estimation of the pFDR have been proposed based on modeling probit-transformed mixtures of skew-normal densities in [4] and [22], and sufficient conditions for the identification of the proportion of null hypotheses are discussed in [21]. A review of Bayesian nonparametric methods for multiple testing is available in [20]. A very appealing condition on the \(p\)-value density under the alternative, compromising between the generality of the class of monotone densities and the smoothness of the class of completely monotone functions, is that it belongs to the class of \(k\)-monotone densities for some \(k\). For instance, the case \(k=2\), corresponding to decreasing convex densities, already gives a much more stable estimator of the density [37], although it may be harder to determine under which conditions the density of \(p\)-values under the alternative is decreasing and convex. The approach of modeling the density of the \(p\)-values under the alternative as a \(k\)-monotone density is especially appealing if \(k\) can be left unspecified and adaptively chosen from the data using the technique developed in Section 4. The following result quantifies the accuracy of the procedure.

**Theorem 5.1**.: _Let \(U_{1},\ldots,U_{n}\) be independent \(p\)-values arising from the simultaneous testing of \(n\) hypotheses. We assume that the \(p\)-value density \(g\) is modeled as \(k\)-monotone, where \(k\geq 2\). The value of \(k\) may be either known or unknown, and the prior on \(g\) is specified as described in Section 3 or Section 4, respectively. In both scenarios, \(\alpha\) represents the corresponding proportion of null hypotheses. Let \(g_{0}\) stand for the true density and let the true proportion of null hypotheses be denoted by \(\alpha_{0}\). Then under the conditions of Theorem 3.1, 3.2 or 4.1, the posterior distribution of \(\alpha\) is consistent at \(\alpha_{0}\) and contracts at the rate \(\epsilon_{n}=(n/\log n)^{-k/(2(2k+1))}\), that is, for any \(M_{n}\to\infty\), \(\Pi(|\alpha-\alpha_{0}|>M_{n}\epsilon_{n}|U_{1},\ldots,U_{n})\to 0\) in probability._

## 6 Simulation Study

We implement the proposed Bayesian approach for \(k\)-monotone density estimation. Specifically, we employ the Dirichlet process prior for the mixing distribution.
To simplify, we only retain the additional uniform component and the mixture component of \(k\)-monotone kernels in computation. We consider both scenarios: when the value of \(k\) is known and when it is unknown. Simulation results demonstrate the superiority of our method over nonparametric maximum likelihood estimation for the monotone density class, as well as for the class of convex and monotonically nonincreasing densities. We present the specifics of our simulation in the following sections.

### Estimation accuracy

To perform posterior sampling under the Dirichlet process mixture prior, we utilize the sliced Gibbs sampling algorithm as described in [34]. We use a uniform base measure on \([n^{-1},1]\) for the Dirichlet process prior and set the precision parameter to the fixed value \(1\). For the simplified model, we set \(\beta_{j}=0\) for \(j=1,\ldots,k-1\), and give the proportion of the uniform component \(\beta_{0}\) a uniform prior on \([0,1]\). We let \(k\) be fixed or assign an appropriate prior for \(k\). In particular, when using the adaptive Bayesian approach, the prior on \(k\) is uniform over the set \(\{1,\ldots,10\}\). We cap \(k\) at \(10\), which is sufficiently large to approximate common smoothly decreasing densities of interest. In every Bayesian application below, we retain \(1000\) posterior samples after dropping the first \(2000\) burn-in draws, and base all inference on the unknown density function on the retained samples. Define \(\theta_{j,J}=j/J\). We consider the following density functions:

* \(g_{1}(x)=\psi_{2}(x,1)=2(1-x)\),
* \(g_{2}(x)=0.5g_{1}(x)+0.5=1.5-x\),
* \(g_{3}(x)=\sum_{j=1}^{3}3^{-1}\psi_{2}(x,\theta_{j,3})\),
* \(g_{4}(x)=0.5g_{3}(x)+0.5\),
* \(g_{5}(x)=\sum_{j=1}^{3}3^{-1}\psi_{4}(x,\theta_{j,3})\),
* \(g_{6}(x)=\int_{0}^{1}\psi_{4}(x,\theta)2\theta d\theta\).

For \(g_{6}\), the mixing distribution for \(\theta\) is Beta(2, 1), and sampling according to \(g_{6}\) is straightforward. We take sample sizes of \(n=100,200\), and \(500\). For every sample size, we generate independent and identically distributed samples from all six aforementioned models. The proposed Bayesian procedure is applied to each dataset, for both known and unknown values of \(k\). To compare with non-Bayesian methods, we use the posterior mean density function as the Bayesian estimator. We consider the classical Grenander estimator, as well as the nonparametric maximum likelihood estimator for convex and nonincreasing densities on the interval \([0,1]\). To measure the deviation from the true density, for each estimate \(\hat{g}\) we compute the mean squared error (MSE) over a grid in \([0,1]\), defined as \(x_{j,K}=j/K\) for \(j=1,\ldots,K\). The MSE is then calculated as follows:

\[\text{MSE}(\hat{g})=\frac{1}{K}\sum_{j=1}^{K}(\hat{g}(x_{j,K})-g_{0}(x_{j,K}))^{2}.\]

Here we choose \(K=100\), and \(g_{0}\) stands for the corresponding \(g_{i}\), \(i=1,\ldots,6\). We independently conduct \(R=500\) replications for each setup and present the average MSE in Table 1. Each row in Table 1 corresponds to a specific method applied to the datasets with the corresponding sample size. "Bay" denotes the Dirichlet mixture model with a known \(k\). "Ada" represents the Bayesian method where \(k\) is unknown. "Con" and "Gre" stand for the nonparametric maximum likelihood estimators for the convex and nonincreasing density class and for the nonincreasing density class, respectively.
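As a minimal sketch of the error criterion just described, assuming an estimator \(\hat{g}\) is available as a vectorized function (the helper names below are ours):

```python
import numpy as np

def grid_mse(g_hat, g_true, K=100):
    """Average squared error over the grid x_{j,K} = j/K, j = 1, ..., K,
    as used for Table 1; g_hat and g_true are vectorized density functions."""
    x = np.arange(1, K + 1) / K
    return np.mean((g_hat(x) - g_true(x)) ** 2)

# Example with the test density g_2(x) = 1.5 - x and a crude hypothetical estimate.
g2 = lambda x: 1.5 - x
g_hat = lambda x: 1.4 - 0.8 * x
print(grid_mse(g_hat, g2))
```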
In summary, the proposed Bayesian methods for both known and unknown values of \(k\) demonstrate superiority over the nonparametric maximum likelihood estimators. The adaptive Bayesian approach performs nearly as well as the Bayesian method that employs the optimal choice of \(k\).

### Estimation of the proportion of null hypotheses

The simulation setup in this part closely follows that of [37]. Here, we simulate DNA microarray data that involve multiple hypothesis testing problems. For each of the \(m\) individuals, we collect a dataset of sample size \(n\), denoted by \(\mathbf{X}_{j}=(X_{1,j},\ldots,X_{n,j})\) for \(j=1,\ldots,m\), independently drawn from a multivariate normal distribution, i.e., \(\mathbf{X}_{j}\stackrel{{ i.i.d.}}{{\sim}}\mathrm{N}(\mathbf{\mu},\mathbf{\Sigma})\), where \(\mathbf{\mu}=(\mu_{1},\ldots,\mu_{n})\). We test the hypotheses

\[H_{0,i}:\mu_{i}=0,\ \text{versus}\ H_{1,i}:\mu_{i}\neq 0,\]

based on the t-test statistics. For comparison, we also consider the set of one-sided t-tests,

\[H_{0,i}:\mu_{i}=0,\ \text{versus}\ H_{1,i}:\mu_{i}>0.\]

The mean vector \(\mathbf{\mu}\) and the covariance matrix \(\mathbf{\Sigma}\) are generated as follows. For \(\alpha_{0}\in\{0.5,0.8,0.9,0.95\}\), we generate a binomial variable \(n_{0}\) with parameters \(n\) and \(\alpha_{0}\). We randomly select \(n_{0}\) positions among the \(n\) coordinates and set the corresponding \(\mu_{i}=0\). For the two-sided t-tests, we generate the alternative means by independently sampling from the symmetric bitriangular distribution with parameters \(a=\log_{2}1.2\) and \(b=2\); see [37] for details. For one-sided t-tests, the remaining \(\mu_{i}\) are independently generated from the symmetric triangular distribution with the same parameters as in the previous case.

\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline  & & \(g_{1}\) & \(g_{2}\) & \(g_{3}\) & \(g_{4}\) & \(g_{5}\) & \(g_{6}\) \\ \hline \multirow{4}{*}{\(n=100\)} & Bay & 0.018 & 0.018 & 0.027 & 0.018 & 0.029 & 0.028 \\ & Ada & 0.024 & 0.023 & 0.027 & 0.026 & 0.030 & 0.031 \\ & Con & 0.019 & 0.022 & 0.041 & 0.032 & 0.068 & 0.076 \\ & Gre & 0.058 & 0.047 & 0.097 & 0.068 & 0.158 & 0.162 \\ \hline \multirow{4}{*}{\(n=200\)} & Bay & 0.009 & 0.011 & 0.017 & 0.011 & 0.021 & 0.017 \\ & Ada & 0.014 & 0.013 & 0.016 & 0.014 & 0.019 & 0.017 \\ & Con & 0.010 & 0.011 & 0.024 & 0.017 & 0.040 & 0.041 \\ & Gre & 0.036 & 0.029 & 0.058 & 0.041 & 0.102 & 0.102 \\ \hline \multirow{4}{*}{\(n=500\)} & Bay & 0.003 & 0.005 & 0.008 & 0.006 & 0.010 & 0.010 \\ & Ada & 0.003 & 0.006 & 0.008 & 0.007 & 0.014 & 0.015 \\ \cline{1-1} & Con & 0.004 & 0.005 & 0.010 & 0.008 & 0.018 & 0.020 \\ \cline{1-1} & Gre & 0.018 & 0.015 & 0.029 & 0.022 & 0.052 & 0.053 \\ \hline \hline \end{tabular}
\end{table} Table 1: Average MSE over \(R=500\) replications.

To study the effect of correlation between tests, we consider a specific block diagonal structure for \(\mathbf{\Sigma}\). We choose a block size \(G\in\{50,100\}\). The within-block correlation \(\rho\) takes values in \(\{0,0.25,0.5,0.75\}\). Between blocks, the coordinates are independent. Note that \(\rho=0\) means all the tests are pairwise independent. We choose \(n=2000\) and \(m=10\) throughout. For the Bayesian approach, we continue to use the Dirichlet process mixture prior with parameters defined as in the previous section, while considering \(k\) as an unknown parameter with a prior distribution. The proposed estimator is the posterior mean of \(\beta_{0}\).
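A sketch of the one-sided data-generating design is given below. We realize the equicorrelated blocks of \(\mathbf{\Sigma}\) through a common block factor, which is one convenient (but not the only) construction; the two-sided bitriangular draw of [37] is not reproduced here, and all helper names are ours.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, m, G, rho, alpha0 = 2000, 10, 50, 0.25, 0.9
a, b = np.log2(1.2), 2.0

n0 = rng.binomial(n, alpha0)                      # number of true nulls
mu = np.zeros(n)
alt = rng.choice(n, size=n - n0, replace=False)   # positions of the alternatives
mu[alt] = rng.triangular(a, (a + b) / 2, b, size=n - n0)  # symmetric triangular on (a, b)

# Equicorrelated blocks of size G: X_ij = mu_j + sqrt(1-rho) z_ij + sqrt(rho) f_{i,block(j)},
# so each coordinate has unit variance and within-block correlation rho.
z = rng.standard_normal((m, n))
f = np.repeat(rng.standard_normal((m, n // G)), G, axis=1)
X = mu + np.sqrt(1.0 - rho) * z + np.sqrt(rho) * f

t = X.mean(axis=0) / (X.std(axis=0, ddof=1) / np.sqrt(m))  # one-sample t statistics
p = stats.t.sf(t, df=m - 1)                                # one-sided p-values
print(p[:5])
```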
For comparison, we also estimate the null proportion by the maximum likelihood estimator for the convex and decreasing density class, as proposed in [37]. This can be easily computed using the \(\mathsf{convest}\) function in the R package limma. Each setting is replicated 1000 times, and the densities of these two estimators are plotted in Figures 1–4. For both two-sided and one-sided tests, the simulation results exhibit a similar pattern. The presence of correlation between tests has a detrimental effect on the performance of both methods. However, our Bayesian procedure demonstrates more stable performance in the presence of within-group correlation and for cases with larger blocks of correlation. Furthermore, our method shows more accurate estimation performance when the proportion of null hypotheses is relatively large, whereas when \(\alpha_{0}\) is moderate, such as 0.5, the convex maximum likelihood estimator appears to be less biased. These findings suggest that our Bayesian approach offers advantages in handling correlated tests and in estimating the null proportion accurately, particularly when there is a higher proportion of null hypotheses, which is the common case in practice. These observations highlight the strengths and limitations of both methods in different scenarios.

## 7 Summary, conclusions and further directions

We observed that a \(k\)-monotone density on the unit interval, like a \(k\)-monotone density on the positive half-line, also admits a mixture representation in terms of scaled beta densities. Such a density can be uniformly approximated by a finite mixture of the same kernels. We considered Bayesian procedures for making inference on a \(k\)-monotone density by putting either a Dirichlet process prior or a finite mixture prior on the mixing distribution. We showed that under mild conditions on the true density and the prior distribution, the posterior contracts at the rate \((n/\log n)^{-k/(2k+1)}\), which is the optimal rate up to a polylogarithmic factor. We then showed that even when \(k\) is not known, simply by putting a prior on \(k\), the corresponding mixture prior attains the same rate as if \(k\) were known. We described an application to estimating the \(p\)-value distribution in a multiple-testing problem. We argued that it is very appealing to model the \(p\)-value density under the alternative as a \(k\)-monotone density, especially if \(k\) is unspecified. The posterior contraction results ensure that the posterior distribution of the proportion of null hypotheses is consistent. We conducted a comprehensive simulation to check the comparative performance of the Bayesian method against those of the Grenander estimator and the maximum likelihood estimator for decreasing convex densities in finite samples. We found that the Bayesian procedure is overall the better performer. Further, the performance of the Bayesian procedure with unspecified \(k\) is almost as good as that with the correctly specified \(k\), implying that the proposed adaptation scheme works for finite sample sizes. Observe that the kernel \(\psi_{k}\) appearing in the characterization of \(k\)-monotone densities, as given in (2), makes sense for any positive \(k\), even when \(k\) is not a positive integer.
This can be used to define a \(k\)-monotone density for a fractional \(k>0\) through the representations (2) and (3), with \(k\) replaced by its integer part in the first term of (2) and (3). If Lemma 2.3 can be generalized to a non-integer \(k\), then the posterior contraction rate \((n/\log n)^{-k/(2k+1)}\) can be obtained for such \(k\) as well, so that the contraction rates \((n/\log n)^{-k/(2k+1)}\) prevail for all values of \(k\) in the continuum, giving a continuous spectrum of rates similar to the smoothness regime.

## 8 Proofs

### Proof of main theorems

Proof of Theorem 3.1.: We apply the general theory of posterior contraction for i.i.d. observations as in Section 8.2 of [25]. We need to obtain a lower bound for the prior concentration in a Kullback-Leibler neighborhood of the true density and bound the size of a sieve in terms of the metric entropy so that the remaining part of the parameter space has an exponentially small prior probability. We verify the first condition, given by (8.4) of [25], at any \(g_{0}\) with \(\epsilon_{n}\) a constant multiple of \((n/\log n)^{-k/(2k+1)}\):

\[-\log\Pi(K(g_{0},g)\leq\epsilon_{n}^{2},V(g_{0},g)\leq\epsilon_{n}^{2})\lesssim n\epsilon_{n}^{2}. \tag{7}\]

By Lemma 2.3, there exists \(g^{*}(x)=\sum_{j=0}^{k-1}\beta_{j}^{*}\psi_{j+1}(x,1)+\beta_{k}^{*}\sum_{l=1}^{J^{*}}w_{l}^{*}\psi_{k}(x,\theta_{l}^{*})\in\mathcal{D}^{k}\) with \((w_{l}^{*}:l=1,\ldots,J^{*})\in\Delta_{J^{*}}\) and \((\theta_{l}^{*}:l=1,\ldots,J^{*})\in(0,1)^{J^{*}}\), such that \(J^{*}\lesssim\epsilon_{n}^{-1/k}\) and \(\left\|g_{0}-g^{*}\right\|_{\infty}\lesssim\epsilon_{n}\). First, we show that we can maintain the same approximation rate by restricting the choice to \(\theta_{l}^{*}\not\in(0,\epsilon_{n}^{2})\), while ensuring that \(\left\|g_{0}-g^{*}\right\|_{2}\lesssim\epsilon_{n}\). Indeed, if there are \(\theta_{l}^{*}<\epsilon_{n}^{2}\), we write \(\bar{w}=\sum_{l:\theta_{l}^{*}<\epsilon_{n}^{2}}w_{l}^{*}\) and define

\[g^{\dagger}(x)=\sum_{j=0}^{k-1}\beta_{j}^{*}\psi_{j+1}(x,1)+\beta_{k}^{*}\sum_{l:\theta_{l}^{*}\geq\epsilon_{n}^{2}}w_{l}^{*}\psi_{k}(x,\theta_{l}^{*})+\beta_{k}^{*}\bar{w}\psi_{k}(x,\epsilon_{n}^{2}).\]

It follows that \(g^{\dagger}\in\mathcal{D}^{k}\) and \(g^{\dagger}(x)=g^{*}(x)\) for all \(\epsilon_{n}^{2}\leq x<1\). Since \(g_{0}\) is bounded and \(\left\|g^{*}-g_{0}\right\|_{\infty}\lesssim\epsilon_{n}\), clearly \(g^{*}\) is bounded. As

\[g^{*}(0+)-g^{\dagger}(0+)=\beta_{k}^{*}\sum_{l:\theta_{l}^{*}<\epsilon_{n}^{2}}w_{l}^{*}(\psi_{k}(0,\theta_{l}^{*})-\psi_{k}(0,\epsilon_{n}^{2}))=\beta_{k}^{*}\sum_{l:\theta_{l}^{*}<\epsilon_{n}^{2}}w_{l}^{*}\big{(}\frac{k}{\theta_{l}^{*}}-\frac{k}{\epsilon_{n}^{2}}\big{)}\]

is nonnegative, \(g^{\dagger}\) is bounded as well. Now \(\left\|g^{\dagger}-g^{*}\right\|_{2}\lesssim\left\|\mathbbm{1}_{(0,\epsilon_{n}^{2})}\right\|_{2}=\epsilon_{n}\), and hence \(\|g^{\dagger}-g_{0}\|_{2}\lesssim\epsilon_{n}\). This assures that we can assume without loss of generality that \(\theta_{l}^{*}\geq\epsilon_{n}^{2}\) for an \(\mathbb{L}_{2}\)-approximation of \(g_{0}\) within the order of \(\epsilon_{n}\) using \(J^{*}\lesssim\epsilon_{n}^{-1/k}\) mixture components. To bound \(K(g_{0},g)\) and \(V(g_{0},g)\), we first bound the Hellinger distance. As \(d_{H}(g_{0},g)\leq d_{H}(g_{0},g^{*})+d_{H}(g,g^{*})\), and \(d_{H}(g_{0},g^{*})\leq\|1/g_{0}\|_{\infty}^{1/2}\|g^{*}-g_{0}\|_{2}\lesssim\epsilon_{n}\), it suffices to bound \(d_{H}(g,g^{*})\).
Let \(I_{l}=(\theta_{l}^{*}-\epsilon_{n}^{4}/2,\theta_{l}^{*}+\epsilon_{n}^{4}/2)\), \(l=1,\ldots,J^{*}\). We can assume, without loss of generality, that all spacings between \(\theta_{1}^{*},\ldots,\theta_{J^{*}}^{*}\) are bigger than \(\epsilon_{n}^{4}\). Indeed, as \(\min_{l}\theta_{l}^{*}\geq\epsilon_{n}^{2}\), Lemma 8.2 eliminates the need for placing multiple support points within an \(\epsilon_{n}^{4}\)-neighborhood of each other in order to control the \(\mathbb{L}_{1}\)-distance within a constant multiple of \(\epsilon_{n}^{2}\). This implies that \(I_{l}\), \(l=1,\ldots,J^{*}\), can be assumed to be pairwise disjoint. Let \(I_{0}=(0,1)\setminus(\cup_{l=1}^{J^{*}}I_{l})\). Then

\[\|g-g^{*}\|_{1}\leq \sum_{j=0}^{k-1}|\beta_{j}-\beta_{j}^{*}|+\sum_{l=1}^{J^{*}}\int_{I_{l}}\|\psi_{k}(\cdot,\theta)-\psi_{k}(\cdot,\theta_{l}^{*})\|_{1}dQ(\theta)+\sum_{l=1}^{J^{*}}|Q(I_{l})-w_{l}^{*}|+Q(I_{0}). \tag{8}\]

The fourth term in (8) is bounded by the third term because \(Q(I_{0})=1-\sum_{l=1}^{J^{*}}Q(I_{l})\) and \(\sum_{l=1}^{J^{*}}w_{l}^{*}=1\). Since \(\|\psi_{k}(\cdot,\theta)-\psi_{k}(\cdot,\theta_{l}^{*})\|_{1}\leq 2|\theta-\theta_{l}^{*}|/\min_{l}\theta_{l}^{*}\leq 2\epsilon_{n}^{2}\) for any \(\theta\in I_{l}\), and \(\sum_{l=1}^{J^{*}}Q(I_{l})\leq 1\), the second term is bounded by \(\sum_{l=1}^{J^{*}}2\epsilon_{n}^{2}Q(I_{l})\leq 2\epsilon_{n}^{2}\). Therefore \(\sum_{j=0}^{k-1}|\beta_{j}^{*}-\beta_{j}|\leq\epsilon_{n}^{2}\) and \(\sum_{l=1}^{J^{*}}|Q(I_{l})-w_{l}^{*}|\leq\epsilon_{n}^{2}\) together ensure that \(\|g-g^{*}\|_{1}\lesssim\epsilon_{n}^{2}\), and hence \(d_{H}(g,g^{*})\lesssim\epsilon_{n}\). Therefore, by the last part of Lemma B.2 of [25], it follows that \(\max(K(g_{0},g),V(g_{0},g))\lesssim\epsilon_{n}^{2}\), so it suffices to lower-bound \(\Pi\{\sum_{j=0}^{k-1}|\beta_{j}-\beta_{0,j}|\leq\epsilon_{n}^{2},\sum_{l=1}^{J^{*}}|Q(I_{l})-w_{l}^{*}|\leq\epsilon_{n}^{2}\}\). By Lemma G.13 of [25], under the assumed conditions on the center measure \(H\), for some constants \(C,C^{\prime}>0\), we have \(\Pi(\sum_{j=0}^{k-1}|\beta_{j}-\beta_{0,j}|\leq\epsilon_{n}^{2})\gtrsim e^{-Ck\log(1/\epsilon_{n})}\) and \(\Pi(\sum_{l=1}^{J^{*}}|Q(I_{l})-w_{l}^{*}|\leq\epsilon_{n}^{2})\gtrsim e^{-C^{\prime}J^{*}\log(1/\epsilon_{n})}\). As \(\mathbf{\beta}\) and \(Q\) are independent, it follows that

\[-\log\Pi\big{(}\sum_{j=0}^{k-1}|\beta_{j}-\beta_{0,j}|\leq\epsilon_{n}^{2},\sum_{l=1}^{J^{*}}|Q(I_{l})-w_{l}^{*}|\leq\epsilon_{n}^{2}\big{)}\lesssim J^{*}\log(1/\epsilon_{n}). \tag{9}\]

Equating \(J^{*}\log(1/\epsilon_{n})\) with \(n\epsilon_{n}^{2}\), it is now immediate that (7) holds for \(\epsilon_{n}\) a constant multiple of \((n/\log n)^{-k/(2k+1)}\). Define a sieve \(\mathcal{D}_{n}^{k}=\{g\in\mathcal{D}^{k}:g(0+)\leq M_{n}\}\), where \(M_{n}=\exp\{Cn^{1/(2k+1)}(\log n)^{2k/(2k+1)}\}\), for a large positive constant \(C>0\) to be determined later, denotes the upper cut-off. It suffices to verify the local entropy condition given by (8.10) of [25]: for all \(\epsilon\geq\epsilon_{n}\),

\[\log\mathcal{N}(\epsilon/2,\{g\in\mathcal{D}_{n}^{k}:d_{H}(g,g_{0})\leq 2\epsilon\},d_{H})\lesssim n\epsilon_{n}^{2}. \tag{10}\]

If \(g\in\mathcal{D}_{n}^{k}\) satisfies \(d_{H}(g,g_{0})\leq 2\epsilon_{n}\), then \(\left\|g-g_{0}\right\|_{1}\leq 2d_{H}(g_{0},g)\leq 4\epsilon_{n}\). This implies that

\[\epsilon_{n}g(\epsilon_{n})\leq\int_{0}^{\epsilon_{n}}g(x)dx\leq\int_{0}^{\epsilon_{n}}g_{0}(x)dx+4\epsilon_{n}\leq(g_{0}(0+)+4)\epsilon_{n},\]

giving \(g(\epsilon_{n})\leq g_{0}(0+)+4\).
Define \(\mathcal{D}_{n,1}^{k}=\{g\mathbbm{1}_{(0,\epsilon_{n})}:g\in\mathcal{D}_{n}^{k}\}\) and \(\mathcal{D}_{n,2}^{k}=\{g\mathbbm{1}_{[\epsilon_{n},1]}:g\in\mathcal{D}_{n}^{k}\}\). By Lemma 8.1, we have

\[\log\mathcal{N}(\epsilon_{n}/4,\mathcal{D}_{n,1}^{k},d_{H})\lesssim|\log(\epsilon_{n}M_{n})|^{1/(2k)}[(4+g_{0}(0+))\epsilon_{n}]^{1/k}(\epsilon_{n}/4)^{-1/k},\]

which can be bounded by a constant multiple of \(n^{1/(2k+1)}(\log n)^{2k/(2k+1)}=n\epsilon_{n}^{2}\). By Lemma 8.1 again, we have

\[\log\mathcal{N}(\epsilon_{n}/4,\mathcal{D}_{n,2}^{k},d_{H})\lesssim|\log(g_{0}(0+)+4)|^{1/(2k)}(\epsilon_{n}/4)^{-1/k},\]

which can be bounded by a constant multiple of \(\epsilon_{n}^{-1/k}\lesssim n\epsilon_{n}^{2}\). Since \(\mathcal{D}_{n}^{k}\subset\mathcal{D}_{n,1}^{k}+\mathcal{D}_{n,2}^{k}\),

\[\log\mathcal{N}(\epsilon_{n}/2,\mathcal{D}_{n}^{k},d_{H})\leq\log\mathcal{N}(\epsilon_{n}/4,\mathcal{D}_{n,1}^{k},d_{H})+\log\mathcal{N}(\epsilon_{n}/4,\mathcal{D}_{n,2}^{k},d_{H})\]

is bounded by a multiple of \(n\epsilon_{n}^{2}\), verifying (10). Next, we control the residual prior probability \(\Pi(\mathcal{D}^{k}\setminus\mathcal{D}_{n}^{k})\). Using the fact that \(\psi_{k}(x,\theta)\leq k/\theta\), we obtain the estimate

\[\Pi(g(0+)>M_{n})\leq\Pi\big{(}k\int\theta^{-1}dQ(\theta)>M_{n}\big{)}=\Pi\big{(}\int_{0}^{2k/M_{n}}\theta^{-1}dQ(\theta)+\int_{2k/M_{n}}^{1}\theta^{-1}dQ(\theta)>M_{n}/k\big{)}.\]

Since we always have \(\int_{2k/M_{n}}^{1}\theta^{-1}dQ(\theta)\leq M_{n}/(2k)\), the residual probability is at most

\[\Pi\big{(}\int_{0}^{2k/M_{n}}\theta^{-1}dQ(\theta)>\frac{M_{n}}{2k}\big{)}\leq\frac{2k}{M_{n}}\mathrm{E}\int_{0}^{2k/M_{n}}\theta^{-1}dQ(\theta)\lesssim\int_{0}^{2k/M_{n}}\theta^{-1+t_{1}}d\theta,\]

using Markov's inequality and the assumption on the base measure, respectively. As the last expression is bounded by a multiple of \(M_{n}^{-t_{1}}\), it follows that the residual probability is at most \(\exp\{-Cn\epsilon_{n}^{2}\}\), where \(C>0\) can be chosen as large as we please for \(\epsilon_{n}=(n/\log n)^{-k/(2k+1)}\) by our choice of \(M_{n}\). This verifies all the conditions required for the applicability of the general theory of posterior contraction, with rate \(\epsilon_{n}=(n/\log n)^{-k/(2k+1)}\).

Proof of Theorem 3.2.: The proof is largely similar to that of Theorem 3.1 and proceeds by verifying the three conditions of Theorem 8.9 of [25] for \(\epsilon_{n}=(n/\log n)^{-k/(2k+1)}\). We highlight the differences in the following. To estimate the prior concentration in the Kullback-Leibler neighborhood, we need to condition on the event \(\{J=J^{*}\}\) in (7), where \(J^{*}\) is such that the uniform approximation using a mixture of \(\psi_{k}\) with \(J^{*}\) support points is within \(n^{-k/(2k+1)}\). By Lemma 2.3, we can assume that \(J^{*}\leq n^{1/(2k+1)}\). To bound the prior probability of \(\{\theta_{l}\in I_{l}\}\), given \(J=J^{*}\), observe that, by the condition assumed on \(H\), we have

\[\Pi(\theta_{l}\in I_{l},1\leq l\leq J^{*}|J=J^{*})\gtrsim\epsilon_{n}^{4t_{2}J^{*}}\geq\exp\{-C_{1}J^{*}\log n\}\geq\exp\{-C_{2}n\epsilon_{n}^{2}\},\]

where \(C_{1}\) and \(C_{2}\) are two positive constants. Along with the estimate \(\Pi(J=J^{*})=\exp\{-cJ^{*}\log n+\log(n^{c}-1)\}\geq\exp\{-C_{3}n\epsilon_{n}^{2}\}\) for some \(C_{3}>0\), the required prior concentration rate is verified. A sieve is chosen to be \(\{g\) given by (3): \(g(0+)\leq M_{n}\}\), where \(M_{n}=\exp\{Cn^{1/(2k+1)}(\log n)^{2k/(2k+1)}\}\) for a large positive constant \(C\).
The prior probability of the complement of the sieve is bounded by \(\sum_{j=1}^{\infty}\Pi(J=j)\Pi(g(0+)>M_{n}|J=j)\). Each term in the sum can be estimated as in the proof of Theorem 3.1. It suffices to note that, as in the last theorem, \(\mathrm{E}Q=H\), because the support points and their weights are independently distributed, the weights sum to one, and the support points are i.i.d. draws from \(H\). The metric entropy bound for the sieve is obtained by applying Lemma 8.1, as before.

Proof of Theorem 3.3.: The proof is again obtained by applying the general theory on the posterior contraction rate in [25], and the verification of the prior concentration condition proceeds in the same way as in the proof of Theorem 3.1. However, to derive the stated nearly parametric rate, the estimates of the metric entropy and the residual prior probability have to be refined. Suppose that \(Q_{0}\) is supported on \(J_{0}\) fixed points \(\theta_{1}^{0},\ldots,\theta_{J_{0}}^{0}\) in \((0,1)\) with weights \(w_{1}^{0},\ldots,w_{J_{0}}^{0}\). Let \(c_{0}\) be the minimum of \(\{\theta_{l}^{0}\}\). Let \(I_{l}=(\theta_{l}^{0}-c_{0}\epsilon_{n}^{2}/2,\theta_{l}^{0}+c_{0}\epsilon_{n}^{2}/2)\), for \(l=1,\ldots,J_{0}\), and \(I_{0}=(0,1)\setminus(\cup_{l=1}^{J_{0}}I_{l})\). Clearly, \(I_{l}\), \(1\leq l\leq J_{0}\), are pairwise disjoint when \(n\) is large enough. Then \(\|g-g_{0}\|_{1}\) is bounded by the expression on the right side of (8) with \(J_{0}\) replacing \(J^{*}\). Therefore, following the same chain of arguments, the estimate of the prior probability of the Kullback-Leibler neighborhood reduces to

\[\Pi\big{(}\sum_{j=0}^{k-1}|\beta_{j}-\beta_{0,j}|\leq\epsilon_{n}^{2}\big{)}\times\Pi(J=J_{0})\times\Pi\big{(}\sum_{l=1}^{J_{0}}|Q(I_{l})-w_{l}^{0}|\leq\epsilon_{n}^{2}|J=J_{0}\big{)}.\]

As before, \(-\log\Pi(\sum_{j=0}^{k-1}|\beta_{j}-\beta_{0,j}|\leq\epsilon_{n}^{2})\lesssim k\log(1/\epsilon_{n})\) and

\[-\log\Pi\big{(}\sum_{l=1}^{J_{0}}|Q(I_{l})-w_{l}^{0}|\leq\epsilon_{n}^{2}|J=J_{0}\big{)}\lesssim J_{0}\log(1/\epsilon_{n})\lesssim J_{0}\log n,\]

while the prior for \(J\) satisfies \(-\log\Pi(J=J_{0})=cJ_{0}\log n-\log(n^{c}-1)\lesssim J_{0}\log n\). Hence the prior concentration condition (7) holds for \(\epsilon_{n}=\sqrt{(\max(J_{0},k)\log n)/n}\). Take \(\bar{J}=L\max(k,J_{0})\) for some \(L>1\) to be chosen later. We consider a sieve \(\mathcal{L}_{n}^{k}=\{g\) given by (3): \(Q=\sum_{l=1}^{J}w_{l}\delta_{\theta_{l}},(w_{l}:l\leq J)\in\Delta_{J},(\theta_{l}:l\leq J)\in(n^{-2},1)^{J},J\leq\bar{J}\}=\cup_{J=1}^{\bar{J}}\mathcal{L}_{n,J}^{k}\), say. Then the residual prior probability is bounded by

\[\Pi(J>\bar{J})+\Pi(\theta_{l}<n^{-2}\text{ for some }1\leq l\leq J\leq\bar{J}). \tag{11}\]

The first term in the last display is bounded by

\[\exp\{-cL\max(k,J_{0})\log n+\log(n^{c}-1)\}\leq\exp\{-C\max(k,J_{0})\log n\},\]

for some \(C>0\). We also observe that

\[\Pi\big{(}\bigcup_{J\leq\bar{J}}\bigcup_{l\leq J}\{\theta_{l}<n^{-2}\}\big{)}\leq\sum_{j=1}^{\bar{J}}\sum_{l=1}^{j}\Pi(\theta_{l}<n^{-2}|J=j)\leq\bar{J}^{2}H((0,n^{-2})).\]

Using the inequality

\[\int_{0}^{a}e^{-t/x}dx=\frac{a^{2}}{t}e^{-t/a}-\int_{0}^{a}\frac{2}{t}xe^{-t/x}dx\leq\frac{a^{2}}{t}e^{-t/a},\quad t>0, \tag{12}\]

the estimate above reduces to a constant multiple of \(\bar{J}^{2}n^{-4}e^{-t_{3}n^{2}}\). Thus the residual prior probability is bounded by \(\exp\{-C\max(k,J_{0})\log n\}\), where \(C\) can be chosen as large as we please by making \(L\) large enough. Next, we estimate the metric entropy of the sieve.
For any two arbitrary elements \(g_{1},g_{2}\) of \(\mathcal{L}_{n,J}^{k}\) for a given \(J\leq\bar{J}\), written as \(g_{r}(x)=\sum_{j=0}^{k-1}\beta_{r,j}\psi_{j+1}(x,1)+\beta_{r,k}p_{r}(x)\), where \(p_{r}(x)=\sum_{l=1}^{J}w_{r,l}\psi_{k}(x,\theta_{r,l})\), \(r=1,2\), observe that \(\left\|g_{1}-g_{2}\right\|_{1}\leq\sum_{j=0}^{k-1}\left|\beta_{1,j}-\beta_{2,j}\right|+\left\|p_{1}-p_{2}\right\|_{1}\). Using the estimate in Lemma 8.2, \(\left\|p_{1}-p_{2}\right\|_{1}\) can be bounded by

\[\sum_{l=1}^{J}w_{1,l}\|\psi_{k}(\cdot,\theta_{1,l})-\psi_{k}(\cdot,\theta_{2,l})\|_{1}+\sum_{l=1}^{J}|w_{1,l}-w_{2,l}|\leq 2n^{2}\sum_{l\leq J}|\theta_{1,l}-\theta_{2,l}|+\sum_{l=1}^{J}|w_{1,l}-w_{2,l}|. \tag{13}\]

Thus if \(\sum_{j=0}^{k-1}|\beta_{1,j}-\beta_{2,j}|\leq\epsilon_{n}^{2}/2\), \(\sum_{l=1}^{J}|w_{1,l}-w_{2,l}|\leq\epsilon_{n}^{2}/4\) and \(\max\{|\theta_{1,l}-\theta_{2,l}|:l\leq J\}\leq\epsilon_{n}^{2}/(8n^{2}J)\), then \(d_{H}(g_{1},g_{2})\leq\left\|g_{1}-g_{2}\right\|_{1}^{1/2}\leq\epsilon_{n}\). The \(\epsilon_{n}^{2}/(8n^{2}J)\)-covering number of \([0,1]\) is bounded by \(8n^{2}J/\epsilon_{n}^{2}\leq 8n^{2}\bar{J}/\epsilon_{n}^{2}\), so the corresponding covering number for \((\theta_{l}:l\leq J)\) is at most \((8n^{2}\bar{J}/\epsilon_{n}^{2})^{\bar{J}}\). The \(\epsilon_{n}^{2}/2\)-covering number of \(\Delta_{k}\) in the \(\ell_{1}\) metric and the \(\epsilon_{n}^{2}/4\)-covering number of \(\Delta_{J}\) in the \(\ell_{1}\) metric are respectively bounded by \((10/\epsilon_{n}^{2})^{k-1}\) and \((20/\epsilon_{n}^{2})^{J-1}\leq(20/\epsilon_{n}^{2})^{J}\) by Proposition C.1 of [25]. Then the \(\epsilon_{n}\)-Hellinger metric entropy of \(\mathcal{L}_{n}^{k}\) is bounded by

\[\log\big{(}\bar{J}\times(8n^{2}\bar{J}/\epsilon_{n}^{2})^{\bar{J}}\times(10/\epsilon_{n}^{2})^{k-1}\times(20/\epsilon_{n}^{2})^{\bar{J}}\big{)}\lesssim\bar{J}(\log n+\log(1/\epsilon_{n})).\]

Thus for \(\epsilon_{n}=\sqrt{(\max(J_{0},k)\log n)/n}\), the entropy condition (8.5) of Theorem 8.9 of [25] holds.

Proof of Theorem 4.1.: The first condition follows from the proofs of Theorems 3.1 and 3.2 upon conditioning on \(k=k_{0}\) and using the fact that \(-\log\Pi(k=k_{0})\lesssim k_{0}\log k_{0}\leq n\epsilon_{n}^{2}\) under (K1) and \(-\log\Pi(k=k_{0})\lesssim k_{0}\log n\lesssim n\epsilon_{n}^{2}\) under (K2), where \(\epsilon_{n}=(n/\log n)^{-k_{0}/(2k_{0}+1)}\). Then the first condition holds for both the Dirichlet process mixture prior (see the proof of Theorem 3.1) and the finite mixture prior (see the proof of Theorem 3.2). To verify the remaining two conditions for the posterior contraction rate, we first address the finite mixture prior. For the metric entropy condition, consider the sieve \(\mathcal{L}_{n}=\cup_{k=1}^{k_{n}}\cup_{J=1}^{J_{n}}\mathcal{L}_{J,k}\), where \(\mathcal{L}_{J,k}=\big{\{}\sum_{j=0}^{k-1}\beta_{j}\psi_{j+1}(\cdot,1)+\beta_{k}\sum_{l=1}^{J}w_{l}\psi_{k}(\cdot,\theta_{l})\in\mathcal{D}^{k}:(w_{l}:l\leq J)\in\Delta_{J},n^{-2}\leq\theta_{l}\leq 1,l\leq J\big{\}}.\) Following the corresponding part in the proof of Theorem 3.3, we know that the Hellinger metric entropy of \(\mathcal{L}_{J_{n},k}\) is bounded by a constant multiple of \(\max(k,J_{n})\log n\).
Hence, the Hellinger entropy of \(\mathcal{L}_{n}\) can be bounded as follows:

\[\log\sum_{k=1}^{k_{n}}\mathcal{N}(\epsilon_{n},\mathcal{L}_{J_{n},k},d_{H})\leq\log k_{n}+\log\mathcal{N}(\epsilon_{n},\mathcal{L}_{J_{n},k_{n}},d_{H})\lesssim\max(k_{n},J_{n})\log n.\]

By choosing \(J_{n}\) to be the integer part of \(L_{1}(n/\log n)^{1/(2k_{0}+1)}\) for some \(L_{1}>0\), we get \(J_{n}\log n\asymp n\epsilon_{n}^{2}\) while maintaining \(J_{n}>J^{*}\asymp\epsilon_{n}^{-1/k_{0}}\), where \(J^{*}\) is the number of mixture components used in the approximation underlying the prior concentration estimate. We choose \(k_{n}\) to be the integer part of \(L_{2}(n/\log n)^{1/(2k_{0}+1)}\) for some \(L_{2}>0\) to fulfill the entropy condition. Now it remains to bound the residual prior probability of the sieve. Under both (K1) and (K2), the tail estimate \(\Pi(k>k_{n})\leq e^{-Ln\epsilon_{n}^{2}}\) is obtained with \(L\) as large as we please by choosing \(L_{2}\) sufficiently large. A similar argument applies to the tail \(\Pi(J>J_{n})\). Now

\[\Pi(\theta_{l}<n^{-2}\text{ for some }1\leq l\leq J\leq J_{n})\leq\sum_{j=1}^{J_{n}}\sum_{l=1}^{j}\Pi(\theta_{l}\in(0,n^{-2}))\lesssim J_{n}^{2}n^{-4}e^{-t_{3}n^{2}},\]

where the last inequality is due to (12). This expression is also bounded by \(e^{-Ln\epsilon_{n}^{2}}\), where we can make \(L>0\) as large as we wish. The proof for the finite mixture prior case now follows by an application of the general theory of posterior contraction. For the Dirichlet process mixture prior, the sieve construction and the bounding of the residual prior need some modifications. We elaborate on the differences in the following. Consider the sieve \(\mathcal{E}_{n}=\cup_{k=1}^{k_{n}}\mathcal{E}_{k,n}\), where

\[\mathcal{E}_{k,n}=\big{\{}\sum_{j=0}^{k-1}\beta_{j}\psi_{j+1}(x,1)+\beta_{k}\sum_{l=1}^{\infty}w_{l}\psi_{k}(x,\theta_{l})\in\mathcal{D}^{k}:(w_{j}:j=1,2,\ldots)\in\Delta_{\infty},\sum_{j>J_{n}}w_{j}<\epsilon_{n}^{2},\theta_{1},\ldots,\theta_{J_{n}}\in(n^{-2},1)\big{\}}.\]

The residual prior probability is bounded as follows:

\[\Pi(\mathcal{E}_{n}^{\text{c}})\leq\Pi(k>k_{n})+\Pi\big{(}\sum_{j>J_{n}}w_{j}\geq\epsilon_{n}^{2}\big{)}+J_{n}H((0,n^{-2})).\]

The first and the third terms can be bounded in a similar way as in the previous part. For the second term, by the stick-breaking representation of the weights, \(\sum_{l>J_{n}}w_{l}=\prod_{l=1}^{J_{n}}(1-V_{l})\), where \(V_{l}\stackrel{{ i.i.d.}}{{\sim}}\text{Beta}(1,a)\). Now \(-\sum_{l=1}^{J_{n}}\log(1-V_{l})\) is Gamma distributed with shape parameter \(J_{n}\) and rate parameter \(a\). It then follows that \(\Pi(\sum_{l>J_{n}}w_{l}\geq\epsilon_{n}^{2})\) is given by

\[\text{P}\big{(}-\sum_{l=1}^{J_{n}}\log(1-V_{l})\leq 2\log\epsilon_{n}^{-1}\big{)}\leq\frac{(2a\log\epsilon_{n}^{-1})^{J_{n}}}{(J_{n}-1)!}\leq\sqrt{\frac{J_{n}}{2\pi}}(2eaJ_{n}^{-1}\log\epsilon_{n}^{-1})^{J_{n}}\]

by Stirling's inequality for factorials. Choosing \(J_{n}\) to be the integer part of \(L_{1}(n/\log n)^{1/(2k_{0}+1)}\) for some \(L_{1}>0\), we can bound the expression by \(e^{-Ln\epsilon_{n}^{2}}\), where \(L\) can be made as large as we like by choosing \(L_{1}\) large enough. Following the same argument as in the proof of Theorem 3.3, we obtain the bound in (13) plus \(\|\sum_{l>J_{n}}w_{l}\psi_{k}(\cdot,\theta_{l})-\sum_{l>J_{n}}w_{l}^{\prime}\psi_{k}(\cdot,\theta_{l}^{\prime})\|_{1}\leq 2\epsilon_{n}^{2}\). Hence, the Hellinger metric entropy of the sieve \(\mathcal{E}_{k,n}\) can be bounded by a constant multiple of \(J_{n}\log n\).
The proof is concluded by following the same argument used for the finite mixture case.

Proof of Theorem 5.1.: For two density functions \(g_{1},g_{2}\) from model (3), we represent them as \(g_{1}(u)=\alpha_{1}+(1-\alpha_{1})h_{1}(u)\) and \(g_{2}(u)=\alpha_{2}+(1-\alpha_{2})h_{2}(u)\), separating out the constant component. We shall bound \(|\alpha_{1}-\alpha_{2}|\) by a constant multiple of the square root of the Hellinger distance between \(g_{1}\) and \(g_{2}\). This will lead to the conclusion in view of Theorems 3.1, 3.2, and 4.1. For \(\alpha_{1}>\alpha_{2}\) and \(\alpha_{1}\leq g_{2}(0+)\), the solution \(s_{0}\) to the equation \(g_{2}(u)=\alpha_{1}\) exists and is unique due to the strict convexity of \(g_{2}\). Then

\[g_{2}(u)\geq g_{2}^{\prime}(s_{0})(u-s_{0})+\alpha_{1}\text{ for every }u\in(0,1); \tag{14}\]

here \(g_{2}^{\prime}\) can be taken as either the right or the left derivative, both of which are well-defined for a convex function. Note that \(g_{2}\geq\max\{g_{2}^{\prime}(s_{0})(u-s_{0})+\alpha_{1},0\}\) for every \(u\in(0,1)\), and that \(|g_{2}^{\prime}(s_{0})|\geq(\alpha_{1}-\alpha_{2})/(1-s_{0})\), the absolute slope of the line passing through the two points \((s_{0},\alpha_{1})\) and \((1,\alpha_{2})\) on the graph of \(g_{2}\), by the convexity of \(g_{2}\). Upon integrating (14), it follows that

\[1\geq\int_{0}^{s_{0}}[g_{2}^{\prime}(s_{0})(u-s_{0})+\alpha_{1}]du\geq\frac{(\alpha_{1}-\alpha_{2})s_{0}^{2}}{2(1-s_{0})}+\alpha_{1}s_{0}\geq\frac{(\alpha_{1}-\alpha_{2})s_{0}}{2(1-s_{0})}.\]

This implies that \(s_{0}\leq(1+(\alpha_{1}-\alpha_{2})/2)^{-1}\), or equivalently, the bound \(1-s_{0}\geq(1+2/(\alpha_{1}-\alpha_{2}))^{-1}\geq(\alpha_{1}-\alpha_{2})/3\). Using these estimates,

\[\left\|g_{1}-g_{2}\right\|_{1}\geq\int_{s_{0}}^{1}(g_{1}(u)-g_{2}(u))du\geq\int_{s_{0}}^{1}\big{(}\alpha_{1}-\frac{\alpha_{1}-\alpha_{2}}{1-s_{0}}(s_{0}-u)-\alpha_{1}\big{)}du\]

is seen to be bounded below by \((\alpha_{1}-\alpha_{2})(1-s_{0})/2\geq(\alpha_{1}-\alpha_{2})^{2}/6\). Thus, since \(\|g_{1}-g_{2}\|_{1}\leq 2d_{H}(g_{1},g_{2})\),

\[|\alpha_{1}-\alpha_{2}|\leq\sqrt{6\left\|g_{1}-g_{2}\right\|_{1}}\leq\sqrt{12d_{H}(g_{1},g_{2})}. \tag{15}\]

Let \(\mathbf{U}_{n}=\{U_{1},\ldots,U_{n}\}\). Hence, for any \(M_{n}\to\infty\), \(\Pi(|\alpha-\alpha_{0}|>M_{n}(n/\log n)^{-k/(2(2k+1))}|\mathbf{U}_{n})\leq\Pi(d_{H}(g,g_{0})>(M_{n}^{2}/12)(n/\log n)^{-k/(2k+1)}|\mathbf{U}_{n})\to 0\) in probability under the true distribution.

## Appendix: Proofs of the auxiliary results

The following lemma, which gives an upper bound on the Hellinger metric entropy of \(k\)-monotone functions, is adapted from Theorem 3 of [15].

**Lemma 8.1**.: _Let \(\mathcal{F}\) be the set of nonnegative \(k\)-monotone functions on an interval \([p,p+A]\) such that \(f(p)\leq B\) and \(\int f\leq M\) for any \(f\in\mathcal{F}\). Then_

\[\log\mathcal{N}(2\epsilon,\mathcal{F},d_{H})\leq\log\mathcal{N}_{[\,]}(\epsilon,\mathcal{F},d_{H})\lesssim|\log AB|^{1/(2k)}M^{1/k}\epsilon^{-1/k}.\]

The following lemma gives a property of the kernel function \(\psi_{k}(\cdot,\theta)\) that will be used in our analysis.

**Lemma 8.2**.: _For \(\psi_{k}(x,\theta)\) as defined in (1), we have_

\[\|\psi_{k}(\cdot,\theta)-\psi_{k}(\cdot,\theta^{\prime})\|_{1}\leq 2(1-\min\{\theta,\theta^{\prime}\}/\max\{\theta,\theta^{\prime}\}). \tag{16}\]

Proof.: Without loss of generality, assume that \(0<\theta<\theta^{\prime}\). Let \(\delta_{k}(x)=\psi_{k}(x,\theta)-\psi_{k}(x,\theta^{\prime})\). It is easy to see that (16) holds for \(k=1\).
In fact, equality holds in (16) for \(k=1\) by direct calculation, using the fact that \(\psi_{1}(\cdot,\theta)\) and \(\psi_{1}(\cdot,\theta^{\prime})\) are the densities of the uniform distributions on \((0,\theta)\) and \((0,\theta^{\prime})\), respectively. It is clear that \(\delta_{k}(x)\equiv 0\) on \([\theta^{\prime},1)\). If \(k\geq 2\), we first claim that there exists a unique solution \(x_{0}\) to the equation \(\delta_{k}(x)=0\) for \(x\in(0,\theta^{\prime})\). Since \(\delta_{k}(x)<0\) for all \(x\in[\theta,\theta^{\prime})\), we restrict attention to \(x\in(0,\theta)\). Noting that \(\delta_{k}(0)=k(\theta^{-1}-\theta^{\prime-1})>0\) and \(\delta_{k}(\theta)=-k\theta^{\prime-1}(1-\theta/\theta^{\prime})^{k-1}<0\), by the continuity of \(\delta_{k}\), there exists at least one \(x\) such that \(\delta_{k}(x)=0\). Additionally, by (1), for \(x\in(0,\theta)\), the equation \(\delta_{k}(x)=0\) is equivalent to \(\{(\theta^{\prime}-x)/(\theta-x)\}^{k-1}=(\theta^{\prime}/\theta)^{k}\). Since the function on the left-hand side of the equation is strictly increasing in \(x\in(0,\theta)\), there can be only one solution. By continuity again, \(\delta_{k}(x)>0\) for \(x\in(0,x_{0})\) and \(\delta_{k}(x)<0\) for \(x\in(x_{0},\theta^{\prime})\), and hence the \(\mathbb{L}_{1}\)-distance in (16) can be evaluated as

\[2\int_{0}^{x_{0}}\big{[}\frac{k}{\theta}\big{(}1-\frac{x}{\theta}\big{)}^{k-1}-\frac{k}{\theta^{\prime}}\big{(}1-\frac{x}{\theta^{\prime}}\big{)}^{k-1}\big{]}dx=2\big{[}\big{(}1-\frac{x_{0}}{\theta^{\prime}}\big{)}^{k}-\big{(}1-\frac{x_{0}}{\theta}\big{)}^{k}\big{]}.\]

Rewriting the equation \(\delta_{k}(x)=0\) as \((1-x/\theta)^{k}=\frac{\theta-x}{\theta^{\prime}-x}(1-x/\theta^{\prime})^{k}\), the expression for the \(\mathbb{L}_{1}\)-distance reduces to \(2(1-x_{0}/\theta^{\prime})^{k-1}(1-\theta/\theta^{\prime})\). Since \(0<x_{0}<\theta<\theta^{\prime}\), the bound \(2(1-\theta/\theta^{\prime})\) is immediate.

Proof of Lemma 2.1.: For sufficiency, note that \(f\) given in (2) is continuously differentiable up to the order \(k-2\). The derivatives are given by

\[(-1)^{j}f^{(j)}(x)=\sum_{l=j}^{k-1}\alpha_{l}l(l-1)\cdots(l-j+1)(1-x)^{l-j}+\int_{0}^{1}\frac{k(k-1)\cdots(k-j)}{t^{j+1}}\big{(}1-\frac{x}{t}\big{)}_{+}^{k-1-j}d\gamma(t), \tag{17}\]

for \(j=0,1,\ldots,k-2\). It is clear that the expressions in (17) are nonnegative and nonincreasing, as \(\alpha_{l}\geq 0\) and \(\gamma\) is nondecreasing. The derivative functions in (17) are also convex, since sums and mixtures of convex functions are convex. To prove the necessity of the characterization, expand \(f\) in a Taylor series at \(a\in(0,1)\), using the fact that \(f^{(k-2)}\) is absolutely continuous on \([a,x]\) or \([x,a]\):

\[f(x)=\sum_{j=0}^{k-2}\frac{(x-a)^{j}}{j!}f^{(j)}(a)+\int_{a}^{x}\frac{(x-t)^{k-2}}{(k-2)!}f^{(k-1)}(t)dt,\]

where \(f^{(k-1)}\) can be either the right or the left derivative function of the convex or concave function \(f^{(k-2)}\), as they differ only on at most countably many points. Note that \(f^{(k-1)}\) is monotone, and hence of bounded variation on \([a,x]\) or \([x,a]\). Applying integration by parts to the remainder once and letting \(a\) tend to \(1\), we obtain

\[f(x)=\sum_{j=0}^{k-1}\frac{(x-1)^{j}}{j!}f^{(j)}(1-)-\int_{x}^{1-}\frac{(x-t)^{k-1}}{(k-1)!}df^{(k-1)}(t)=\sum_{j=0}^{k-1}\frac{(x-1)^{j}}{j!}f^{(j)}(1-)+\int_{0+}^{1-}\frac{k}{t}\left(1-\frac{x}{t}\right)_{+}^{k-1}d\gamma(t),\]

where \(\gamma(t)=\int_{0+}^{t}(-1)^{k}u^{k}df^{(k-1)}(u)/k!\) for any \(t>0\).
Note that \(\gamma\) is nondecreasing, as \((-1)^{k}f^{(k-1)}\) is nondecreasing. Then the characterization of \(k\)-monotone functions follows. The characterization of \(k\)-monotone densities follows from the proper normalization to a probability density function, which leads to the constraint \((\beta_{j}:0\leq j\leq k)\in\Delta_{k+1}\).

### Discrete approximation

We shall show the following shape-preserving approximation result using free knot splines. Indeed, we consider the shape-preserving free knot spline approximation of the functions in \(\check{\mathcal{F}}^{k}\), which can be transformed into an approximation of the functions in \(\mathcal{F}^{k}\), since if \(f\in\check{\mathcal{F}}^{k}\cap\mathbb{L}_{p}\) and \(s\in\mathcal{S}_{N,k}\), then \(f\circ\tau\in\mathcal{F}^{k}\cap\mathbb{L}_{p}\), \(s\circ\tau\in\mathcal{S}_{N,k}\), and \(\|f-s\|_{p}=\|f\circ\tau-s\circ\tau\|_{p}\), and vice versa. However, Lemma 8.4 is not a consequence of Proposition 2.2. Since \(\check{\mathcal{F}}^{k}\) is a proper subset of \(\mathcal{C}^{k}\), for \(f\in\check{\mathcal{F}}^{k}\cap\mathbb{L}_{p}\), typically \(d_{p}(f,\mathcal{S}_{N,k}\cap\check{\mathcal{F}}^{k})\geq d_{p}(f,\mathcal{S}_{N,k}\cap\mathcal{C}^{k})\). Thus we cannot directly derive Lemma 8.4 from Proposition 2.2. Fortunately, we can follow the argument of the proof of Proposition 2.2 by modifying some supporting lemmas therein; the \(k\)-convex free knot spline approximant of a \(k\)-convex function, constructed in [35], is a free knot spline in \(\check{\mathcal{F}}^{k}\) provided the function to be approximated is not only \(k\)-convex but lies in \(\check{\mathcal{F}}^{k}\) as well. We introduce some notation used in [35] and in the following lemmas. For \((a,b)\subset(0,1)\), set

\[\mathcal{C}^{k}_{*}(a,b)=\{f\in\mathcal{C}^{k}:\max\{|f^{(j)}(a+)|,|f^{(j)}(b-)|:j=0,1,\ldots,k-1\}<\infty\},\]
\[\check{\mathcal{F}}^{k}_{*}(a,b)=\{f\in\check{\mathcal{F}}^{k}:\max\{|f^{(j)}(b-)|:j=0,1,\ldots,k-1\}<\infty\}.\]

Note that, if \(f\in\check{\mathcal{F}}^{k}\), then \(f^{(j)}\) is nonnegative and nondecreasing for every \(j=0,1,\ldots,k-1\). Then \(\check{\mathcal{F}}^{k}_{*}(a,b)\subset\mathcal{C}^{k}_{*}(a,b)\), as the \(f^{(j)}(a+)\) are all bounded up to the order \(k-1\). For \(f\in\mathcal{C}^{k}_{*}(a,b)\), let \(\mathcal{C}^{k}[f](a,b)\) stand for the set

\[\left\{g\in\mathcal{C}^{k}:\begin{aligned} g^{(j)}(a+)&=f^{(j)}(a+),0\leq j\leq k-2,g^{(k-1)}(a+)\geq f^{(k-1)}(a+);\\ g^{(j)}(b-)&=f^{(j)}(b-),0\leq j\leq k-2,g^{(k-1)}(b-)\leq f^{(k-1)}(b-).\end{aligned}\right\}.\]

In view of the following lemma, we can assume, without loss of generality, that \(f\) has bounded derivatives up to the order \(k-1\).

**Lemma 8.3**.: _Let \(f\in\check{\mathcal{F}}^{k}\cap\mathbb{L}_{p}(0,1)\) for \(1\leq p\leq\infty\). Then for any \(\epsilon>0\), there exists \(f_{\epsilon}\in\check{\mathcal{F}}^{k}_{*}(0,1)\) such that \(\|f-f_{\epsilon}\|_{p}<\epsilon\)._

Proof of Lemma 8.3.: The proof follows by modifying the proof of [35, Lemma 4.4]. First, we construct \(f_{\epsilon}\) as follows. For \(u\in(0,1)\), denote the Taylor polynomial of \(f\) of degree \(k-1\) at \(u\) by \(T_{u}(x)=\sum_{l=0}^{k-1}f^{(l)}(u+)(x-u)^{l}/l!\). For some \(\delta\in(0,1)\), define \(f_{\epsilon}(x)\) to be \(f(x)\) if \(x\in[0,1-\delta]\) and \(T_{1-\delta}(x)\) if \(x\in[1-\delta,1]\). By the proof of [35, Lemma 4.4], we know that \(\|f-f_{\epsilon}\|_{p}\to 0\) as \(\delta\to 0\).
To conclude the proof, it suffices to show that \(f_{\epsilon}\in\check{\mathcal{F}}^{k}\). By definition, \(f^{(j)}((1-\delta)+)\) is nonnegative for every \(j=0,1,\ldots,k-1\). Then on \([1-\delta,1]\), the derivatives \(T^{(j)}_{1-\delta}\) are nonnegative and nondecreasing for \(j=0,1,\ldots,k-1\). As \(T_{1-\delta}\) is the Taylor polynomial of degree \(k-1\), it follows that \(f^{(j)}_{\epsilon}\) is nonnegative, nondecreasing, and convex on \([0,1]\) for every \(j=0,1,\ldots,k-1\); that is, \(f_{\epsilon}\in\check{\mathcal{F}}^{k}\).

**Lemma 8.4**.: _For any \(f\in\check{\mathcal{F}}^{k}\cap\mathbb{L}_{p}(0,1)\), there exists some \(s\in\mathcal{S}_{C_{k}N,k}\cap\check{\mathcal{F}}^{k}\) such that \(\|f-s\|_{p}\leq C_{k,p}d_{p}(f,\mathcal{S}_{N,k})\) and \(s^{(j)}(0+)=f^{(j)}(0+)\) for \(j=0,1,\ldots,k-2\)._

Proof of Lemma 8.4.: In view of Lemma 8.3, we can assume that \(f\in\check{\mathcal{F}}_{*}^{k}(0,1)\subset\mathcal{C}_{*}^{k}(0,1)\). Following the proof of [35, Theorem 1], we can construct a spline \(s\in\mathcal{S}_{C_{k}N,k}\) such that \(s\in\mathcal{C}^{k}[f](0,1)\) and \(\|f-s\|_{p}\leq C_{k,p}d_{p}(f,\mathcal{S}_{N,k})\). Next, we show that \(s\in\check{\mathcal{F}}^{k}\). As \(s\in\mathcal{C}^{k}[f](0,1)\), \(s^{(k-1)}\) is piecewise constant and nondecreasing due to the convexity of \(s^{(k-2)}\), and moreover, \(s^{(k-1)}(0+)\geq f^{(k-1)}(0+)\geq 0\). Then \(s^{(k-1)}\) is nonnegative. As \(s^{(k-2)}(0+)=f^{(k-2)}(0+)\geq 0\), \(s^{(k-2)}\) is nonnegative, nondecreasing, and convex. Noting that \(s^{(j)}(0+)=f^{(j)}(0+)\geq 0\) for \(j=0,1,\ldots,k-3\), by induction it is easy to see that \(s^{(j)}\) is nonnegative, nondecreasing, and convex for every \(j=0,1,\ldots,k-3\), since \(s^{(j+1)}\) is nonnegative. Hence, we conclude that \(s\in\check{\mathcal{F}}^{k}\).

Proof of Lemma 2.3.: Observe that \(g\in\mathbb{L}_{\infty}(0,1)\), as \(|g^{(k-1)}(0+)|<\infty\). Note that \(\mathcal{D}^{k}\) is a subclass of \(\mathcal{F}^{k}\). By Lemma 8.4, for any \(g\in\mathcal{D}^{k}\), there exists a \(\tilde{g}\in\mathcal{S}_{N,k}\cap\mathcal{F}^{k}\) such that \(\|g-\tilde{g}\|_{p}\leq Cd_{p}(g,\mathcal{S}_{N,k})\) and \(\tilde{g}^{(j)}(1-)=g^{(j)}(1-)=0\) for \(j=0,1,\ldots,k-2\). By Lemma 2.1, \(\tilde{g}(x)=\alpha_{k-1}(1-x)^{k-1}/(k-1)!+\int_{0}^{1}kt^{-1}(1-x/t)_{+}^{k-1}d\gamma(t)\) for some nonnegative \(\alpha_{k-1}\) and some nondecreasing function \(\gamma\) on \((0,1)\). The first polynomial term can be incorporated into the integral by defining \(\gamma(1)=\gamma(1-)+\alpha_{k-1}/k!\). Since \(\tilde{g}\) is a piecewise polynomial of degree \(k-1\), we conclude that \(\gamma\) is a piecewise constant function with jumps at the knots of the spline. Let \(g_{N}=\tilde{g}/\int\tilde{g}\), which satisfies the required structure in (4). Now \(g_{N}\) maintains the desired approximation rate:

\[\|g-g_{N}\|_{\infty}\leq\|g-\tilde{g}\|_{\infty}+\big{\|}\tilde{g}-\frac{\tilde{g}}{\int\tilde{g}}\big{\|}_{\infty}\leq\frac{1+\|g\|_{\infty}}{1-\|g-\tilde{g}\|_{\infty}}\|g-\tilde{g}\|_{\infty}. \tag{18}\]

By [9, Theorem 12.4.5], \(\|g-\tilde{g}\|_{\infty}\leq C_{k,g}N^{-k}\), provided \(|g^{(k-1)}(0+)|<\infty\). Thus the right-hand side of (18) can be further bounded by \(C^{\prime}_{k,g}N^{-k}\).

## Acknowledgments

The authors are deeply indebted to Professor Bodhisattva Sen for bringing the present problem to the authors' attention and for pointing out several key references.
2303.11900
Determination of the order in Abstract fractional differential equations
In this paper we identify, for small $t$ and a fixed $T>0,$ the order $\alpha>0$ in the abstract fractional differential equation $$\partial^\alpha u(t)=Au(t),$$ where the time-fractional derivative $\partial^\alpha$ is understood in the sense of Caputo and Riemann-Liouville, $A$ is a closed (possibly unbounded) linear operator in a Banach space $X,$ and $0<\alpha<1$ or $1<\alpha<2.$
Rodrigo Ponce
2023-03-21T14:45:34Z
http://arxiv.org/abs/2303.11900v1
# Determination of the order in abstract fractional differential equations

Rodrigo Ponce

###### Abstract.

In this paper we identify, for small \(t\) and a fixed \(T>0,\) the order \(\alpha>0\) in the abstract fractional differential equation \[\partial^{\alpha}u(t)=Au(t),\] where the time-fractional derivative \(\partial^{\alpha}\) is understood in the sense of Caputo and Riemann-Liouville, \(A\) is a closed (possibly unbounded) linear operator in a Banach space \(X,\) and \(0<\alpha<1\) or \(1<\alpha<2.\)

Key words and phrases: Resolvent families; inverse problems; fractional differential equations; unbounded linear operators

2020 Mathematics Subject Classification: Primary 34K29, Secondary 34A55, 47D06, 26A33

## 1. Introduction

The problem of finding or approximating the order in time-fractional differential equations has been widely studied in the last ten years. See for instance [3, 5, 10, 14, 15, 16, 19, 20, 22, 26, 30, 31]. One of the most notable contributions is the paper [10], where the authors consider (for \(0<\alpha<1\)) the fractional differential equation for the Caputo fractional derivative

\[\partial_{t}^{\alpha}u(x,t)=Au(x,t),\quad x\in\Omega,t>0 \tag{1.1}\]

under the initial condition \(u(x,0)=u_{0}(x),x\in\Omega,\) where \(\Omega\) and \(A\) are defined as follows: For a bounded open set \(\Omega\subset\mathbb{R}^{N}\) with sufficiently smooth boundary \(\partial\Omega,\) let \(X\) be the Hilbert space \(L^{2}(\Omega).\) On \(X,\) the operator \(\mathcal{A}\) is defined by \(\mathcal{A}u(x)=\sum_{i=1}^{N}\frac{\partial}{\partial x_{i}}\left(\sum_{j=1}^{N}A_{ij}(x)\frac{\partial}{\partial x_{j}}u(x)\right),u\in X,\) where \(A_{ij}=A_{ji}\) for any \(1\leq i,j\leq N.\) Suppose that there exists a constant \(\gamma>0\) such that \(\sum_{i,j=1}^{N}A_{ij}(x)\xi_{i}\xi_{j}\geq\gamma|\xi|^{2},\) for all \(\xi\in\mathbb{R}^{N}\) and \(x\in\overline{\Omega}.\) The operator \(A:D(A)\to X\) is defined by

\[(Au)(x)=(\mathcal{A}u)(x),\quad x\in\Omega,\]

where \(D(A)=H^{2}(\Omega)\cap H^{1}_{0}(\Omega).\) The operator \(-A\) has a discrete spectrum, its eigenvalues satisfy \(0<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{n}\leq\cdots\) and \(\lim_{n\to\infty}\lambda_{n}=\infty.\) Now, if \(\phi_{n}\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\) denotes the normalized eigenfunction associated with \(-\lambda_{n},\) then, by the Fourier method (see [28]), the solution \(u\) to (1.1) is given by

\[u(x,t)=\sum_{n=1}^{\infty}\langle u_{0},\phi_{n}\rangle_{L^{2}(\Omega)}E_{\alpha,1}(-\lambda_{n}t^{\alpha})\phi_{n}(x),\quad x\in\Omega,t>0, \tag{1.2}\]

where for any \(\alpha,\beta>0\) and \(z\in\mathbb{C},\) \(E_{\alpha,\beta}\) denotes the Mittag-Leffler function, which is defined by \(E_{\alpha,\beta}(z):=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)}.\) If \(u(x,t)\) denotes the solution to (1.1), \(u_{0}\in C_{0}^{\infty}(\Omega)\) with \(Au_{0}(x)\neq 0\) and \(x_{0}\) is a fixed element in \(\Omega,\) then the order \(\alpha\) in (1.1) is given by (see [10, Theorem 1])

\[\alpha=\lim_{t\to 0^{+}}\frac{t\frac{\partial u}{\partial t}(x_{0},t)}{u(x_{0},t)-u_{0}(x_{0})}. \tag{1.3}\]

A similar result holds for \(t\to+\infty\) (see also [10, Theorem 1]).
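Formula (1.3) can be checked numerically in the simplest scalar instance \(A=-\lambda\) with \(u_{0}=1,\) for which \(u(t)=E_{\alpha,1}(-\lambda t^{\alpha})\) and, by the standard derivative identity for Mittag-Leffler functions, \(\partial_{t}u(t)=-\lambda t^{\alpha-1}E_{\alpha,\alpha}(-\lambda t^{\alpha}).\) The following sketch uses a truncated series for \(E_{\alpha,\beta}\); the function names and parameter values are ours.

```python
import numpy as np
from math import gamma

def mittag_leffler(alpha, beta, z, K=100):
    """Truncated series E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta);
    adequate for the moderate |z| arising below."""
    return sum(z ** k / gamma(alpha * k + beta) for k in range(K))

alpha, lam = 0.6, 2.0  # hypothetical order, and eigenvalue in the scalar case A = -lam
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    u = mittag_leffler(alpha, 1.0, -lam * t ** alpha)               # u(t), with u(0) = 1
    du = -lam * t ** (alpha - 1) * mittag_leffler(alpha, alpha, -lam * t ** alpha)
    print(t, t * du / (u - 1.0))   # the quotient in (1.3); it approaches alpha = 0.6
```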
Thus, to determine the order \(\alpha\) we need to know \(u(x_{0},t)\) and \(\frac{\partial u}{\partial t}(x_{0},t)\) for \(t>0\) on an interval close to \(0\) (or \(+\infty\)). As the authors mention in [23, p. 440], "the problems of the recovery of the fractional orders are far from satisfactory since all the publications either assumed the homogeneous boundary condition or studied this inverse problem by the measurement in \(t\in(0,\infty)\)." Therefore, the problem of finding the order \(\alpha\) in (1.1) in terms of its solution \(u(x,t)\) for a fixed time \(t>0\) remains an open problem. Now, if for any \(t\geq 0\), we define the family of linear operators \(S_{\alpha,\beta}(t):X\to X\) by \[S_{\alpha,\beta}(t)u(x):=\sum_{n=1}^{\infty}\langle u,\phi_{n}\rangle_{L^{2}(\Omega)}t^{\beta-1}E_{\alpha,\beta}(-\lambda_{n}t^{\alpha})\phi_{n}(x),\quad x\in\Omega,u\in X,\] then, the solution (1.2) to equation (1.1) can be written as \[u(x,t)=S_{\alpha,1}(t)u_{0}(x),\quad x\in\Omega,t\geq 0. \tag{1.4}\] The properties of the Laplace transform of the Mittag-Leffler function imply that \(\{S_{\alpha,\beta}(t)\}_{t\geq 0}\) corresponds to an \((\alpha,\beta)\)-fractional resolvent family generated by \(A\), see for instance [25]. This theory allows us to write the solutions to fractional differential equations (for the Caputo and Riemann-Liouville derivatives) in case \(0<\alpha<1\) and \(1<\alpha<2\) via variation of parameters formulas. In fact, consider the fractional differential equations for the Caputo fractional derivative, \[\left\{\begin{array}{lll}\partial_{t}^{\alpha}u(x,t)&=&Au(x,t),\quad t>0\\ u(x,t)&=&0,\quad\quad\quad\quad x\in\partial\Omega,t>0\\ u(x,0)&=&u_{0}(x),\quad x\in\Omega,\end{array}\right. \tag{1.5}\] (for \(0<\alpha<1\)) and \[\left\{\begin{array}{lll}\partial_{t}^{\alpha}u(x,t)&=&Au(x,t),\quad t>0\\ u(x,t)&=&0,\quad\quad\quad\quad x\in\partial\Omega,t>0\\ u(x,0)&=&u_{0}(x),\quad x\in\Omega\\ \partial_{t}u(x,0)&=&u_{1}(x),\quad x\in\Omega,\end{array}\right. \tag{1.6}\] (for \(1<\alpha<2\)) where \(u_{0},u_{1}\in X.\) By [28], the solution to (1.5) is given by (1.4) and the solution to (1.6) is \[u(x,t)=\sum_{n=1}^{\infty}\langle u_{0},\phi_{n}\rangle_{L^{2}(\Omega)}E_{\alpha,1}(-\lambda_{n}t^{\alpha})\phi_{n}(x)+\sum_{n=1}^{\infty}\langle u_{1},\phi_{n}\rangle_{L^{2}(\Omega)}tE_{\alpha,2}(-\lambda_{n}t^{\alpha})\phi_{n}(x),\] and therefore, the solutions to (1.5) and (1.6) can be written, in terms of the resolvent family \(\{S_{\alpha,\beta}(t)\}_{t\geq 0}\), respectively, as \[u(x,t)=S_{\alpha,1}(t)u_{0}(x),\quad u(x,t)=S_{\alpha,1}(t)u_{0}(x)+S_{\alpha,2}(t)u_{1}(x),\quad x\in\Omega,t\geq 0.\] Now, if we consider the fractional differential equations for the Riemann-Liouville fractional derivatives \[\left\{\begin{array}{lll}{}^{R}\partial_{t}^{\alpha}u(x,t)&=&Au(x,t),\quad t>0\\ u(x,t)&=&0,\quad\quad\quad x\in\partial\Omega,t>0\\ (g_{1-\alpha}*u)(x,0)&=&u_{0}(x),\quad x\in\Omega,\end{array}\right. \tag{1.7}\] (for \(0<\alpha<1\)) and \[\left\{\begin{array}{lll}{}^{R}\partial_{t}^{\alpha}u(x,t)&=&Au(x,t),\quad t>0\\ u(x,t)&=&0,\quad\quad\quad x\in\partial\Omega,t>0\\ (g_{2-\alpha}*u)(x,0)&=&u_{0}(x),\quad x\in\Omega\\ \partial_{t}(g_{2-\alpha}*u)(x,0)&=&u_{1}(x),\quad x\in\Omega,\end{array}\right. 
\tag{1.8}\] (for \(1<\alpha<2\)), where \(u_{0},u_{1}\in X\), \({}^{R}\partial_{t}^{\alpha}\) corresponds to the Riemann-Liouville fractional derivative and \(g_{2-\alpha}(t):=t^{1-\alpha}/\Gamma(2-\alpha)\), then, the solutions to (1.7) and (1.8) are given, respectively, by (see for instance [27]), \[u(x,t)=\sum_{n=1}^{\infty}\langle u_{0},\phi_{n}\rangle_{L^{2}(\Omega)}t^{\alpha-1}E_{\alpha,\alpha}(-\lambda_{n}t^{\alpha})\phi_{n}(x),\] and \[u(x,t)=\sum_{n=1}^{\infty}\langle u_{0},\phi_{n}\rangle_{L^{2}(\Omega)}t^{\alpha-2}E_{\alpha,\alpha-1}(-\lambda_{n}t^{\alpha})\phi_{n}(x)+\sum_{n=1}^{\infty}\langle u_{1},\phi_{n}\rangle_{L^{2}(\Omega)}t^{\alpha-1}E_{\alpha,\alpha}(-\lambda_{n}t^{\alpha})\phi_{n}(x),\] which can be written, respectively, as \[u(x,t)=S_{\alpha,\alpha}(t)u_{0}(x),\quad u(x,t)=S_{\alpha,\alpha-1}(t)u_{0}(x)+S_{\alpha,\alpha}(t)u_{1}(x).\] The resolvent families have been extensively studied, both in abstract settings and in applications (see for instance [7, 8, 9, 12, 13, 25, 27, 29]). The operators \(\{S_{\alpha,\beta}(t)\}_{t\geq 0}\) are well-known in some cases: the uniqueness of the Laplace transform implies that \(\{S_{1,1}(t)\}_{t\geq 0}\) is the \(C_{0}\)-semigroup generated by \(A,\) \(\{S_{2,1}(t)\}_{t\geq 0}\) and \(\{S_{2,2}(t)\}_{t\geq 0}\) are, respectively, the cosine and sine family generated by \(A,\) see [4]. Now, for \(1\leq\alpha\leq 2\) and \(\beta=1,\) \(\{S_{\alpha,1}(t)\}_{t\geq 0}\) is an \(\alpha\)-times resolvent [18], the case \(1\leq\alpha=\beta\leq 2\) corresponds to an \(\alpha\)-order resolvent (see [21]) and if \(\alpha=1\) and \(\beta=n+1,n\in\mathbb{N},\) then we get an \(n\)-times integrated semigroup, see [4]. In this paper, we explore the resolvent families \(\{S_{\alpha,\beta}(t)\}_{t\geq 0}\) to identify the order \(\alpha\) (for small times \(t>0\) and a fixed time \(T>0\)) in the fractional differential equations (1.5)-(1.8), where \(A\) is a closed linear operator in a Banach space \(X.\) More specifically, we consider the abstract fractional differential equation for the Caputo fractional derivative \[\left\{\begin{array}{lll}\partial_{t}^{\alpha}v(t)&=&Av(t),\quad t>0\\ v(0)&=&x,\end{array}\right. \tag{1.9}\] where \(0<\alpha<1,\) \(A\) is a closed linear operator defined in a Banach space \(X\) and \(x\in X.\) If \(v(t;x)=S_{\alpha,1}(t)x\) is the solution to Problem (1.9) (where \(\{S_{\alpha,1}(t)\}_{t\geq 0}\) is the \((\alpha,1)\)-resolvent family generated by \(A\)), then we prove in Theorem 3.6 that \[\alpha x=\lim_{t\to 0^{+}}t\psi_{t}\varphi_{t}^{-1}(x),\] where \(\varphi_{t}:X\to X\) and \(\psi_{t}:X\to X\) are defined, respectively, by \(\varphi_{t}(x)=v(t;x)-x\) and \(\psi_{t}(x)=v^{\prime}(t;x).\) This is exactly the abstract version of formula (1.3). Moreover, if \(T>0\) is a fixed time, \(A\) generates the \((\alpha,1)\)-resolvent family \(\{S_{\alpha,1}(t)\}_{t\geq 0}\) and \(x\) is an element in the Banach space \(X,\) then the order \(\alpha\) satisfies \[Tv(T;x)-(g_{1}*v)(T;x)=\alpha\left[(S_{\alpha,1}*v)(T;x)-(g_{1}*v)(T;x)\right],\] where \(v(t;x)\) is the solution to Problem (1.9), see Theorem 5.10 and Remark 5.11. 
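This identity is easy to test numerically in the scalar case \(A=-\lambda\) (one eigenvalue), where \(S_{\alpha,1}(t)=E_{\alpha,1}(-\lambda t^{\alpha}),\) by approximating the convolutions \((g_{1}*v)(T)\) and \((S_{\alpha,1}*v)(T)\) with a midpoint rule. The sketch below is illustrative (grid size and series truncation are arbitrary but float-safe choices); it returns a quotient close to \(\alpha=0.6\):

```python
from math import gamma

def ml(a, b, z, N=150):
    return sum(z**k / gamma(a * k + b) for k in range(N))

# Scalar instance of the fixed-time identity: A = -lam, v(t) = E_{a,1}(-lam t^a).
a, lam, T, n = 0.6, 1.0, 1.0, 4000
h = T / n
v = [ml(a, 1, -lam * ((k + 0.5) * h)**a) for k in range(n)]   # v on midpoints of [0,T]
g1v = h * sum(v)                                              # (g_1 * v)(T)
Sv  = h * sum(v[k] * v[n - 1 - k] for k in range(n))          # (S_{a,1} * v)(T); here S = v
vT  = ml(a, 1, -lam * T**a)
print((T * vT - g1v) / (Sv - g1v))   # ~ 0.6 = alpha, up to quadrature error
```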
Thus, to determine \(\alpha\) we only need to know the solution \(v(T;x),\) its integral \((g_{1}*v)(T;x)\) and the convolution \((S_{\alpha,1}*v)(T;x)\) for any fixed \(x\in X\) and \(T>0.\) This implies that in Equation (1.1), the order \(\alpha\) satisfies \[\alpha=\frac{Tu(x_{0},T)-(g_{1}*u)(x_{0},T)}{(S_{\alpha,1}*u)(x_{0},T)-(g_{1}*u)(x_{0},T)}\] for any fixed \(x_{0}\in\Omega\) and \(T>0\) such that \((S_{\alpha,1}*u)(x_{0},T)-(g_{1}*u)(x_{0},T)\neq 0,\) where \(u(x,t)\) is given by (1.4). This result gives an answer to the problem proposed in [23, p. 440] and allows us to find the order \(\alpha\) in Equation (1.1) in terms of the solution \(u(x,t)\) for a fixed time \(t>0.\) Moreover, we obtain here similar results for (1.9) in case \(1<\alpha<2\) and for the abstract fractional differential equation for the Riemann-Liouville fractional derivative in case \(0<\alpha<1\) and \(1<\alpha<2.\) The paper is organized as follows. In Section 2 we give the preliminaries on fractional calculus and fractional resolvent families generated by a closed linear operator \(A.\) In Section 3 we identify \(\alpha\in(0,1)\) for small times for the Caputo and Riemann-Liouville fractional derivatives. Section 4 is devoted to the same problem, but with \(\alpha\in(1,2).\) In Sections 5 and 6 we identify, respectively, \(\alpha\in(0,1)\) and \(\alpha\in(1,2),\) for a fixed time \(T>0\) for the Caputo and Riemann-Liouville fractional derivatives. Finally, we illustrate our results with some examples. ## 2. Preliminaries Let \(X\equiv(X,\|\cdot\|)\) be a Banach space. By \(\mathcal{B}(X)\) we denote the Banach space of all bounded and linear operators from \(X\) into \(X.\) For a given closed linear operator \(A\) on \(X,\) \(\rho(A)\) denotes its resolvent set and \(R(\lambda,A)=(\lambda-A)^{-1}\) its resolvent operator, which is defined for all \(\lambda\in\rho(A).\) A strongly continuous family of linear operators \(\{S(t)\}_{t\geq 0}\subset\mathcal{B}(X)\) is called _exponentially bounded_ if there exist constants \(M>0\) and \(w\in\mathbb{R}\) such that \(\|S(t)\|\leq Me^{wt},\) for all \(t>0.\) For a given \(\alpha>0,\) we define the function \(g_{\alpha}\) as \(g_{\alpha}(t):=\frac{t^{\alpha-1}}{\Gamma(\alpha)},\) where \(\Gamma(\cdot)\) stands for the Gamma function. It is easy to see that if \(\alpha,\beta>0,\) then these functions satisfy the semigroup law \(g_{\alpha+\beta}(t)=(g_{\alpha}*g_{\beta})(t),\) where \((f*g)\) denotes the finite convolution \((f*g)(t):=\int_{0}^{t}f(t-s)g(s)ds.\) For \(n-1<\alpha<n,\) where \(n\in\mathbb{N},\) the Caputo and Riemann-Liouville fractional derivatives of order \(\alpha\) of a function \(f\) are defined, respectively, by \[\partial_{t}^{\alpha}f(t):=(g_{n-\alpha}*f^{(n)})(t)=\int_{0}^{t}g_{n-\alpha}(t-s)f^{(n)}(s)ds,\quad^{R}\partial_{t}^{\alpha}f(t):=\frac{d^{n}}{dt^{n}}\int_{0}^{t}g_{n-\alpha}(t-s)f(s)ds.\] If \(\alpha=1\) or \(\alpha=2,\) then \(\partial_{t}^{1}=\,^{R}\partial_{t}^{1}=\frac{d}{dt}\) and \(\partial_{t}^{2}=\,^{R}\partial_{t}^{2}=\frac{d^{2}}{dt^{2}}.\) For more details, examples and applications on fractional calculus, we refer the reader to [17]. 
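As a concrete check of the Caputo definition: for \(f(t)=t\) and \(0<\alpha<1\) one has \(\partial_{t}^{\alpha}t=t^{1-\alpha}/\Gamma(2-\alpha),\) and a direct midpoint discretization of the defining integral reproduces this value (a minimal sketch; the weakly singular kernel limits the accuracy to a few digits):

```python
from math import gamma

# Caputo derivative of f(t) = t for 0 < alpha < 1:
# (g_{1-alpha} * f')(t) with f'(s) = 1; the exact value is t^{1-alpha}/Gamma(2-alpha).
alpha, t, n = 0.5, 1.0, 200000
h = t / n
approx = h / gamma(1 - alpha) * sum((t - (k + 0.5) * h)**(-alpha) for k in range(n))
print(approx, t**(1 - alpha) / gamma(2 - alpha))   # agree to ~3 decimals
```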
In this paper, our focus is on \(0<\alpha<1\) and \(1<\alpha<2.\) **Definition 2.1**.: _Let \(A\) be a closed and linear operator defined on a Banach space \(X.\) Given \(\alpha,\beta>0\) we say that \(A\) is the generator of an \((\alpha,\beta)\)-resolvent family, if there exist \(\omega\geq 0\) and a strongly continuous function \(S_{\alpha,\beta}:(0,\infty)\rightarrow\mathcal{B}(X)\) such that \(S_{\alpha,\beta}(t)\) is exponentially bounded, \(\{\lambda^{\alpha}:\mathrm{Re}\lambda>\omega\}\subset\rho(A),\) and for all \(x\in X,\)_ \[\lambda^{\alpha-\beta}\left(\lambda^{\alpha}-A\right)^{-1}x=\int_{0}^{\infty}e^{-\lambda t}S_{\alpha,\beta}(t)xdt,\ \mathrm{Re}\lambda>\omega. \tag{2.10}\] _In this case, \(\{S_{\alpha,\beta}(t)\}_{t>0}\) is called the \((\alpha,\beta)\)-resolvent family generated by \(A.\)_ Moreover, if an operator \(A\) with domain \(D(A)\) is the infinitesimal generator of \(S_{\alpha,\beta}(t),\) then, for \(x\in D(A),\) we have \[Ax=\lim_{t\to 0^{+}}\frac{S_{\alpha,\beta}(t)x-g_{\beta}(t)x}{g_{\alpha+\beta}(t)}.\] We notice that the case \(S_{1,1}(t)\) corresponds to a \(C_{0}\)-semigroup, \(S_{2,1}(t)\) is a cosine family and \(S_{2,2}(t)\) is a sine family generated by \(A.\) In the scalar case, that is, when \(A=\rho I,\) where \(\rho\in\mathbb{C}\) and \(I\) denotes the identity operator, we have, by the uniqueness of the Laplace transform, that \(S_{\alpha,\beta}(t)\) corresponds to the function \(t^{\beta-1}E_{\alpha,\beta}(\rho t^{\alpha}).\) Finally, for \(0<\alpha<1\) and \(\beta\geq\alpha,\) let \(\{S_{\alpha,\beta}(t)\}_{t\geq 0}\) be the family of operators defined by \[S_{\alpha,\beta}(t)f(s):=\int_{0}^{s}f(s-r)\varphi_{\alpha,\beta-\alpha}(t,r)dr,\] where \(s\in\mathbb{R}_{+},\) \(f\in L^{1}(\mathbb{R}_{+})\) and the function \(\varphi_{a,b}(t,r)\) is defined by \[\varphi_{a,b}(t,r):=t^{b-1}W_{-a,b}(-rt^{-a}),\quad a>0,b\geq 0,\] where \(W_{-a,b}(z):=\sum_{n=0}^{\infty}\frac{z^{n}}{n!\Gamma(-an+b)}\) \((z\in\mathbb{C})\) denotes the Wright function. Then, \(\{S_{\alpha,\beta}(t)\}_{t\geq 0}\) is an \((\alpha,\beta)\)-resolvent family on the Banach space \(X=L^{1}(\mathbb{R}_{+})\) generated by \(A=-\frac{d}{dt}.\) See [2, Example 11]. From [1, 2] or [24] we have the following result that gives some important properties of the resolvent family \(\{S_{\alpha,\beta}(t)\}_{t\geq 0}.\) **Proposition 2.2**.: _If \(\alpha,\beta>0\) and \(A\) generates an \((\alpha,\beta)\)-resolvent family \(\{S_{\alpha,\beta}(t)\}_{t>0},\) then_ 1. \(\lim_{t\to 0^{+}}\frac{S_{\alpha,\beta}(t)x}{g_{\beta}(t)}=x,\) _for all_ \(x\in X.\)__ 2. \(S_{\alpha,\beta}(t)x\in D(A)\) _and_ \(S_{\alpha,\beta}(t)Ax=AS_{\alpha,\beta}(t)x\) _for all_ \(x\in D(A)\) _and_ \(t>0.\)__ 3. _For all_ \(x\in D(A),\)__ \[S_{\alpha,\beta}(t)x=g_{\beta}(t)x+\int_{0}^{t}g_{\alpha}(t-s)AS_{\alpha,\beta}(s)xds.\] 4. \(\int_{0}^{t}g_{\alpha}(t-s)S_{\alpha,\beta}(s)xds\in D(A)\) _and_ \[S_{\alpha,\beta}(t)x=g_{\beta}(t)x+A\int_{0}^{t}g_{\alpha}(t-s)S_{\alpha,\beta}(s)xds,\] _for all_ \(x\in X.\)__ For a given locally integrable function \(f:[0,\infty)\to X,\) we define the _Laplace transform_ of \(f,\) denoted by \(\hat{f}(\lambda)\) (or \(\mathcal{L}(f)(\lambda)\)) as \[\hat{f}(\lambda)=\int_{0}^{\infty}e^{-\lambda t}f(t)dt,\] provided the integral converges for some \(\lambda\in\mathbb{C}.\) The following lemmata will be useful for our purposes. 
**Lemma 2.3**.: _Assume that \(A\) is the generator of the family \(\{S_{\alpha,\beta}(t)\}_{t\geq 0}.\) If \(\gamma>0,\) then \(A\) generates the family \(\{S_{\alpha,\beta+\gamma}(t)\}_{t\geq 0}\) given by_ \[S_{\alpha,\beta+\gamma}(t)=(g_{\gamma}*S_{\alpha,\beta})(t). \tag{2.11}\] Proof.: In fact, for any \(\gamma>0\) and \(\lambda^{\alpha}\in\rho(A)\) with \(\mathrm{Re}\lambda>w,\) we have \[\mathcal{L}(g_{\gamma}*S_{\alpha,\beta})(\lambda)=\lambda^{\alpha-\beta-\gamma}(\lambda^{\alpha}-A)^{-1}=\lambda^{\alpha-(\beta+\gamma)}(\lambda^{\alpha}-A)^{-1}=\hat{S}_{\alpha,\beta+\gamma}(\lambda).\] And the result follows from the uniqueness of the Laplace transform. **Lemma 2.4**.: _Assume that \(A\) is the generator of the family \(\{S_{\alpha,1}(t)\}_{t\geq 0},\) where \(\alpha>0.\) Then, the following assertions hold for any \(t\geq 0\) and \(x\in X,\)_ 1. \(tS_{\alpha,1}(t)x=-(\alpha-1)(g_{1}*S_{\alpha,1})(t)x+\alpha(S_{\alpha,1}*S_{\alpha,1})(t)x.\)__ 2. \(tS^{\prime}_{\alpha,1}(t)x=\alpha(S^{\prime}_{\alpha,1}*S_{\alpha,1})(t)x.\)__ Proof.: To prove (1), let \(h(t):=tS_{\alpha,1}(t)x.\) By the properties of the Laplace transform, we have for any \(\lambda^{\alpha}\in\rho(A),\) \[\hat{h}(\lambda)x=-\frac{d}{d\lambda}(\hat{S}_{\alpha,1}(\lambda))x=-\frac{d}{d\lambda}(\lambda^{\alpha-1}(\lambda^{\alpha}-A)^{-1}x)=-\frac{(\alpha-1)}{\lambda}\hat{S}_{\alpha,1}(\lambda)x+\alpha\hat{S}_{\alpha,1}(\lambda)\hat{S}_{\alpha,1}(\lambda)x,\] and the assertion follows from the uniqueness of the Laplace transform. As \(S_{\alpha,1}(0)=I,\) to prove the second assertion we only need to note that (2) corresponds to the derivative of (1). **Lemma 2.5**.: _Assume that \(A\) is the generator of the family \(\{S_{\alpha,\alpha}(t)\}_{t\geq 0},\) where \(0<\alpha<1.\) Then, for any \(t\geq 0\) and \(x\in X,\)_ 1. \(tS_{\alpha,\alpha}(t)x=\alpha(g_{1-\alpha}*S_{\alpha,\alpha}*S_{\alpha,\alpha})(t)x=\alpha(S_{\alpha,1}*S_{\alpha,\alpha})(t)x,\) _where_ \(S_{\alpha,1}(t)=(g_{1-\alpha}*S_{\alpha,\alpha})(t).\)__ 2. \(\int_{0}^{t}rS_{\alpha,\alpha}(r)xdr=\alpha[A(g_{2}*S_{\alpha,\alpha}*S_{\alpha,\alpha})(t)x+(g_{2}*S_{\alpha,\alpha})(t)x].\)__ 3. \(S_{\alpha,\alpha}(t)x+tS^{\prime}_{\alpha,\alpha}(t)x=\alpha[A(S_{\alpha,\alpha}*S_{\alpha,\alpha})(t)x+S_{\alpha,\alpha}(t)x].\)__ Proof.: The proof of the first assertion follows similarly to the proof of Lemma 2.4. To prove the second one, we integrate (1) to obtain \[\int_{0}^{t}rS_{\alpha,\alpha}(r)xdr=\alpha(g_{1}*S_{\alpha,1}*S_{\alpha,\alpha})(t)x=:\alpha h(t)x.\] As \(\lambda^{\alpha}(\lambda^{\alpha}-A)^{-1}=A(\lambda^{\alpha}-A)^{-1}+I,\) for any \(\lambda^{\alpha}\in\rho(A),\) we have \[\hat{h}(\lambda)x=\lambda^{-2}\lambda^{\alpha}(\lambda^{\alpha}-A)^{-1}(\lambda^{\alpha}-A)^{-1}x=\lambda^{-2}A(\lambda^{\alpha}-A)^{-1}(\lambda^{\alpha}-A)^{-1}x+\lambda^{-2}(\lambda^{\alpha}-A)^{-1}x,\] and (2) follows by the uniqueness of the Laplace transform. Finally, by Proposition 2.2, we have \[\widehat{S}^{\prime}_{\alpha,1}(\lambda)x=\lambda\hat{S}_{\alpha,1}(\lambda)x-S_{\alpha,1}(0)x=\lambda^{\alpha}(\lambda^{\alpha}-A)^{-1}x-x=A(\lambda^{\alpha}-A)^{-1}x=A\hat{S}_{\alpha,\alpha}(\lambda)x,\] and therefore \(S^{\prime}_{\alpha,1}(t)x=AS_{\alpha,\alpha}(t)x.\) To conclude, we notice that (3) is exactly the derivative of (1). ## 3. Determination of \(\alpha\) for small times. The sub-diffusion case: \(0<\alpha<1.\) 
Let \(0<\alpha<1.\) In this section we determine the order \(\alpha\) of the fractional differential equations \(\partial_{t}^{\alpha}u(t)=Au(t)\) and \({}^{R}\partial_{t}^{\alpha}u(t)=Au(t),\) for the Caputo and Riemann-Liouville fractional derivatives. Here \(A\) is a given closed linear operator. Let \(x\in X.\) Consider the equation for the Caputo fractional derivative \[\left\{\begin{array}{lll}\partial_{t}^{\alpha}u(t)&=&Au(t),\quad t>0\\ u(0)&=&x\end{array}\right. \tag{3.12}\] where \(0<\alpha<1.\) For each \(x\in X,\) we denote by \(u(t;x)\) the solution to Problem (3.12). If \(\{S_{\alpha,1}(t)\}_{t\geq 0}\) is the \((\alpha,1)\)-resolvent family generated by \(A,\) then, \[u(t;x)=S_{\alpha,1}(t)x,\] see for instance [6, Chapter 1]. Next, we define the operators \(\varphi_{t}:X\to X\) and \(\psi_{t}:X\to X,\) respectively, by \(\varphi_{t}(x)=u(t;x)-x\) and \(\psi_{t}(x)=u^{\prime}(t;x),\) where \(u(t;x)\) is the solution of (3.12). Let \(\varphi,\psi:X\to X\) be the operators defined, respectively, by \[\varphi(x)=\lim_{t\to 0^{+}}t^{-\alpha}\varphi_{t}(x),\quad\psi(x)=\lim_{t\to 0^{+}}t^{1-\alpha}\psi_{t}(x).\] Similarly, if \(A\) generates the \((\alpha,\alpha)\)-resolvent family \(\{S_{\alpha,\alpha}(t)\}_{t\geq 0}\) and we consider the Riemann-Liouville fractional derivative in the fractional differential equation \[\left\{\begin{array}{lll}{}^{R}\partial_{t}^{\alpha}u(t)&=&Au(t),\quad t>0\\ (g_{1-\alpha}*u)(0)&=&x,\end{array}\right. \tag{3.13}\] then, its solution is given by \[u(t;x)=S_{\alpha,\alpha}(t)x.\] Moreover, we define the operators \(\tilde{\varphi}_{t}:X\to X\) and \(\tilde{\psi}_{t}:X\to X,\) respectively, by \(\tilde{\varphi}_{t}(x)=\int_{0}^{t}u(s;x)ds=(g_{1}*u)(t;x)\) and \(\tilde{\psi}_{t}(x)=u(t;x),\) where \(u(t;x)\) is the solution of (3.13). Finally, we define \(\tilde{\varphi},\tilde{\psi}:X\to X,\) respectively, by \[\tilde{\varphi}(x)=\lim_{t\to 0^{+}}t^{-\alpha}\tilde{\varphi}_{t}(x),\quad\tilde{\psi}(x)=\lim_{t\to 0^{+}}t^{1-\alpha}\tilde{\psi}_{t}(x).\] Now, we consider the following problem: * Let \(x\in X\) be fixed. Determine \(\alpha\in(0,1)\) in (3.12) and (3.13) from the observation data \(u(t;x)\) for small \(t.\) The next result gives an answer for the Caputo fractional derivative. **Theorem 3.6**.: _If \(A\) generates the \((\alpha,1)\)-resolvent family \(\{S_{\alpha,1}(t)\}_{t\geq 0}\) and \(x\in D(A)\cap D(A^{-1})\) with \(A(g_{\alpha}*S_{\alpha,1})(t)x\neq 0\) for \(t>0\) small enough, then_ \[\alpha x=\lim_{t\to 0^{+}}t\psi_{t}\varphi_{t}^{-1}(x).\] Proof.: Since \(u(t;x)=S_{\alpha,1}(t)x,\) from Proposition 2.2 and Lemma 2.3 we can write \[u(t;x)-x=A(g_{\alpha}*S_{\alpha,1})(t)x=AS_{\alpha,\alpha+1}(t)x, \tag{3.14}\] for any \(x\in X,\) and therefore \[t^{-\alpha}\varphi_{t}(x)=t^{-\alpha}(u(t;x)-x)=\frac{AS_{\alpha,\alpha+1}(t)}{g_{\alpha+1}(t)}\frac{1}{\Gamma(\alpha+1)}x,\quad x\in X. \tag{3.15}\] By Proposition 2.2 we have for any \(x\in D(A)\) that \[\varphi(x)=\lim_{t\to 0^{+}}t^{-\alpha}\varphi_{t}(x)=\frac{1}{\Gamma(\alpha+1)}Ax.\] Now, we claim that \(\Gamma(\alpha+1)t^{-\alpha}A^{-1}\varphi_{t}\) is an invertible operator for \(t>0\) small enough. 
In fact, by (3.15), we have \[\Gamma(\alpha+1)t^{-\alpha}A^{-1}\varphi_{t}(x)-x=\frac{S_{\alpha,\alpha+1}(t)}{g_{\alpha+1}(t)}x-x.\] By Proposition 2.2 the right-hand side in the last identity goes to \(0\) as \(t\to 0^{+}.\) Hence, we can take \(t>0\) small enough, with \(\|\Gamma(\alpha+1)t^{-\alpha}A^{-1}\varphi_{t}(x)-x\|<1\) for all \(x\in D(A).\) This implies that \(\Gamma(\alpha+1)t^{-\alpha}A^{-1}\varphi_{t}\) is invertible for \(t>0\) small enough, and thus, \[\varphi^{-1}(x)=\lim_{t\to 0^{+}}t^{\alpha}\varphi_{t}^{-1}(x)=\Gamma(\alpha+1)A^{-1}x, \tag{3.16}\] for all \(x\in D(A)\cap D(A^{-1}).\) On the other hand, for any \(\lambda^{\alpha}\in\rho(A)\) we have by Proposition 2.2 that \[\widehat{S}^{\prime}_{\alpha,1}(\lambda)x=\lambda\hat{S}_{\alpha,1}(\lambda)x-S_{\alpha,1}(0)x=\lambda^{\alpha}(\lambda^{\alpha}-A)^{-1}x-x=A(\lambda^{\alpha}-A)^{-1}x=A\hat{S}_{\alpha,\alpha}(\lambda)x. \tag{3.17}\] By Proposition 2.2 and Lemma 2.3 we have for any \(x\in D(A)\) that \[u^{\prime}(t;x)=S^{\prime}_{\alpha,1}(t)x=AS_{\alpha,\alpha}(t)x=g_{\alpha}(t)Ax+A(g_{\alpha}*S_{\alpha,\alpha})(t)Ax=g_{\alpha}(t)Ax+AS_{\alpha,2\alpha}(t)Ax.\] Therefore, \[t^{1-\alpha}u^{\prime}(t;x)=\frac{Ax}{\Gamma(\alpha)}+At^{1-\alpha}S_{\alpha,2\alpha}(t)Ax.\] Proposition 2.2 again implies that \[\lim_{t\to 0^{+}}t^{1-\alpha}AS_{\alpha,2\alpha}(t)x=\lim_{t\to 0^{+}}\frac{S_{\alpha,2\alpha}(t)}{g_{2\alpha}(t)}\frac{t^{\alpha}}{\Gamma(2\alpha)}Ax=0.\] Hence, we get \[\psi(x)=\lim_{t\to 0^{+}}t^{1-\alpha}u^{\prime}(t;x)=\lim_{t\to 0^{+}}t^{1-\alpha}\psi_{t}(x)=\frac{Ax}{\Gamma(\alpha)}, \tag{3.18}\] for all \(x\in D(A).\) By (3.16) and (3.18) we obtain \[\lim_{t\to 0^{+}}\|t\psi_{t}(\varphi_{t}^{-1}(x))-\psi(\varphi^{-1}(x))\| = \lim_{t\to 0^{+}}\|t^{1-\alpha}\psi_{t}(t^{\alpha}\varphi_{t}^{-1}(x)-\varphi^{-1}(x))-\psi(\varphi^{-1}(x))+t^{1-\alpha}\psi_{t}(\varphi^{-1}(x))\|\] \[\leq \lim_{t\to 0^{+}}t^{1-\alpha}\|\psi_{t}\|\|t^{\alpha}\varphi_{t}^{-1}(x)-\varphi^{-1}(x)\|+\|(\psi-t^{1-\alpha}\psi_{t})(\varphi^{-1}(x))\|\] \[= 0,\] for all \(x\in D(A)\cap D(A^{-1}).\) As \[\psi(\varphi^{-1}(x))=\frac{1}{\Gamma(\alpha)}A(\Gamma(\alpha+1)A^{-1}x)=\alpha x,\] we conclude that \[\lim_{t\to 0^{+}}t\psi_{t}\varphi_{t}^{-1}(x)=\alpha x,\] for all \(x\in D(A)\cap D(A^{-1}).\) Now, we have an answer to the inverse problem for the Riemann-Liouville fractional derivative. **Theorem 3.7**.: _If \(A\) generates the \((\alpha,\alpha)\)-resolvent family \(\{S_{\alpha,\alpha}(t)\}_{t\geq 0}\) and \(x\in X\) with \((g_{1}*S_{\alpha,\alpha})(t)x\neq 0\) for \(t>0\) small enough, then_ \[\alpha x=\lim_{t\to 0^{+}}t\tilde{\psi}_{t}\tilde{\varphi}_{t}^{-1}(x).\] Proof.: Let \(x\in X.\) As \(u(t;x)=S_{\alpha,\alpha}(t)x,\) we have by Proposition 2.2 and Lemma 2.3 that \[t^{1-\alpha}u(t;x)=\frac{x}{\Gamma(\alpha)}+At^{1-\alpha}(g_{\alpha}*S_{\alpha,\alpha})(t)x=\frac{x}{\Gamma(\alpha)}+At^{1-\alpha}S_{\alpha,2\alpha}(t)x.\] By Proposition 2.2 we have \[\lim_{t\to 0^{+}}t^{1-\alpha}AS_{\alpha,2\alpha}(t)x=\lim_{t\to 0^{+}}\frac{AS_{\alpha,2\alpha}(t)}{g_{2\alpha}(t)}\frac{t^{\alpha}}{\Gamma(2\alpha)}x=0.\] Hence, \[\tilde{\psi}(x)=\lim_{t\to 0^{+}}t^{1-\alpha}\tilde{\psi}_{t}(x)=\lim_{t\to 0^{+}}t^{1-\alpha}u(t;x)=\frac{x}{\Gamma(\alpha)}. 
\tag{3.19}\] Since \(u(t;x)=g_{\alpha}(t)x+A(g_{\alpha}*S_{\alpha,\alpha})(t)x,\) integrating over \([0,t],\) by Lemma 2.3 and the semigroup law for the functions \(g_{\beta},\) we have that \[(g_{1}*u)(t;x)=g_{\alpha+1}(t)x+A(g_{1}*g_{\alpha}*S_{\alpha,\alpha})(t)x=g_{\alpha+1}(t)x+AS_{\alpha,2\alpha+1}(t)x.\] Therefore, \[t^{-\alpha}(g_{1}*u)(t;x)=\frac{x}{\Gamma(\alpha+1)}+At^{-\alpha}S_{\alpha,2\alpha+1}(t)x.\] By Proposition 2.2 we get \[\lim_{t\to 0^{+}}t^{-\alpha}AS_{\alpha,2\alpha+1}(t)x=\lim_{t\to 0^{+}}\frac{AS_{\alpha,2\alpha+1}(t)}{g_{2\alpha+1}(t)}\frac{t^{\alpha}}{\Gamma(2\alpha+1)}x=0.\] Hence, \[\tilde{\varphi}(x)=\lim_{t\to 0^{+}}t^{-\alpha}\tilde{\varphi}_{t}(x)=\lim_{t\to 0^{+}}t^{-\alpha}(g_{1}*u)(t;x)=\frac{x}{\Gamma(\alpha+1)}. \tag{3.20}\] Now, we claim that \(t^{-\alpha}\tilde{\varphi}_{t}\) is an invertible operator for \(t>0\) small enough. In fact, as \(u(t;x)=S_{\alpha,\alpha}(t)x,\) we have by Proposition 2.2 that \[\tilde{\varphi}_{t}(x)=(g_{1}*u)(t;x)=(g_{1}*S_{\alpha,\alpha})(t)x=g_{\alpha+1}(t)x+A(g_{\alpha+1}*S_{\alpha,\alpha})(t)x,\] and therefore, Lemma 2.3 implies that \[\frac{1}{g_{\alpha+1}(t)}\tilde{\varphi}_{t}(x)-x=\frac{S_{\alpha,\alpha+1}(t)x}{g_{\alpha+1}(t)}-x=\frac{1}{g_{\alpha+1}(t)}A(g_{\alpha+1}*S_{\alpha,\alpha})(t)x,\quad t>0.\] By Proposition 2.2 we have \(\lim_{t\to 0^{+}}\frac{S_{\alpha,\alpha+1}(t)x}{g_{\alpha+1}(t)}-x=0,\) and thus we can take \(t>0\) small enough, with \(\|\frac{1}{g_{\alpha+1}(t)}\tilde{\varphi}_{t}(x)-x\|<1\) for all \(x\in X.\) This implies that \(\frac{1}{g_{\alpha+1}(t)}\tilde{\varphi}_{t}=\Gamma(\alpha+1)t^{-\alpha}\tilde{\varphi}_{t}\) is an invertible operator, and therefore \(t^{-\alpha}\tilde{\varphi}_{t}\) is invertible for \(t>0\) small enough. By (3.20) we get \[\tilde{\varphi}^{-1}(x)=\lim_{t\to 0^{+}}t^{\alpha}\tilde{\varphi}_{t}^{-1}(x)=\Gamma(\alpha+1)x. \tag{3.21}\] By (3.19) and (3.21) we have \[\lim_{t\to 0^{+}}\|t\tilde{\psi}_{t}(\tilde{\varphi}_{t}^{-1}(x))-\tilde{\psi}(\tilde{\varphi}^{-1}(x))\| = \lim_{t\to 0^{+}}\|t^{1-\alpha}\tilde{\psi}_{t}(t^{\alpha}\tilde{\varphi}_{t}^{-1}(x)-\tilde{\varphi}^{-1}(x))-\tilde{\psi}(\tilde{\varphi}^{-1}(x))+t^{1-\alpha}\tilde{\psi}_{t}(\tilde{\varphi}^{-1}(x))\|\] \[\leq \lim_{t\to 0^{+}}t^{1-\alpha}\|\tilde{\psi}_{t}\|\|t^{\alpha}\tilde{\varphi}_{t}^{-1}(x)-\tilde{\varphi}^{-1}(x)\|+\|(\tilde{\psi}-t^{1-\alpha}\tilde{\psi}_{t})(\tilde{\varphi}^{-1}(x))\|\] \[= 0,\] for all \(x\in X.\) Since \(\tilde{\psi}(\tilde{\varphi}^{-1}(x))=\tilde{\psi}(\Gamma(\alpha+1)x)=\frac{\Gamma(\alpha+1)}{\Gamma(\alpha)}x=\alpha x,\) we conclude that \[\lim_{t\to 0^{+}}t\tilde{\psi}_{t}\tilde{\varphi}_{t}^{-1}(x)=\alpha x.\] ## 4. Determination of \(\alpha\) for small times. The super-diffusion case: \(1<\alpha<2.\) In this section we determine the order \(\alpha\) (where \(1<\alpha<2\)) of the fractional differential equations \(\partial_{t}^{\alpha}u(t)=Au(t)\) and \({}^{R}\partial_{t}^{\alpha}u(t)=Au(t),\) for the Caputo and Riemann-Liouville fractional derivatives. We first consider the equation for the Caputo fractional derivative \[\left\{\begin{array}{lll}\partial_{t}^{\alpha}u(t)&=&Au(t),\quad t\geq 0\\ u(0)&=&x\\ u^{\prime}(0)&=&y.\end{array}\right. \tag{4.22}\] For each \(x,y\in X,\) we denote by \(u(t;x,y)\) the solution to problem (4.22). If \(\{S_{\alpha,1}(t)\}_{t\geq 0}\) is the \((\alpha,1)\)-resolvent family generated by \(A,\) then, \[u(t;x,y)=S_{\alpha,1}(t)x+(g_{1}*S_{\alpha,1})(t)y,\] see for instance [27]. 
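In the scalar case \(A=-\lambda\) this solution reads \(u(t;x,y)=E_{\alpha,1}(-\lambda t^{\alpha})x+tE_{\alpha,2}(-\lambda t^{\alpha})y,\) and the initial data \(u(0)=x,\) \(u^{\prime}(0)=y\) can be checked numerically (a sketch with illustrative parameters; the truncation order is capped so that the Gamma function does not overflow in double precision):

```python
from math import gamma

def ml(a, b, z, N=100):   # N kept small: gamma(a*k + b) must stay below overflow (~171)
    return sum(z**k / gamma(a * k + b) for k in range(N))

# Scalar case A = -lam: u(t;x,y) = E_{a,1}(-lam t^a) x + t E_{a,2}(-lam t^a) y.
a, lam, x, y = 1.5, 2.0, 1.0, -3.0
for t in (1e-2, 1e-4):
    z = -lam * t**a
    u  = ml(a, 1, z) * x + t * ml(a, 2, z) * y
    du = -lam * t**(a - 1) * ml(a, a, z) * x + ml(a, 1, z) * y   # u'(t)
    print(u, du)   # u -> x = 1 and u' -> y = -3 as t -> 0+
```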
By Lemma 2.3 we can write \[u(t;x,y)=S_{\alpha,1}(t)x+S_{\alpha,2}(t)y.\] Now, we define the operators \(\varphi_{t}:X\to X\) and \(\psi_{t}:X\to X,\) respectively, by \(\varphi_{t}(x)=u(t;x,y)-x-ty\) and \(\psi_{t}(x)=u^{\prime\prime}(t;x,y),\) where \(u(t;x,y)\) is the solution of (4.22) for any fixed \(y\in X.\) Moreover, we define \(\varphi,\psi:X\to X,\) respectively, by \[\varphi(x)=\lim_{t\to 0^{+}}t^{-\alpha}\varphi_{t}(x),\quad\psi(x)=\lim_{t\to 0^{+}}t^{2-\alpha}\psi_{t}(x),\quad x\in X.\] Now, we consider the Riemann-Liouville fractional derivative. If \(u(t;x,y)\) denotes the solution to the problem \[\left\{\begin{array}{lll}{}^{R}\partial_{t}^{\alpha}u(t)&=&Au(t),\quad t\geq 0\\ (g_{2-\alpha}*u)(0)&=&x\\ (g_{2-\alpha}*u)^{\prime}(0)&=&y,\end{array}\right. \tag{4.23}\] and \(A\) generates the \((\alpha,\alpha-1)\)-resolvent family \(\{S_{\alpha,\alpha-1}(t)\}_{t\geq 0},\) then (by [27]), we have \[u(t;x,y)=S_{\alpha,\alpha-1}(t)x+(g_{1}*S_{\alpha,\alpha-1})(t)y. \tag{4.24}\] Moreover, by Lemma 2.3 we can write \[u(t;x,y)=S_{\alpha,\alpha-1}(t)x+S_{\alpha,\alpha}(t)y. \tag{4.25}\] In addition, for any fixed \(y\in X,\) we define the operators \(\tilde{\varphi}_{t}:X\to X\) and \(\tilde{\psi}_{t}:X\to X,\) respectively, by \(\tilde{\varphi}_{t}(x)=\int_{0}^{t}u(s;x,y)ds=(g_{1}*u)(t;x,y)\) and \(\tilde{\psi}_{t}(x)=\int_{0}^{t}(t-s)u(s;x,y)ds=(g_{2}*u)(t;x,y),\) where \(u(t;x,y)\) is the solution of (4.23). Finally, we define \(\tilde{\varphi},\tilde{\psi}:X\to X,\) respectively, by \[\tilde{\varphi}(x)=\lim_{t\to 0^{+}}t^{1-\alpha}\tilde{\varphi}_{t}(x),\quad\tilde{\psi}(x)=\lim_{t\to 0^{+}}t^{-\alpha}\tilde{\psi}_{t}(x),\quad x\in X.\] Next, we consider the following problem: * Let \(x\in X\) be fixed. Determine \(\alpha\in(1,2)\) in (4.22) and (4.23) from the observation data \(u(t;x,y)\) for small \(t.\) For the Caputo fractional derivative (4.22) we have the following result. **Theorem 4.8**.: _Let \(y\in X.\) If \(A\) generates the \((\alpha,1)\)-resolvent family \(\{S_{\alpha,1}(t)\}_{t\geq 0}\) and \(x\in D(A)\cap D(A^{-1})\) with \(S_{\alpha,1}(t)x+(g_{1}*S_{\alpha,1})(t)y\neq 0\) for \(t>0\) small enough, then_ \[\alpha(\alpha-1)x=\lim_{t\to 0^{+}}t^{2}\psi_{t}\varphi_{t}^{-1}(x).\] Proof.: Let \(x\in D(A)\cap D(A^{-1}).\) As \(u(t;x,y)=S_{\alpha,1}(t)x+S_{\alpha,2}(t)y,\) by Proposition 2.2 and Lemma 2.3 we can write \[u(t;x,y)=x+A(g_{\alpha}*S_{\alpha,1})(t)x+ty+A(g_{\alpha}*S_{\alpha,2})(t)y=x+AS_{\alpha,\alpha+1}(t)x+ty+AS_{\alpha,\alpha+2}(t)y. \tag{4.26}\] Proposition 2.2 implies \[\lim_{t\to 0^{+}}t^{-\alpha}AS_{\alpha,\alpha+1}(t)x=\lim_{t\to 0^{+}}\frac{AS_{\alpha,\alpha+1}(t)}{g_{\alpha+1}(t)}\frac{1}{\Gamma(\alpha+1)}x=\frac{1}{\Gamma(\alpha+1)}Ax\] and \[\lim_{t\to 0^{+}}t^{-\alpha}AS_{\alpha,\alpha+2}(t)x=\lim_{t\to 0^{+}}\frac{AS_{\alpha,\alpha+2}(t)}{g_{\alpha+2}(t)}\frac{t}{\Gamma(\alpha+2)}x=0.\] Hence, \[\lim_{t\to 0^{+}}t^{-\alpha}(u(t;x,y)-x-ty)=\frac{1}{\Gamma(\alpha+1)}Ax,\] that is, \[\varphi(x)=\lim_{t\to 0^{+}}t^{-\alpha}\varphi_{t}(x)=\frac{1}{\Gamma(\alpha+1)}Ax,\quad x\in D(A),y\in X. 
\tag{4.27}\] By (3.17), \(S^{\prime}_{\alpha,1}(t)=AS_{\alpha,\alpha}(t)\) and \(S^{\prime}_{\alpha,2}(t)=S_{\alpha,1}(t)\), therefore \[u^{\prime}(t;x,y)=S^{\prime}_{\alpha,1}(t)x+S^{\prime}_{\alpha,2}(t)y=AS_{\alpha,\alpha}(t)x+S_{\alpha,1}(t)y.\] As \(\alpha>1\), we have for any \(\lambda^{\alpha}\in\rho(A)\), \[\widehat{S}^{\prime}_{\alpha,\alpha}(\lambda)x=\lambda\hat{S}_{\alpha,\alpha}(\lambda)x-S_{\alpha,\alpha}(0)x=\lambda^{\alpha-(\alpha-1)}(\lambda^{\alpha}-A)^{-1}x=\hat{S}_{\alpha,\alpha-1}(\lambda)x. \tag{4.28}\] Hence, \[t^{2-\alpha}u^{\prime\prime}(t;x,y)=t^{2-\alpha}AS^{\prime}_{\alpha,\alpha}(t)x+t^{2-\alpha}S^{\prime}_{\alpha,1}(t)y=t^{2-\alpha}AS_{\alpha,\alpha-1}(t)x+t^{2-\alpha}AS_{\alpha,\alpha}(t)y.\] By Proposition 2.2 we have \[\lim_{t\to 0^{+}}t^{2-\alpha}AS_{\alpha,\alpha-1}(t)x=\lim_{t\to 0^{+}}\frac{AS_{\alpha,\alpha-1}(t)}{g_{\alpha-1}(t)}\frac{1}{\Gamma(\alpha-1)}x=\frac{Ax}{\Gamma(\alpha-1)}\] and \[\lim_{t\to 0^{+}}t^{2-\alpha}AS_{\alpha,\alpha}(t)x=\lim_{t\to 0^{+}}\frac{AS_{\alpha,\alpha}(t)}{g_{\alpha}(t)}\frac{t}{\Gamma(\alpha)}x=0.\] Therefore, \[\lim_{t\to 0^{+}}t^{2-\alpha}u^{\prime\prime}(t;x,y)=\frac{Ax}{\Gamma(\alpha-1)},\] that is, \[\psi(x)=\lim_{t\to 0^{+}}t^{2-\alpha}\psi_{t}(x)=\frac{Ax}{\Gamma(\alpha-1)}. \tag{4.29}\] Now, we claim that \(\Gamma(\alpha+1)t^{-\alpha}A^{-1}\varphi_{t}\) is invertible for \(t>0\) small enough. In fact, by (4.26) we can write \[\Gamma(\alpha+1)t^{-\alpha}A^{-1}\varphi_{t}(x)-x=\frac{S_{\alpha,\alpha+1}(t)}{g_{\alpha+1}(t)}x-x+\frac{S_{\alpha,\alpha+2}(t)}{g_{\alpha+2}(t)}\frac{g_{\alpha+2}(t)}{g_{\alpha+1}(t)}y.\] Proposition 2.2 implies that the right-hand side in the last identity goes to \(0\) as \(t\to 0^{+}.\) Hence, we take \(t>0\) small enough, with \(\|\Gamma(\alpha+1)t^{-\alpha}A^{-1}\varphi_{t}(x)-x\|<1\) for all \(x\in D(A).\) This implies that \(\Gamma(\alpha+1)t^{-\alpha}A^{-1}\varphi_{t}\) is invertible for \(t>0\) small enough, and thus, \[\varphi^{-1}(x)=\lim_{t\to 0^{+}}(t^{-\alpha}\varphi_{t})^{-1}(x)=\Gamma(\alpha+1)A^{-1}x, \tag{4.30}\] for all \(x\in D(A)\cap D(A^{-1}).\) Now, by (4.29) and (4.30) we get \[\lim_{t\to 0^{+}}\|t^{2}\psi_{t}(\varphi_{t}^{-1}(x)) - \psi(\varphi^{-1}(x))\|\] \[= \lim_{t\to 0^{+}}\|t^{2-\alpha}\psi_{t}(t^{\alpha}\varphi_{t}^{-1}(x)-\varphi^{-1}(x))-\psi(\varphi^{-1}(x))+t^{2-\alpha}\psi_{t}(\varphi^{-1}(x))\|\] \[\leq \lim_{t\to 0^{+}}t^{2-\alpha}\|\psi_{t}\|\|t^{\alpha}\varphi_{t}^{-1}(x)-\varphi^{-1}(x)\|+\|(\psi-t^{2-\alpha}\psi_{t})(\varphi^{-1}(x))\|\] \[= 0,\] for all \(x\in D(A)\cap D(A^{-1}).\) As \(\psi(\varphi^{-1}(x))=\psi(\Gamma(\alpha+1)A^{-1}x)=\frac{1}{\Gamma(\alpha-1)}\Gamma(\alpha+1)x=\alpha(\alpha-1)x,\) we conclude that \[\lim_{t\to 0^{+}}t^{2}\psi_{t}\varphi_{t}^{-1}(x)=\alpha(\alpha-1)x.\] 
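Theorem 4.8 admits the same scalar sanity check as before: with \(A=-\lambda,\) the quotient \(t^{2}\psi_{t}(x)/\varphi_{t}(x)\) approaches \(\alpha(\alpha-1)\) as \(t\to 0^{+}\) (a sketch with illustrative parameters, reusing the truncated Mittag-Leffler series):

```python
from math import gamma

def ml(a, b, z, N=100):
    return sum(z**k / gamma(a * k + b) for k in range(N))

# Scalar instance of Theorem 4.8: u(t) = E_{a,1}(-lam t^a) x + t E_{a,2}(-lam t^a) y.
a, lam, x, y, t = 1.5, 1.0, 1.0, 2.0, 1e-4
z = -lam * t**a
phi = ml(a, 1, z) * x + t * ml(a, 2, z) * y - x - t * y                          # phi_t(x)
psi = -lam * (t**(a - 2) * ml(a, a - 1, z) * x + t**(a - 1) * ml(a, a, z) * y)   # u''(t)
print(t**2 * psi / phi, a * (a - 1))   # both ~ 0.75
```

Now, we consider the problem (4.23) for the Riemann-Liouville fractional derivative. 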
**Theorem 4.9**.: _Let \(y\in X.\) If \(A\) generates the \((\alpha,\alpha-1)\)-resolvent family \(\{S_{\alpha,\alpha-1}(t)\}_{t\geq 0}\) and for \(x\in X\) we have \(S_{\alpha,\alpha-1}(t)x+(g_{1}*S_{\alpha,\alpha-1})(t)y\neq 0\) for \(t>0\) small enough, then_ \[\alpha x=\lim_{t\to 0^{+}}t\tilde{\varphi}_{t}\tilde{\psi}_{t}^{-1}(x).\] Proof.: Let \(x\in X.\) As \(u(t;x,y)=S_{\alpha,\alpha-1}(t)x+S_{\alpha,\alpha}(t)y,\) by Proposition 2.2, Lemma 2.3, and the semigroup law for the functions \(g_{\beta},\) we obtain \[(g_{1}*u)(t;x,y) = \int_{0}^{t}u(s;x,y)ds\] \[= g_{\alpha}(t)x+A(g_{\alpha+1}*S_{\alpha,\alpha-1})(t)x+g_{\alpha+1}(t)y+A(g_{\alpha+1}*S_{\alpha,\alpha})(t)y\] \[= g_{\alpha}(t)x+AS_{\alpha,2\alpha}(t)x+g_{\alpha+1}(t)y+AS_{\alpha,2\alpha+1}(t)y.\] Hence, \[t^{1-\alpha}(g_{1}*u)(t;x,y)=\frac{1}{\Gamma(\alpha)}x+t^{1-\alpha}AS_{\alpha,2\alpha}(t)x+\frac{ty}{\Gamma(\alpha+1)}+t^{1-\alpha}AS_{\alpha,2\alpha+1}(t)y.\] Proposition 2.2 implies that \[\lim_{t\to 0^{+}}t^{1-\alpha}AS_{\alpha,2\alpha}(t)x=\lim_{t\to 0^{+}}\frac{AS_{\alpha,2\alpha}(t)}{g_{2\alpha}(t)}\frac{t^{\alpha}}{\Gamma(2\alpha)}x=0\] and \[\lim_{t\to 0^{+}}t^{1-\alpha}AS_{\alpha,2\alpha+1}(t)y=\lim_{t\to 0^{+}}\frac{AS_{\alpha,2\alpha+1}(t)}{g_{2\alpha+1}(t)}\frac{t^{\alpha+1}}{\Gamma(2\alpha+1)}y=0.\] We obtain \[\lim_{t\to 0^{+}}t^{1-\alpha}(g_{1}*u)(t;x,y)=\frac{1}{\Gamma(\alpha)}x,\] which means that \[\tilde{\varphi}(x)=\lim_{t\to 0^{+}}t^{1-\alpha}\tilde{\varphi}_{t}(x)=\frac{1}{\Gamma(\alpha)}x, \tag{4.31}\] for all \(x,y\in X.\) Now, we integrate (4.25) twice to obtain, by Proposition 2.2 and Lemma 2.3, that \[(g_{2}*u)(t;x,y) = \int_{0}^{t}(t-s)u(s;x,y)ds \tag{4.32}\] \[= g_{\alpha+1}(t)x+A(g_{\alpha+2}*S_{\alpha,\alpha-1})(t)x+g_{\alpha+2}(t)y+A(g_{\alpha+2}*S_{\alpha,\alpha})(t)y\] \[= g_{\alpha+1}(t)x+AS_{\alpha,2\alpha+1}(t)x+g_{\alpha+2}(t)y+AS_{\alpha,2\alpha+2}(t)y.\] Thus \[t^{-\alpha}(g_{2}*u)(t;x,y)=\frac{1}{\Gamma(\alpha+1)}x+t^{-\alpha}AS_{\alpha,2\alpha+1}(t)x+\frac{ty}{\Gamma(\alpha+2)}+t^{-\alpha}AS_{\alpha,2\alpha+2}(t)y.\] As \[\lim_{t\to 0^{+}}t^{-\alpha}AS_{\alpha,2\alpha+1}(t)x=\lim_{t\to 0^{+}}\frac{AS_{\alpha,2\alpha+1}(t)}{g_{2\alpha+1}(t)}\frac{t^{\alpha}}{\Gamma(2\alpha+1)}x=0\] and \[\lim_{t\to 0^{+}}t^{-\alpha}AS_{\alpha,2\alpha+2}(t)y=\lim_{t\to 0^{+}}\frac{AS_{\alpha,2\alpha+2}(t)}{g_{2\alpha+2}(t)}\frac{t^{\alpha+1}}{\Gamma(2\alpha+2)}y=0,\] (see Proposition 2.2) we conclude that \[\tilde{\psi}(x)=\lim_{t\to 0^{+}}t^{-\alpha}\tilde{\psi}_{t}(x)=\lim_{t\to 0^{+}}t^{-\alpha}(g_{2}*u)(t;x,y)=\frac{1}{\Gamma(\alpha+1)}x. \tag{4.33}\] Now, we will see that \(t^{-\alpha}\tilde{\psi}_{t}\) is invertible for \(t>0\) small enough and any fixed \(y\in X.\) 
In fact, by (4.32), we obtain \[\frac{1}{g_{\alpha+1}(t)}\tilde{\psi}_{t}(x)-x = \frac{AS_{\alpha,2\alpha+1}(t)}{g_{\alpha+1}(t)}x+\frac{g_{\alpha+2}(t)}{g_{\alpha+1}(t)}y+\frac{AS_{\alpha,2\alpha+2}(t)}{g_{\alpha+1}(t)}y\] \[= \frac{AS_{\alpha,2\alpha+1}(t)}{g_{2\alpha+1}(t)}\frac{g_{2\alpha+1}(t)}{g_{\alpha+1}(t)}x+\frac{g_{\alpha+2}(t)}{g_{\alpha+1}(t)}y+\frac{AS_{\alpha,2\alpha+2}(t)}{g_{2\alpha+2}(t)}\frac{g_{2\alpha+2}(t)}{g_{\alpha+1}(t)}y\quad t>0.\] By Proposition 2.2, the right-hand side in the last equality goes to \(0\) as \(t\to 0^{+}\), and therefore, we can choose \(t>0\) small enough such that \(\|\frac{1}{g_{\alpha+1}(t)}\tilde{\psi}_{t}(x)-x\|<1\) for all \(x\in X.\) This implies that \(\frac{1}{g_{\alpha+1}(t)}\tilde{\psi}_{t}\) is an invertible operator, and therefore \(t^{-\alpha}\tilde{\psi}_{t}\) is invertible for \(t>0\) small enough. By (4.33) we obtain \[\tilde{\psi}^{-1}(x)=\lim_{t\to 0^{+}}(t^{-\alpha}\tilde{\psi}_{t})^{-1}(x)=\Gamma(\alpha+1)x.\] As \[\tilde{\varphi}(\tilde{\psi}^{-1}(x))=\tilde{\varphi}(\Gamma(\alpha+1)x)=\frac{1}{\Gamma(\alpha)}\Gamma(\alpha+1)x=\alpha x,\] the conclusion follows as in the proof of Theorem 4.8. ## 5. Determination of \(\alpha\) for a fixed time \(T.\) The sub-diffusion case: \(0<\alpha<1.\) In this section we consider the problem of finding the order \(\alpha\in(0,1)\) for a fixed time \(T>0\) in the fractional problems (3.12) and (3.13). We first consider the problem for the Caputo fractional derivative. Assume that \(A\) is the generator of the resolvent family \(\{S_{\alpha,1}(t)\}_{t\geq 0}.\) Let \(\varphi_{t}:X\to X\) be the operator defined by \(\varphi_{t}(x):=(S_{\alpha,1}*u)(t;x)-(g_{1}*u)(t;x),\) where \(u(t;x)\) is the solution to Problem (3.12). **Theorem 5.10**.: _If \(A\) generates the \((\alpha,1)\)-resolvent family \(\{S_{\alpha,1}(t)\}_{t\geq 0},\) \(x\in X\) and \(T>0\) are fixed, then the order \(\alpha\) satisfies_ \[Tu(T;x)-(g_{1}*u)(T;x)=\alpha\varphi_{T}(x).\] Proof.: Let \(x\in X\) and \(T>0.\) By (1) in Lemma 2.4 we have \[tS_{\alpha,1}(t)x-(g_{1}*S_{\alpha,1})(t)x=\alpha[(S_{\alpha,1}*S_{\alpha,1})(t)x-(g_{1}*S_{\alpha,1})(t)x] \tag{5.34}\] for all \(t\geq 0.\) As \(u(t;x)=S_{\alpha,1}(t)x\) is the solution to Problem (3.12), we have \[\varphi_{t}(x) = \int_{0}^{t}S_{\alpha,1}(t-r)u(r;x)dr-\int_{0}^{t}u(r;x)dr\] \[= (S_{\alpha,1}*S_{\alpha,1})(t)x-(g_{1}*S_{\alpha,1})(t)x.\] Therefore, (5.34) can be written as \[tS_{\alpha,1}(t)x-(g_{1}*S_{\alpha,1})(t)x=\alpha\varphi_{t}(x),\] for any \(t>0\) and \(x\in X.\) We conclude that \[Tu(T;x)-(g_{1}*u)(T;x)=\alpha\varphi_{T}(x).\] _Remark 5.11_.: _We notice that if \(u(t;x)\) is real valued, then to find \(\alpha,\) we only need to divide by \(\varphi_{T}(x)\) in Theorem 5.10 to obtain_ \[\alpha=\frac{Tu(T;x)-(g_{1}*u)(T;x)}{\varphi_{T}(x)}=\frac{Tu(T;x)-(g_{1}*u)(T;x)}{(S_{\alpha,1}*u)(T;x)-(g_{1}*u)(T;x)},\] _that is, we need to know the data: the solution \(u(T;x),\) its integral \((g_{1}*u)(T;x)\) and the convolution \((S_{\alpha,1}*u)(T;x)\) for a fixed \(x\in X\) and a time \(T>0.\)_ On the other hand, we notice that by Lemma 2.4 we have \[tS^{\prime}_{\alpha,1}(t)x=\alpha(S^{\prime}_{\alpha,1}*S_{\alpha,1})(t)x, \tag{5.35}\] for any \(t>0\) and \(x\in X.\) As \(u(t;x)=S_{\alpha,1}(t)x\) is the solution to Problem (3.12), if \(F_{t}:X\to X\) is the operator defined by \(F_{t}(x):=(S^{\prime}_{\alpha,1}*S_{\alpha,1})(t)x,\) then \[F_{t}(x)=\int_{0}^{t}S^{\prime}_{\alpha,1}(r)u(r;x)dr=(S^{\prime}_{\alpha,1}*u)(t;x),\] 
and, as in Theorem 5.10, we obtain \[tu^{\prime}(t;x)=\alpha F_{t}(x)\] for any \(t>0\) and \(x\in X.\) Therefore, by (5.35) we have the following result. **Theorem 5.12**.: _If \(A\) generates the \((\alpha,1)\)-resolvent family \(\{S_{\alpha,1}(t)\}_{t\geq 0},\) \(x\in X\) and \(T>0,\) then_ \[Tu^{\prime}(T;x)=\alpha F_{T}(x).\] Now, we consider the Problem (3.13) for the Riemann-Liouville fractional derivative. Assume that \(A\) is the generator of \(\{S_{\alpha,\alpha}(t)\}_{t\geq 0}.\) Let \(\psi_{t}:X\to X\) be the operator defined by \(\psi_{t}(x):=A(g_{2}*S_{\alpha,\alpha}*S_{\alpha,\alpha})(t)x+(g_{2}*S_{\alpha,\alpha})(t)x=A(g_{2}*S_{\alpha,\alpha}*u)(t;x)+(g_{2}*u)(t;x),\) where \(u(t;x)\) is the solution to Problem (3.13). **Theorem 5.13**.: _If \(A\) generates the \((\alpha,\alpha)\)-resolvent family \(\{S_{\alpha,\alpha}(t)\}_{t\geq 0}\) and \(x\in X,\) \(T>0,\) then the order \(\alpha\) satisfies_ \[\int_{0}^{T}ru(r;x)dr=\alpha\psi_{T}(x).\] Proof.: Let \(t>0\) and \(x\in X.\) As \(u(t;x)=S_{\alpha,\alpha}(t)x\) is the solution to (3.13), by Lemma 2.5 we have \[\int_{0}^{t}rS_{\alpha,\alpha}(r)xdr=\alpha[A(g_{2}*S_{\alpha,\alpha}*S_{\alpha,\alpha})(t)x+(g_{2}*S_{\alpha,\alpha})(t)x]=\alpha\psi_{t}(x),\] for any \(t>0\) and \(x\in X.\) We conclude that \[\int_{0}^{T}ru(r;x)dr=\alpha\psi_{T}(x),\] for any \(x\in X.\) Finally, by Lemma 2.5, we notice that \(tS_{\alpha,\alpha}(t)x=\alpha(S_{\alpha,1}*S_{\alpha,\alpha})(t)x,\) for all \(t\geq 0\) and \(x\in X.\) As \(\lambda^{\alpha}(\lambda^{\alpha}-A)^{-1}=A(\lambda^{\alpha}-A)^{-1}+I,\) we get \[\mathcal{L}((S_{\alpha,1}*S_{\alpha,\alpha}))(\lambda)x=\lambda^{\alpha-1}(\lambda^{\alpha}-A)^{-1}(\lambda^{\alpha}-A)^{-1}x=\frac{1}{\lambda}A(\lambda^{\alpha}-A)^{-1}(\lambda^{\alpha}-A)^{-1}x+\frac{1}{\lambda}(\lambda^{\alpha}-A)^{-1}x,\] which implies that \[tS_{\alpha,\alpha}(t)x=\alpha[(g_{1}*AS_{\alpha,\alpha}*S_{\alpha,\alpha})(t)x+(g_{1}*S_{\alpha,\alpha})(t)x],\] for all \(t\geq 0,\) \(x\in X.\) Since \(u(t;x)=S_{\alpha,\alpha}(t)x\) is the solution to Problem (3.13), we have that if \(G_{t}(x):=A(g_{1}*S_{\alpha,\alpha}*S_{\alpha,\alpha})(t)x+(g_{1}*S_{\alpha,\alpha})(t)x,\) then \[tu(t;x)=\alpha G_{t}(x)=\alpha[A(g_{1}*S_{\alpha,\alpha}*u)(t;x)+(g_{1}*u)(t;x)].\] Therefore, we have the following result. **Theorem 5.14**.: _If \(A\) generates the \((\alpha,\alpha)\)-resolvent family \(\{S_{\alpha,\alpha}(t)\}_{t\geq 0}\) and \(x\in X,\) \(T>0,\) then the order \(\alpha\) satisfies_ \[Tu(T;x)=\alpha G_{T}(x).\] ## 6. Determination of \(\alpha\) for a fixed time \(T.\) The super-diffusion case: \(1<\alpha<2.\) In this section we find the order \(\alpha\in(1,2)\) for a fixed time \(T>0\) in the fractional problems (4.22) and (4.23). We first consider the problem (4.22). Assume that \(A\) is the generator of the resolvent family \(\{S_{\alpha,1}(t)\}_{t\geq 0}.\) For a given \(y\in X,\) let \(\varphi_{t}:X\to X\) be the operator defined by \(\varphi_{t}(x):=(S_{\alpha,1}*u)(t;x,y)-(g_{1}*u)(t;x,y),\) where \(u(t;x,y)\) is the solution to Problem (4.22). 
**Theorem 6.15**.: _If \(A\) generates the \((\alpha,1)\)-resolvent family \(\{S_{\alpha,1}(t)\}_{t\geq 0},\) \(x,y\in X\) and \(T>0,\) then the order \(\alpha\) satisfies_ \[Tu(T;x,y)-(g_{1}*u)(T;x,y)-(g_{2}*S_{\alpha,1})(T)y=\alpha\varphi_{T}(x).\] Proof.: Let \(x\in X\) and \(T>0.\) We first notice that if \(h(t)=t(g_{1}*S_{\alpha,1})(t),\) then for any \(\lambda^{\alpha}\in\rho(A)\) we have \[\hat{h}(\lambda)x=-\frac{d}{d\lambda}(\mathcal{L}(g_{1}*S_{\alpha,1}))(\lambda)x=-\frac{d}{d\lambda}\left(\lambda^{\alpha-2}(\lambda^{\alpha}-A)^{-1}x\right)=-(\alpha-2)\lambda^{-2}\hat{S}_{\alpha,1}(\lambda)x+\alpha\lambda^{-1}\hat{S}_{\alpha,1}(\lambda)\hat{S}_{\alpha,1}(\lambda)x.\] This means that \[t(g_{1}*S_{\alpha,1})(t)x=-(\alpha-2)(g_{2}*S_{\alpha,1})(t)x+\alpha(g_{1}*S_{\alpha,1}*S_{\alpha,1})(t)x, \tag{6.36}\] for all \(t\geq 0\) and \(x\in X.\) Moreover, by Lemmas 2.3 and 2.4 and (6.36) we have \[tu(t;x,y) = tS_{\alpha,1}(t)x+t(g_{1}*S_{\alpha,1})(t)y\] \[= \alpha[(S_{\alpha,1}*S_{\alpha,1})(t)x+(g_{1}*S_{\alpha,1}*S_{\alpha,1})(t)y-(g_{1}*S_{\alpha,1})(t)x-(g_{2}*S_{\alpha,1})(t)y]\] \[+(g_{1}*S_{\alpha,1})(t)x+2(g_{2}*S_{\alpha,1})(t)y.\] Now, by Lemma 2.3, we get \[tu(t;x,y)-[(g_{1}*S_{\alpha,1})(t)x+(g_{1}*S_{\alpha,2})(t)y]-(g_{2}*S_{\alpha,1})(t)y=\alpha[(S_{\alpha,1}*u)(t;x,y)-(g_{1}*u)(t;x,y)],\] that is, \[tu(t;x,y)-(g_{1}*u)(t;x,y)-(g_{2}*S_{\alpha,1})(t)y=\alpha[(S_{\alpha,1}*u)(t;x,y)-(g_{1}*u)(t;x,y)].\] Finally, we consider Problem (4.23). Assume that \(A\) is the generator of the resolvent family \(\{S_{\alpha,\alpha-1}(t)\}_{t\geq 0}.\) By (4.24), the solution to (4.23) is given by \(u(t;x,y)=S_{\alpha,\alpha-1}(t)x+(g_{1}*S_{\alpha,\alpha-1})(t)y.\) For a fixed \(y\in X,\) let \(\tilde{\varphi}_{t}:X\to X\) be the operator defined by \(\tilde{\varphi}_{t}(x):=(g_{1}*u)(t;x,y)+A(g_{2}*S_{\alpha,\alpha-1}*u)(t;x,y),\) where \(u(t;x,y)\) is the solution to Problem (4.23). 
**Theorem 6.16**.: _If \(A\) generates the \((\alpha,\alpha-1)\)-resolvent family \(\{S_{\alpha,\alpha-1}(t)\}_{t\geq 0},\) \(x,y\in X\) and \(T>0,\) then the order \(\alpha\) satisfies_ \[Tu(T;x,y)+(g_{1}*S_{\alpha,\alpha-1})(T)x=\alpha\tilde{\varphi}_{T}(x).\] Proof.: Let \(t>0\) and \(x,y\in X.\) By (4.25) we have \[tu(t;x,y)=tS_{\alpha,\alpha-1}(t)x+tS_{\alpha,\alpha}(t)y.\] By Lemma 2.3 we have \(S_{\alpha,\alpha}(t)=(g_{1}*S_{\alpha,\alpha-1})(t)\) and thus \(S^{\prime}_{\alpha,\alpha}(t)=S_{\alpha,\alpha-1}(t).\) Hence \[tu(t;x,y)=tS^{\prime}_{\alpha,\alpha}(t)x+tS_{\alpha,\alpha}(t)y.\] As in the proof of Lemma 2.5, it is easy to see that \[tS_{\alpha,\alpha}(t)=\alpha(S_{\alpha,1}*S_{\alpha,\alpha})(t), \tag{6.37}\] for any \(t\geq 0.\) As \(S^{\prime}_{\alpha,\alpha}(t)=S_{\alpha,\alpha-1}(t)\) and for \(\alpha>1,\) \(S_{\alpha,\alpha}(0)=0,\) we get \[S_{\alpha,\alpha}(t)+tS^{\prime}_{\alpha,\alpha}(t)=\alpha(S_{\alpha,1}*S^{\prime}_{\alpha,\alpha})(t)+\alpha S_{\alpha,1}(t)S_{\alpha,\alpha}(0)=\alpha(S_{\alpha,1}*S_{\alpha,\alpha-1})(t), \tag{6.38}\] for any \(t\geq 0.\) By (6.37) and (6.38) we have \[tu(t;x,y) = tS^{\prime}_{\alpha,\alpha}(t)x+tS_{\alpha,\alpha}(t)y\] \[= \alpha[(S_{\alpha,1}*S_{\alpha,\alpha-1})(t)x+(S_{\alpha,1}*S_{\alpha,\alpha})(t)y]-S_{\alpha,\alpha}(t)x\] \[= \alpha\int_{0}^{t}S_{\alpha,1}(t-s)[S_{\alpha,\alpha-1}(s)x+S_{\alpha,\alpha}(s)y]ds-S_{\alpha,\alpha}(t)x\] \[= \alpha\int_{0}^{t}S_{\alpha,1}(t-s)u(s;x,y)ds-S_{\alpha,\alpha}(t)x.\] As \(S^{\prime}_{\alpha,1}(t)=AS_{\alpha,\alpha}(t)\) and \(S_{\alpha,1}(0)=I,\) integrating by parts, we obtain \[\int_{0}^{t}S_{\alpha,1}(t-s)u(s;x,y)ds = S_{\alpha,1}(t-s)(g_{1}*u)(s;x,y)\Big{|}_{s=0}^{s=t}+\int_{0}^{t}AS_{\alpha,\alpha}(t-s)(g_{1}*u)(s;x,y)ds\] \[= (g_{1}*u)(t;x,y)+A(g_{1}*S_{\alpha,\alpha}*u)(t;x,y).\] By Lemma 2.3 we conclude that \[tu(t;x,y)+(g_{1}*S_{\alpha,\alpha-1})(t)x=\alpha[(g_{1}*u)(t;x,y)+A(g_{2}*S_{\alpha,\alpha-1}*u)(t;x,y)],\] for any \(x,y\in X\) and \(t>0.\) ## 7. Examples Let \(-A\) be a non-negative and self-adjoint operator on the Hilbert space \(X=L^{2}(\Omega)\) where \(\Omega\subset\mathbb{R}^{N}\) is a bounded and open set. If the operator \(A\) has a compact resolvent, then \(-A\) has a discrete spectrum and its eigenvalues satisfy \(0<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{n}\leq\cdots\) with \(\lim_{n\to\infty}\lambda_{n}=\infty.\) If \(\phi_{n}\) denotes the normalized eigenfunction associated with \(\lambda_{n},\) then for all \(v\in D(A)\) we have \[-Av=\sum_{n=1}^{\infty}\lambda_{n}\langle v,\phi_{n}\rangle_{L^{2}(\Omega)}\phi_{n}.\] Now, consider the problem \[\left\{\begin{array}{lcl}\partial_{t}^{\alpha}u(t,x)&=&Au(t,x)\quad t>0,\\ u(0,x)&=&u_{0}(x),\end{array}\right. \tag{7.39}\] where \(x\in\Omega\) and \(u_{0}\in L^{2}(\Omega).\) Multiplying both sides of (7.39) by \(\phi_{n}(x)\) and integrating over \(\Omega\) we obtain that \(u_{n}(t):=\langle u(t,\cdot),\phi_{n}(\cdot)\rangle_{L^{2}(\Omega)}\) is a solution of the system \[\left\{\begin{array}{lcl}\partial_{t}^{\alpha}u_{n}(t)&=&-\lambda_{n}u_{n}(t)\quad t>0,\\ u_{n}(0)&=&u_{0,n},\end{array}\right. 
\tag{7.40}\] where \(u_{0,n}=\langle u_{0}(\cdot),\phi_{n}(\cdot)\rangle_{L^{2}(\Omega)},\) for all \(n\in\mathbb{N}.\) The solution to (7.40) is given by \[u_{n}(t)=E_{\alpha,1}(-\lambda_{n}t^{\alpha})u_{0,n}=:S^{n}_{\alpha,1}(t)u_{0,n},\] where \(S^{n}_{\alpha,1}(t):=E_{\alpha,1}(-\lambda_{n}t^{\alpha})\) is the resolvent family generated by \(A_{n}:=-\lambda_{n}.\) According to the notation in Theorem 3.6, we have that \(\varphi^{n}_{t}:\mathbb{R}\rightarrow\mathbb{R}\) and \(\psi^{n}_{t}:\mathbb{R}\rightarrow\mathbb{R}\) are, respectively, given by \(\varphi^{n}_{t}(u_{0,n})=u_{n}(t)-u_{0,n}=S^{n}_{\alpha,1}(t)u_{0,n}-u_{0,n}\) and \(\psi^{n}_{t}(u_{0,n})=S^{n}_{\alpha,1}(t)^{\prime}u_{0,n}.\) By (3.17), \(S^{n}_{\alpha,1}(t)^{\prime}=-\lambda_{n}S^{n}_{\alpha,\alpha}(t)=-\lambda_{n}t^{\alpha-1}E_{\alpha,\alpha}(-\lambda_{n}t^{\alpha}).\) By Theorem 3.6, \[\alpha=\lim_{t\to 0^{+}}\frac{tu^{\prime}_{n}(t)}{u_{n}(t)-u_{0,n}}=\lim_{t\to 0^{+}}\frac{-\lambda_{n}t^{\alpha}E_{\alpha,\alpha}(-\lambda_{n}t^{\alpha})u_{0,n}}{E_{\alpha,1}(-\lambda_{n}t^{\alpha})u_{0,n}-u_{0,n}}.\] Now, let \(T>0\) be a fixed time. By Remark 5.11 we have \[\alpha=\frac{Tu_{n}(T)-(g_{1}*u_{n})(T)}{(S^{n}_{\alpha,1}*u_{n})(T)-(g_{1}*u_{n})(T)}.\] By Lemma 2.3 we have \((g_{1}*u_{n})(T)=(g_{1}*S^{n}_{\alpha,1})(T)u_{0,n}=S^{n}_{\alpha,2}(T)u_{0,n}=TE_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}\) and by [11, Theorem 11.2], \[(S^{n}_{\alpha,1}*u_{n})(T) = (S^{n}_{\alpha,1}*S^{n}_{\alpha,1})(T)u_{0,n}\] \[= \int_{0}^{T}E_{\alpha,1}(-\lambda_{n}(T-s)^{\alpha})E_{\alpha,1}(-\lambda_{n}s^{\alpha})u_{0,n}ds\] \[= TE^{2}_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n},\] where \(E^{2}_{\alpha,2}(z):=\sum_{k=0}^{\infty}\frac{(k+1)z^{k}}{\Gamma(\alpha k+2)}\) is the generalized Mittag-Leffler function (see [11]). Therefore, \[\alpha=\frac{TE_{\alpha,1}(-\lambda_{n}T^{\alpha})u_{0,n}-TE_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}}{TE^{2}_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}-TE_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}}=\frac{E_{\alpha,1}(-\lambda_{n}T^{\alpha})-E_{\alpha,2}(-\lambda_{n}T^{\alpha})}{E^{2}_{\alpha,2}(-\lambda_{n}T^{\alpha})-E_{\alpha,2}(-\lambda_{n}T^{\alpha})}:=\alpha_{n},\] for every \(T>0\) and any \(n\in\mathbb{N}\) (that is, any eigenvalue \(\lambda_{n}\) and \(u_{0,n}\)). 
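This closed form is straightforward to evaluate with truncated series. The sketch below reproduces some \(\alpha=0.4\) entries of Table 1 below; for \(\alpha=0.2\) or large \(\lambda_{n}T^{\alpha}\) the floating-point partial sums suffer catastrophic cancellation, so those entries require higher-precision arithmetic (the truncation order \(N=400\) is safe here only because \(0.4k+2\) stays below the overflow threshold of `math.gamma`, about \(171\)):

```python
from math import gamma

def ml(a, b, z, N=400):
    return sum(z**k / gamma(a * k + b) for k in range(N))

def ml2(a, b, z, N=400):
    # generalized Mittag-Leffler E^2_{a,b}(z) = sum_k (k+1) z^k / Gamma(a k + b)
    return sum((k + 1) * z**k / gamma(a * k + b) for k in range(N))

def alpha_n(a, T, lam):          # the closed form derived above
    z = -lam * T**a
    return (ml(a, 1, z) - ml(a, 2, z)) / (ml2(a, 2, z) - ml(a, 2, z))

for T, lam in ((0.1, 4), (0.1, 9), (1, 4)):
    print(T, lam, alpha_n(0.4, T, lam))   # each ~ 0.4, cf. Table 1
```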
This means that, to find \(\alpha\) in (7.39) we need to know: \(u_{n}(T)=E_{\alpha,1}(-\lambda_{n}T^{\alpha})u_{0,n},\) \((g_{1}*u_{n})(T)=TE_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}\) and \((S^{n}_{\alpha,1}*u_{n})(T)=TE^{2}_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}\) for any eigenvalue \(\lambda_{n}\) of \(A\) and any fixed time \(T>0.\) The next Table shows a comparison between \(\alpha_{n}\) and the orders \(\alpha=0.2\) and \(\alpha=0.4,\) for different choices of \(T>0\) and \(\lambda_{n}.\) Here, the Mittag-Leffler function has been approximated by its \(N\)-partial sums, with \(N=50.\)

Table 1. Order in Caputo fractional derivative for \(0<\alpha<1.\)

| \(\alpha\) | \(T\) | \(\lambda_{n}\) | \(\alpha_{n}\) | \(\alpha\) | \(T\) | \(\lambda_{n}\) | \(\alpha_{n}\) |
|---|---|---|---|---|---|---|---|
| 0.2 | 0.1 | 4 | 0.1999999998 | 0.4 | 0.1 | 4 | 0.3999999997 |
| 0.2 | 0.1 | 9 | 0.1999999998 | 0.4 | 0.1 | 9 | 0.3999999998 |
| 0.2 | 1 | 4 | 0.2000000002 | 0.4 | 1 | 4 | 0.3999999996 |
| 0.2 | 1 | 9 | 0.2000000000 | 0.4 | 1 | 9 | 0.4000000001 |
| 0.2 | 10 | 4 | 0.199999999 | 0.4 | 10 | 4 | 0.4000000001 |
| 0.2 | 10 | 9 | 0.2000000003 | 0.4 | 10 | 9 | 0.4000000002 |
| 0.2 | 100 | 4 | 0.1999999999 | 0.4 | 100 | 4 | 0.3999999997 |
| 0.2 | 100 | 9 | 0.2000000001 | 0.4 | 100 | 9 | 0.4000000000 |

Now, if we consider Problem (1.6) for \(1<\alpha<2,\) then the solution of the corresponding problem for each eigenvalue is given by \[u_{n}(t)=E_{\alpha,1}(-\lambda_{n}t^{\alpha})u_{0,n}+tE_{\alpha,2}(-\lambda_{n}t^{\alpha})u_{1,n}=S^{n}_{\alpha,1}(t)u_{0,n}+S^{n}_{\alpha,2}(t)u_{1,n},\quad n\in\mathbb{N}.\] The notation in Theorem 4.8 gives us \(\varphi^{n}_{t}(u_{0,n},u_{1,n})=S^{n}_{\alpha,1}(t)u_{0,n}+S^{n}_{\alpha,2}(t)u_{1,n}-u_{0,n}-tu_{1,n},\) and \(\psi^{n}_{t}(u_{0,n},u_{1,n})=u^{\prime\prime}_{n}(t)=S^{n}_{\alpha,1}(t)^{\prime\prime}u_{0,n}+S^{n}_{\alpha,2}(t)^{\prime\prime}u_{1,n}.\) By Lemma 2.3, \(S^{n}_{\alpha,2}(t)=(g_{1}*S^{n}_{\alpha,1})(t)\) and by (3.17) and (4.28) we have \[S^{n}_{\alpha,2}(t)^{\prime\prime}=S^{n}_{\alpha,1}(t)^{\prime}=-\lambda_{n}S^{n}_{\alpha,\alpha}(t)\quad\text{ and }\quad S^{n}_{\alpha,1}(t)^{\prime\prime}=-\lambda_{n}S^{n}_{\alpha,\alpha}(t)^{\prime}=-\lambda_{n}S^{n}_{\alpha,\alpha-1}(t).\] By Theorem 4.8 we have \[\alpha(\alpha-1)=\lim_{t\to 0^{+}}\frac{-t^{2}\lambda_{n}\left[S^{n}_{\alpha,\alpha-1}(t)u_{0,n}+S^{n}_{\alpha,\alpha}(t)u_{1,n}\right]}{S^{n}_{\alpha,1}(t)u_{0,n}+S^{n}_{\alpha,2}(t)u_{1,n}-u_{0,n}-tu_{1,n}}.\] Now, let \(T>0\) be a fixed time. 
By Theorem 6.15 we have \[\alpha=\frac{Tu_{n}(T)-(g_{1}*u_{n})(T)-(g_{2}*S^{n}_{\alpha,1})(T)u_{1,n}}{(S^{n}_{\alpha,1}*u_{n})(T)-(g_{1}*u_{n})(T)}.\] By Lemma 2.3 we have \((g_{1}*u_{n})(T)=(g_{1}*S^{n}_{\alpha,1})(T)u_{0,n}+(g_{1}*S^{n}_{\alpha,2})(T)u_{1,n}=S^{n}_{\alpha,2}(T)u_{0,n}+S^{n}_{\alpha,3}(T)u_{1,n}=TE_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}+T^{2}E_{\alpha,3}(-\lambda_{n}T^{\alpha})u_{1,n}.\) Moreover, \((g_{2}*S^{n}_{\alpha,1})(T)u_{1,n}=S^{n}_{\alpha,3}(T)u_{1,n}=T^{2}E_{\alpha,3}(-\lambda_{n}T^{\alpha})u_{1,n}\) and by [11, Theorem 11.2] we have \[(S^{n}_{\alpha,1}*u_{n})(T) = \int_{0}^{T}S^{n}_{\alpha,1}(T-s)u_{n}(s)ds\] \[= \int_{0}^{T}E_{\alpha,1}(-\lambda_{n}(T-s)^{\alpha})E_{\alpha,1}(-\lambda_{n}s^{\alpha})u_{0,n}ds+\int_{0}^{T}E_{\alpha,1}(-\lambda_{n}(T-s)^{\alpha})sE_{\alpha,2}(-\lambda_{n}s^{\alpha})u_{1,n}ds\] \[= TE^{2}_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}+T^{2}E^{2}_{\alpha,3}(-\lambda_{n}T^{\alpha})u_{1,n},\] where \(E^{2}_{\alpha,3}(z):=\sum_{k=0}^{\infty}\frac{(k+1)z^{k}}{\Gamma(\alpha k+3)}.\) Therefore \[\alpha = \frac{TE_{\alpha,1}(-\lambda_{n}T^{\alpha})u_{0,n}+T^{2}E_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{1,n}-TE_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}-2T^{2}E_{\alpha,3}(-\lambda_{n}T^{\alpha})u_{1,n}}{TE^{2}_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}+T^{2}E^{2}_{\alpha,3}(-\lambda_{n}T^{\alpha})u_{1,n}-TE_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}-T^{2}E_{\alpha,3}(-\lambda_{n}T^{\alpha})u_{1,n}}\] \[= \frac{E_{\alpha,1}(-\lambda_{n}T^{\alpha})u_{0,n}+TE_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{1,n}-E_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}-2TE_{\alpha,3}(-\lambda_{n}T^{\alpha})u_{1,n}}{E^{2}_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}+TE^{2}_{\alpha,3}(-\lambda_{n}T^{\alpha})u_{1,n}-E_{\alpha,2}(-\lambda_{n}T^{\alpha})u_{0,n}-TE_{\alpha,3}(-\lambda_{n}T^{\alpha})u_{1,n}}\] \[=: \alpha_{n}.\] In the next Table we compare \(\alpha_{n}\) and the order \(\alpha=1.4\) and \(\alpha=1.8\) for different choices of \(T>0\) and \(\lambda_{n}.\) For simplicity, we take \(u_{0,n}=1,u_{1,n}=2.\) Here, the Mittag-Leffler function has been approximated by its \(N\)-partial sums, with \(N=100.\)

Table 2. Order in Caputo fractional derivatives for \(1<\alpha<2.\)

| \(\alpha\) | \(T\) | \(\lambda_{n}\) | \(\alpha_{n}\) | \(\alpha\) | \(T\) | \(\lambda_{n}\) | \(\alpha_{n}\) |
|---|---|---|---|---|---|---|---|
| 1.4 | 0.5 | 1 | 1.4000000001 | 1.8 | 0.5 | 1 | 1.8000000002 |
| 1.4 | 0.5 | 4 | 1.399999999 | 1.8 | 0.5 | 4 | 1.7999999997 |
| 1.4 | 1 | 1 | 1.4000000001 | 1.8 | 1 | 1 | 1.8000000002 |
| 1.4 | 1 | 4 | 1.3999999999 | 1.8 | 1 | 4 | 1.8000000000 |
| 1.4 | 5 | 1 | 1.40000000008 | 1.8 | 5 | 1 | 1.8000000011 |
| 1.4 | 5 | 4 | 1.399959885 | 1.8 | 5 | 4 | 1.799986643 |

Now, we consider the fractional differential equations for the Riemann-Liouville fractional derivative (1.7) and (1.8). Let \(T>0\) be a fixed time. We first consider \(0<\alpha<1.\) The solution of the problem corresponding to (1.7) for each eigenvalue is given by \[u_{n}(t)=t^{\alpha-1}E_{\alpha,\alpha}(-\lambda_{n}t^{\alpha})u_{0,n}=S^{n}_{\alpha,\alpha}(t)u_{0,n},\quad n\in\mathbb{N}.\] 
By Theorem 5.14, we have \[\alpha=\frac{Tu_{n}(T)}{-\lambda_{n}(g_{1}*S^{n}_{\alpha,\alpha}*u_{n})(T)+(g_{1}*u_{n})(T)}.\] By Lemma 2.3, \((g_{1}*u_{n})(T)=(g_{1}*S^{n}_{\alpha,\alpha})(T)u_{0,n}=S^{n}_{\alpha,\alpha+1}(T)u_{0,n}=T^{\alpha}E_{\alpha,\alpha+1}(-\lambda_{n}T^{\alpha})u_{0,n}.\) By Lemma 2.3 and [11, Theorem 11.2] we get \[(g_{1}*S^{n}_{\alpha,\alpha}*u_{n})(T) = (S^{n}_{\alpha,\alpha+1}*u_{n})(T)\] \[= \int_{0}^{T}(T-s)^{\alpha}E_{\alpha,\alpha+1}(-\lambda_{n}(T-s)^{\alpha})s^{\alpha-1}E_{\alpha,\alpha}(-\lambda_{n}s^{\alpha})u_{0,n}ds\] \[= T^{2\alpha}E^{2}_{\alpha,2\alpha+1}(-\lambda_{n}T^{\alpha})u_{0,n},\] where \(E^{2}_{\alpha,2\alpha+1}(z):=\sum_{k=0}^{\infty}\frac{(k+1)z^{k}}{\Gamma(\alpha k+2\alpha+1)}.\) Therefore, for any \(u_{0,n},\) \[\alpha = \frac{T^{\alpha}E_{\alpha,\alpha}(-\lambda_{n}T^{\alpha})u_{0,n}}{-\lambda_{n}T^{2\alpha}E^{2}_{\alpha,2\alpha+1}(-\lambda_{n}T^{\alpha})u_{0,n}+T^{\alpha}E_{\alpha,\alpha+1}(-\lambda_{n}T^{\alpha})u_{0,n}}\] \[= \frac{E_{\alpha,\alpha}(-\lambda_{n}T^{\alpha})}{-\lambda_{n}T^{\alpha}E^{2}_{\alpha,2\alpha+1}(-\lambda_{n}T^{\alpha})+E_{\alpha,\alpha+1}(-\lambda_{n}T^{\alpha})}\] \[=: \alpha_{n}.\] The next Table compares \(\alpha_{n}\) and the order \(\alpha=0.4\) and \(\alpha=0.7\) for different choices of \(T>0\) and \(\lambda_{n}.\) Here, the Mittag-Leffler function has been approximated by its \(N\)-partial sums, with \(N=1000.\)

Table 3. Order in Riemann-Liouville fractional derivatives for \(0<\alpha<1.\)

| \(\alpha\) | \(T\) | \(\lambda_{n}\) | \(\alpha_{n}\) | \(\alpha\) | \(T\) | \(\lambda_{n}\) | \(\alpha_{n}\) |
|---|---|---|---|---|---|---|---|
| 0.4 | 0.1 | 1 | 0.3999999998 | 0.7 | 0.1 | 1 | 0.6999999993 |
| 0.4 | 0.1 | 4 | 0.4000000066 | 0.7 | 0.1 | 4 | 0.700000018 |
| 0.4 | 0.5 | 1 | 0.399999994 | 0.7 | 0.5 | 1 | 0.6999999986 |
| 0.4 | 0.5 | 4 | 0.3999999709 | 0.7 | 0.5 | 4 | 0.69999998401 |
| 0.4 | 1 | 1 | 0.3999999994 | 0.7 | 1 | 1 | 0.7000000000 |
| 0.4 | 1 | 4 | 0.3999998780 | 0.7 | 1 | 4 | 0.6999962379 |

Finally, we consider \(1<\alpha<2.\) The solution of the problem corresponding to (1.8) for each eigenvalue is \[u_{n}(t)=t^{\alpha-2}E_{\alpha,\alpha-1}(-\lambda_{n}t^{\alpha})u_{0,n}+t^{\alpha-1}E_{\alpha,\alpha}(-\lambda_{n}t^{\alpha})u_{1,n}=S^{n}_{\alpha,\alpha-1}(t)u_{0,n}+S^{n}_{\alpha,\alpha}(t)u_{1,n},\quad n\in\mathbb{N}.\] By Theorem 6.16, \[\alpha=\frac{Tu_{n}(T)+(g_{1}*S^{n}_{\alpha,\alpha-1})(T)u_{0,n}}{(g_{1}*u_{n})(T)-\lambda_{n}(g_{2}*S^{n}_{\alpha,\alpha-1}*u_{n})(T)}.\] Lemma 2.3 implies that \((g_{1}*S^{n}_{\alpha,\alpha-1})(T)u_{0,n}=S^{n}_{\alpha,\alpha}(T)u_{0,n}=T^{\alpha-1}E_{\alpha,\alpha}(-\lambda_{n}T^{\alpha})u_{0,n}.\) Moreover, \[(g_{1}*u_{n})(T) = (g_{1}*S^{n}_{\alpha,\alpha-1})(T)u_{0,n}+(g_{1}*S^{n}_{\alpha,\alpha})(T)u_{1,n}\] \[= S^{n}_{\alpha,\alpha}(T)u_{0,n}+S^{n}_{\alpha,\alpha+1}(T)u_{1,n}\] \[= T^{\alpha-1}E_{\alpha,\alpha}(-\lambda_{n}T^{\alpha})u_{0,n}+T^{\alpha}E_{\alpha,\alpha+1}(-\lambda_{n}T^{\alpha})u_{1,n}.\] 
Finally, by Lemma 2.3 and [11, Theorem 11.2] we have \[(g_{2}*S^{n}_{\alpha,\alpha-1}*u_{n})(T)=(S^{n}_{\alpha,\alpha+1}*u_{n})(T)=\int_{0}^{T}S_{\alpha,\alpha+1}(T-s)S_{\alpha,\alpha-1}(s)u_{0,n}ds+\int_{0}^{T}S_{\alpha,\alpha+1}(T-s)S_{\alpha,\alpha}(s)u_{1,n}ds=\int_{0}^{T}(T-s)^{\alpha}E_{\alpha,\alpha+1}(-\lambda_{n}(T-s)^{\alpha})s^{\alpha-2}E_{\alpha,\alpha-1}(-\lambda_{n}s^{\alpha})u_{0,n}ds+\int_{0}^{T}(T-s)^{\alpha}E_{\alpha,\alpha+1}(-\lambda_{n}(T-s)^{\alpha})s^{\alpha-1}E_{\alpha,\alpha}(-\lambda_{n}s^{\alpha})u_{1,n}ds=T^{2\alpha-1}E_{\alpha,2\alpha}^{2}(-\lambda_{n}T^{\alpha})u_{0,n}+T^{2\alpha}E_{\alpha,2\alpha+1}^{2}(-\lambda_{n}T^{\alpha})u_{1,n}.\] We obtain \[\alpha=\frac{T^{\alpha-1}E_{\alpha,\alpha-1}(-\lambda_{n}T^{\alpha})u_{0,n}+T^{\alpha}E_{\alpha,\alpha}(-\lambda_{n}T^{\alpha})u_{1,n}+T^{\alpha-1}E_{\alpha,\alpha}(-\lambda_{n}T^{\alpha})u_{0,n}}{T^{\alpha-1}E_{\alpha,\alpha}(-\lambda_{n}T^{\alpha})u_{0,n}+T^{\alpha}E_{\alpha,\alpha+1}(-\lambda_{n}T^{\alpha})u_{1,n}-\lambda_{n}T^{2\alpha-1}E_{\alpha,2\alpha}^{2}(-\lambda_{n}T^{\alpha})u_{0,n}-\lambda_{n}T^{2\alpha}E_{\alpha,2\alpha+1}^{2}(-\lambda_{n}T^{\alpha})u_{1,n}}=\frac{E_{\alpha,\alpha-1}(-\lambda_{n}T^{\alpha})u_{0,n}+TE_{\alpha,\alpha}(-\lambda_{n}T^{\alpha})u_{1,n}+E_{\alpha,\alpha}(-\lambda_{n}T^{\alpha})u_{0,n}}{E_{\alpha,\alpha}(-\lambda_{n}T^{\alpha})u_{0,n}+TE_{\alpha,\alpha+1}(-\lambda_{n}T^{\alpha})u_{1,n}-\lambda_{n}T^{\alpha}E_{\alpha,2\alpha}^{2}(-\lambda_{n}T^{\alpha})u_{0,n}-\lambda_{n}T^{\alpha+1}E_{\alpha,2\alpha+1}^{2}(-\lambda_{n}T^{\alpha})u_{1,n}}=:\alpha_{n}.\] To conclude the paper, in the next table we compare \(\alpha_{n}\) with the orders \(\alpha=1.3\) and \(\alpha=1.7\) for different choices of \(T>0\) and \(\lambda_{n}\). For simplicity, we take again \(u_{0,n}=1,u_{1,n}=2\). Moreover, the Mittag-Leffler function has been approximated by its \(N\)-partial sums, with \(N=1000\).
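For concreteness, the following minimal Python sketch (not part of the original text) reproduces the numerical recovery of the order for the Riemann-Liouville problem with \(0<\alpha<1\), using the truncated Mittag-Leffler sums described above. Terms are evaluated in log-space via `lgamma` to avoid overflowing the Gamma function at large summation indices.

```python
from math import lgamma, log, exp

def ml(alpha, beta, z, N=1000, second=False):
    """N-partial sum of E_{alpha,beta}(z); with second=True, of
    E^2_{alpha,beta}(z) = sum_k (k+1) z^k / Gamma(alpha*k + beta)."""
    total = 0.0
    for k in range(N + 1):
        log_mag = (k * log(abs(z)) if k > 0 else 0.0) - lgamma(alpha * k + beta)
        term = exp(log_mag)
        if z < 0 and k % 2 == 1:   # restore the sign of z^k for negative z
            term = -term
        if second:
            term *= k + 1
        total += term
    return total

def alpha_n(alpha, T, lam, N=1000):
    """Recovered order for the Riemann-Liouville problem, 0 < alpha < 1:
    alpha_n = E_{a,a}(z) / (-lam*T^a * E^2_{a,2a+1}(z) + E_{a,a+1}(z)), z = -lam*T^a."""
    z = -lam * T ** alpha
    return ml(alpha, alpha, z, N) / (
        -lam * T ** alpha * ml(alpha, 2 * alpha + 1, z, N, second=True)
        + ml(alpha, alpha + 1, z, N)
    )

print(alpha_n(0.4, 1.0, 1))  # ~0.4, cf. Table 3
print(alpha_n(0.7, 0.5, 4))  # ~0.7
```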
2305.13241
Whose baseline compiler is it anyway?
Compilers face an intrinsic tradeoff between compilation speed and code quality. The tradeoff is particularly stark in a dynamic setting where JIT compilation time contributes to application runtime. Many systems now employ multiple compilation tiers, where one tier offers fast compile speed while another has much slower compile speed but produces higher quality code. With proper heuristics on when to use each, the overall performance is better than using either compiler in isolation. At the introduction of WebAssembly into the Web platform in 2017, most engines employed optimizing compilers and pre-compiled entire modules before execution. Yet since that time, all Web engines have introduced new "baseline" compiler tiers for Wasm to improve startup time. Further, many new non-web engines have appeared, some of which also employ simple compilers. In this paper, we demystify single-pass compilers for Wasm, explaining their internal algorithms and tradeoffs, as well as providing a detailed empirical study of those employed in production. We show the design of a new single-pass compiler for a research Wasm engine that integrates with an in-place interpreter and host garbage collector using value tags, while also supporting flexible instrumentation. In experiments, we measure the effectiveness of optimizations targeting value tags and find, somewhat surprisingly, that the runtime overhead can be reduced to near zero. We also assess the relative compile speed and execution time of six baseline compilers and place these baseline compilers in a two-dimensional tradeoff space with other execution tiers for Wasm.
Ben L. Titzer
2023-05-22T17:13:11Z
http://arxiv.org/abs/2305.13241v2
# Whose baseline (compiler) is it anyway? ###### Abstract Compilers face an intrinsic tradeoff between compilation speed and code quality. The tradeoff is particularly stark in a dynamic setting where JIT compilation time contributes to application runtime. Many systems now employ multiple compilation _tiers_, where one tier offers fast compile speed while another has much slower compile speed but produces higher quality code. With proper heuristics on when to use each, the overall performance is better than using either compiler in isolation. At the introduction of WebAssembly into the Web platform in 2017, most engines employed optimizing compilers and pre-compiled entire modules before execution. Yet since that time, all Web engines have introduced new "baseline" compiler tiers for Wasm to improve startup time. Further, many new non-web engines have appeared, some of which also employ simple compilers. In this paper, we demystify single-pass compilers for Wasm, explaining their internal algorithms and tradeoffs, as well as providing a detailed empirical study of those employed in production. We show the design of a new single-pass compiler for a research Wasm engine that integrates with an in-place interpreter and host garbage collector using value tags. In experiments, we measure the effectiveness of optimizations targeting the cost of value tags, the relative compile speed and execution time of six baseline compilers, and place these baseline compilers in the tradeoff space with other execution tiers for Wasm. compilers, JITs, single-pass, baseline, compilation time, tradeoff, WebAssembly ## I Introduction Software virtual machines (VMs) provide a way to execute a _guest_ programming language, instruction-set architecture, or bytecode format on a different _host_ machine. VMs employ a variety of execution strategies that balance memory consumption, startup time, and peak performance. In settings where loading or generating code at runtime is possible, new code can "appear from nowhere", and purely ahead-of-time translation is not possible. This leaves such virtual machines with the option to employ an interpreter or a dynamic compiler. ### _WebAssembly_ First appearing in major Web Browsers in 2017, WebAssembly [1] (or Wasm for short) is a bytecode format designed to offer portable native-level performance and software fault isolation with efficient in-process sandboxing. Wasm is a machine-independent but machine-level compilation target that can be executed on modern CPUs (via translation) with very low overhead. It has allowed an explosion of new, powerful Web applications and capabilities, such as desktop applications like AutoCAD [2] and Photoshop [3], video conference acceleration [4], real-time audio processing for echo reduction [5], and many others. Many of these are made possible by recompiling (potentially millions of lines of) legacy C/C++ code using standard toolchains, such as LLVM, that now support WebAssembly as a target. With a fully-formalized specification [6] and machine-checked proof of type safety [7], Wasm offers the most rigorously-specified compilation target to date. It is the most robust option for strongly isolating untrusted code such as that on the Web or loaded in-kernel. It is perhaps the first example of a major language that has employed formal specification and verification from its design inception.
In the literature, Wasm has inspired a number of new exciting directions in Web research [8][9], verification research [10][11], systems research [12], cloud and edge computing [13][14][15], and PL research [16]. ### _Execution Strategies for Dynamic Code_ **Interpreters.** An interpreter executes a program by examining _data_ that represents guest code. Strict interpreters can execute any given input program without generating new machine code1. Interpreters have an advantage in that little or no up-front processing of the program is required and, for well-designed bytecode formats, can often execute the code directly from the disk or wire format, saving both startup time and memory. Interpreters also excel at debugging and introspecting execution states, as they often directly implement the state abstractions of their respective code format, such as an operand stack. However, interpretation overhead means interpreters can never match the performance of compiled code in the long run. Footnote 1: Some interpreters may need to generate machine code stubs or, e.g. per-signature helper routines, but don’t translate guest code directly to machine code. **Baseline JIT compilers.** Systems have deployed _dynamic translation_ to machine code as far back as LISP in 1960. Often called just-in-time (JIT) compilation, a dynamic compiler generates new machine code at runtime that behaves equivalently to the interpreter's semantics, but is much faster. A _baseline_ compiler is designed to generate machine code as fast as possible, forgoing the use of an intermediate representation (IR). The very first dynamic translators were baseline compilers, stamping out templates of the interpreter's logic for each guest instruction or AST node, one after another, thus neatly eliminating the interpreter dispatch loop. Despite the simplicity of baseline compilers, execution time improvements of 3\(\times\) to 10\(\times\) are common. **Optimizing JIT compilers.** JITs in today's virtual machines are powerful, integrating many ideas from static compilers, employing state-of-the-art IRs and sophisticated optimization passes. For example, TurboFan [18], the optimizing compiler in V8, employs a program dependence graph (PDG) representation called the "sea of nodes" [19], with two different but overlapping optimization pipelines, one for JavaScript, and one for WebAssembly. Key optimizations employed by most modern optimizing JITs are inlining, load elimination, strength reduction, branch folding, loop peeling and unrolling, global code motion, instruction selection, and register allocation. ### _Overview and Contributions_ This paper is about maximizing compile speed for WebAssembly. It presents a new single-pass compiler design for a research engine and compares and contrasts it with other single-pass compilers and other tiers for Wasm execution. This paper's contributions are: * **A new baseline compiler**, **Wizard-SPC**, designed for interoperability with in-place interpretation of Wasm in the Wizard Research Engine, supporting full-fidelity instrumentation and debugging. * **Distillation** of the key designs for five other Wasm baseline compilers that all share the same foundational abstract interpretation approach, yet are discussed nowhere in the literature. * **Performance comparison** among interpreters, baseline compilers, and optimizing compilers for Wasm. * **Empirical evaluation** of baseline compiler optimizations in **Wizard**, including optimizations for value tags. 
## II Executing Wasm Wasm bytecode is organized into modules, with top-level functions that are filled with instructions that manipulate a stack machine. Wasm bytecode is unusual in that it has structured control-flow constructs like **block**, **if**, and **loop**. Such constructs improve the compactness of the code format and the efficiency of the code validation algorithm. A key property is that branches that target a **block** or **loop** must be nested inside the construct. This leads to a natural notion of a "control stack" that allows the validator algorithm to immediately reuse any internal metadata for control constructs as soon as the construct is exited2. Another intended design property is that all control-flow predecessors of a label (except **loop**) precede the label, enabling highly efficient single-pass forward data flow analysis via abstract interpretation. Footnote 2: It is believed, but has not yet been shown, that this representation is optimally efficient. Wasm now exhibits execution tiers of all three major designs. Optimizing compilers for Wasm appeared first in Web engines, made possible by the engineering effort put into making JavaScript fast. Later, Web engines added baseline compiler tiers, as startup time became an issue for large modules. Concurrently, non-Web compilers and interpreters started appearing. Initially, interpreters translated Wasm code to another representation internally, but recent work introduced an in-place interpreter in the Wizard Research Engine [20]. Today, many engines have a baseline compiler tier. Yet descriptions of these compilers, which are hyper-tuned for compile speed, do not yet appear in the literature. This paper documents these designs and examines their performance characteristics. We also report on **Wizard-SPC**, a new, state-of-the-art single-pass compiler for **Wizard**, a research Wasm engine. A specific design problem was integrating with an existing in-place interpreter to support full-fidelity debugging and instrumentation. For evaluation, we compare six baseline compilers found in industry across a wide variety of benchmarks. ## III Single-pass Compilation of Wasm Single-pass compilers for Wasm are designed for compile speed and simplicity. A single pass affords no time to build an intermediate representation of the code. Instead, such compilers are limited to generating code for one (or a small number) of instructions at a time based on limited context accumulated from prior instructions. #### III-1 The Abstract-Interpretation Approach In our study of Wasm compilers, we found that all single-pass compilers are simply variations on a basic abstract-interpretation approach that is similar to Wasm code validation3. Thus, by understanding this common approach, we can compare and contrast the variations and more easily understand the innovation represented by **Wizard**'s new variation. Footnote 3: In fact, some baseline compilers in this study, like Liftoff, reuse parts of their validation algorithms to drive compilation. Figure 1 gives an example compilation of Wasm code using the common abstract-interpretation approach. The abstract state consists of an _abstract value stack_ (shown), an _abstract control stack_ (not shown), and register allocation state (not shown).
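Since the abstract control stack is not shown in the figure, the following minimal Python sketch (illustrative only; not code from any engine studied here) makes the control-stack discipline described in Section II concrete: constructs are pushed on entry and popped at **end**, a branch resolves its target by relative depth, and forward branches are recorded for patching when the construct closes and its label is bound.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    kind: str                                         # "block", "loop", or "if"
    branch_sites: list = field(default_factory=list)  # forward branches to patch

def scan(body):
    """One forward pass over (opcode, immediate) pairs."""
    stack = [Control("block")]                 # implicit function-level block
    for pc, (op, imm) in enumerate(body):
        if op in ("block", "loop", "if"):
            stack.append(Control(op))          # a loop's label is bound here, at entry
        elif op in ("br", "br_if"):
            target = stack[-1 - imm]           # relative depth; nesting makes this valid
            if target.kind != "loop":
                target.branch_sites.append(pc) # patched once the label is bound
        elif op == "end":
            ctrl = stack.pop()                 # label bound here for block/if; patch
            del ctrl                           # ... ctrl's metadata is reusable immediately

scan([("block", None), ("i32.const", 0), ("br_if", 0), ("end", None), ("end", None)])
```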
Each local variable and operand stack slot in the abstract value stack has an _abstract value_ that can contain information such as: * whether the value has been stored (spilled) into the execution frame, and where, * the register, if any, which holds the value, * the concrete value, if a constant. Local variables representing parameters are initialized from the signature of the function and the calling convention. In the example, argument values arrive from the caller in memory on the execution stack. Declared local variables (not shown) are by Wasm semantics initialized to the default value for their respective type (i.e. a constant). Not shown, the algorithm maintains an abstract control stack that tracks the nesting of control constructs such as **block** and **loop**. Each construct has a _label_ which represents the place in the machine code where branches targeting it will jump. After emitting a few machine instructions for the prologue, compilation proceeds by examining each instruction in sequence. Instructions that access locals (local.get, local.set and local.tee) manipulate the abstract state. Depending on whether the local or top-of-stack is allocated to a register, the compiler may emit a load or store instruction, but often emits no code at all. We'll see in the next section that variations in the abstract state of the compilers we study impact the number of moves generated. For constant-generating instructions like i32.const, abstract values can model concrete values and avoid generating any code at all. Of the six compilers we study, all but one model constants. Control flow requires the compiler to manage _snapshots_ of the abstract state that represent the contents of registers and the stack at labels, i.e. merges in control flow. All constructs except loop have their label at the end, which means that all branches to the label will be seen before the label itself. For loop, absent any knowledge of the code in the loop body, compilers must over-approximate the abstract state before compiling the loop body, e.g. by assuming all operand slots could be modified on backedges. A key design consideration in making a fast compiler is efficiently snapshotting the abstract state and merging states coming from multiple branches, since the abstract state can be thousands or tens of thousands of locals for large functions. Though beyond the level of detail appropriate for such a short paper, different compilers we studied have different strategies, either making copying extremely cheap (i.e. memcpy), keeping a delta index, or by spilling many values so they don't need to be tracked. A nice benefit of Wasm's structured control flow is that the snapshot for a merge point can be deallocated as soon as a control construct is exited. Both of these considerations help avoid JIT bombs, which are small programs that exploit a non-linearity in the algorithmic complexity of a compiler as a form of denial-of-service attack [21]. The deceptively simple compilation approach is quite tricky to implement correctly and efficiently, but nevertheless yields surprisingly good code, as can be seen in the example. This is enhanced by the design of Wasm control flow; since labels (other than loop) will have had all their predecessors visited before reaching the label, abstract interpretation can even propagate constants through control flow accurately. Fig. 1: Illustration of single-pass compilation using abstract interpretation. (Actual code emitted by the Wizard single-pass compiler.)
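To ground the preceding description, here is a minimal sketch of the codegen loop for a three-opcode subset. It is an illustration under assumed interfaces (`emit` and `alloc_reg` are hypothetical callbacks), not the actual code of any compiler studied; spilling, control flow, and the remaining opcodes are elided.

```python
class AbsVal:
    """Abstract value for one local or operand stack slot."""
    def __init__(self, const=None, reg=None, spilled=False):
        self.const = const        # concrete value, if a compile-time constant
        self.reg = reg            # register currently holding the value, if any
        self.spilled = spilled    # whether the value is stored in the frame

def compile_body(body, num_params, emit, alloc_reg):
    locals_ = [AbsVal(spilled=True) for _ in range(num_params)]  # args arrive in memory
    stack = []
    for op, imm in body:
        if op == "i32.const":
            stack.append(AbsVal(const=imm))            # no code emitted
        elif op == "local.get":
            stack.append(locals_[imm])                 # usually no code either
        elif op == "i32.add":
            b, a = stack.pop(), stack.pop()
            if a.const is not None and b.const is not None:
                stack.append(AbsVal(const=(a.const + b.const) & 0xFFFFFFFF))  # folded
            else:
                if a.const is not None:
                    a, b = b, a                        # commute constant to the immediate
                r = alloc_reg()
                if b.const is not None:
                    emit("add_imm", r, a, b.const)     # immediate addressing mode
                else:
                    emit("add_reg", r, a, b)
                stack.append(AbsVal(reg=r))
    return stack
```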
All-in-all, a single-pass compiler can perform: * if abstract values track register occupancy for each slot4, codegen can avoid emitting any code for local accesses, often just updating the abstract state, Footnote 4: Note also that this requires the register allocator to allow the same register to be used for more than one slot at a time, which complicates the design of a fast data structure for checkpointing, needed for control-flow splits and merges. * if abstract values track constants, codegen can compile-time-evaluate side-effect-free instructions, producing more constants, * if abstract values track constants, then branches whose input condition is a constant can be removed or compiled to unconditional jumps, * if abstract values track constants, then some simple patterns such as **(i32.add x (i32.const 0))** can be reduced or eliminated by not generating any code, * if abstract values track register occupancy, codegen can select memory or register addressing modes, and if abstract values track constants, it can emit machine instructions such as "add with immediate", * if the abstract values track whether a value is already spilled to the stack, codegen can avoid repeated spills to the stack in subsequent instructions, and * if codegen can peek one or more instructions ahead, it can generate a single instruction that combines, e.g. a compare and a branch. From our study of baseline Wasm compilers, code generation for each individual instruction (in Wasm today there are over 440, including SIMD) is tedious and time-consuming but not intrinsically difficult. Instead, the crux of good single-pass compilation is two subtle things that require careful data structure design. First, **managing the abstract state**, whose size is proportional to the locals and operand stack, must be done carefully and efficiently at all control flow points (branches, loops, and merges) to avoid (or at least mitigate) quadratic compilation time. And second, abstract values should model constants _and_ registers, allowing **efficient forward-pass register allocation** so that most Wasm instructions use machine registers and avoid spills. The two are intertwined; the abstract state of all compilers contains register assignments, and must be checkpointed at control-flow split points and merged at control-flow join points. ## IV Baseline Compiler Integration A JIT compiler in any virtual machine must integrate with other services such as debugging and garbage collection. Thus, in a mature system, a JIT compiler becomes invisible, and users experience better performance with no loss of functionality. This becomes progressively more complicated with more execution tiers, as efficient handoff between different types of code can involve very intricate systems work at the machine code level. In this section we cover aspects of integrating **Wizard-SPC** that motivated and are in turn constrained by design decisions. ### _Debugging and Instrumentation_ A key design goal of the Wizard Research Engine is debuggability and introspection. This allows language implementers and researchers to trace, profile, debug, and experiment with WebAssembly code in ways more flexible than production engines whose main focus is performance. These goals motivated the design of **Wizard**'s fast interpreter [20], which interprets WebAssembly code in-place, without rewriting. Relevant design points are: * The value stack is explicitly emulated at runtime, including locals and operand stack values.
* Stack walking uses _value tags_ to precisely find GC roots (**externref** and WebAssembly GC objects). * Interpreter performance is on par with production interpreters for Wasm. * Users can insert _probes_ into bytecode locations which call back to instrumentation and implement tracing, debugging, and profiling. ### _Value Stack and Execution Frame Layouts_ As we've seen, all single-pass compilers for Wasm use abstract interpretation to statically compute the operand stack height and approximate stack contents at every instruction in a function. Some of the baseline compilers we studied _reallocate_ the storage of operand stack slots and locals to machine stack slots and registers, i.e. they scramble the stackframe layout. Scrambling the stack creates a mapping problem for debugging and instrumentation: where are operand stack slots on the machine stack, and vice-versa? This imposes a space cost, in metadata, and is remarkably complex, tricky and error-prone. In fact, of the five previous compilers we studied, only two support introspection in their baseline compilers; the others just _do not support debugging at all_. **Wizard**'s baseline compiler is meant to integrate with a fast interpreter that has an exact model of the operand stack. It does not scramble the stack, and moreover, uses a nearly identical execution frame layout between the interpreter and JITed code. In Figure 2, we see the layout of execution frames in **Wizard** for the interpreter and JIT code. Both use the exact same value stack representation for storing Wasm values, and only differ in what metadata values their native execution frames contain. In particular, interpreter frames contain bytecode-level pointers (**IP**), a sidetable pointer (**STP**), and additional metadata. Moreover, when executing in the interpreter, more registers are needed to store these additional pointers, whereas in JIT code, only the value frame pointer (**VFP**), instance (**inst**), and memory base (not shown) are needed. That leaves more registers to be allocated to compiled code. While values are in registers, the value stack in memory may not be up to date. At observable points like outcalls, JITed code simply spills values into the value stack in memory. The current Wasm bytecode PC, also useful for debugging, can be recomputed from the machine code instruction pointer or explicitly saved into the execution stack. The compatibility between the two frame layouts allows **Wizard** to _tier-up_ (e.g. when a function is detected as _hot_) from the interpreter to baseline-compiled code by changing only the execution frame and jumping into JITed machine code. Conversely, **Wizard** can _tier-down_ (for debugging or to support user instrumentation) by simply reconstructing the missing information such as **IP** and **STP** and jumping back into the interpreter's machine code. ### _Value Tags versus Stackmaps for GC_ Virtual machines that employ precise garbage collectors must find all roots, including those on the stack. There are two basic strategies that allow the VM to distinguish references from non-references: _stackmaps_ or _value tags_. The primary difference between the two is that stackmaps are basically static and value tags are basically dynamic. **Stackmaps.** For JIT-compiled code, compilers often emit metadata called _stackmaps_ attached to the code which encodes how to find references in stack frames of JITed code. Such metadata usually adds space proportional to the size of JITed code, so it is often very compactly stored.
It is also notoriously hard to get right, as bugs in stack walking logic or errors (especially off-by-ones) in compressed metadata result in VM-level crashes that are insanely tedious to debug5. Despite the added complexity (and potential robustness problems) of stackmaps, they have less dynamic cost, being primarily static and only used during GC. Footnote 5: Debugging machine code with tweezers; almost beyond humans at this point, perhaps if only in patience. **Value Tags.** Value tags are an entirely dynamic strategy where values themselves contain the metadata that distinguishes references from non-references. This metadata can be encoded in various ways, such as a tag bit, an indirection, a value range restriction, or often additional bytes or words, such as a tag byte or dynamic type information. The possibilities for encodings vary with the number of kinds of values that are used to implement the guest language. Value tags allow the virtual machine to easily inspect a value anywhere in memory, such as GC scanning stacks for references, making it vastly simpler and more robust. Another important advantage is that with value tags, a JIT compiler may avoid generating stackmaps for JITed code at all, which saves significant metadata. A disadvantage is that value tags exact a dynamic cost, since tags require additional space and may introduce dynamic checks. Of the Wasm engines in the wild, including the ones containing the six baseline compilers, none use value tags except **Wizard**. These systems either do no precise garbage collection at all, removing the need for stackmaps, or they reuse the battle-tested stackmap logic of their host system, as is the case in all Web engines. Since **Wizard** makes unusual choices here, we evaluate some of the tradeoffs specific to that design in the experimental section. Fig. 2: Execution frame and value stack layout for the fast interpreter and **Wizard-SPC**. Both kinds of execution frames are the same number of machine words, allowing quick tier-up (OSR) and tier-down (deopt) by rewriting execution frames in place. The interpreter directly manipulates the value stack in memory, while JIT code only spills to the value stack when registers are exhausted and across calls. **Optimizing Value Tags.** The dynamic cost of value tags can be reduced with compiler optimization. While an optimizing compiler can use a sophisticated global register allocator to only store tags on spills, a baseline compiler cannot afford an IR. Instead, we outline three optimizations for reducing the dynamic cost of value tags that are usable in a single-pass compiler. * _lazy tagging_ of locals. Since Wasm is a typed bytecode, local variables have static types that do not change during the execution of a function. Thus the types of locals are determinable from their declaration in the first bytes of a function body. Instead of writing these value tags at runtime, the stackwalker computes them on-the-fly by decoding the locals of the function's original bytecode, thus needing no additional metadata. * _lazy tagging_ of operand stack. While the types of local variables of a Wasm function don't change during execution, the types of operand stack slots certainly can. With this optimization, the compiler omits tag stores for operand stack slots. Like lazy tagging for locals, types are reconstructed at stackwalking time, but this is more complicated than for locals, because the types could be different at each bytecode.
That means storing additional metadata (which is basically a stackmap), or reconstructing them from the bytecode by effectively reverifying the code. * _on-demand tagging_ using abstract interpretation. The default for **Wizard-SPC**, value tag stores are only emitted by the compiler across possible observations (calls, traps, and instrumentation) and the abstract state tracks whether each slot has had its tag stored. Parameters are assumed to have their tags stored by the caller. We evaluate these alternatives in the experiments section by comparing with the worst-case overhead (an implementation that always stores value tags at each instruction, exactly as an interpreter would do) and the best-case alternative of simply disabling value tags. As we will see, results show that on-demand tagging via abstract interpretation is close enough to ideal performance that the additional complexity of the other optimizations may not be justifiable. ## V Baseline Compiler Comparison We studied the implementation of six single-pass compilers for WebAssembly that employ the basic abstract-interpretation algorithm. The table in Figure 3 compares their designs in terms of features. In particular, we find that both Web engine compilers (**v8-liftoff** and **sm-base**) implement GC with stackmaps, using the same metadata format as their optimizing compilers. As discussed, **Wizard-SPC** uses value tags, and the three remaining compilers _do no GC_, because their host environment is not garbage-collected. Of the six, only **Wizard-SPC** performs constant-folding and branch-folding, though our experiments show that it has marginal benefit for the benchmarks studied. A key feature is _multiple register allocation_, where the abstract state allows a register to be used for more than one slot. This is more complex to track and merge efficiently, but experimental results show it significantly improves code quality. All compilers except **wazero** track constants. Experiments also show that tracking constants is key to good code quality, as it allows some local instruction selection. ## VI Experiments This section details a number of experiments we conducted to evaluate **Wizard-SPC**'s optimizations and design choices, compare it against other baseline compilers, and place baseline compilers in context with other tiers. **Benchmark Suites.** We use three different benchmark suites: PolyBenchC [27], an often-used suite of numerical kernels, Libsodium [28], a suite of cryptographic primitive benchmarks, and Ostrich [29]. Each of these suites consists of a number of _line-items_ comprised of different programs, each compiled into a separate Wasm module (28 for PolyBenchC, 39 for Libsodium, and 11 for Ostrich). ### _Speedup over Interpreter_ Our first experiment evaluates the improvement in execution time over **Wizard**'s existing configuration with its in-place interpreter. Here we focus on code quality by measuring the _main execution time_, the time from the start of the program's main function until program exit. This intentionally factors out the cost of VM startup and compilation time, pitting the interpreter speed against the speed of compiled code directly. We study startup and compilation time in the following experiments. We evaluate five different optimization settings of **Wizard-SPC** to assess the impact of each optimization. * (default) all optimizations turned on. * (**nok**) abstract values do not track constants, thus no constant-folding or instruction selection. * (**nokfold**) no constant-folding or branch-folding.
* (**noisel**) no instruction selection of immediate addressing modes. * (**nomr**) no "multi-register" support; a register can cache at most one slot at a time. Figure 4 summarizes speedups across the three benchmark suites. For each configuration, we run each benchmark line item 25 times, each time in a separate VM instance (9750 data points). The height of each bar corresponds to the average speedup across line items in that suite. The error bars correspond to the minimum and maximum average speedup for any line item in that suite. While there is significant variance amongst line items, measurements for a single line item are stable within a small variance. From these results we can see that the compiled code runs between 5\(\times\) and 28\(\times\) faster than the interpreter for all line items, while suite averages are 10\(\times\) to 15\(\times\). From the **nok** configuration, we can see that disabling constant tracking in abstract interpretation has the most dramatic effect on code quality. Disabling multiple register allocation (**nomr**) also has a significant effect, in some cases larger than disabling constant tracking. Finally, disabling constant-folding (**nokfold**) and instruction selection (**noisel**) have small but measurable effects. ### _Optimizations for Value Tags_ Our second experiment in Figure 5 compares design alternatives for **Wizard-SPC**'s support for value tags. Using the same measurement methodology as the previous experiment, we measure relative main execution time of various tagging configurations. Here, the baseline in the figure is no longer the interpreter but **nottags**, where we disabled value tags altogether, including removing their space from the value stack. The configurations tested here are: * "eagerly" store modified tags at every instruction. * "eagerly" store tags for operand slots only. * "eagerly" store tags for locals only. * (default, **on-demand**) store tags on-demand by tracking their state in abstract interpretation. * (**lazytags**) store tags on-demand, but leave tagging of locals to the stack walker. The height of each bar in Figure 5 is the average relative main execution time over the line-items in each benchmark suite, while the error bars represent the minimum and maximum of any line item within that suite. We see that the eager-tagging imposes a 2.4\(\times\)-3.3\(\times\) overhead on execution time. By measuring eager-tagging of locals separately from the operand stack, we can attribute that overhead mostly to tagging of the operand stack6. We also see that the default **on-demand** tagging strategy almost completely eliminates the cost of value tags, within 0.9-4.9% of the ideal **nottags** configuration. We can also see that **lazytags** can further reduce the tagging overhead of **on-demand**, statistically measurable, but the improvement is marginal, to 0.4-4.2% on average. Given that **lazytags** would imply design complexity to perform tagging in the stack walker, it was not productionized. Footnote 6: Which is to be expected, as the operand stack is where the action is! ### _Baseline shootout_ Our next experiment compares the compile speed and code quality of baseline compilers listed in Figure 3. To gather compile times, we instrumented each engine to measure and report the time taken to compile each module, as well as the number of input Wasm code bytes. We compute the compile time as the time taken _per byte_ of input code, which naturally normalizes across different function and benchmark sizes and also controls for lazy compilation7.
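As a concrete restatement of this metric, here is a minimal sketch of per-byte compile time and the per-line-item normalization described next; the `results` layout and the baseline key are assumptions for illustration, not the actual harness.

```python
# results: {engine: {line_item: (compile_ns, module_bytes)}}  -- assumed layout
def per_byte(compile_ns, module_bytes):
    return compile_ns / module_bytes          # normalizes across module sizes

def relative_to(results, baseline="wizard-spc"):
    base = {item: per_byte(*v) for item, v in results[baseline].items()}
    return {engine: {item: per_byte(*v) / base[item] for item, v in items.items()}
            for engine, items in results.items()}
```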
In both experiments, we normalize the results relative to **Wizard-SPC** for each line item, which allows comparing short-running and long-running benchmark items unweighted. Fig. 4: Execution time speedup of **Wizard-SPC** over **Wizard**'s interpreter (1\(\times\) = same speed, 10 = 10\(\times\) faster, _up_ is better). Fig. 5: Execution time of **Wizard-SPC** tagging configurations relative to a no-tagging configuration (1.0 = same speed as **nottags**, _up_ is better). Fig. 3: WebAssembly baseline compilers used in this study. MR = multiple register allocation, R = register allocation, K = constant tracking, KF = constant-folding, ISEL = instruction selection, TAG = value tags, MAP = stackmaps, MV = multi-value. Fig. 6: Relative execution time over **Wizard-SPC** for other engines in their baseline compiler configurations. (1.0 = same speed, 2.0 = 2\(\times\) as long; _up_ is better). Figure 7 displays the results of measuring compile time. The height of each bar represents the compile time per byte of input code normalized to **Wizard-SPC**, averaged over the line items in each suite. The error bars represent the minimum and maximum of line items within each suite. The results show a clear standout: **sm-base** is the fastest compiler; nearly 3\(\times\) faster than the others, and **wazero** is 3\(\times\) to 4\(\times\) slower than the others. We were only able to run **wasmow** on the Ostrich benchmarks, where it appears to be faster than **sm-base**. **Wizard-SPC** is roughly on par with **v8-liftoff** in compile speed, varying between 0.6\(\times\) and 1.5\(\times\) the speed over different line items. To measure code quality of compilers, we compare the execution time of benchmarks. For this experiment, we use a more comprehensive measurement methodology that factors in VM startup and compilation. If necessary, we configure each engine to use _only_ a specific tier, and to disable on-disk caching of compiled code. Figure 6 displays the results of our measurements. The work done by (and thus execution time of) individual line items varies considerably, so differences in speedup among line items are more stark. With this data we can approximate each compiler's _SQ-region_ (speed-quality region), the general area in the tradeoff space for the runtime of the compiler versus the runtime of the generated code, which is characteristic of the specific compiler. Figure 8 displays the SQ-space for baseline compilers using the same data as Figures 6 and 7. It uses a scatter-plot with all benchmark line items to illustrate the variance in both compilation time and execution time across items. Since many short-running benchmark line items are included, clusters towards the bottom of the graph (lower speedups) indicate where time spent compiling pays off less and VM startup time is more significant. Our last experiment puts baseline compilers in context with other execution tiers. We compare baseline compilers to other tiers (interpreters, optimizing JIT compilers, and ahead-of-time translations) in two dimensions: _setup time (S)_ and execution speed, or _quickness_. This makes a larger _SQ-space_ that is similar in nature but more general than the compiler _SQ-space_ because it includes other setup costs than compiling. We define _setup time_ as the time a VM takes from starting the load of a guest program to executing the first instruction of that program.
This therefore characterizes the per-module processing time that happens before execution, such as loading and verifying code, building program IR, and compiling. Since most of these costs are a function of module size, it's reasonable to define their ratio as the _setup speed_ and measure it in megabytes per second (_MB/s_). In this experiment, we measure an even larger set of Wasm execution tiers that includes several interpreters and optimizing compilers, drawn from a larger set of engines. All new compiler tiers are IR-builders, and all interpreter tiers rewrite the bytecode, with the exception of **Wizard**'s in-place interpreter. Most, but surprisingly not _all8_, verify the bytecode. Thus every engine has some measurable per-module parsing, verification, translation, or compilation cost. Measuring setup time can be done by instrumenting engines, but requires intrusive modifications. Instead, we use a simpler, less precise strategy to empirically bound setup time without missing hidden costs. Footnote 8: wasm3 does not, in fact, verify the bytecode! We define \(T_{E}(m)\) as the time to execute a module \(m\) on engine configuration \(E\). First, we measure VM startup time by executing the smallest possible Wasm module \(M_{\mathrm{nop}}\), which has only one function that simply returns (total module size is 104 bytes). Since this is fast, we can run it hundreds of times to get a statistically significant characterization of startup time. Next, we approximate the processing cost of each benchmark line-item by simply modifying the code of its **_start** function to immediately return. This is done by automatically editing each module \(m\) by inserting an early return (if (opaque_predicate) return;) in its main entrypoint, resulting in module \(m_{0}\). The new module will undergo loading and processing (often compilation) in each engine, but execution time is near zero. Fig. 8: SQ-space comparison for baseline compilers. Quality is measured by the relative improvement in execution time over **Wizard**'s interpreter. (1.0 = same speed, 2.0 = 2\(\times\) as fast; _up_ and _right_ are better). Fig. 7: Relative compilation time over **Wizard-SPC** for other engines in their baseline compiler configurations. (1.0 = same speed, 2.0 = 2\(\times\) as long; _up_ is better). With measurements \(T_{E}(M_{\mathrm{nop}})\), \(T_{E}(m_{0})\), and \(T_{E}(m)\): * \(T_{E}(m_{0})-T_{E}(M_{\mathrm{nop}})\) approximates9 the _upper bound_ of pre-processing time by removing VM startup time, Footnote 9: In fact, all of these quantities are subject to sampling error and thus form individual distributions. The resulting “crude” approximation is just another distribution that approximates processing time. * \(\tilde{T}_{E}(m)=T_{E}(m)-T_{E}(m_{0})\) defines the _adjusted execution time_ which is the program's execution time without VM startup or module setup time, and * \(\tilde{S}_{E,B}(m)=\frac{\tilde{T}_{B}(m)}{\tilde{T}_{E}(m)}\) defines the _adjusted speedup_ of configuration \(E\) over a baseline config \(B\). ### _Mapping the Larger SQ-space_ Figure 9 presents averages of 25 runs of each of the 78 benchmark line items on 18 different engines (3 data points each = 106550 data points). The vertical axis is \(\tilde{S}_{E,\textbf{wizard\_int}}(m)\) (i.e. adjusted speedup over **Wizard**'s interpreter) and the horizontal axis represents _setup speed_ (the speed of loading, verifying, and translating).
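Restated as code, the measurement arithmetic above looks roughly as follows (a minimal sketch; the paper's actual harness is not shown):

```python
def setup_bound(t_m0, t_nop):
    """Upper bound on per-module processing time: T_E(m_0) - T_E(M_nop)."""
    return t_m0 - t_nop

def adjusted_time(t_m, t_m0):
    """Adjusted execution time: T_E(m) - T_E(m_0)."""
    return t_m - t_m0

def adjusted_speedup(times_E, times_B):
    """Adjusted speedup of engine E over baseline B on one module m,
    where times_* = (T(m), T(m_0)) are measured wall times for that engine."""
    return adjusted_time(*times_B) / adjusted_time(*times_E)
```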
New tiers are: * **jsc-int**, **jsc-bbq**, **jsc-omg**, the interpreter, less optimizing, and more optimizing compiler tiers of JavaScriptCore [30]. * **wasmtime**[31] and **wasmer**, two different Wasm runtimes written in Rust, which both use the Cranelift [32] optimizing compiler. * **wavm**[33], a primarily ahead-of-time Wasm engine that uses LLVM. * **iwasm-int** and **iwasm-fjit**, the interpreter and fast JIT of the WebAssembly MicroRuntime [34]. * **wasm3**[35], a fast rewriting interpreter for embedded systems. In the top plot of Figure 9, we see all tiers compared. The primarily ahead-of-time **wavm** engine uses LLVM, a slow compiler, to compile up-front; it is clearly the slowest at setting up due to a large compile time. Apparent in the zoomed-in middle plot, baseline compilers (blue colors) all cluster together in the middle; they all have very similar speedups, and though they vary by an order of magnitude in setup speed, are clearly distinguishable from optimizing compilers (red and purple colors), which produce bigger speedups, about 2\(\times\)-3\(\times\) faster than baseline compilers, though at an order-of-magnitude slower compile speed. When we zoom in on interpreters in the bottom plot, it is clear they have a performance ceiling; they are all fairly close to each other, within 2\(\times\) of **Wizard**'s in-place interpreter. Interpreter setup time varies the most; we attribute this to the fact that 1) some don't verify bytecode, and 2) all the **jsc-\(\star\)** (JavaScriptCore) tiers use lazy translation. In general, laziness (i.e. translating a function upon first invocation) is a confounding factor in these measurements, as lazy compile time is not measured in setup time, but attributed to run time, and therefore the adjusted speedup is lower. As can be seen in the figure, this might be a factor for the **jsc-\(\star\)** compiler tiers, whose speedups appear lower than other optimizing compilers and setup speeds appear faster. Another confounding factor is parallelism in compilation. Some engines have fully parallel compilation pipelines and others do not10. We chose to leave default threading settings for all engines. Benchmark modules used in this study are fairly small, so parallel speedup may not be as big of a factor. A third confounding factor is caching of compiled code. After noticing anomalies in initial experiments11 with **wasmtime** and **wasmer**, we disabled caching in both of these. Fig. 9: The SQ-space for 18 different Wasm execution strategies. Overall, we can see a great diversity of execution characteristics for Wasm engines, as each tier tends to occupy its own region in this space. Precision of the plot could probably be improved with metrics reported directly from instrumenting engines. Nevertheless, we believe the SQ-space analysis provides insight into tradeoffs in a new way and can further inform the design of tomorrow's virtual machines. ## VII Related Work The first disk format for intermediate code was invented as early as 1968, in the first BCPL compiler's O-Code [36]. Prioritizing compiler simplicity and speed above code quality is an old idea that has roots at least as far back as the design of the first Pascal compiler [37] in 1970. Pascal compilers gave rise to the first widely-used intermediate code format, P-code [38], in the mid 1970s, which was still in use as late as 1990 [39].
P-code was certainly not the last portable low-level code, with others such as TIMI [40], LLVM bitcode [41], PNaCl [42] (itself a variant of LLVM bitcode). Fast P-code translators might be considered the first baseline compilers. **Dynamic Compilation.** Over the years, many virtual machines and bytecode formats have been developed, from Smalltalk [43], to Java [44], to the Common Language Runtime (CLR). The first dynamic compilers were simple, fast, and performed little optimization. They were often instruction-by-instruction translators, with extremely simple, or even no, register allocation. They were essentially baseline compilers, but some had IR, e.g. to harness type feedback [45]. Later, runtime profiling led to more complex compilers that build and optimize IRs. **Copy & Patch Code Generation.** Recent work [23] on fast compilation using code templates demonstrated dramatic improvements over V8's Liftoff compiler. The key idea is to use an offline compiler (e.g., LLVM) to generate machine code snippets under various register assignments and with "holes" for constants. When compiling Wasm, an assembler isn't needed; instead, a cache supplies the appropriate snippet for the register assignment at each step of abstract interpretation, patched with appropriate constants. Our paper evaluated the artifacts of that work, but only a subset of the benchmarks, which did confirm fastest compile speed when measured carefully. However, this paper found that **sm-base**, a much faster baseline compiler, is nearly on par with it in compile speed. One issue with a template-based approach is that the number of templates is combinatoric in the possible abstract values. **Wizard-SPC** is unique in that it also tracks value tags, which could potentially double the number of templates needed. **Synthesizing and Verifying JITs.** Simple compilers are easier to build, specify, verify, and even synthesize. Recent work [46] has advanced the generation of _correct_ JIT compilers from a specification, which demonstrated an instruction-by-instruction compiler for eBPF running in-kernel with correctness guarantees. Another approach is to verify the output of the compiler for sandboxing properties, which has been employed for Wasm in [11]. **Fast compilers in other domains.** Many other domains than VMs employ dynamic code generation. Generating machine code without an intermediate representation has been repeatedly shown to dramatically improve compile speed. For example, the VCode [47] research system improved on its predecessor, DCG [48], by 35\(\times\). Simple AST-walking compilers have been deployed in database systems and programmable networks. Regexes are often implemented with JIT compilers today. For example, all Web engines use JITs in their regex implementations [49], as well as popular libraries [50]. **Fast compilers cooperate with other tiers.** Today, many production virtual machines employ multiple compilers of different designs. OpenJDK [52] employs an interpreter and two JIT compilers: C2, a highly-optimizing sea-of-nodes compiler, and C1, a faster, SSA-based optimizing compiler. Web engines continue to evolve, and all employ multiple tiers for both JavaScript and WebAssembly. The V8 JavaScript engine [53] became multi-tier in 2010 when its first optimizing compiler "Crankshaft" [54] joined its fast AST-walking code generator named "full codegen" [55].
In 2018 V8 replaced both tiers with an interpreter and a new TurboFan [18] optimizing compiler, and in 2021 added a baseline compiler "Sparkplug" for JavaScript [56]. The JavaScriptCore [30] virtual machine in Safari employs three different compiler designs, even briefly using LLVM as a top-tier optimizing compiler. ## VIII Conclusion This paper captured the core design ideas of baseline compilers for Wasm and documented six implementations, which have appeared nowhere in the literature to date. As this paper documents, efficient forward-pass register allocation via abstract interpretation is widespread in single-pass Wasm compilers. Examples in this paper illustrate, and experiments show, that single-pass compilers for Wasm can generate good code very quickly. This paper also presented the design of a new, state-of-the-art single-pass compiler, **Wizard-SPC**, with the unique design choice of value tags, which simplifies integration with an in-place interpreter for Wasm and the host garbage collector. Measurements show that the overhead of **Wizard-SPC**'s value tag approach is mostly eliminated by optimizations and that the resultant performance is on par with production single-pass compilers. Discussion compared and contrasted the six designs, and experiments evaluated them on benchmarks. That experiment showed that single-pass compilers vary in code quality, primarily due to the differences in how they model constants and perform register allocation. Additional benchmarking data allows us to place all single-pass compilers in a two-dimensional speed-quality tradeoff space (SQ-space) with other available execution tiers for Wasm, including rewriting interpreters and optimizing compilers. We find these developments extremely exciting; the explosion of execution strategies for WebAssembly holds great promise to shed new light on long-standing tradeoffs in VM design by studying many diverse engines that all accept a common, well-specified code format. ## Acknowledgments This work is supported in part by NSF Grant Award #2148301, as well as funding and support from the DFinity Foundation [57]. Thanks to Hannes Payer, Toon Verwaest and Clemens Backes on the V8 team for JIT compiler and tiering discussions. Thanks to Lars Hansen (formerly Mozilla) for questions on the Spidermonkey baseline compiler design. Thanks to undergraduate Bradley Teo for work on the instrumentation framework in **Wizard**. Thanks to Heather Miller, Josh Sunshine, Jonathan Aldrich, and Anthony Rowe at CMU. Thanks to Ulan Degenbaev at DFinity.
2309.02099
Towards Diverse and Consistent Typography Generation
In this work, we consider the typography generation task that aims at producing diverse typographic styling for the given graphic document. We formulate typography generation as a fine-grained attribute generation for multiple text elements and build an autoregressive model to generate diverse typography that matches the input design context. We further propose a simple yet effective sampling approach that respects the consistency and distinction principle of typography so that generated examples share consistent typographic styling across text elements. Our empirical study shows that our model successfully generates diverse typographic designs while preserving a consistent typographic structure.
Wataru Shimoda, Daichi Haraguchi, Seiichi Uchida, Kota Yamaguchi
2023-09-05T10:08:11Z
http://arxiv.org/abs/2309.02099v1
# Towards Diverse and Consistent Typography Generation ###### Abstract In this work, we consider the typography generation task that aims at producing diverse typographic styling for the given graphic document. We formulate typography generation as a fine-grained attribute generation for multiple text elements and build an autoregressive model that can generate diverse yet consistent typographic styling for the given graphic document. ## 1 Introduction In textual communication, typographers carefully express their intent in their typographic work, such as product packages, posters, banner ads, book covers, signboards, and presentation slides. Appropriately designed typography affects how people perceive the impression, legibility, and importance of the text content, yet choosing appropriate typography is surprisingly challenging [3]. Typographic design involves a complex interplay between the message content, background visuals, layout arrangement, and styling consistency across text elements. In building a practical automatic typography system, we have to take into account the following requirements. _Context awareness_: A system should reflect the context of the creative work; e.g., styling should emphasize the word "Sale" for a sale event poster or use serif-style fonts with careful letter spacing for luxury brands to express their authority. Also, typography should match the background visuals; e.g., a bright font color for a dark background. _Fine-grained representation_: A system can handle fine-grained typographic attributes beyond font family and color, such as horizontal text alignment, line spacing, letter spacing, or angle, that are important to convey a delicate nuance within the graphic design. _Consistency and distinction_: A system should apply consistent style across multiple texts that share the same semantics [41]; e.g., menu items should have uniform styling. On the other hand, typography should have distinct styling to emphasize the content semantics; e.g., a title should be highlighted by a different font family and size. _Diversity_: A system should be able to suggest diverse design candidates to the users because there is usually no single optimal typographic design in a real-world creative workflow. In this paper, we formulate the typography generation task as fine-grained typographic attribute generation and build an autoregressive model that can generate diverse yet consistent typographic styling for the given graphic document. Given a canvas, texts, and their rough positions (Fig. 1a), our model generates fine-grained attributes such as font, color, or letter spacing for each text element. Our model relies on the attention mechanism of the Transformer architecture to capture the consistency relationship among texts as well as the relationship between texts and the input context. For generating diverse typography, we propose a simple yet effective sampling approach to enforce consistent styling among text elements, which we refer to as _structure-preserved sampling_. Our sampling approach predicts which text elements share uniform styling in the first step (Fig.
1b) and samples diverse attributes constrained by the predicted relationships in the second step (Fig. 1c). We also propose metrics to evaluate the quality of typography generation, where we define the typography structure in the form of pairwise consistency relationships among text elements. We show in experiments that our autoregressive models outperform baseline approaches and successfully generate diverse typography that respects context and consistency. Our user study also confirms that our approach is qualitatively preferred over the baseline. Our attribute-based formulation is readily applicable in a real-world creative workflow, as designers usually work on graphic documents with vector-graphic authoring tools like Adobe Illustrator. We summarize our main contributions in the following. * We formulate the typography generation task that aims at jointly generating diverse fine-grained typographic attributes. * We present an autoregressive approach to generate typographic attributes, where we develop the structure-preserved sampling to generate diverse yet consistent typographic designs. * We propose metrics to evaluate the quality of typography generation that is aware of the consistency among text elements. * We empirically show the effectiveness of our approach both quantitatively and qualitatively. ## 2 Related work ### Attribute-based typography generation While attribute-based representation is commonly observed in commercial design authoring tools, we do not find much literature on attribute-based typography generation. MFC [50] is a notable exception that predicts the font, color, and font size of a single text box from the global image, local image, and auxiliary tag information. AutoPoster [24] recently proposes a poster generation approach that also considers font, color, and font size within the model. While the previous work considers typographic attributes, we consider far more fine-grained attributes including text angle, alignment, letter spacing, and line spacing, and explicitly consider consistency relationships among multiple text elements. Other notable works include the study of Jiang _et al_. [15] on combinatorial preference in font selection for subjects and subtitles in PDF data and Shimoda _et al_. [37] proposing a de-rendering approach to parse rendering parameters from texts in raster images. ### Raster typography generation Raster typography generators directly render stylized texts in pixels. There are two types of formulations: text style transfer and conditional stylized text generation. Text style transfer aims at generating stylized text images for the specified styles. Awesome Typography is a style transfer method based on a patch matching algorithm [45]. Recent literature reports several GAN-based models [29, 6, 2, 39, 46, 47, 48]. Wang _et al_. propose a layout-specified text style transfer method [40]. Raster text editing is another branch of the text style transfer task, where the goal is to apply a reference style to the manually edited image [44, 36, 42]. There are several neural network-based glyph renderers without reference images. We refer to these approaches as conditional stylized text generators. Miyazono _et al_. [30] and Gao _et al_. [7] propose generative models that directly produce stylized texts in the raster format from background images, layouts, and text contents. Recent text-to-image models [34, 35] can draw stylized texts via prompts, but these models tend to corrupt glyphs in the raster format [25].
Some recent works propose fine-tuned text-to-image models [49, 28, 14] that address glyph corruption. While there are quite a few works on raster generation, attribute-based generation has a clear practical advantage in that the generation result is 1) free from raster artifacts and 2) easily applicable in real-world authoring tools.

### Graphic design generation

Our typography generation task can be regarded as one sub-topic within the broader study of attribute-based graphic design or layout generation. Early work on layout generation utilizes templates [5, 12] or heuristic rules [31]. Recent literature relies on neural networks for generation. LayoutVAE [17] generates scene layouts from label sets using an autoregressive VAE. LayoutGAN [23] adopts a GAN-based layout generator via a differentiable wire-frame model. VTN [1], LayoutTransformer [9], and CanvasVAE [43] report Transformer-based VAEs for graphic designs. LayoutDM [13] adopts a discrete diffusion model for layout generation. Towards finer control of the generation quality, several works [4, 8, 13, 16, 18, 19, 20, 21, 51, 52] tackle generating layouts with constraints and conditional information. While most recent attempts seem to be interested in layout-level generation, our focus is the unique and explicit modeling of text styling in typographic design.

Figure 1: We formulate the fine-grained typography generation task considering the structure of multiple texts. a) An example of an input context: background image, texts, and their corresponding center positions. b) Typographic structure predicted by our model via top-1 sampling. c) Generated typography by our structure-preserved sampling.

## 3 Approach

Our goal is to generate typography with consistency and diversity from context attributes such as _background image_, _texts_, and their corresponding _center positions_. To this end, our model first predicts the typographic structure (Fig. 1b) and then generates typography through a structure-preserved sampling of typographic attributes such as _font_ and _color_ (Fig. 1c).

### Problem formulation

We define the context attributes by \(X\equiv(\mathbf{x}_{\mathrm{canvas}},\mathbf{x}_{1},\dots,\mathbf{x}_{T})\), where \(\mathbf{x}_{\mathrm{canvas}}\equiv(x_{\mathrm{background}},x_{\mathrm{aspect}},\dots)\) denotes a tuple of canvas input and \(\mathbf{x}_{t}\equiv(x_{\mathrm{text}}^{t},x_{\mathrm{top}}^{t},x_{\mathrm{left}}^{t},\dots)\) denotes the \(t\)-th element input. We assume there are \(T\) text elements in the document. We consider target typographic attributes \(Y\equiv(\mathbf{y}_{1},\dots,\mathbf{y}_{T})\), where \(\mathbf{y}_{t}\equiv(y_{\mathrm{font}}^{t},y_{\mathrm{color}}^{t},\dots)\) is the tuple of typographic attributes of the \(t\)-th text element. Our goal is to generate typographic attributes \(Y\) by a conditional generation model \(p_{\theta}\) parametrized by \(\theta\): \[\hat{Y}\sim p_{\theta}(Y|X). \tag{1}\]

### Typographic attributes

Our context and typographic attributes contain multiple modalities, which we preprocess into feature representations beforehand. We summarize the feature representation of all attributes in Table 1. Our context attributes consist of the canvas input and the element input. We extract a background image for both the global canvas and the region of each text element, resize the image to a fixed resolution in the RGB format, and finally apply an ImageNet-pretrained ResNet50 [10] to extract features. We preprocess text content using a pre-trained CLIP encoder [33].
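For concreteness, a minimal sketch of this feature preprocessing step is shown below. It assumes torchvision's ResNet50 and the Hugging Face CLIP text encoder as stand-ins for the encoders used here; the checkpoint choices, the ImageNet normalization, and the function names are our illustrative assumptions, not details taken from the paper.

```python
import torch
import torchvision.models as tvm
from torchvision import transforms
from transformers import CLIPTokenizer, CLIPTextModel

# Illustrative encoders: an ImageNet-pretrained ResNet50 for images and a
# pre-trained CLIP text encoder for text content (checkpoints are assumptions).
resnet = tvm.resnet50(weights=tvm.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()  # keep the 2048-d pooled feature
resnet.eval()

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
clip_text = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32").eval()

to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),  # fixed resolution, RGB (cf. Table 1)
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def image_feature(pil_image):
    """2048-d feature for the global canvas or an element-region crop."""
    return resnet(to_tensor(pil_image.convert("RGB")).unsqueeze(0))[0]

@torch.no_grad()
def text_feature(text):
    """Pooled CLIP text embedding for one text element."""
    tokens = tokenizer([text], padding=True, truncation=True,
                       return_tensors="pt")
    return clip_text(**tokens).pooler_output[0]
```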
We discretize continuous attributes, such as an aspect ratio or a position, based on k-means clustering, where we empirically set the appropriate number of clusters. In this work, we consider the following typographic attributes as outputs: _font_, _color_, _font size_, _alignment_, _capitalization_, _angle_, _letter spacing_, and _line spacing_ for each text element. Our typographic attributes have semantic and geometric quantities. We show the illustration of the typographic attributes in Fig. 2. We also discretize typographic attributes based on k-means clustering.

\begin{table} \begin{tabular}{c c c c} \hline Type & Name & Modality & Size \\ \hline Canvas & Background & Image & \(256\times 256\times 3\) \\ input & Aspect ratio & Categorical & 40 \\ \(\mathbf{x}_{\mathrm{canvas}}\) & Number of text & Categorical & 50 \\ \hline \multirow{6}{*}{Element} & Text & Text & variable \\ & Left & Categorical & 64 \\ input & Top & Categorical & 64 \\ \(\mathbf{x}_{\mathrm{t}}\) & Line count & Categorical & 50 \\ & Char count & Categorical & 50 \\ & Background & Image & \(256\times 256\times 3\) \\ \hline \multirow{6}{*}{Typographic attributes (output)} & Font & Categorical & 261 \\ & Color & Categorical & 64 \\ \cline{1-1} & Alignment & Categorical & 3 \\ \cline{1-1} & Capitalization & Categorical & 2 \\ \cline{1-1} & Font size & Categorical & 16 \\ \cline{1-1} & Angle & Categorical & 16 \\ \cline{1-1} & Letter spacing & Categorical & 16 \\ \cline{1-1} & Line spacing & Categorical & 16 \\ \hline \end{tabular} \end{table} Table 1: Context and typographic attributes. Context attributes consist of canvas input and element input.

Figure 2: An illustration of typographic attributes. We handle semantic quantities including _font_, _color_, _alignment_, and _capitalization_ and geometric quantities including _font size_, _angle_, _letter spacing_, and _line spacing_.

### Typography generation

We build an encoder-decoder architecture based on Transformer [38] to effectively capture the interaction among the inputs and the target attributes within the attention mechanism. Fig. 3 illustrates the overall architecture. Our architecture combines BART-style Transformer blocks [22] with skip connections between the input and output of each element. We project input features into fixed-size embeddings and feed them into the Transformer encoder blocks.

Figure 3: Model architecture.

We adopt an autoregressive decoder to model the joint distribution of typographic attributes: \[p_{\theta}(Y|X)=\prod_{t=1}^{T}p_{\theta}(\mathbf{y}_{t}|\mathbf{y}_{t-1},\dots,\mathbf{y}_{1},X), \tag{2}\] and we apply element-wise autoregressive sampling to generate attribute \(k\) at the \(t\)-th element: \[\hat{y}_{k}^{t}\sim p_{\theta}(y_{k}^{t}|\mathbf{y}_{t-1},\dots,\mathbf{y}_{1},X). \tag{3}\] Here, we apply top-\(p\) sampling [11] to draw attributes. Top-\(p\) sampling has a hyper-parameter \(p_{k}\in[0,1]\) that controls the diversity for each attribute \(k\). In our experiments, we fix \(p_{k}=0.1\) for geometric attributes (font size, angle, letter spacing, and line spacing) to avoid visually disturbing generations, and vary \(p_{k}\) for other attributes depending on the experimental setup.
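Since Eq. (3) is the core sampling step, a short sketch of per-attribute top-\(p\) (nucleus) sampling is given below; the `decoder` interface and all names are hypothetical placeholders for the actual model, and the loop mirrors the element-wise autoregressive generation described above.

```python
import torch

def top_p_sample(logits: torch.Tensor, p: float) -> int:
    """Draw one categorical label from the smallest set of classes whose
    cumulative probability exceeds p (nucleus sampling, cf. Eq. (3))."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Keep classes whose cumulative mass before including them is below p.
    keep = cumulative - sorted_probs < p
    keep[0] = True                       # always keep the top-1 class
    sorted_probs[~keep] = 0.0
    sorted_probs /= sorted_probs.sum()   # renormalize the nucleus
    choice = torch.multinomial(sorted_probs, 1)
    return int(sorted_idx[choice])

def generate(decoder, X, T, p_per_attr):
    """Element-wise autoregressive generation. `decoder` (assumed) maps the
    context X and previously generated attributes to per-attribute logits;
    p_per_attr holds p_k per attribute, e.g. 0.1 for geometric attributes."""
    Y = []
    for t in range(T):
        y_t = {}
        for k, p_k in p_per_attr.items():
            logits = decoder(X, Y, t, k)   # p_theta(y_k^t | y_<t, X)
            y_t[k] = top_p_sample(logits, p_k)
        Y.append(y_t)
    return Y
```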
To train the model, we minimize the following objective: \[\sum_{t}\sum_{k}\mathcal{L}_{\mathrm{entropy}}^{k}(y_{k}^{t},\tilde{y}_{k}^{t})+\lambda_{\mathrm{reg}}|\theta|^{2}, \tag{4}\] where \(\mathcal{L}_{\mathrm{entropy}}^{k}\) is the standard cross entropy for the attribute \(k\), \(\tilde{y}_{k}^{t}\) is the ground truth, and \(\lambda_{\mathrm{reg}}\) is the L2 weight decay.

### Structure-preserved sampling

While autoregressive sampling can adjust the sampling hyper-parameter for each attribute, we find the plain autoregressive approach sometimes corrupts the consistency and distinction among element styling (Sec. 1), especially when we increase the parameter \(p_{k}\) of top-\(p\) sampling. Here, we propose the _structure-preserved sampling_, which is a simple two-step inference approach that effectively controls the diversity while preserving the typography structure. The general steps are the following.
1. Infer the initial prediction \(\hat{Y}\) via top-1 sampling: \[\hat{y}_{k}^{t}=\operatorname*{argmax}_{y_{k}^{t}}\,p_{\theta}(y_{k}^{t}|\mathbf{y}_{t-1},\dots,\mathbf{y}_{1},X). \tag{5}\]
2. For each attribute \(k\), cluster text elements \(\mathcal{T}\equiv\{1,\dots,T\}\) by label linkage \(\hat{y}_{k}^{t}=\hat{y}_{k}^{t^{\prime}}\) for any pair \(t\neq t^{\prime}\).
3. Autoregressively sample \(\hat{y}_{k}^{t}\) again, but assign the same label if any element in the same cluster is already assigned a label.

In both inference steps, we keep the same raster scan order of elements (left-to-right, top-to-bottom). Basically, we autoregressively sample over clusters instead of all the elements in the second sampling step. Fig. 4 illustrates the above steps, and a code sketch follows at the end of this section.

Figure 4: Our structure-preserved sampling first clusters elements by top-1 prediction, then draws samples per cluster so that the result maintains the most likely typographic structure while being capable of generating various designs.

The intuition is that top-1 sampling gives the best typographic structure, and the second sampling generates diverse examples while forcing the consistent structure from the initial inference. Our approach is heuristic but generates visually plausible typography without significant overhead. It is possible to replace the initial top-1 sampling with other sampling approaches if we need to generate a typographic design with a different structure. In this work, we assume a typical typographic design does not require a diverse structure in the application scenario; e.g., design suggestion in an authoring tool. We split the clustering step for each attribute, but it is also possible to consider joint clustering across attributes. The challenge here is that a different attribute has a different perception in the final visualization. It is not straightforward to define a unified cluster affinity across typographic attributes; e.g., humans perceive the difference in a font more than a difference in alignment. In our dataset, we often observe texts that share the same font but with different sizes. We leave the optimal design of typographic clusters for our future work.
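A minimal sketch of this two-step procedure follows. It reuses the illustrative `top_p_sample` and `decoder` from the previous sketch and is a simplification under those assumptions, not the exact implementation.

```python
import torch

def structure_preserved_sample(decoder, X, T, p_per_attr):
    # Step 1: top-1 (greedy) pass predicts the typographic structure (Eq. (5)).
    greedy = []
    for t in range(T):
        y_t = {k: int(torch.argmax(decoder(X, greedy, t, k)))
               for k in p_per_attr}
        greedy.append(y_t)

    # Step 2: per attribute, cluster elements by label linkage
    # (t and t' fall in one cluster iff their greedy labels match).
    cluster_of = {k: [None] * T for k in p_per_attr}
    for k in p_per_attr:
        label_to_cluster = {}
        for t in range(T):
            label_to_cluster.setdefault(greedy[t][k], len(label_to_cluster))
            cluster_of[k][t] = label_to_cluster[greedy[t][k]]

    # Step 3: autoregressive top-p sampling in raster-scan order, reusing a
    # label once any element of the same cluster has been assigned one.
    assigned = {k: {} for k in p_per_attr}   # attribute -> {cluster: label}
    Y = []
    for t in range(T):
        y_t = {}
        for k, p_k in p_per_attr.items():
            c = cluster_of[k][t]
            if c in assigned[k]:
                y_t[k] = assigned[k][c]      # enforce cluster consistency
            else:
                y_t[k] = top_p_sample(decoder(X, Y, t, k), p_k)
                assigned[k][c] = y_t[k]
        Y.append(y_t)
    return Y
```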
## 4 Evaluation Metrics

There is no standardized evaluation metric for typography generation. We adopt several metrics to evaluate typography generation performance.

### Attribute metrics

In our setting, we handle several typographic attributes, but the format of each attribute is not the same. Here, we introduce evaluation metrics for measuring the fidelity of attribute prediction. **Accuracy:** We evaluate categorical attributes (_font_, _align_, _capitalization_) by the standard accuracy metric between the prediction and the ground truth. **Mean absolute error:** We evaluate the geometric attributes by the absolute difference in their respective units. We measure _font size_ in points, _angle_ in degrees, _letter spacing_ in points, and _line spacing_ in a relative scale centered at 1.0. **Color difference:** We employ the CIEDE2000 color difference [27] to measure the similarity between colors, which is known to reflect the human perception of color difference well.

### Structure score

The structure score examines whether the use of the same attribute pairs matches the ground truth. That is, if a pair of texts shares the same attribute, we assign 1, and if the pair differs, we assign 0, then measure the accuracy between the prediction and the truth. Formally, for attribute \(k\), we consider the set of binary labels over any pair of text elements: \[S_{k}(Y)\equiv\{\delta(y^{i}_{k},y^{j}_{k})|i\in\mathcal{T},j\in\mathcal{T},i\neq j\}, \tag{6}\] where \(\delta(y^{i}_{k},y^{j}_{k})\) is an indicator function that returns 1 if \(y^{i}_{k}=y^{j}_{k}\) and 0 otherwise. The structure score is the accuracy of the prediction \(S_{k}(\hat{Y})\) against the ground truth \(S_{k}(Y)\) for each document.

### Diversity score

We evaluate how diverse the generated typography attributes are. Assuming we generate \(N\) samples, we count the average number of unique labels over elements in the generated samples: \[\frac{1}{T}\sum_{t=1}^{T}\frac{N^{t}_{\mathrm{uniq},k}}{N}, \tag{7}\] where \(N^{t}_{\mathrm{uniq},k}\) is the unique count of attribute \(k\) at the \(t\)-th element.
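For illustration, the structure score of Eq. (6) and the diversity score of Eq. (7) can be computed as follows; the array layouts and names are hypothetical.

```python
from itertools import combinations
import numpy as np

def structure_score(pred_labels, true_labels):
    """Accuracy of pairwise same-label indicators (Eq. (6)) for one attribute.
    `pred_labels`, `true_labels`: length-T sequences of discrete labels."""
    pairs = list(combinations(range(len(true_labels)), 2))
    if not pairs:
        return 1.0
    agree = [(pred_labels[i] == pred_labels[j]) ==
             (true_labels[i] == true_labels[j]) for i, j in pairs]
    return float(np.mean(agree))

def diversity_score(samples):
    """Average fraction of unique labels per element (Eq. (7)).
    `samples`: N x T array of one attribute over N generated samples."""
    samples = np.asarray(samples)
    N, T = samples.shape
    return float(np.mean([len(np.unique(samples[:, t])) / N
                          for t in range(T)]))
```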
## 5 Experiments

We evaluate typography generation performance as well as top-1 prediction performance for fair comparison.

### Dataset

We evaluate the generation task using the Crello dataset [43], which includes various design templates in vector format. Since the original dataset does not contain all of the necessary typographic information for visualization, we collect additional resources like ttf files. We parsed and compiled the typographic details of each template, and finally obtained 23,475 templates that contain text elements in the vector format. We split the Crello dataset into _train:test:val_ with an 8:1:1 ratio (i.e., 18,780, 2,347, 2,347).

### Implementation details

We set the dimension of feature embeddings to 256. We set the feed-forward dimension to 512 and the number of heads to 8 in the Transformer blocks. We stack 8 Transformer blocks in our model. We use the AdamW [26] optimizer with a 0.0002 learning rate and 30 epochs to train our model.

### Prediction evaluation

Here, we evaluate the performance of the top-1 prediction for a fair comparison with the previous work. We compare the following baselines. **Mode** always predicts the most frequent category, which shows the bias of each attribute in the dataset. **MFC** [50] is a fill-in-the-single-blank model tailored for typography. This model predicts three attributes: _font_, _font size_, and _color_. MFC learns to predict an embedding for font representation by minimizing an L2 loss and an adversarial loss, a scalar value for font size by minimizing the L1 loss, and a discretized token for color. The embedding for font representation is obtained by a simple autoencoder. Since this model predicts a single text element at a time, we repeatedly apply the model to generate multiple outputs in an autoregressive manner. We do not consider external contexts (HTML tags and design tags) used in [50] since the Crello dataset does not contain such resources. **CanvasVAE*** [43] is a Transformer-based variational autoencoder for structured elements, including layout and canvas information. Since CanvasVAE is an unconditional model, we adapt CanvasVAE to accept input contexts and predict typographic attributes. For the prediction task, we fix the bottleneck latent of the VAE to the mean vector. **Ours** is the initial autoregressive prediction of our model (Sec. 3). Table 2 and Table 3 summarize the quantitative prediction performance.

Table 2: Top-1 prediction performance in attribute metrics: accuracy (%, higher is better) for font, alignment, and capitalization; CIEDE2000 color difference (lower is better); and mean absolute error (lower is better) for font size (pt), angle (degrees), letter spacing (pt), and line spacing.

Our model achieves the best scores in all structure scores, though not always the best in attribute metrics. Interestingly, while our model shows moderate improvement over baselines in attribute metrics like _font size_, our model shows significant improvement in terms of the structure score. We observe that our model outperforms MFC even though MFC designs a dedicated loss for each attribute. Our model also outperforms CanvasVAE, perhaps because CanvasVAE has a limited model capacity due to the global latent that is regularized to follow the normal distribution. In contrast, our autoregressive models have sufficient capacity to model rich conditions across attributes and elements.

### Generation evaluation

We generate 10 samples for each test input for evaluation. We compare the following baselines. **CanvasVAE*** is the same model we evaluate in Sec. 5.3. We control the generation diversity by scaling the coefficient of the standard deviation in the latent space. **Ours** is our model with plain top-\(p\) sampling and without our structure-preserved sampling. We control the generation diversity by the hyper-parameter \(p_{k}\in[0,1]\) of top-\(p\) sampling except for geometric attributes. **Ours+SS** applies the structure-preserved sampling to the above model.

**Quantitative results.** Fig. 5 plots the attribute metrics and the structure score of font and color as we increase the diversity hyper-parameter. We observe that our models show a good quality-diversity trade-off compared to CanvasVAE. While the plain top-\(p\) approach clearly degrades the structure score as we increase the diversity, our structure-preserved sampling keeps a constant score in the highly diverse regime. Note that our structure-preserved sampling can slightly drop the attribute metrics compared to the plain autoregressive sampling due to the cases when the initial structure prediction fails.

**Qualitative results.** Fig. 6 shows qualitative results. We set the diversity hyper-parameter of CanvasVAE to \(std=100\), Ours to \(p=0.9999\), and Ours+SS to \(p_{k}=0.99999\), which yields similar diversity scores. CanvasVAE tends to ignore the input context. We suspect CanvasVAE suffers from learning a good single latent space for a complex task like typography generation. Besides, CanvasVAE cannot independently control the diversity of different attributes, which causes poor overall appearance. Our models generate sufficiently diverse typography for individual attributes in each element, and with the structure-preserved sampling, the results hold consistent styling across elements. We show more generation examples by our model in Fig. 7. The first, second, and third rows show examples that have only a few elements but have sufficient contrast. The fourth and fifth rows show that our model consistently generates diverse yet plausible typography even when a document has many text elements.
**Limitation.** We show some failure cases of our approach in Fig. 8. Our model does not explicitly handle the appearance of typography and sometimes generates unintentional spatial overlaps between texts (Fig. 8a), colors that are difficult to see (Fig. 8b), and overflow of a text element due to the unawareness of the final text width (Fig. 8c). Further, if our model fails to capture a plausible structure, the generated results corrupt (Fig. 8d).

### User study

To verify that our evaluation metrics accurately reflect human perception, we conducted pilot user studies. We asked ten participants to choose which generated design groups they preferred in a pairwise comparison between two methods. We compared the generation quality of our model with CanvasVAE and with our model without the structure-preserved sampling. Each user study comprises 100 questions, resulting in 1000 responses in total. As the diversity hyper-parameter affects generation quality, we choose the hyper-parameter of each approach to be comparable. Specifically, we set the diversity hyper-parameter to have the diversity score within 49.8-51.5% for font and 33.3-35.2% for color in the CanvasVAE comparison, and within 70.4-73.3% for font and 60.0-61.3% for color in the plain sampling baseline. We pick the diversity scores from Fig. 5. Fig. 9 summarizes the user preference. We confirm that participants clearly prefer our model compared to CanvasVAE. The results support the hypothesis that our quantitative results indeed reflect human perception. On the other hand, our structure-preserved sampling does not make a difference in user preference. While unexpected, we suspect that our sampling hyper-parameter was too diverse to give appropriate colors to texts, which made the pairwise comparison difficult for users. In the future, we wish to continue studying how to suggest the most comfortable designs.

Figure 5: Generation performance in terms of attribute metrics vs. diversity score for font and color attributes. Our models outperform the CanvasVAE baseline by a large margin. Our structure-preserved sampling further keeps a constant structure score regardless of the sampling parameter \(p_{k}\).

Figure 6: Qualitative comparison of typography generation. Our models generate sufficiently diverse typography with color tones appropriate to the background. With the structure-preserved sampling, our model further enforces consistent styling, such as fonts, across multiple texts (Ours+SS).

## 6 Conclusion

In this paper, we formulate the task of typography generation where we have to generate diverse yet compelling typography given the input contexts. We build a fine-grained typographic attribute generation model and propose a sampling technique to generate diverse typography with consistency and distinction among texts. The empirical study confirms our approach successfully generates diverse yet consistent typography and outperforms the baselines. There are remaining research questions we wish to explore. We hope to analyze the relationship between
attributes to human perception, as we identify that the fidelity of colors to the given background somehow dominates the first impression of the design. We also hope to study what degree of diversity users prefer in the generated results, toward building a practical typography generation system.
2310.17291
Fermion Proca Stars: Vector Dark Matter Admixed Neutron Stars
Dark matter could accumulate around neutron stars in sufficient amounts to affect their global properties. In this work, we study the effect of a specific model for dark matter -- a massive and self-interacting vector (spin-1) field -- on neutron stars. We describe the combined systems of neutron stars and vector dark matter using Einstein-Proca theory coupled to a nuclear-matter term, and find scaling relations between the field and metric components in the equations of motion. We construct equilibrium solutions of the combined systems, compute their masses and radii and also analyse their stability and higher modes. The combined systems admit dark matter (DM) core and cloud solutions. Core solutions compactify the neutron star component and tend to decrease the total mass of the combined system. Cloud solutions have the inverse effect. Electromagnetic observations of certain cloud-like configurations would appear to violate the Buchdahl limit. This could make Buchdahl-limit violating objects smoking gun signals for dark matter in neutron stars. The self-interaction strength is found to significantly affect both mass and radius. We also compare fermion Proca stars to objects where the dark matter is modelled using a complex scalar field. We find that fermion Proca stars tend to be more massive and geometrically larger than their scalar field counterparts for equal boson masses and self-interaction strengths. Both systems can produce degenerate masses and radii for different amounts of DM and DM particle masses.
Cédric Jockel, Laura Sagunski
2023-10-26T10:15:46Z
http://arxiv.org/abs/2310.17291v2
# Fermion Proca Stars: Vector Dark Matter Admixed Neutron Stars ###### Abstract Dark matter could accumulate around neutron stars in sufficient amounts to affect their global properties. In this work, we study the effect of a specific model for dark matter - a massive and self-interacting vector (spin-1) field - on neutron stars. We describe the combined systems of neutron stars and vector dark matter using Einstein-Proca theory coupled to a nuclear-matter term, and find scaling relations between the field and metric components in the equations of motion. We construct equilibrium solutions of the combined systems, compute their masses and radii and also analyse their stability and higher modes. The combined systems admit dark matter (DM) core and cloud solutions. Core solutions compactify the neutron star component and tend to decrease the total mass of the combined system. Cloud solutions have the inverse effect. Electromagnetic observations of certain cloud-like configurations would appear to violate the Buchdahl limit. This could make Buchdahl-limit violating objects smoking gun signals for dark matter in neutron stars. The self-interaction strength is found to significantly affect both mass and radius. We also compare fermion Proca stars to objects where the dark matter is modelled using a complex scalar field. We find that fermion Proca stars tend to be more massive and geometrically larger than their scalar field counterparts for equal boson masses and self-interaction strengths. Both systems can produce degenerate masses and radii for different amounts of DM and DM particle masses. ## I Introduction The nature of dark matter (DM) is one of the large remaining open questions in physics. Even though it constitutes roughly 26.8% of the total energy density of the universe [1] and has a long observational history [2], its properties remain largely unknown. We currently know that DM likely is a particle that is only interacting gravitationally and weakly with standard model particles, and that is invisible through electromagnetic radiation. Large-scale structure formation in the universe further suggests that DM is mostly cold, i.e., slowly moving [3; 4; 2; 5]. This makes it an integral part of the standard model of cosmology. Neutron stars (NSs) are used to probe a large range of physical phenomena. They are dense and compact remnants of heavy stars. Their high densities make them excellent laboratories for probing gravitation and nuclear physics under extreme conditions. They are characterized using the nuclear matter equation of state (EOS). The EOS describes the relation between pressure and energy density of the matter found inside NSs. It is needed to close the Tolman-Oppenheimer-Volkoff equations [6; 7] that describe the density distribution of a spherically symmetric static NS and the spacetime curvature. A significant constraint on the EOS is the ability to produce NSs with masses larger than two solar masses, \(2\,M_{\odot}\). The most massive NS known to date is PSR J0952\(-\)0607 with a mass of \(M=2.35^{+0.17}_{-0.17}\,M_{\odot}\)[8]. The lighter companion of the binary system observed in the GW190814 gravitational wave event [9] was also proposed to be the heaviest NS, with a mass of around \(2.6\,M_{\odot}\). But there is evidence that it might be the lightest known black hole instead [10]. High maximum NS masses require stiff EOS, where the nuclear matter is difficult to compress and the energy density rises sharply with increasing pressure. 
Other constraints include the measurements of the pulsars PSR J0030+0451 [11] and J0740+6620 [12] by the NICER telescope. They also favor a stiff EOS. In contrast, the gravitational wave event GW170817 [13; 14] favors soft EOS which produce smaller NSs that are more compact and more difficult to tidally disrupt. Additionally, it has been proposed to probe the DM properties using NSs. For example, DM can form a cloud or accumulate inside NSs as a core. In sufficient amounts, it can modify the NS properties such as mass, radius and tidal deformability. These properties have been measured using telescopes such as NICER and the gravitational wave detectors LIGO, Virgo and KAGRA. This allows us to probe the properties of DM such as its particle mass and self-interaction strength (see, e.g., [15; 16; 17; 18; 19; 20]). There exist numerous candidates for DM particles. A possible DM candidate is an additional bosonic field (scalar field or vector field), as was studied in [21; 22; 23; 24]. The idea that an astrophysical object consists of a mixture of fermionic and bosonic matter goes back to [25; 26]. A multitude of different models of these fermion boson stars (FBSs) have since been investigated (see, e.g., [27; 28; 29; 30; 18] for reviews). In the simplest case, the fermionic and bosonic components interact only gravitationally (i.e., they are minimally coupled). This makes FBSs interesting objects in the context of DM research (see, e.g., [31; 15; 32]). They have been studied in connection to NSs, where the NS provides the fermionic component and a bosonic field provides the bosonic component of the FBS [31; 15]. The bosonic component can be modelled via, e.g., scalar and vector fields. FBSs have been studied with regard to their stability [26]. Their dynamical properties were explored in [33; 34; 35; 36; 37; 38]. Numerical simulations aiming to understand the gravitational wave signals were performed by [38]. In all these cases, the NS component was modelled using a perfect fluid and a classical complex scalar field was used for the bosonic component. However, understanding vector fields is equally relevant for a number of reasons. If DM is a spin-1 particle, it would be described using a vector field. Some theories of modified gravity also feature vector fields with similar behavior [39; 40; 41; 42; 43]. In this work, we therefore explore the effect of vector fields on NSs. Fermion boson stars can form in a variety of ways. But in essence, the problem reduces to how one can accumulate a large amount of scalar or vector fields in and around a NS. One common motivation for these fields is bosonic dark matter. It could arrange itself around NSs as a cloud or inside NSs as a core. NSs with DM cores could form * from an initial DM'seed' through accretion of baryonic matter [44; 45; 46; 15], * through mergers of NSs and boson stars [15], * through accretion of DM onto a NS and subsequent accumulation in the center [15; 16; 27; 47; 48], * through the decay of standard model particles inside the NS into DM [49; 50; 51; 52; 53]. NSs with clouds could form in a similar way, given that either the DM is the dominant contribution to the FBS or that the DM properties only allow low-compactness configurations (e.g., when the particle mass is small [15]). The fermionic and bosonic components could conceivably be separated from one another, e.g., during a supernova NS-kick [54; 55; 56; 57]. There, the stellar remnant gets ejected and rapidly moves away from the remaining stellar envelope. 
This process could allow for NSs with a large range of possible DM-fractions. The DM particles most interesting for FBSs are generally (self-interacting) ultralight DM particles, weakly interacting massive particles, dark photons [23; 24] (as a candidate for vector DM) and axions [15; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67]. Another formation channel is motivated through theories of modified gravity. One way of producing large amounts of scalar (or vector) fields is superradiance [68; 39]. Spontaneous scalarization [28; 69] also provides a way of producing significant scalar [28; 70] and vector1[40; 41] field amplitudes. It has also been studied explicitly in NSs [42; 43; 69] and could be a way of forming systems with scalar and vector fields. Scalarization might also take place dynamically in the late stages of the evolution of binary NS systems [71], forming either a black hole or a FBS after merger (depending, e.g., on the initial masses of the binary objects). Footnote 1: In the case of vector fields, the process is also called spontaneous vectorization. Self gravitating vector fields have already been investigated. These objects are called Proca stars. They are modelled by a complex vector field and were first proposed by [72]. They can be thought of as macroscopic condensates of spin-1 particles [28]. Proca stars have been studied by a number of groups analytically [73; 74; 75] and numerically [76; 77], such as in merger simulations [78; 79]. Different types of Proca stars with charge [80], rotation [72] and with a quartic self-interaction potential [81] were also considered. Other works [82; 83; 84] studied shadow images of Proca stars in different scenarios. In this work, we study the combined system of a vector field and NS matter, which we call fermion Proca stars (FPSs). Starting with an action for complex vector fields coupled minimally to gravity and nuclear matter, we derive a system of differential equations and solve them numerically (section II.1). We also pedagogically motivate the boundary conditions (section II.2), find an analytical bound for the vector field amplitude and derive scaling relations in the equations of motion (section II.3). The equations are solved using a shooting method and the integrator implemented in our code (for the code, see [85]). The numerical methods are also explained in section II.5. We show radial profiles of FPSs (section III.1) and then compute global quantities such as mass and radius and compare them to astrophysical observations (section III.2). In section III.3, we compare FPSs to their counterpart with a scalar field. In the following, we refer to the scalar case as "fermion boson stars" (FBSs). Finally, we compute higher modes of FPSs and compute configurations with different EOS (section III.4). We find that the vector field significantly affects the NS properties and thus produces detectable signatures. FPSs admit DM core and cloud solutions. Small DM masses lead to DM clouds, and large masses form DM cores. Core solutions compactify the NS component. Cloud solutions lead to less compact configurations. Some solutions appear to violate the Buchdahl limit when only observing the NS component. We then compare FPSs (with a vector field) to FBSs (with a scalar field). FPSs tend to be more massive and geometrically larger than FBSs for equal boson masses and self-interaction strengths. 
For a given measurement, this would favor larger vector DM masses (compared to scalar DM), because larger DM masses produce smaller and less massive objects. We find a significant amount of degenerate solutions between different choices of FBSs, FPSs, the DM properties and the EOS. For different boson masses and DM-fractions, FPSs and FBSs can both be degenerate with each other and also be degenerate with pure NSs with a different EOS. Using scaling relations for pure boson stars and Proca stars, we show that FBSs and FPSs are virtually indistinguishable if the boson masses differ by a factor of 1.671 and the DM has no self-interactions. We confirm the existence of FPSs in higher modes which are stable under linear radial perturbations. Throughout this work, we use units where \(G=c=M_{\odot}=1\) (also see Appendix A). The Einstein summation convention for tensors is implied. This paper is based on the Master thesis of Cedric Jockel [86].

## II Theoretical background

### Equilibrium Solutions

Fermion Proca stars (FPSs) are combined systems of fermions and vector bosons, which interact only gravitationally. They can be seen as a macroscopic Bose-Einstein condensate which coexists with a NS at the same point in space. We model FPSs using a relativistic fluid for the NS component and a complex vector field for the bosonic component. FPSs are described by the Einstein-Proca system minimally coupled to a matter term \(\mathcal{L}_{m}\), \[S=\int\sqrt{-g}\left(\frac{R}{2\kappa}-\frac{1}{2}F_{\mu\nu}\bar{F}^{\mu\nu}-V(A_{\rho}\bar{A}^{\rho})-\mathcal{L}_{m}\right)dx^{4}\;, \tag{1}\] where \(R\) is the Ricci curvature scalar, \(g\) is the determinant of the spacetime metric \(g_{\mu\nu}\) and \(\kappa=8\pi G/c^{4}\) is a constant. The bar denotes complex conjugation. \(F_{\mu\nu}=\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu}\) is the antisymmetric field strength tensor and \(V(A_{\rho}\bar{A}^{\rho})\) is the vector field potential. The latter depends solely on the magnitude of the vector field \(A_{\rho}\bar{A}^{\rho}\). By taking the variation of Eq. (1) with respect to the inverse spacetime metric \(\delta g^{\mu\nu}\), one obtains the Einstein equations \[G_{\mu\nu}=\kappa\left(T^{(NS)}_{\mu\nu}+T^{(A)}_{\mu\nu}\right)\,, \tag{2}\] where \(T^{(NS)}_{\mu\nu}\) and \(T^{(A)}_{\mu\nu}\) are the energy-momentum tensors describing the NS matter and the vector field matter, respectively. The energy-momentum tensor of the NS matter is taken to be that of a perfect fluid: \[T^{(NS)}_{\mu\nu}=(e+P)u_{\mu}u_{\nu}+Pg_{\mu\nu}\;. \tag{3}\] \(P\) and \(e\) are the pressure and the energy density of the fluid, respectively. The energy density \(e\) is related to the rest mass density \(\rho\) through \(e=\rho(1+\epsilon)\), where \(\epsilon\) is the internal energy. \(u_{\mu}\) is the four-velocity of the fluid. The energy-momentum tensor Eq. (3) and the fluid flow \(J^{\mu}:=\rho u^{\mu}\) are conserved (implying conservation of energy-momentum and of the rest mass, respectively). This leads to the conservation equations \[\nabla_{\mu}T^{\mu\nu}_{(NS)}=0\;\,,\;\;\nabla_{\mu}J^{\mu}=0\;. \tag{4}\] The conservation of the fluid flow \(J^{\mu}\) allows us to define the conserved total rest mass of neutron matter, which we call the fermion number \(N_{\rm f}\). We obtain the fermion number by integrating the right part of Eq. (4) over space, \[N_{\rm f}:=\int\sqrt{-g}\;g^{t\mu}J_{\mu}\,dx^{3}\;.
\tag{5}\] The energy-momentum tensor of the vector field is given by \[T^{(A)}_{\mu\nu}=F_{\mu\rho}\,\bar{F}_{\nu}{}^{\rho}+\bar{F}_{\mu\rho}\,F_{\nu}{}^{\rho}-\frac{1}{2}g_{\mu\nu}F^{\rho\sigma}\bar{F}_{\rho\sigma}+g_{\mu\nu}V(A_{\rho}\bar{A}^{\rho})+V^{\prime}(A_{\rho}\bar{A}^{\rho})(A_{\mu}\bar{A}_{\nu}+A_{\nu}\bar{A}_{\mu})\,, \tag{6}\] where the derivative of the potential \(V\) is \[V^{\prime}(A_{\rho}\bar{A}^{\rho}):=\frac{dV(A_{\rho}\bar{A}^{\rho})}{d(A_{\rho}\bar{A}^{\rho})}\:. \tag{7}\] The equations of motion (Proca equations) of the vector field and its complex conjugate are computed from the action Eq. (1) using the Euler-Lagrange equations for a complex vector field. One obtains \[\nabla^{\mu}\bar{F}_{\mu\nu}=V^{\prime}(A_{\rho}\bar{A}^{\rho})\bar{A}_{\nu}\:,\quad\nabla^{\mu}F_{\mu\nu}=V^{\prime}(A_{\rho}\bar{A}^{\rho})A_{\nu}\:. \tag{8}\] The covariant derivative of Eq. (8) is zero, i.e., \(\nabla^{\mu}\nabla^{\nu}F_{\mu\nu}=0\). This leads to a dynamical constraint on the field derivative, resembling the Lorentz condition used in the Maxwell and Proca equations (also see [28; 72]): \[\nabla^{\nu}A_{\nu}=-\frac{\nabla^{\nu}\left[V^{\prime}(A_{\rho}\bar{A}^{\rho})\right]}{V^{\prime}(A_{\rho}\bar{A}^{\rho})}A_{\nu}\:. \tag{9}\] This constraint could be useful in numerical simulations to track the numerical error and assess constraint violations of a given numerical scheme. The global \(U(1)\)-symmetry of the Lagrangian Eq. (1) under the transformation of the vector field \(A_{\mu}\) (and \(\bar{A}_{\mu}\)) gives rise to a conserved Noether current \[j^{\mu}=i\left(\bar{F}^{\mu\nu}A_{\nu}-F^{\mu\nu}\bar{A}_{\nu}\right)\:. \tag{10}\] The conserved quantity (i.e., the Noether charge) associated with Eq. (10) is obtained by integrating the conservation equation \(\nabla_{\mu}j^{\mu}=0\) over space, \[N_{\rm b}:=\int\sqrt{-g}g^{t\mu}j_{\mu}dx^{3}\:. \tag{11}\] \(N_{\rm b}\) is called the boson number and is related to the total number of bosons present in the system. It can equivalently also be interpreted as the total rest mass energy of the bosonic component of the FPS. We proceed by solving the Einstein equations Eq. (2) and the Proca equations Eq. (8) for spherically symmetric and static configurations in equilibrium. For that, we consider the spherically symmetric ansatz for the spacetime metric \[g_{\mu\nu}={\rm diag}\left(-\alpha^{2}(r),\:a^{2}(r),\:r^{2},\:r^{2}\sin^{2}(\theta)\right)\:. \tag{12}\] We further assume the perfect fluid to be static, such that the four-velocity can be written as \[u^{\mu}=\left(-\frac{1}{\alpha},0,0,0\right)\:,\:u_{\mu}=(\alpha,0,0,0)\:. \tag{13}\] For the vector field, we employ the harmonic phase ansatz and a purely radial vector field (see [72; 75; 80; 81; 87]). The vector field is then given by \[A_{\mu}(t,x)=e^{-i\omega t}(E(r),iB(r),0,0)\:, \tag{14}\] where \(\omega\) is the vector field frequency and \(E(r)\), \(B(r)\) are purely radial real functions. Using the spherically symmetric metric ansatz Eq. (12) together with the harmonic phase ansatz Eq. (14) for the vector field, we solve the Einstein equations and obtain the equations of motion. One obtains an expression for the radial derivative of \(a(r)\) by rearranging the \(tt\)-component of Eq. (2). We then divide the \(tt\)- and \(rr\)-components of Eq. (2) by \(\alpha^{2}\) and \(a^{2}\), respectively. We add both terms and find a direct relation between the first radial derivatives of \(a(r)\) and \(\alpha(r)\). We use this to solve for the derivative of \(\alpha(r)\).
The evolution equations for the vector field components can be computed from the Proca equations Eq. (8). It does not matter which equation of Eq. (8) is used, since the complex phase cancels out and leaves only the radial functions in both cases. The \(\nu=r\) component yields the equation of motion for \(E(r)\). The \(\nu=t\) component of Eq. (8) gives us the equation of motion for \(B(r)\). Finally, the \(r\)-component of the conservation equation for the energy-momentum tensor (left side of Eq. (4)) provides a differential equation for the pressure \(P(r)\). For a more detailed derivation, we refer to [86]. The full equations of motion for the Einstein-Proca system coupled to matter are thus: \[a^{\prime}=\frac{da}{dr}=\frac{a}{2}\left[\frac{(1-a^{2})}{r}+8\pi ra^{2}\left(e+\frac{1}{\alpha^{2}a^{2}}(E^{\prime}-\omega B)^{2}+V(A_{\rho}\bar{A}^{\rho})+2V^{\prime}(A_{\rho}\bar{A}^{\rho})\frac{E^{2}}{\alpha^{2}}\right)\right]\,, \tag{15a}\] \[\alpha^{\prime}=\frac{d\alpha}{dr}=\frac{\alpha}{2}\left[\frac{(a^{2}-1)}{r}+8\pi ra^{2}\left(P-\frac{1}{\alpha^{2}a^{2}}(E^{\prime}-\omega B)^{2}-V(A_{\rho}\bar{A}^{\rho})+2V^{\prime}(A_{\rho}\bar{A}^{\rho})\frac{B^{2}}{a^{2}}\right)\right]\,, \tag{15b}\] \[E^{\prime}=\frac{dE}{dr}=-V^{\prime}(A_{\rho}\bar{A}^{\rho})\frac{B\alpha^{2}}{\omega}+\omega B\,, \tag{15c}\] \[B^{\prime}=\frac{dB}{dr}=\left\{V^{\prime\prime}(A_{\rho}\bar{A}^{\rho})\left(\frac{2B^{2}a^{\prime}}{a^{3}}+\frac{2EE^{\prime}}{\alpha^{2}}-\frac{2E^{2}\alpha^{\prime}}{\alpha^{3}}\right)\frac{B\alpha^{2}}{\omega}-V^{\prime}(A_{\rho}\bar{A}^{\rho})\left(a^{2}E+\frac{2B\alpha\alpha^{\prime}}{\omega}\right)-\left(\frac{a^{\prime}}{a}+\frac{\alpha^{\prime}}{\alpha}-\frac{2}{r}\right)(E^{\prime}-\omega B)\right\}\left(V^{\prime\prime}(A_{\rho}\bar{A}^{\rho})\frac{2}{\omega}\frac{B^{2}\alpha^{2}}{a^{2}}+V^{\prime}(A_{\rho}\bar{A}^{\rho})\frac{\alpha^{2}}{\omega}\right)^{-1}\,, \tag{15d}\] \[P^{\prime}=\frac{dP}{dr}=-\left[e+P\right]\frac{\alpha^{\prime}}{\alpha}\,. \tag{15e}\] This system of equations is closed by providing an equation of state \(P(e)\) (or \(P(\rho,\epsilon)\)) for the nuclear matter part. Note that all equations are first-order differential equations. This is different from scalar FBSs, where an additional variable has to be introduced to make the system first-order (see, e.g., [31; 15]). Another difference is that no derivative of the potential enters the equations of motion for the metric components in the scalar field case, but it does for the vector field case. For the considered system and the ansatz for the metric Eq. (12) and vector field Eq. (14), the expressions for the fermion number Eq. (5) and boson number Eq. (11) simplify to \[N_{\rm f}=4\pi\int_{0}^{R_{\rm f}}a\rho r^{2}dr\,, \tag{16a}\] \[N_{\rm b}=8\pi\int_{0}^{\infty}B\frac{(\omega B-E^{\prime})}{\alpha a}r^{2}dr\,. \tag{16b}\] \(R_{\rm f}\) denotes the fermionic radius (i.e., the radius of the NS component). It is defined by the radial position at which the pressure \(P\) of the NS component reaches zero. The total gravitational mass is defined in the limit of large radii, imposing that the solution asymptotically converges to the Schwarzschild solution \[M_{\rm tot}:=\lim_{r\to\infty}\frac{r}{2}\left(1-\frac{1}{(a(r))^{2}}\right)\,. \tag{17}\]
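As an illustration of how Eqs. (15a)-(15e) translate into code, below is a minimal sketch of the right-hand side for the pure mass-term potential \(V=m^{2}A_{\rho}\bar{A}^{\rho}\) (so \(V^{\prime}=m^{2}\), \(V^{\prime\prime}=0\), i.e., no self-interaction), with a simple polytrope standing in for the tabulated DD2 EOS. All names and the polytrope parameters are our illustrative assumptions; the published code [85] is the authoritative implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp  # used by the shooting sketch below

# Polytropic stand-in for the tabulated DD2 EOS (assumption for illustration).
K, Gamma = 100.0, 2.0

def eos_energy_density(P):
    """e(P) = rho + P/(Gamma - 1) for the polytrope P = K rho^Gamma."""
    if P <= 0.0:
        return 0.0
    rho = (P / K) ** (1.0 / Gamma)
    return rho + P / (Gamma - 1.0)

M_BOSON = 1.0  # vector boson mass in code units (G = c = M_sun = 1)

def rhs(r, y, omega):
    """Right-hand sides of Eqs. (15a)-(15e) for V = m^2 A.Abar,
    i.e. V' = m^2 and V'' = 0 (lambda = 0)."""
    a, alpha, E, B, P = y
    e = eos_energy_density(P)
    AA = B**2 / a**2 - E**2 / alpha**2          # A_rho Abar^rho, cf. Eq. (21)
    V, dV = M_BOSON**2 * AA, M_BOSON**2         # potential and V'
    dE = -dV * B * alpha**2 / omega + omega * B                      # (15c)
    da = 0.5 * a * ((1.0 - a**2) / r + 8.0 * np.pi * r * a**2 *
         (e + (dE - omega * B)**2 / (alpha**2 * a**2)
          + V + 2.0 * dV * E**2 / alpha**2))                         # (15a)
    dalpha = 0.5 * alpha * ((a**2 - 1.0) / r + 8.0 * np.pi * r * a**2 *
             (P - (dE - omega * B)**2 / (alpha**2 * a**2)
              - V + 2.0 * dV * B**2 / a**2))                         # (15b)
    dB = (-dV * (a**2 * E + 2.0 * B * alpha * dalpha / omega)
          - (da / a + dalpha / alpha - 2.0 / r) * (dE - omega * B)) \
         / (dV * alpha**2 / omega)                                   # (15d)
    dP = -(e + P) * dalpha / alpha if P > 0.0 else 0.0               # (15e)
    return [da, dalpha, dE, dB, dP]
```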
### Initial Conditions

We derive the boundary conditions of equations Eq. (15a)-Eq. (15e) at \(r=0\) and at \(r=\infty\). The values at the origin will later serve as initial conditions for the numerical integration. We first consider the equations of motion in the limit \(r\to 0\) while imposing regularity at the origin (i.e., the solution must not diverge). We first analyze Eq. (15a). The term proportional to \(1/r\) dominates at small radii and will diverge if \(r\to 0\). Thus, the only way to maintain regularity is to set \(a(r=0)=1\). It directly follows that \(a^{\prime}(r=0)=0\). Similarly, Eq. (15b) leads to \(\alpha^{\prime}(r=0)=0\). The exact value of \(\alpha(r=0)=\alpha_{0}\) is a priori undetermined and can be chosen in a way thought suitable. We will elaborate on this in section II.2. The initial conditions for the vector field components \(E(r)\) and \(B(r)\) can be obtained in a similar manner. We first consider Eq. (15d). In the limit \(r\to 0\), the term proportional to \(1/r\) dominates and regularity then demands that \(E^{\prime}=\omega B\). It follows that \(B^{\prime}(r=0)=0\). This result can be inserted into Eq. (15c), which leads to the relation \[E^{\prime}=\omega B=-V^{\prime}(A_{\rho}\bar{A}^{\rho})\frac{B\alpha^{2}}{\omega}+\omega B \tag{18}\] \[\implies 0=V^{\prime}(A_{\rho}\bar{A}^{\rho})B\alpha^{2}\,.\] Since at \(r=0\), \(\alpha(r=0)\neq 0\) and \(V^{\prime}\neq 0\) in general, this relation can only be fulfilled if we demand that \(B(r=0)=0\). Plugging this relation into Eq. (15c) yields \(E^{\prime}(r=0)=0\). The central value of the field \(E(r=0)=E_{0}\) is therefore undetermined by the equations of motion, and thus is a free parameter of the theory. A similar analysis at large distances reveals the boundary conditions at \(r\to\infty\) for all variables. We impose an asymptotically flat spacetime. This requires that \(a(r\to\infty)=\alpha(r\to\infty)=1\). All terms proportional to \(r\) in Eq. (15a) and Eq. (15b) must vanish at infinity to fulfill the flat-spacetime limit. Therefore, the vector field components must vanish at infinity, \(E(r\to\infty)=0\) and \(B(r\to\infty)=0\). Pressure \(P(r)\), energy density \(e(r)\) and rest mass density \(\rho\) must be zero outside the NS component of the FPS. This will happen at the fermionic radius \(R_{\rm f}\). We summarize all boundary conditions in the following: \[\lim_{r\to\infty}a(r)=1\,,\ a(0)=1\;,\] \[\lim_{r\to\infty}\alpha(r)=1\,,\ \alpha(0)=\alpha_{0}\;,\] \[\lim_{r\to\infty}E(r)=0\,,\ E(0)=E_{0}\,, \tag{19}\] \[\lim_{r\to\infty}B(r)=0\,,\ B(0)=0\,,\] \[\rho(r>R_{\rm f})=0\,,\ \rho(0)=\rho_{c}\,.\] The initial condition for the metric component \(\alpha(0)=\alpha_{0}\) is fixed by its behavior at infinity.

### Analytical Results

For a scalar (fermion) boson star, one can scale the field frequency \(\omega\) to absorb the initial value of \(\alpha_{0}\) so that it may be set to one (see, e.g., [15]). We investigate whether a similar scaling relation also exists for FPSs. We find that the equations of motion Eq. (15a)-Eq. (15e) are invariant when simultaneously scaling the following variables as \[\tilde{\alpha}=\sigma\alpha\,,\ \tilde{\omega}=\sigma\omega\,,\ \tilde{E}=\sigma E\,,\ \mbox{where}\ \sigma\in\mathbb{R}\,. \tag{20}\] The potential \(V(A_{\rho}\bar{A}^{\rho})\) is always invariant with respect to this scaling because \[A_{\rho}\bar{A}^{\rho}=\left(\frac{B^{2}}{a^{2}}-\frac{E^{2}}{\alpha^{2}}\right)=\left(\frac{B^{2}}{a^{2}}-\frac{\tilde{E}^{2}}{\tilde{\alpha}^{2}}\right)\,. \tag{21}\]
The invariance of Eq. (15a)-Eq. (15e) under the scaling relation Eq. (20) thus allows us to choose \(\sigma\) in such a way that the initial condition for \(\alpha(0)=\alpha_{0}\) may be set to \(\alpha_{0}=1\)2. We will make use of this relation in the numerical analysis. All pre-scaling physical values can be recovered from the asymptotic behavior of \(\alpha(r\to\infty)\) by performing the inverse transformation to Eq. (20). Note that the expression for total gravitational mass Eq. (17) is not affected by this scaling. In contrast to the scaling relation of boson stars with a scalar field, where only the frequency \(\omega\) and the metric component \(\alpha\) are re-scaled, the vector field component \(E\) is also affected in the case of Proca stars. To our knowledge, this is the first time the scaling relation Eq. (20) has been mentioned explicitly (apart from the Master thesis [86] which precedes this work). [81] briefly mentioned scaling the frequency but not the vector field component. Footnote 2: Or one could, in principle, also re-scale \(E_{0}\) to always be equal to one. We also report an analytical bound on the central vector field amplitude \(E(0)=E_{0}\). Equations Eq. (15c) and Eq. (15d) govern the dynamics of the vector field. Note that the term in the denominator of the equation of motion for \(B(r)\) Eq. (15d) could in some cases lead to singularities. We analyze the behavior of the denominator by setting it equal to zero. This leads to a remarkable behavior when considering a quartic self-interaction potential \(V\) of the form \[V(A_{\mu}\bar{A}^{\mu})=m^{2}A_{\mu}\bar{A}^{\mu}+\frac{\lambda}{2}(A_{\mu}\bar{A}^{\mu})^{2}\,, \tag{22}\] where \(m\) is the mass of the vector boson and \(\lambda\) is the self-interaction parameter. We insert the potential Eq. (22) into the singular term in Eq. (15d) and obtain \[\left(\frac{E^{2}}{\alpha^{2}}-\frac{3B^{2}}{a^{2}}\right)=\frac{m^{2}}{\lambda}\,. \tag{23}\] This expression holds for all radii. We analyze its behavior in the limit \(r\to 0\) by applying the initial conditions given in Eq. (19). One obtains a critical value for the central field amplitude \(E_{0}\): \[E_{0,{\rm crit}}=\frac{m\alpha_{0}}{\sqrt{\lambda}}=\frac{\alpha_{0}}{\sqrt{8\pi\Lambda_{\rm int}}}\,. \tag{24}\] We here also defined the dimensionless interaction parameter \(\Lambda_{\rm int}=\lambda/8\pi m^{2}\). This expression constitutes an analytical upper bound for the central amplitude of the vector field. This means that any FPS with initial conditions for the field larger than \(E_{0,{\rm crit}}\) will be physically forbidden, since Eq. (15d) will become singular and diverge. This result matches the analytical bound found by [81]. The relation implies that for strong self-interaction strengths \(\Lambda_{\rm int}\), the allowed range for Proca stars becomes increasingly small and vanishes in the limit of very strong self-interactions. This fact could conceivably be used to constrain the vector field parameters \(m\) and \(\lambda\). For example, a maximal vector field amplitude implies a maximal amount of accretion of vector bosons until the system becomes unstable. The field would then either dissipate to infinity, shed the excess vector field component, or collapse into a black hole. We leave a thorough investigation for future work.
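As a small worked illustration of these two analytical results, the following sketch (in code units \(G=c=M_{\odot}=1\); the function names are ours) evaluates the critical amplitude of Eq. (24) and applies the rescaling of Eq. (20) to restore \(\alpha(r\to\infty)=1\) after an integration performed with \(\alpha_{0}=1\).

```python
import numpy as np

def E0_crit(m, Lambda_int, alpha0=1.0):
    """Upper bound on the central field amplitude, Eq. (24)."""
    if Lambda_int <= 0.0:
        return np.inf              # no bound without self-interaction
    return alpha0 / np.sqrt(8.0 * np.pi * Lambda_int)

def rescale_solution(alpha, omega, E):
    """Apply Eq. (20): choose sigma = 1/alpha(r_max) so that the rescaled
    metric function tends to 1 at large radii. `alpha` and `E` are arrays
    over the radial grid; `omega` is the raw shooting eigenvalue. Assumes
    the grid reaches the vacuum region outside both matter sources."""
    sigma = 1.0 / alpha[-1]
    return sigma * alpha, sigma * omega, sigma * E
```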
### Stability Criterion

Every FPS solution is characterized by the initial values for the central density \(\rho_{c}\) and the central value of the vector field \(E_{0}\). When studying them in astrophysical contexts, the question of stability of FPSs naturally arises. The stability of pure Proca stars and NSs to radial perturbations is well known (see [72] for Proca stars). The stable and unstable solutions are separated by the point at which the total gravitational mass reaches its maximum with regard to the central density \(\rho_{c}\) (for NSs) and the central field \(E_{0}\) (for Proca stars). Since FPSs are two-parameter solutions, the stability criterion needs to be modified. It was first presented for scalar FBSs by [88] (also see [28] for a review). But the criterion is more general and can also be applied to systems of two gravitationally interacting fluids. This is why we apply it here to FPSs. The idea behind the generalized stability criterion is to find extrema in the total number of particles (fermion number \(N_{\rm f}\) or boson number \(N_{\rm b}\)) for a fixed total gravitational mass. The transition between stable and unstable configurations is given by the point at which \[\frac{dN_{\rm f}}{d\sigma}=\frac{dN_{\rm b}}{d\sigma}=0\,, \tag{25}\] where \(d/d\sigma\) denotes the derivative in the direction of constant total gravitational mass (see [88]). Up to a normalization factor, Eq. (25) can be written as \[\frac{dN_{\rm f}}{d\sigma}\propto-\frac{\partial M_{\rm tot}}{\partial\rho_{c}}\frac{\partial N_{\rm f}}{\partial E_{0}}+\frac{\partial M_{\rm tot}}{\partial E_{0}}\frac{\partial N_{\rm f}}{\partial\rho_{c}}\,. \tag{26}\] If one is only interested in the precise points where FPSs become unstable, the unspecified normalization factor in Eq. (26) becomes irrelevant, since the whole relation is set to zero. In summary, the stability criterion Eq. (25) can be used to discriminate between astrophysically stable and unstable FPS solutions. When perturbed, unstable solutions will either collapse to a black hole, dissipate to infinity or migrate to a stable solution through internal re-configuration (see [28]).

### Numerical Methods

In this work, we solve the equations Eq. (15a)-Eq. (15e) numerically to obtain self-consistent FPS solutions. We have implemented the algorithm in our code [15; 85]. The equations have one parameter undetermined by the boundary conditions Eq. (19), namely the vector field frequency \(\omega\). We use a shooting algorithm to find \(\omega\) numerically. For given \(\rho_{c}\) and \(E_{0}\), there exist only discrete values of \(\omega\) such that the boundary conditions at infinity Eq. (19) are fulfilled. These discrete values are called eigenvalues or modes. There are infinitely many of these modes. They are characterized by the number of roots (i.e., zero-crossings) the field \(E(r)\) has. Usually we are only interested in the lowest mode, since only it is believed to be dynamically stable [28]. The lowest mode of the vector field always has one root in \(E(r)\). The following algorithm can however be used to find any desired mode. We integrate the system of ordinary differential equations Eq. (15a)-Eq. (15e) using a fifth-order accurate Runge-Kutta-Fehlberg solver for some fixed value of \(\omega\). The vector field will then diverge towards positive or negative infinity at some finite radius. The system only converges at infinity if a mode is hit directly. But this is impossible to achieve numerically with finite precision. We thus make use of this diverging property to find the wanted frequency mode. When the frequency \(\omega\) is close to the wanted mode, the divergence will happen at increasingly large radii, the closer the chosen value for \(\omega\) is to the mode.
A higher accuracy in finding \(\omega\) will therefore push the divergence to larger radii. When \(\omega\) is not exactly tuned to the mode, the vector field profile \(E(r)\) will diverge towards \(+\infty\) or \(-\infty\) and change its direction of divergence when \(\omega\) passes a mode. The direction of divergence depends on which mode is solved for. For modes with an even number of roots, the field will diverge to \(+\infty\) if the frequency \(\omega\) is below the mode, and it will diverge to \(-\infty\) if \(\omega\) is above the mode. This will be reversed for all modes with an odd number of roots. By making use of the direction of divergence, we gain a binary criterion to find the correct mode. The value of \(\omega\) can then be adapted - increased or decreased - based on the direction of divergence and the wanted mode. This procedure requires to integrate the system of equations multiple times with different values for \(\omega\), until the correct value is found. We implement this method in our code [85] using a bisection algorithm, which converges exponentially fast. We start with upper and lower values of \(\omega\), which are guaranteed to be smaller/larger than the wanted value of \(\omega\) at the mode. In practice, lower and upper bounds of \(\omega_{\rm bound}=[1,10]\) have proven to be numerically robust. We then perform the bisection search by taking the middle value of \(\omega\) in this range and counting the number of roots in \(E(r)\) at each step. This also allows us to discriminate between different modes and to target specific modes by demanding a certain number of roots in the field \(E(r)\). The bisection is complete when the current value of \(\omega\) found through bisection is close enough to the value of the mode. In our experience, the absolute accuracy needed to obtain robust solutions is on the order of \(\Delta\omega=|\omega_{\rm mode}-\omega_{\rm bisection}|\approx 10^{-15}\). Once a sufficiently accurate frequency \(\omega\) is found, we modify the integration, such that \(E(r)\) and \(B(r)\) are set to zero at a finite radius \(r_{B}^{*}\). This radius \(r_{B}^{*}\) is defined at the point where the field \(E(r)\) and its derivative \(E^{\prime}(r)\) are small. This roughly corresponds to the last minimum of \(E(r)\) before it diverges. The condition can be summarized as the point where \(E(r_{B}^{*})/E_{0}<10^{-4}\) and \(E^{\prime}(r_{B}^{*})\ll 1\). This is necessary because the interplay of the vector field and the NS matter can complicate the numerical solution. In some parts of the parameter space, especially for small initial densities \(\rho_{c}\), the vector field could diverge while still inside the NS component, i.e., before the pressure \(P(r)\) reaches zero (within numerical precision, we consider the pressure to be zero when \(P<10^{-15}\)). This divergence would make finding physical values such as the fermionic radius \(R_{\rm f}\) impossible. Therefore, we artificially set \(E=B=0\) for \(r>r_{B}^{*}\). This allows us to circumvent the divergence and accurately resolve the rest of the NS component. The condition was chosen so that the remaining contribution of the vector field to the other quantities (i.e., the metric components) is minimized. We have tested this method for different thresholds and confirmed that all extracted results are the same. After integrating the solution to radii outside the matter sources, we can extract global observables such as the total gravitational mass and radius. 
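In code, the shooting procedure described above might look as follows. This is a schematic sketch that reuses the illustrative `rhs`, `K`, and `Gamma` from the earlier listing; the authors' public code [85] is the authoritative implementation, and details such as the \(r_{B}^{*}\) cutoff and the restarted bisection bounds are omitted here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def stop_on_blowup(r, y, omega):
    """Terminate the integration once E(r) starts to diverge."""
    return 1e6 - abs(y[2])
stop_on_blowup.terminal = True

def integrate_fps(omega, E0, rho_c, r_max=100.0):
    """Outward integration of Eqs. (15a)-(15e) from the regular origin,
    with the initial conditions of Eq. (19) and alpha_0 = 1."""
    y0 = [1.0, 1.0, E0, 0.0, K * rho_c**Gamma]   # a, alpha, E, B, P
    return solve_ivp(rhs, (1e-6, r_max), y0, args=(omega,),
                     rtol=1e-12, atol=1e-14, max_step=0.05,
                     events=stop_on_blowup)

def find_omega(E0, rho_c, n_target=1, lo=1.0, hi=10.0, tol=1e-15):
    """Bisection on omega: the number of roots of E(r) and the direction of
    its divergence provide the binary criterion described in the text."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        E = integrate_fps(mid, E0, rho_c).y[2]
        n_roots = int(np.sum(E[:-1] * E[1:] < 0.0))
        if n_roots != n_target:
            below = n_roots < n_target
        else:
            # Same root count: for odd-root modes E diverges to -inf below
            # the mode and to +inf above it; reversed for even-root modes.
            below = (E[-1] < 0.0) if n_target % 2 == 1 else (E[-1] > 0.0)
        lo, hi = (mid, hi) if below else (lo, mid)
    return 0.5 * (lo + hi)

# Example: lowest mode (one root) for an illustrative configuration;
# rho_c is given in code units (rho_sat corresponds to roughly 4.3e-4).
# omega = find_omega(E0=1e-2, rho_c=8e-4)
```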
The outside of the source is located at radii \(r\) larger than both the fermionic radius \(R_{\rm f}\) and \(r_{B}^{*}\). In this regime, neither the NS matter nor the vector field contributes significantly. There, we can extract the total gravitational mass \(M_{\rm tot}\) Eq. (17) and then compute the integrals Eq. (16a) and Eq. (16b) to obtain the fermion/boson numbers \(N_{\rm f}\), \(N_{\rm b}\). The vector field convergence condition \(E(r_{B}^{*})/E_{0}<10^{-4}\) cannot be fulfilled for some configurations due to numerical precision limits. This generally happens for small initial field values \(E_{0}\lesssim 10^{-4}\), where the vector field extends far outside the NS component. In these cases, we extract the total gravitational mass \(M_{\rm tot}=\frac{1}{2}r_{\rm ext}(1-a^{-2}(r_{\rm ext}))\) at the point where its derivative has a global minimum. When the vector field diverges, so do the metric components and, with them, \(M_{\rm tot}\). By taking the point where the derivative of the mass has a global minimum, which roughly corresponds to where the vector field and its derivative are closest to zero, we get the best possible estimate of the mass of the system before the divergence. During our numerical analysis, we found that the bisection algorithm for the frequency \(\omega\) could fail for some specific initial conditions for \(E_{0}\) and \(\rho_{c}\). This happens when the bisection jumps over multiple modes in a single iteration step. The wanted mode is then skipped and ends up outside the bisection bounds. The bisection then converges to an unwanted \(\omega\)-value, or fails entirely. We solved this problem by employing a backup algorithm that activates if the bisection fails. It restarts the bisection for \(\omega\), but with different lower and upper bounds \(\omega_{\rm bound}\). We tested the backup algorithm for 4800 FPS configurations with different vector field masses \(m\) and self-interaction strengths \(\Lambda_{\rm int}=\lambda/8\pi m^{2}\) with equally distributed initial conditions for \(E_{0}\) and \(\rho_{c}\). We found that 330 (\(\approx 6.8\,\%\)) of all configurations needed one restart of the bisection, and only 3 (\(\approx 0.06\,\%\)) needed two restarts. In none of the tested cases did the bisection have to be restarted three times or more. ## III Results We consider FPSs with a quartic self-interaction potential of the same form as in Eq. (22). We further define the effective self-interaction parameter \(\Lambda_{\rm int}=\lambda/8\pi m^{2}\). The parameter \(\Lambda_{\rm int}\) is a useful measure of the self-interaction strength and parametrizes scaling relations for the total gravitational mass, \(M_{\rm max}\approx 1.058M_{p}^{2}/m\)[72] (for small \(\Lambda_{\rm int}\)) and \(M_{\rm max}\approx\sqrt{\Lambda_{\rm int}}\ln(\Lambda_{\rm int})\,M_{p}^{2}/m\)[81] (for large \(\Lambda_{\rm int}\)). Note that the parameter \(\Lambda_{\rm int}\) was originally introduced in the context of pure Proca stars and thus the scaling relations will not be generally valid for the mixed system. They can, however, be useful to understand the limiting cases where the FPS is dominated by the bosonic component. Nonetheless, we regard \(\Lambda_{\rm int}\) as a useful measure to compare different choices of the mass and self-interaction strength. 
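As a rough numerical illustration of these scaling relations, the following Python sketch evaluates the Proca star mass scale in solar masses for a given boson mass. The SI constants, the unit conversion, and the crossover between the two regimes are our own illustrative assumptions, not quantities taken from the text.

```python
import numpy as np

# SI constants (our inserted values), used to express M_P^2/m in solar masses.
HBAR, C, G = 1.054572e-34, 2.998e8, 6.674e-11   # J s, m/s, m^3/(kg s^2)
M_SUN, EV_TO_KG = 1.989e30, 1.78266e-36

def proca_star_mmax(m_ev, lam_int=0.0):
    """Rough maximum gravitational mass (in M_sun) of a pure Proca star with
    boson mass m_ev [eV], from the scaling relations quoted above:
    M_max ~ 1.058 M_P^2/m for small Lambda_int [72] and
    M_max ~ sqrt(Lambda_int) ln(Lambda_int) M_P^2/m for large Lambda_int [81].
    Taking the larger of the two branches is a crude heuristic of ours."""
    mp2_over_m = (HBAR * C / G) / (m_ev * EV_TO_KG) / M_SUN  # M_P^2/m in M_sun
    large = np.sqrt(lam_int) * np.log(lam_int) if lam_int > 1.0 else 0.0
    return max(1.058, large) * mp2_over_m

print(proca_star_mmax(1.34e-10))  # ~1.05, i.e. roughly 1 M_sun
```

For \(m=1.34\cdot 10^{-10}\,eV\) and small \(\Lambda_{\rm int}\), this reproduces the mass scale of roughly \(1\,M_{\odot}\) seen for DM-dominated configurations below.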
The self-interaction parameter \(\Lambda_{\rm int}\) in our work differs from the one used in [81] by a factor of two, even though they are defined in the same way. This is because a different normalization was used for the vector field. We hereafter investigate models with parameters on the order of \(m\approx 1.34\cdot 10^{-10}\,eV\) and \(\Lambda_{\rm int}\approx 0-100\). This mass range is chosen so that the reduced Compton wavelength of the bosonic field is half the Schwarzschild radius of the Sun. \(m=1\) in our code units then corresponds to \(1.336\cdot 10^{-10}\,eV\) (see a detailed explanation in Appendix A). The range for the self-interaction parameter was chosen so that it fulfills the observational constraint on the DM cross-section of \(1\,cm^{2}/g\) obtained from the Bullet Cluster [89; 90]: \[\pi\Lambda_{\rm int}^{2}m =\frac{\lambda^{2}}{64\pi m^{3}}=\frac{\sigma}{m}\stackrel{!}{<}1\frac{\mbox{cm}^{2}}{\mbox{g}}\] \[\iff\Lambda_{\rm int}\stackrel{!}{<}10^{50}\sqrt{\frac{1.34\cdot 10^{-10}\,eV}{m}}\;. \tag{27}\] For most calculations, we use the DD2 equation of state (with electrons) [91], taken from the CompOSE database [92], to describe the NS component. It was chosen because it is widely used by a number of groups and thus well known in the literature. The DD2 EOS is based on a relativistic mean-field model with density-dependent coupling constants, which has been fitted to the properties of nuclei and to results from Brueckner-Hartree-Fock calculations for dense nuclear matter. The DD2 EOS is also consistent with the EOS of pure neutron matter from chiral effective field theory (see [93]). For the purpose of our investigations, the particular choice of the nuclear equation of state is not of importance and has no effect on our general conclusions. ### Radial Profiles We compute the radial profiles of FPSs. In particular, we consider the radial dependence of the pressure \(P(r)\) and the vector field components \(E(r)\), \(B(r)\). Even though the radial distribution of physical quantities cannot yet be observed directly (although one could infer the DM distribution using the geodesic motion of light [94]), a good understanding of the internal structure of FPSs can be used to deduce their global quantities and vice versa. Knowledge about the internal distribution is also relevant for numerical applications. We further include the radial profiles to facilitate the reproducibility of this work and for the sake of code validation in future works. Radial profiles of pure Proca stars have already been discussed by [72], and for the case of a quartic self-interaction potential like Eq. (22) by [81]. We used the results of [81] in particular to verify that our code [85] reproduces the results correctly and consistently. In Figure 1, we show radial profiles of the pressure \(P(r)\) (orange) and the vector field components \(E(r)\) (black), \(B(r)\) (blue) of the zeroth mode of different FPSs with potential Eq. (22). In the left panel, we take a boson mass of \(m=1.34\cdot 10^{-10}\,eV\) and an interaction strength of \(\Lambda_{\rm int}=0\). The FPSs have central densities of \(\rho_{c}=5\rho_{\rm sat}\) (where \(\rho_{\rm sat}=2.7\cdot 10^{14}\,g/cm^{3}\) is the nuclear saturation density) and varying central vector field amplitudes \(E_{0}\). The radial profile of a pure NS is shown with the continuous orange line and has no corresponding vector field (because it would be zero everywhere). 
The presence of the DM can be seen to compactify the NS component with increasing central field amplitude \(E_{0}\). The field forms a DM core configuration. In the right panel of Figure 1, all parameters are left equal except for the vector boson mass, which is set to \(m=1.34\cdot 10^{-11}\,eV\). Due to the low DM mass, the correlation length increases, which increases the size of the vector field component and forms a DM cloud configuration. Since the energy density of the vector field is distributed inside and outside the NS component, the effect on the radius is small. At around \(r=11.5\,km\), a kink can be seen in the radial profile of the field component \(B(r)\). This kink coincides with the point where the fermionic radius of the FPS is located. This illustrates the gravitational back-reaction between the vector field and the NS component of the FPS. In Figure 2, we show radial profiles of the pressure \(P(r)\) (orange) and the vector field components \(E(r)\) (black), \(B(r)\) (blue) of an FPS. In the left panel, we show an FPS in the first mode, which can be identified by the fact that the \(E(r)\) component crosses the x-axis twice and \(B(r)\) crosses it once. The boson mass is \(m=1.005\cdot 10^{-10}\,eV\) and \(\Lambda_{\rm int}=0\). This time, the central density is taken to be \(\rho_{c}=4\rho_{\rm sat}\) and the central vector field amplitudes vary. The right panel of Figure 2 shows an FPS in the zeroth mode with a vector boson mass of \(m=1.34\cdot 10^{-10}\,eV\) and a self-interaction strength of \(\Lambda_{\rm int}=50\). The maximal amplitude is roughly \(E_{0,\rm crit}\approx 0.0282\) due to the analytical bound on \(E_{0}\), see Eq. (24). The limited field amplitude strongly restricts the possible effect on the fermionic component and thus on the fermionic radius, especially in the limit of large \(\Lambda_{\rm int}\). It may therefore be difficult to detect strongly self-interacting vector DM within a NS if one only considers measurements of the fermionic radius. It is also conceivable that the maximum amplitude \(E_{0,\rm crit}\) implies a maximum amount of possible accretion of vector DM, which could be used to set bounds on the DM self-interaction strength. We leave the analysis of this aspect for a future work. ### Stable Solutions We compute a grid of FPSs with different central densities \(\rho_{c}\) and central vector field amplitudes \(E_{0}\). Using the array of solutions, we compute the stability curve from the stability criterion Eq. (26). The stable solutions can then be filtered and analyzed further. This can be seen in the left panel of Figure 3, where we show FPSs with a quartic self-interaction potential Eq. (22) with \(m=1.34\cdot 10^{-10}\,eV\) and \(\Lambda_{\rm int}=0\), together with the stability curve obtained from Eq. (26). The stability curve defines the boundary between stable and unstable configurations under linear radial Figure 1: **Left panel:** Radial profiles of the pressure \(P(r)\) (orange) and the vector field components \(E(r)\) (black), \(B(r)\) (blue) of the zeroth mode of different FPSs with potential Eq. (22). The boson mass is \(m=1.34\cdot 10^{-10}\,eV\) and \(\Lambda_{\rm int}=0\). The FPSs have a central density of \(\rho_{c}=5\rho_{\rm sat}\) and varying central vector field amplitudes \(E_{0}\). The pressure has been re-scaled by a factor of 3 for convenience. The DM forms a core and compactifies the fermionic component. 
**Right panel:** Same as in the left panel, but this time the vector boson mass is set to \(m=1.34\cdot 10^{-11}\,eV\). The DM forms a cloud around the fermionic component. The radius of the fermionic component is barely affected by the field. A kink can be seen in the profile of \(B(r)\) at roughly \(11.5\,km\). This corresponds to the point where the fermionic radius is located and illustrates the gravitational back-reaction between the vector field and the NS matter. Figure 2: **Left panel:** Radial profiles of the pressure \(P(r)\) (orange) and the vector field components \(E(r)\) (black), \(B(r)\) (blue) of the first mode of different FPSs with potential Eq. (22). The boson mass is \(m=1.005\cdot 10^{-10}\,eV\) and \(\Lambda_{\rm int}=0\). The FPSs have a central density of \(\rho_{c}=4\rho_{\rm sat}\) and varying central vector field amplitudes \(E_{0}\). The pressure has been re-scaled by a factor of 3 for convenience. **Right panel:** Radial profiles of the pressure \(P(r)\) (orange) and the vector field components \(E(r)\) (black), \(B(r)\) (blue) of FPSs in the zeroth mode with potential Eq. (22). The boson mass is \(m=1.34\cdot 10^{-10}\,eV\) and the self-interaction strength is \(\Lambda_{\rm int}=50\). The FPSs have a central density of \(\rho_{c}=5\rho_{\rm sat}\) and varying central vector field amplitudes \(E_{0}\). The pressure has been re-scaled by a factor of 3 for convenience. Due to the analytical bound on \(E_{0}\) Eq. (24), the maximal amplitude is roughly \(E_{0,{\rm crit}}\approx 0.0282\). The limited field amplitude strongly restricts the effect on the fermionic component. perturbations. The shape of the stability curve for FPSs is qualitatively very similar to the case of scalar FBSs (compare to [15]). For pure neutron stars and Proca stars, the curve converges on the \(\rho_{c}\)- and \(E_{0}\)-axis, respectively, at the point where the non-mixed configurations reach their maximum gravitational masses. We take only the FPSs inside the stability region, enclosed by the stability curve, and plot them in a mass-radius (MR) diagram. This leads to the graph in the right panel of Figure 3. Figure 4: **Left panel:** Total gravitational mass of different FPSs as a function of the rest mass density \(\rho_{c}\) and central vector field amplitude \(E_{0}\), with \(m=1.34\cdot 10^{-10}\,eV\) and \(\Lambda_{\rm int}=5\). The black line corresponds to the stability curve, which separates stable solutions (in the lower left region) from unstable solutions (everywhere else). The stability curve reaches configurations with the maximum possible vector field amplitude \(E_{0,{\rm crit}}\approx 0.089\). This is a feature unique to FPSs. **Right panel:** Mass-radius diagram displaying the fermionic radius vs. the total gravitational mass for FPS configurations that are within the stability region displayed in the left panel. Each point corresponds to a single configuration and is color-coded according to the DM-fraction \(N_{\rm b}/(N_{\rm b}+N_{\rm f})\). The solid black-white line shows the mass-radius curve for pure fermionic matter. A vector field with a mass of \(m=1.34\cdot 10^{-10}\,eV\) and \(\Lambda_{\rm int}=5\) was considered in addition to the DD2 EOS for the fermionic part. Figure 3: **Left panel:** Total gravitational mass of different FPSs as a function of the rest mass density \(\rho_{c}\) and central vector field amplitude \(E_{0}\). 
The black line corresponds to the stability curve, which separates stable solutions (in the lower left region) from unstable solutions (everywhere else). **Right panel:** Mass-radius diagram displaying the fermionic radius vs. the total gravitational mass for FPS configurations that are within the stability region displayed in the left panel. Each point corresponds to a single configuration and is color-coded according to the DM-fraction \(N_{\rm b}/(N_{\rm b}+N_{\rm f})\). The solid black-white line shows the mass-radius curve for pure fermionic matter. In both cases, a vector field with a mass of \(m=1.34\cdot 10^{-10}\,eV\) and no self-interactions was considered in addition to the DD2 EOS for the fermionic part. We see that stable FPS configurations form an MR region instead of an MR curve (which would be the case for single-fluid systems). The stable configurations form core or cloud solutions, depending on their DM-fraction \(N_{\rm b}/(N_{\rm b}+N_{\rm f})\). The FPSs with high DM-fractions have masses of roughly \(1\,M_{\odot}\). This is higher than for scalar FBSs with equal boson mass \(m\) (compare to [15]) and can be explained through the different scaling relations for pure Proca stars and boson stars. Another point where FPSs differ from FBSs is the existence of a maximal amplitude \(E_{0,\rm crit}\) Eq. (24) for the vector field. When increasing the self-interaction strength \(\Lambda_{\rm int}\), the maximal possible vector field amplitude shrinks. This affects the shape of the stability curve. In Figure 4 (left panel), we show such a case, where the self-interaction strength is \(\Lambda_{\rm int}=5\). The stability curve does not reach the \(E_{0}\)-axis anymore, but instead rises vertically from the pure NS configurations until it reaches the FPSs with the maximal central vector field amplitude \(E_{0,\rm crit}\approx 0.089\). We have manually extended the stability curve so that it proceeds horizontally until it reaches the \(E_{0}\)-axis. It is noteworthy that this behavior starts at surprisingly small self-interaction strengths and persists up to higher \(\Lambda_{\rm int}\). In principle, a third behavior of the stability curve of FPSs is also conceivable. For some specific \(\Lambda_{\rm int}\), it should be possible that the stability curve does not admit one continuous shape like in Figure 3 or Figure 4, but is instead cut into two parts: one part which starts at the \(E_{0}\)-axis and then rises to reach the edge where \(E_{0,\rm crit}\) is located, and another part which starts at the \(\rho_{c}\)-axis and then rises roughly vertically until it too reaches the analytical bound for the vector field amplitude \(E_{0,\rm crit}\) (think of a horizontal line cutting through the stability curve in Figure 3 at, e.g., \(E_{0}=0.06\)). During our testing, we did not find any case where the stability curve follows this behavior. However, we are also not aware of any reason why such a behavior should be forbidden, so we presume that such a case might exist. We compute various FPSs with different values of the vector boson masses \(m=\{1,10,0.1\}\times 1.34\cdot 10^{-10}\,eV\) and self-interaction strengths \(\Lambda_{\rm int}=\{0,10,100\}\). We chose the same parameter values as in [15] to allow for easy comparison. In Figure 5, we show the masses and fermionic radii of all stable FPS configurations in an MR diagram. In Figure 6, we show the mass plotted against the effective gravitational radius \(R_{G}\). 
It is defined as the radius within which \(99\,\%\) of the total rest mass \(N_{\rm f}+N_{\rm b}\) is contained. The stable solutions have been obtained using the stability criterion Eq. (26). We hereafter discuss some general trends and compare the results to those obtained for scalar FBSs. The following analysis should thus be explicitly compared to Figures 2 and 3 in [15]. We find that many of the general conclusions regarding FBSs can also be applied to FPSs. FPSs with small DM-fractions are dominated by the fermionic component, leading to only small changes in the fermionic radius. In the case of DM-dominated FPSs, the solutions behave similarly to pure Proca stars. This leads to higher masses compared to FBSs, since the total gravitational mass of a pure boson star is roughly half that of a Proca star with the same boson mass, as can be seen for the cases where \(m=\{1,0.1\}\times 1.34\cdot 10^{-10}\,eV\). FPSs can thus reach higher total gravitational masses than FBSs with the same DM mass and self-interaction strength. For \(m=1.34\cdot 10^{-9}\,eV\), the bosonic component is concentrated inside the fermionic one and forms a DM core. Even small amounts of DM can have a significant impact on the fermionic radius, since the whole vector field is concentrated entirely inside the NS component. More massive DM particles can thus have larger effects on the fermionic radius than low-mass DM at similar DM-fractions. This is due to the cloud-like structure of low-mass DM. For small DM masses, the majority of the DM will be concentrated outside the NS part, due to its larger correlation length, and will thus have smaller effects on the fermionic radius. The smaller the mass and the larger the self-interaction strength, the more likely the formation of a DM cloud is. The opposite is true for DM core solutions. FPSs tend to produce configurations with larger total masses compared to scalar FBSs. Their halos also extend to larger radii, as can be seen from the gravitational radius in Figure 6. In general, the gravitational radius of FPSs is larger than that of scalar FBSs (compare to Figure 3 in [15]). The larger gravitational radius suggests that FPSs have larger tidal deformabilities than their scalar field counterparts (FBSs) with equal \(m\) and \(\Lambda_{\rm int}\), because objects with larger radii are generally more easily tidally disrupted. This could favor higher vector boson masses compared to the corresponding scalar boson mass in the case of FBSs. A future quantitative analysis of the tidal deformability of FPSs is needed to definitively verify this hypothesis. When considering the gravitational radius of FPSs with small boson masses of \(m=1.34\cdot 10^{-11}\,eV\) (bottom row of Figure 6), the transition between DM-dominated and NS-dominated configurations appears more abrupt than in the FBS case (compare to Figure 5: Relation between total gravitational mass \(M_{\rm tot}\) and fermionic radius \(R_{\rm f}\) for different FPSs. The rows correspond to bosonic masses of \(m=\{1,10,0.1\}\times 1.34\cdot 10^{-10}\,eV\), and the columns correspond to self-interactions of \(\Lambda_{\rm int}=\{0,10,100\}\), respectively. We use the DD2 EOS for the fermionic part. Notice the different scale of the bottom plots. The gray region marks the Buchdahl limit, where no stable NS can exist. Observing only \(R_{\rm f}\) of these systems would appear to violate the Buchdahl limit, even though the FPS as a whole does not. 
Figure 6: Relation between total gravitational mass \(M_{\rm tot}\) and effective gravitational radius \(R_{G}\) for different FPSs. \(R_{G}\) is the radius within which \(99\%\) of the total rest mass is contained. The rows correspond to bosonic masses of \(m=\{1,10,0.1\}\times 1.34\cdot 10^{-10}\,eV\), and the columns correspond to self-interactions of \(\Lambda_{\rm int}=\{0,10,100\}\), respectively. We use the DD2 EOS for the fermionic part. Notice the different scales of the bottom plots. For pure NSs, because the crust has comparatively low density, \(R_{G}\) is significantly smaller than \(R_{\rm f}\) (compare to Figure 5). \(R_{G}\) tends to be larger than for scalar FBSs with equal boson masses and self-interaction strengths (compare to Figure 3 in [15]). Figure 3 in [15]). For example, when starting with a system with a DM-fraction of roughly \(0\%\) or \(80\%\), increasing the DM-fraction by small amounts can significantly impact the total mass and gravitational radius of the combined system. Finally, note the outlier points in Figure 6 for \(m=1.34\cdot 10^{-11}\,eV\) and \(\Lambda_{\rm int}=100\) at roughly \(R_{G}=350\,km\). These are likely to be numerical artifacts and should thus not be regarded as physical. This is to be expected, since for small DM masses and large self-interactions, the numerical solution becomes increasingly difficult. This problem could be avoided by using smaller step-sizes and higher numerical precision, but this would also lead to longer run-times of the code. ### Comparison with Scalar FBS We show MR relations of FPSs and scalar FBSs with fixed DM-fractions \(N_{\rm b}/(N_{\rm b}+N_{\rm f})\). In the left panel of Figure 7, we show different FPSs with constant DM-fractions. The DD2 EOS [91] was used for the NS component. For the vector boson, we chose masses of \(m=\{1,0.1\}\times 1.34\cdot 10^{-10}\,eV\) and no self-interactions. This figure should be explicitly compared to Figure 5 (left panel) in [15], as the same masses and DM-fractions were chosen. The MR curve of a pure NS with the DD2 EOS (black line) is shown as a reference. Depending on the boson mass, FPSs can have an increased or decreased maximum total gravitational mass when vector DM is present. FPSs tend to produce configurations with larger gravitational masses than FBSs with equal parameters (mass, self-interaction and DM-fraction). This is not surprising when considering the scaling relations of pure boson stars and Proca stars, respectively. The gravitational mass scales like \(M_{\rm max}\approx 0.633M_{p}^{2}/m\) for pure boson stars and like \(M_{\rm max}\approx 1.058M_{p}^{2}/m\) for pure Proca stars, where \(m\) is the mass of the scalar/vector boson, respectively. The presence of light bosonic DM can help to increase the total gravitational mass of a NS. This can make EOS which do not fulfill the observational constraints for the maximum NS mass viable again. Vector DM has a larger effect on the gravitational mass than scalar DM, and thus smaller amounts of vector DM are needed to produce an equal increase in the total gravitational mass. In the right panel of Figure 7, we show different FPSs (orange and green lines) and FBSs (blue lines) for different boson masses, no self-interactions and constant DM-fractions \(N_{\rm b}/(N_{\rm b}+N_{\rm f})\). We used the DD2 EOS [91] for the NS component. The parameters were chosen in a way that illustrates the degeneracies that can arise from different DM models or EOS for the NS component. 
For example, FPSs and FBSs with boson masses of \(m=1.34\cdot 10^{-11}\,eV\) (dashed lines) produce virtually indistinguishable mass-radius relations when the FPSs and the FBSs have DM-fractions of \(60\%\) and \(75\%\), respectively. A similar behavior can be seen for the cases where the boson mass is \(m=1.34\cdot 10^{-10}\,eV\) (dot-dashed lines). Here, FBSs with \(15\%\) DM-fraction produce MR curves similar to FPSs with \(20\%\) DM-fraction. In addition, the resulting MR curves are comparable to the curve corresponding to a pure NS with the KDE0v1 EOS [95]. They also match the curve corresponding to an FPS with \(10\%\) DM-fraction and a vector boson mass of \(m=2.24\cdot 10^{-10}\,eV\) (green line). In conclusion, FPSs can produce degenerate results in the MR plane with both FBSs and pure NSs, given that different DM-fractions and EOS are allowed. Additional observables, such as the tidal deformability, are needed to break the degeneracy. However, it seems difficult to prevent degenerate solutions from existing in general, since FPSs themselves can be degenerate with other FPS solutions that have different boson masses and DM-fractions. We further explore the degeneracy between FPS and FBS solutions. In Figure 8, we show the stable FBS and FPS solutions in an MR diagram. We used the scaling relations of the maximum mass for pure boson stars (\(M_{\rm max}\approx 0.633M_{p}^{2}/m\)) and pure Proca stars (\(M_{\rm max}\approx 1.058M_{p}^{2}/m\)) to match the boson masses in a way that both FPSs and FBSs have the same gravitational mass in the pure boson star/Proca star limit. To guarantee matching solutions in this limit, we chose a scalar boson mass of \(m=1.34\cdot 10^{-10}\,eV\) and a vector boson mass larger by a factor of \(1.058/0.633\approx 1.671\), i.e., \(m=2.24\cdot 10^{-10}\,eV\). We find a high degree of similarity between the MR regions of FBSs and FPSs with the scaled masses. This makes both solutions almost indistinguishable. The small differences between the left and right panels of Figure 8 can be attributed to a slightly different grid-spacing used for the initial conditions \(\rho_{c}\), \(\phi_{c}\) (and \(\rho_{c}\), \(E_{0}\)). This can be seen in the MR regions at small total gravitational masses \(M_{\rm tot}<0.5\,M_{\odot}\) and also at radii \(R_{\rm f}>15\,km\). The color shading further reveals a slightly different distribution of DM-fractions for a given \(M_{\rm tot}\) and \(R_{\rm f}\). We expect a similar behavior to hold when considering different scalar and vector boson masses (with zero self-interaction), given that they differ by the same factor of \(\approx 1.671\). This adds further confidence to the observation that FBSs and FPSs might be difficult to distinguish, since a given solution might equally correspond to the other type of system with a different boson mass (or DM-fraction). Similar scaling relations also exist for boson stars and Proca stars in the limit of large self-interactions \(\Lambda_{\rm int}\). A similar matching procedure might therefore be possible when also scaling the self-interaction strength appropriately. An independent measurement of the DM particle mass would break this degeneracy to a certain degree. But it would also be necessary to Figure 8: **Left panel:** Mass-radius diagram displaying the fermionic radius vs. the total gravitational mass for stable FBS configurations with a scalar boson mass of \(m=1.34\cdot 10^{-10}\,eV\) and no self-interaction. 
Each point corresponds to a single configuration and is color-coded according to the DM-fraction \(N_{\rm b}/(N_{\rm b}+N_{\rm f})\). The solid black-white line shows the mass-radius curve for pure fermionic matter, modeled by the DD2 EOS. **Right panel:** Mass-radius diagram displaying the fermionic radius vs. the total gravitational mass for stable FPS configurations with a vector boson mass of \(m=2.24\cdot 10^{-10}\,eV\) and no self-interaction. Each point corresponds to a single configuration and is color-coded according to the DM-fraction \(N_{\rm b}/(N_{\rm b}+N_{\rm f})\). The solid black-white line shows the mass-radius curve for pure fermionic matter, modeled by the DD2 EOS. The vector boson mass was chosen so that in the limit of pure boson stars/Proca stars, the same total gravitational mass is produced. Both diagrams show only marginal differences. Figure 7: **Left panel:** Mass-radius relations of FPSs with the DD2 EOS [91] for vector boson masses \(m=\{1,0.1\}\times 1.34\cdot 10^{-10}\,eV\), no self-interactions and constant DM-fractions \(N_{\rm b}/(N_{\rm b}+N_{\rm f})\). This figure should be compared to Figure 5 (left panel) in [15], as the same masses and DM-fractions were chosen. The orange band marks the observational constraint of J0952-0607 [8], and the percentage numbers denote the respective DM-fractions. **Right panel:** Mass-radius relations of FPSs (orange and green lines) and FBSs (blue lines) with the DD2 EOS for different boson masses, no self-interactions and different DM-fractions. The black lines correspond to pure NSs with the DD2 EOS and the KDE0v1 EOS [95], respectively. FPS and FBS solutions with different masses and DM-fractions can be degenerate with each other, or also degenerate with pure NSs with a different EOS. constrain the self-interaction strength and the DM-fraction through other means, for example using correlations of the DM abundance in the galactic disk (see [96; 97]) or using the bound on the maximal vector field amplitude. The scaling behavior between (fermion) boson stars and (fermion) Proca stars also suggests another application. If it persists for large non-zero self-interactions, it might be possible to use the effective bosonic EOS derived by Colpi et al. [98] also to model (fermion) Proca stars. Since the EOS by Colpi et al. was originally derived for a scalar field, one would then have to scale the boson mass by a factor of \(1.671\) and the self-interaction by an appropriate amount. The necessary scaling for the self-interaction is dictated by the scaling relations for pure boson stars (\(M_{\rm max}\approx 0.22\sqrt{\Lambda_{\rm int}}\,M_{p}^{2}/m\)[98]) and Proca stars (\(M_{\rm max}\approx\sqrt{\Lambda_{\rm int}}\ln(\Lambda_{\rm int})\,M_{p}^{2}/m\)[81]) at large self-interaction strengths. We however note that great care is needed, since Proca stars technically do not exist in the limit of large self-interactions (see the analytical bound on the vector field amplitude Eq. (24)). We plan to study this aspect in the future. 
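For concreteness, the mass matching used for Figure 8 amounts to a one-line rescaling, sketched below in Python (the function name is ours; the self-interaction matching discussed above is left open):

```python
def matched_vector_boson_mass(m_scalar):
    """Vector boson mass whose pure Proca star has the same maximum mass as a
    pure boson star of scalar mass m_scalar, using M_max ~ 0.633 M_P^2/m
    (boson star) and M_max ~ 1.058 M_P^2/m (Proca star)."""
    return (1.058 / 0.633) * m_scalar           # factor of about 1.671

print(matched_vector_boson_mass(1.34e-10))     # ~2.24e-10 eV, as used in Figure 8
```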
### Higher Modes and Different EOS We broaden our analysis to FPSs with different EOS for the fermionic component and to FPSs where the bosonic component exists in a higher mode. Higher modes are usually assumed to be unstable, but as numerical simulations of scalar boson stars have shown [99; 34], higher modes might be dynamically stable when gravitationally interacting in a multi-component system. We therefore start by considering FPSs in the first and second mode in Figure 9. In the left panel of Figure 9, we show the total gravitational mass and the fermionic radius of stable FPS configurations where the bosonic component is in the first mode (as opposed to the ground mode, which is the zeroth mode). The vector boson mass is \(m=1.34\cdot 10^{-10}\,eV\), and the self-interaction is set to zero. We first note that, according to the stability criterion Eq. (26), solutions stable under linear radial perturbations exist at all. This is a non-trivial statement, as higher modes of Proca stars (and also of scalar boson stars) are usually believed to be unstable. Note that our stability analysis does not consider the dynamical stability of the higher modes. They might thus be unstable in non-static scenarios. It is, however, possible that the higher modes of the bosonic part are stabilized through the gravitational interaction with the fermionic part of the FPS. The FPSs in the first mode exhibit higher gravitational masses in the configurations dominated by the bosonic component, compared to FPSs in the zeroth mode (compare to Figure 3). The frequency \(\omega\) of the higher mode is also larger than the frequency of lower modes. This behavior is consistent with earlier works, which studied pure Proca stars analytically [81] and numerically [76] and also observed that higher frequencies lead to larger total gravitational masses. The left panel of Figure 9 shows a number of outlier points at around \(11\,km\) and \(2.3\,M_{\odot}\). These are likely numerical artifacts due to the increased difficulty of finding accurate numerical solutions for higher modes. The right panel of Figure 9 shows stable FPS configurations in the second mode. The vector boson mass is \(m=1.34\cdot 10^{-10}\,eV\) and the self-interaction is set to zero. Here, too, the existence of stable solutions is noteworthy. In the limit of high DM-fractions, the FPSs converge to the solution of pure Proca stars and reach total gravitational masses of roughly \(2.5\) times that of Proca stars in the zeroth mode (compare to Figure 3). In comparison to the case of the first mode (left panel of Figure 9), the quality of the overall solution deteriorates further. We believe the outlier points at roughly \(<13\,km\) and \(1\,M_{\odot}\) to be non-physical numerical artifacts. These outlier points coincide with the solutions in the zeroth mode, which suggests that our solver did not find the second mode in these cases and converged on the zeroth mode instead. Solutions of FPSs in even higher modes should therefore be considered with great care. The difficulty of obtaining accurate numerical solutions is likely to increase further for higher modes. The quality of the solution is however sufficient to gain a qualitative understanding of FPSs in higher modes. In conclusion, higher modes are stable under linear radial perturbations and increase the total gravitational mass of FPSs by substantial amounts. We investigate the effect that different EOS have on FPSs. 
In Figure 10, we use the APR EOS [100] for the fermionic part. We chose a vector boson mass of \(m=1.34\cdot 10^{-10}\,eV\) with no self-interaction for the bosonic part. In the left panel, we notice that the shape of the stability curve (black curve) is affected by the choice of the EOS. On the \(\rho_{c}\)-axis, it converges to a value of around \(7.5\rho_{\rm sat}\). This is higher than the corresponding value of \(\rho_{c}\) when the DD2 EOS is used (compare to Figure 3), because the APR EOS is softer than the DD2 EOS. This means that the nuclear matter is easier to compress and higher central densities can be supported by the EOS. The higher compressibility also manifests itself in smaller NS radii (see the right panel). In the limit of pure Proca stars, the stability curve converges to the same value as it does when the DD2 EOS is used (compare to Figure 3). The MR region shows a similar qualitative behavior as in the DD2 case. The high DM-fraction limit in particular shows a convergence to the solution of pure Proca stars. The APR EOS also allows higher central amplitudes of the vector field \(E_{0}\), compared to the DD2 EOS with equal boson mass and self-interaction strength. Figure 11 shows different FPS configurations where the FSG EOS [91] was used for the fermionic part. For the bosonic part, we used a boson mass of \(m=3.01\cdot 10^{-11}\,eV\) and no self-interaction. Figure 10: **Left panel:** Total gravitational mass of different FPSs as a function of the rest mass density \(\rho_{c}\) and central vector field amplitude \(E_{0}\). The black line corresponds to the stability curve, which separates stable solutions (in the lower left region) from unstable solutions (everywhere else). The qualitative behavior of the stability curve is similar to the case with the DD2 EOS (see Figure 3). **Right panel:** Mass-radius diagram displaying the fermionic radius vs. the total gravitational mass for FPS configurations that are within the stability region displayed in the left panel. Each point corresponds to a single configuration and is color-coded according to the DM-fraction \(N_{\rm b}/(N_{\rm b}+N_{\rm f})\). The solid black-white line shows the mass-radius curve for pure fermionic matter. In both cases, a vector field with a mass of \(m=1.34\cdot 10^{-10}\,eV\) and no self-interactions was considered in addition to the APR EOS [100] for the fermionic part. 
Figure 9: **Left panel:** Mass-radius diagram displaying the fermionic radius vs. the total gravitational mass for stable FPS configurations in the first mode with a vector boson mass of \(m=1.34\cdot 10^{-10}\,eV\) and no self-interaction. Each point corresponds to a single configuration and is color-coded according to the DM-fraction \(N_{\rm b}/(N_{\rm b}+N_{\rm f})\). The solid black-white line shows the mass-radius curve for pure fermionic matter. **Right panel:** Same as in the left panel, but for stable FPS configurations in the second mode. The FSG EOS is a soft EOS and thus reaches higher central densities \(\rho_{c}\) for pure NSs. It is excluded by current observational constraints (see Figure 7), as it cannot produce pure NSs with masses of \(M=2.35^{+0.17}_{-0.17}\,M_{\odot}\)[8]. However, adding DM to the pure NSs can significantly increase the maximum gravitational mass of the combined system. The FSG EOS is then able to reach the observational bound on the maximum NS mass in the presence of DM. In fact, the MR curve of pure NSs with the DD2 EOS is entirely contained within the stability region of the FPSs with the FSG EOS. This again raises the point that some FPS solutions are degenerate with some NS solutions (see Figure 8), when allowing for different DM-fractions and DM masses. To figure out whether and which types of mixed DM-NS systems might exist, it will be crucial to perform sophisticated parameter searches of the system and to obtain more measurements to constrain the DM and NS properties in future studies. ## IV Conclusions In this work, we studied the impact that bosonic dark matter (DM) has on the mass and radius of neutron stars (NSs). DM was modeled as a massive, self-interacting complex vector field. DM was further assumed to only interact gravitationally with the fermionic neutron star matter. We derived the equations of motion describing static spherically symmetric fermion Proca stars (FPSs) and computed their properties numerically. We also found a scaling relation between the frequency, vector field and metric components, and we derived an analytical upper bound on the vector field amplitude. We showed that the presence of the vector field can lead to core-like and to cloud-like solutions. Core-like solutions can increase the compactness of the NS component. For some configurations, observing only the fermionic radius and the total gravitational mass would appear to violate the Buchdahl limit. We found core-like solutions for vector boson masses of \(m\gtrsim 1.34\cdot 10^{-10}\,eV\) and small self-interactions \(\Lambda_{\rm int}=\lambda/8\pi m^{2}\). Cloud-like solutions appeared when \(m\lesssim 1.34\cdot 10^{-11}\,eV\) and \(\Lambda_{\rm int}\) is large. For some small boson masses \(m\lesssim 1.34\cdot 10^{-11}\,eV\), the presence of DM can significantly increase the total gravitational mass while leaving the fermionic radius approximately constant. We computed radial profiles of FPSs and found that the existence of a maximum possible vector field amplitude limits the effect of DM on the NS when the self-interaction \(\Lambda_{\rm int}\) is large. The maximum amplitude implies a maximum possible amount of vector boson DM accretion and could thus be used to set bounds on the DM properties. We also compared FPSs to FBSs with a scalar Figure 11: **Left panel:** Total gravitational mass of different FPSs as a function of the rest mass density \(\rho_{c}\) and central vector field amplitude \(E_{0}\). 
The black line corresponds to the stability curve, which separates stable solutions (in the lower left region) from unstable solutions (everywhere else). **Right panel:** Mass-radius diagram displaying the fermionic radius vs. the total gravitational mass for FPS configurations that are within the stability region displayed in the left panel. Each point corresponds to a single configuration and is color-coded according to the DM-fraction \(N_{\rm b}/(N_{\rm b}+N_{\rm f})\). The solid black-white line shows the mass-radius curve for pure fermionic matter. In both cases, a vector field with a mass of \(m=3.01\cdot 10^{-11}\,eV\) and no self-interactions was considered in addition to the FSG EOS [91] for the fermionic part. field. We used the same parameters as in [15] to simplify the comparison. For stable FPS configurations, we found that many of the general qualitative trends that apply to FBSs also apply to FPSs. However, vector DM leads to higher FPS masses and larger gravitational radii for equal \(m\) and \(\Lambda_{\rm int}\). This could also imply a larger tidal deformability of FPSs compared to FBSs. Also, a measurement of the gravitational radius would favor larger vector boson masses compared to scalar boson masses. For FPS configurations of constant DM-fraction, we found that the effect of vector DM on the NS properties (total gravitational mass and fermionic radius) is larger compared to FBSs with equal DM-fraction, mass \(m\) and self-interaction strength \(\Lambda_{\rm int}\). One therefore needs a larger amount of scalar DM to cause the same effect as vector DM. For different boson masses and DM-fractions, we found that FPSs and FBSs can both be degenerate with each other and also be degenerate with pure NSs with a different EOS. We found an especially high degree of similarity between FBS solutions with no self-interaction and a scalar boson mass of \(m=1.34\cdot 10^{-10}\,eV\) and FPS solutions where the vector boson mass is larger by a factor of 1.671. We expect this similarity to hold also for different boson masses (and also for non-zero self-interactions), as long as the vector boson mass is scaled accordingly by the right factor. These similarities also hint at the possibility of using the effective EOS by Colpi et al. [98] for (fermion) Proca stars. We however note that great care is needed, since Proca stars do not exist in the limit of large self-interactions (see the analytical bound on the vector field amplitude Eq. (24)). The similarities between FBSs and FPSs might also be useful for numerical applications. Scalar (fermion) boson stars are easier to implement and numerically cheaper to solve than FPSs. One could then simply solve the equations for scalar (fermion) boson stars with a re-scaled mass (and self-interaction parameter \(\Lambda_{\rm int}\)) to compute the properties (\(M_{\rm tot}\), \(R_{\rm f}\)) of (fermion) Proca stars. The prevalence of degenerate solutions highlights the importance of measuring additional observables, such as the tidal deformability, to break the degeneracies. We confirmed the existence of higher modes that are stable under first-order radial perturbations. We found that higher modes lead to higher total gravitational masses of the mixed FPS systems. Using FPSs with different EOS for the fermionic part, we explicitly confirmed that for certain DM masses, previously excluded EOS are able to fulfill observational bounds if DM is present. 
Mixed systems of bosonic DM and NS matter can therefore be consistent with all current observational constraints if suitable boson masses and self-interaction strengths are chosen. ###### Acknowledgements. The authors acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 'Strong-interaction matter under extreme conditions'-project number 315477589-TRR 211. CJ acknowledges support by the Hermann-Wilkomm-Stiftung 2023. ## Appendix A Units In this work, we use units in which the gravitational constant, the speed of light and the solar mass are set to \(G=c=M_{\odot}=1\). As a direct consequence, distances are measured in units of \(\approx 1.48\,km\), which corresponds to half the Schwarzschild radius of the Sun (also called the gravitational radius of the Sun). The Planck mass is \(M_{p}=\sqrt{\hbar c/G}\approx 1.1\times 10^{-38}M_{\odot}\). Since \(G=c=M_{\odot}=1\), it follows that \(\hbar\approx 1.2\times 10^{-76}\neq 1\). Boson stars (with a scalar field) are described using the Klein-Gordon equation, which in SI units and flat spacetime reads \((\Box-(mc/\hbar)^{2})\phi=0\). The term \(mc/\hbar\) is the inverse of the reduced Compton wavelength \(\lambda_{c}=\hbar/mc\), which sets the typical length scale for the system even in the self-gravitating case. We assume that the typical length scale of the boson is similar to the gravitational radius \(GM_{\odot}/c^{2}\), which for mass scales of \(\sim 1\,M_{\odot}\) is approximately \(1.48\,km\). With \(m=\hbar/(c\lambda_{c})\), this leads to a mass scale of the bosonic particle of \(1.336\cdot 10^{-10}\,eV\). Previous works, e.g., [31; 15], thus specify the mass of the scalar particle in these units. A mass of \(m=1\) in our numerical code [85] then also corresponds to \(1.336\cdot 10^{-10}\,eV\). This choice of the boson mass automatically leads to boson stars with masses in the range of \(\sim 1\,M_{\odot}\). The same reasoning can also be applied to the case where the boson is a vector boson. This is valid since all components of a vector field also fulfill the Klein-Gordon equation individually.
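As a cross-check of these numbers, the conversion can be reproduced with a few lines of Python (the SI constants are our inserted values):

```python
# Code units: G = c = M_sun = 1. One code unit of length is half the
# Schwarzschild radius of the Sun; a boson with m = 1 in code units has a
# reduced Compton wavelength equal to that length.
G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30   # SI: m^3/(kg s^2), m/s, kg
HBAR_C = 1.973270e-7                          # eV * m

r_g = G * M_SUN / C**2                        # gravitational radius, ~1476.6 m
m_code_unit_in_ev = HBAR_C / r_g              # ~1.336e-10 eV, matching the text
print(r_g / 1e3, m_code_unit_in_ev)
```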
2303.08289
Improving Adversarial Robustness with Hypersphere Embedding and Angular-based Regularizations
Adversarial training (AT) methods have been found to be effective against adversarial attacks on deep neural networks. Many variants of AT have been proposed to improve its performance. Pang et al. [1] have recently shown that incorporating hypersphere embedding (HE) into the existing AT procedures enhances robustness. We observe that the existing AT procedures are not designed for the HE framework, and thus fail to adequately learn the angular discriminative information available in the HE framework. In this paper, we propose integrating HE into AT with regularization terms that exploit the rich angular information available in the HE framework. Specifically, our method, termed angular-AT, adds regularization terms to AT that explicitly enforce weight-feature compactness and inter-class separation; all expressed in terms of angular features. Experimental results show that angular-AT further improves adversarial robustness.
Olukorede Fakorede, Ashutosh Nirala, Modeste Atsague, Jin Tian
2023-03-15T00:35:03Z
http://arxiv.org/abs/2303.08289v1
# Improving Adversarial Robustness with Hypersphere Embedding and Angular-Based Regularizations ###### Abstract Adversarial training (AT) methods have been found to be effective against adversarial attacks on deep neural networks. Many variants of AT have been proposed to improve its performance. Pang et al. [1] have recently shown that incorporating hypersphere embedding (HE) into the existing AT procedures enhances robustness. We observe that the existing AT procedures are not designed for the HE framework, and thus fail to adequately learn the angular discriminative information available in the HE framework. In this paper, we propose integrating HE into AT with regularization terms that exploit the rich angular information available in the HE framework. Specifically, our method, termed Angular-AT, adds regularization terms to AT that explicitly enforce weight-feature compactness and inter-class separation, all expressed in terms of angular features. Experimental results show that Angular-AT further improves adversarial robustness. Olukorede Fakorede\({}^{*}\) Ashutosh Nirala Modeste Atsague Jin Tian Department of Computer Science, Iowa State University {fakorede, aknirala, modeste, jtian}@iastate.edu Adversarial Robustness, Deep Learning, Hypersphere Embedding, Adversarial Training ## 1 Introduction The application of deep neural networks (DNNs) in various domains has raised skepticism due to the observed vulnerability of DNNs to adversarial examples. Adversarial examples are produced when small, imperceptible but well-crafted perturbations are added to, e.g., natural images, leading to a wrong prediction by the network. This observed flaw in DNNs has opened an active research area aimed at improving the robustness of DNNs against these malicious attacks. Of the many methods proposed to improve the robustness of DNNs against adversarial attacks, adversarial training (AT) [2, 3], which requires the introduction of adversarial examples in the training of robust models, has been found effective. The success achieved by AT has led to an array of AT variants, e.g., [4, 5, 6]. Additionally, efforts [7, 1] have been made to improve the performance of AT. One idea for improving the performance of AT is augmenting AT with hypersphere embedding (HE) [1]. HE involves enforcing discriminative constraints on a hypersphere manifold. This is done by normalizing the linear layer's weights and the penultimate layer's features, and using an additive angular penalty. While the traditional softmax cross-entropy loss has the disadvantage of not explicitly encouraging discriminative learning of features [8, 9], various angular softmax cross-entropy loss functions that incorporate HE have been proposed to address this limitation. Notable implementations in the deep learning literature that utilize HE include CosFace [9], ArcFace [10], and SphereFace [11], among others. A major feature of HE is that it encourages learning angularly discriminative features [9, 10, 11]. While Pang et al. [1] have demonstrated that incorporating HE improves the performance of the AT methods by directly applying the HE described in CosFace [9] to existing adversarial training methods, e.g., standard AT [3], ALP [4], and TRADES [6], we observe that these existing AT variants were not originally designed for the HE framework and do not take advantage of the abundant angular information. In this paper, we propose a new training objective that integrates HE into AT and exploits the rich angular information in the HE framework. 
Our work is the first to address adversarial robustness exclusively using angular information. Our proposed Angular-AT objective consists of an angular softmax-cross-entropy loss plus two regularization terms that: (1) encourage weight-feature compactness by explicitly minimizing the angle between an adversarial feature vector and the weight vector corresponding to the true class, and (2) encourage inter-class separation by maximizing the angles among class weight vectors. Lastly, we perform extensive experimental studies to show the effectiveness of the proposed method. ## 2 Related Works ### 2.1 Hypersphere Embedding The standard softmax cross-entropy loss has been argued to lack sufficient feature discriminative power, especially when deployed in deep face recognition models [11]. To address this limitation, variants of HE, which improve intra-class compactness and inter-class variance, were introduced [9, 10, 8]. From a geometrical perspective, HE imposes discriminative constraints on a hypersphere manifold and improves the learning of angular discriminative features. Popular HE implementations include CosFace [9], ArcFace [10], and SphereFace [11]. ### 2.2 Adversarial Robustness Many methods have been proposed to defend against adversarial examples [2, 3, 6, 12, 13, 4, 5], among which _AT_[2, 3] is considered the most effective. In their seminal work, Madry et al. [3] formulated adversarial training as a min-max optimization problem as follows: \[\min_{\mathbf{\omega}}\mathbb{E}_{(\textbf{x},y)\sim\mathcal{D}}\left[ \max_{\|\delta\|_{p}\leq\epsilon}L_{\mathbf{\omega}}(\textbf{x}+\delta,y)\right] \tag{1}\] where \(L_{\omega}(.)\) is the loss function, \(\omega\) are the model parameters, \(y\) is the label of **x**, and \(\delta\) denotes an adversarial perturbation constrained by the radius \(\epsilon\). The inner maximization typically utilizes the Projected Gradient Descent (PGD) attack to craft adversarial examples. The outer minimization minimizes the high loss induced by the adversarial examples. Prominent variants of AT include [6, 5, 4], to cite a few. Various ideas such as [7, 14, 1] have also been explored to boost the performance of AT. Pang et al. [1] have incorporated HE into adversarial training to boost adversarial robustness. Our work follows [1] in integrating HE into AT. While [1] directly introduces HE into existing AT variants such as PGD-AT [3], TRADES [6], and ALP [4], we introduce a novel adversarial training objective that exploits the rich angular information available on the hypersphere manifold. ## 3 Notation and Preliminaries We denote the training set as \(\mathcal{D}=\{\textbf{x}_{i},y_{i}\}_{i=1}^{d}\), where \(\textbf{x}_{i}\in\mathcal{X}\subseteq\mathbb{R}^{n}\) represents a feature vector, \(n\) is the dimension of the feature vectors, \(y_{i}\in\{1,\cdots,K\}\) where \(K\) represents the number of labels, and \(d=|\mathcal{D}|\). For a feature vector \(\textbf{x}\in\mathbb{R}^{n}\), \(\|\textbf{x}\|_{p}=(\sum_{i=1}^{n}|\textbf{x}_{i}|^{p})^{\frac{1}{p}}\). We define the \(\epsilon\)-neighborhood of **x** as \(B_{\epsilon}(\textbf{x})=\{\textbf{x}^{\prime}\in\mathcal{X}:\|\textbf{x}^{ \prime}-\textbf{x}\|_{p}\leq\epsilon\}\). 
Let \(f_{\omega}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{K}\) denote a deep learning classifier with model parameters \(\omega\) that produces the output \[f_{\omega}(\textbf{x})=\mathbb{S}(\textbf{W}^{T}\textbf{z}+b) \tag{2}\] where \(\textbf{z}=\textbf{z}(\textbf{x};\omega)\) denotes the extracted features in the penultimate layer with the model parameters \(\omega\), the matrix **W** = \((\textbf{W}_{1},...,\textbf{W}_{K})\) is the weight matrix, and \(b\) is the bias term in the linear layer. \(\mathbb{S}(.)\): \(\mathbb{R}^{K}\rightarrow\mathbb{R}^{K}\) is the softmax function. ### 3.1 Hypersphere Embedding HE typically involves four operations: setting the bias term in eq. (2) to zero, weight normalization (WN), feature normalization (FN), and setting an additive angular margin (AM). In eq. (2), when \(b=0\), \(\textbf{W}^{T}\textbf{z}+b\) becomes \(\textbf{W}^{T}\textbf{z}=(\textbf{W}_{1}^{T}\textbf{z},...,\textbf{W}_{K}^{ T}\textbf{z})\). The inner product \(\textbf{W}_{k}^{T}\textbf{z}=\|\textbf{W}_{k}\|\|\textbf{z}\|\cos\mathbf{\theta}_{k}\), where \(\mathbf{\theta}_{k}\) is the angle between **z** and **W\({}_{k}\)**. The WN and FN operations are computed as follows: \[\text{WN:}\widetilde{\textbf{W}_{k}}=\frac{\textbf{W}_{k}}{\|\textbf{W}_{k} \|},\ \ \text{FN:}\widetilde{\textbf{z}}=\frac{\textbf{z}}{\|\textbf{z}\|}. \tag{3}\] After applying the FN and WN operations, we have that \(\widetilde{\textbf{W}}^{T}\widetilde{\textbf{z}}=(\widetilde{\textbf{W}}_{1} ^{T}\widetilde{\textbf{z}},...,\widetilde{\textbf{W}}_{K}^{T}\widetilde{ \textbf{z}})=(\cos\mathbf{\theta}_{1},...,\cos\mathbf{\theta}_{K})\). \(\cos\mathbf{\theta}_{k}\) represents the logit for class \(k\), and \(\mathbf{\theta}_{k}\) is the angle between the feature **z** and the class weight \(\textbf{W}_{k}\). Let \(\cos\mathbf{\theta}\) denote \((\cos\mathbf{\theta}_{1},...,\cos\mathbf{\theta}_{K})\). For a neural network with hypersphere embedding, we rewrite eq. (2) as: \[\widetilde{f}_{\omega}(\textbf{x})=\mathbb{S}(\widetilde{\textbf{W}}^{T} \widetilde{\textbf{z}})=\mathbb{S}(\cos\mathbf{\theta}), \tag{4}\] where \(\widetilde{f}_{\omega}(\textbf{x})\) is the output of the neural network with hypersphere embedding, which we shall refer to as an HE-DNN from now on. Wang et al. [9] proposed training the HE-DNN with the following cross-entropy loss with angular margin: \[L_{CE}(\widetilde{f}_{\omega}(\textbf{x}),y)=-\textbf{1}_{y}^{T}\log\ \mathbb{S}(s\cdot(\cos\mathbf{\theta}-m\cdot\textbf{1}_{y})), \tag{5}\] where the hyperparameter \(s>0\) is a scaling parameter for improving numerical stability during training [15, 1], and \(m\) is the angular margin. ## 4 Proposed Method It can be observed from eq. (4) that the output logits and the resulting posterior probabilities of an HE-DNN classifier depend on the angles between the normalized weight vectors of the linear layer and the normalized feature vector in the penultimate layer. Similarly, it can be argued that an adversarial attack crafted on an HE-DNN attacks these angles. Given an example \(x\) with label \(y\), the goal of an adversarial attack is to craft an adversarial example \(x^{\prime}\) from \(x\) that fools the classifier into classifying \(x^{\prime}\) as \(y^{\prime}\) such that \(y^{\prime}\neq y\). Consider a binary HE-DNN classifier with a single output such that the cross-entropy loss aims to maximize \(\widetilde{\textbf{W}}^{T}\widetilde{\textbf{z}}=\cos\theta\) on input \(x\) with label \(y=2\). If \(x\) is correctly classified, then \(\cos\theta>0\). 
However, the adversarial goal of crafting \(x^{\prime}\) becomes making \(\cos\theta<0\), thereby attacking the angle \(\theta\) between the normalized feature vector \(\widetilde{\textbf{z}}\) and the weight vector \(\widetilde{\textbf{W}}\). Given that the angles between the feature vector and the weight vectors contain abundant discriminative information [10, 16, 17], and that adversarial attacks attack these angles, we propose a regularization term that directly encourages weight-feature compactness, more specifically by minimizing the angle between the adversarial feature vector and the weight vector corresponding to the ground-truth label \(y\). In addition, prior work [18] has argued for strong connections between adversarial robustness and inter-class separability. We therefore propose an additional angular-based regularization term that improves inter-class separability. ### Weight-Feature Compactness Generating adversarial examples involves minimizing a model's confidence on an input example w.r.t. its true class. Thus, in an HE-DNN, the output logit (cosine value) of an input corresponding to its true label is degraded by an adversarial attack. The lower cosine value occasioned by an adversarial attack corresponds to a larger angle between the feature embedding of the adversarial input and the weight vector of the true label, and consequently a smaller angle between the feature embedding and the weight vector of a wrong class. Hence, there exists a connection between weight-feature angular compactness and robustness. We provide a geometrical illustration of weight-feature angular compactness in Fig. 1. To improve robustness, we utilize a regularization term that encourages minimization of the angle between the adversarial feature embedding and the weight vector corresponding to the ground-truth label \(y\). We define the following regularization term to achieve this goal: \[l_{wfc}=[\arccos(\widetilde{\mathbf{W}}_{y}^{T}\cdot\widetilde{\mathbf{z^{\prime}}})]^{2}=(\boldsymbol{\theta}_{y}^{\prime})^{2} \tag{6}\] where \(\boldsymbol{\theta}_{y}^{\prime}\) is the angle between the feature embedding of the adversarial example \(\mathbf{x^{\prime}}\) and the weight vector \(\boldsymbol{W}_{y}\) of the true class \(\boldsymbol{y}\). ### Inter-Class Separation Here we consider inter-class separation in the HE-DNN. The weight matrix \(\boldsymbol{W}\) in the fully connected layer of a neural network conceptually represents the various class centers [9, 10]. By penalizing small angles between these class centers on the hypersphere, we aim to improve the angular discriminability and the robustness of the HE-DNN to adversarial attacks. We propose to uniformly distribute the class centers around the unit hypersphere. To achieve this goal, we draw on the well-known Tammes problem [19, 20], which attempts to uniformly arrange n points on a unit sphere by maximizing the minimum pairwise distance. Using an idea similar to the Tammes problem, we penalize minimal angles between class centers by explicitly maximizing the minimum angles between class centers. Given a linear layer weight matrix \(\mathbf{W}=(\mathbf{W}_{1},...,\mathbf{W}_{K})\), where \(\mathbf{W}_{k}\) corresponds to the weights for class k, we aim to maximize the angles between each close pair of weight vectors. 
We minimize the maximum cosine similarity between pairwise weight vectors by the following regularization term: \[l_{sep}=\frac{1}{K}\sum_{i=1}^{K}\max_{j,j\neq i}\mathbf{C}_{i,j} \tag{7}\] where \(\mathbf{C}_{i,j}=\widetilde{\mathbf{W}}_{i}^{T}\cdot\widetilde{\mathbf{W}}_{j}\) represents pairwise similarities, \(K\) is the number of classes, and \(\widetilde{\mathbf{W}}_{i}=\frac{\boldsymbol{W}_{i}}{\|\boldsymbol{W}_{i}\|}\). Globally computing the maximum cosine similarity \(\max_{i,j,i\neq j}\mathbf{C}_{i,j}\) is inefficient; thus, we compute the mean of each vector's maximum cosine similarity in eq. (7). Prior work such as [21] adopted the Tammes problem in non-adversarial settings to uniformly separate prototypes which are _a priori_ positioned on a hyperspherical output. In this work, we uniquely apply Tammes-inspired uniform separation to improve the uniform separation of class identities in the linear layer of HE-DNNs. ### Training Objective We combine the regularization terms described in sections 4.1 and 4.2 with the loss function described in eq. (5) to arrive at the following Angular-AT training objective: \[\min_{\omega}\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}\Big{\{}L(\widetilde{f}_{\omega}(\mathbf{x^{\prime}}),y)+\alpha l_{wfc}+\beta l_{sep}\Big{\}} \tag{8}\] where \(\alpha\) and \(\beta\) are regularization hyperparameters, \(\mathbf{x^{\prime}}\) is an adversarial example crafted by perturbing the input example \(\mathbf{x}\) using the PGD attack, and \(y\) is the true label of \(\mathbf{x}\). ## 5 Experiments We experimentally verify the effectiveness of our method, and compare the robustness obtained with the state-of-the-art defenses on _CIFAR10/100_ and _TinyImagenet_. **Baselines.** We compare our proposed Angular-AT method against the two best performing defenses that are based on HE-DNN: PGD-AT-HE and TRADES-HE [1]. For completeness, we also compare against the original defenses PGD-AT [3] and TRADES [6], which are based on standard DNNs. **Defense settings.** We utilize ResNet18 (_CIFAR100_ and _TinyImagenet_) and Wideresnet-34-10 (_CIFAR10_) [22] for classification. The models are trained for 100 epochs, using mini-batch gradient descent with momentum 0.9, batch size 128, and weight decay 3.5e-3 (ResNet-18) and 5e-4 (Wideresnet-34-10). The learning rates are set to 0.01 and 0.1 on ResNet-18 and Wideresnet-34-10 respectively. In both cases, the learning rates are decayed by a factor of 10 at the 75th, and then the 90th, epoch. The adversarial examples used for training are obtained by perturbing each image using the PGD attack, setting the perturbation \(\epsilon\) = 0.031, the perturbation step size to 0.007, and the number of iterations to 10. Figure 1: The figure on the left shows the nearness of a feature vector \(\boldsymbol{z}\) to the weight vector of the ground truth. The figure on the right depicts how the resulting feature vector of the corresponding adversarial example, \(\boldsymbol{z^{\prime}}\), is pushed towards the weight vector of a wrong class \(\boldsymbol{y^{\prime}}\neq\boldsymbol{y}\), thereby increasing the angle between the feature vector of the adversarial example and the weight vector of the true label. **Hyperparameter settings.** For the baselines PGD-AT-HE and TRADES-HE, \(m\) and \(s\) are respectively kept at 0.2 and 15.0, as described by the authors [1]. For our method, we set \(m\) = 0 and the scale \(s\) = 15.0. 
The values of the regularization hyperparameters \(\alpha\) and \(\beta\) are heuristically determined, and the values yielding the best results after multiple experiments were selected. Hence, \(\alpha\) and \(\beta\) are set to 0.55 and 0.48 respectively. **Evaluation settings.** We evaluated our defense under _white-box attack_ threats including PGD-20/500 [3], CW (the \(L_{inf}\) version of the CW loss optimized by PGD20) [23] and AutoAttack [24]. The perturbation size is set to \(\epsilon\) = 0.031, with step size \(\eta\) = 0.003. In addition, we evaluated the proposed defense on the SPSA [25] attack (a strong query-based _black-box attack_), with a perturbation size of 0.001 (for gradient estimation), a sample size of 128, 80 iterations, and a learning rate of 0.01. Experiments were reproduced four times with different random seeds; the mean and standard deviation were subsequently computed. The results are reported as mean \(\pm\) std in the tables. The results, as reported in tables 1-3, show that our method significantly improves on the baselines. **Ablation Studies.** We study the contribution of each regularization term to the robustness against PGD, CW, and AA attacks, using CIFAR10 on WRN-34-10. Our observations are reported in table 4. Training the HE-DNN using only \(L(\widetilde{f}_{\omega}(x^{\prime}),y)\) yields good performance on PGD attacks, but weaker performance on stronger attacks like CW and AA. \(L(\widetilde{f}_{\omega}(x^{\prime}),y)\) + \(l_{wfc}\) yields significantly better performance on CW and AA attacks, slightly at the expense of PGD attack robustness. The \(l_{sep}\) term improves robustness to PGD attacks, and therefore compensates for the drop in PGD robustness caused by the \(l_{wfc}\) term. A minimal code sketch of the two regularization terms is given below.
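For concreteness, the following is a minimal PyTorch sketch of the two regularization terms (eqs. 6 and 7); the tensor names (`feat_adv` for penultimate-layer features of adversarial examples, `W` for the \(n\times K\) linear-layer weight matrix) are illustrative assumptions and are not taken from the authors' released code.

```python
# Minimal sketch of the Angular-AT regularizers; names are illustrative.
import torch
import torch.nn.functional as F

def weight_feature_compactness(feat_adv, W, y):
    # l_wfc (eq. 6): squared angle between the normalized adversarial
    # feature and the normalized weight vector of the true class y.
    z = F.normalize(feat_adv, dim=1)          # FN: z~ = z / ||z||
    w = F.normalize(W, dim=0)                 # WN: columns are class weights
    cos = (z * w[:, y].t()).sum(dim=1).clamp(-1 + 1e-7, 1 - 1e-7)
    return torch.acos(cos).pow(2).mean()

def inter_class_separation(W):
    # l_sep (eq. 7): mean over classes of the maximum pairwise cosine
    # similarity, excluding the (unit) self-similarity on the diagonal.
    w = F.normalize(W, dim=0)
    C = w.t() @ w                             # K x K similarity matrix
    C = C - 2.0 * torch.eye(C.size(0), device=C.device)
    return C.max(dim=1).values.mean()
```

In the full objective of eq. (8), these two terms would simply be weighted by \(\alpha\) and \(\beta\) and added to the angular cross-entropy evaluated on the adversarial examples.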
2304.14607
A Brief Study of Privacy-Preserving Practices (PPP) in Data Mining
Data mining is the process of extracting interesting patterns or knowledge from very large databases. Data mining also opens new risks to privacy and data security. One of the most significant themes in this research field is privacy-preserving data mining (PPDM): the study of protecting sensitive data and securing sensitive mined pieces of information without sacrificing the utility of the data in a distributed environment. The knowledge extracted by the analysis can be rules, clusters, meaningful patterns, trends or classification models. Privacy breaches occur during the communication and aggregation of data. So far, many effective methods and techniques have been developed for privacy-preserving data mining, but they incur information loss and side effects on data utility, and data mining effectiveness is downgraded. With the focus on the effectiveness of data mining, privacy and correctness should be improved and the cost reduced.
Dhinakaran D, Joe Prathap P. M
2023-04-28T03:24:17Z
http://arxiv.org/abs/2304.14607v1
# A Brief Study of Privacy-Preserving Practices (PPP) in Data Mining ###### Abstract Data mining is the process of extracting interesting patterns or knowledge from very large databases. Data mining also opens new risks to privacy and data security. One of the most significant themes in this research field is privacy-preserving data mining (PPDM): the study of protecting sensitive data and securing sensitive mined pieces of information without sacrificing the utility of the data in a distributed environment. The knowledge extracted by the analysis can be rules, clusters, meaningful patterns, trends or classification models. Privacy breaches occur during the communication and aggregation of data. So far, many effective methods and techniques have been developed for privacy-preserving data mining, but they incur information loss and side effects on data utility, and data mining effectiveness is downgraded. With the focus on the effectiveness of data mining, privacy and correctness should be improved and the cost reduced. Data Mining (DM), Privacy-Preserving DM (PPDM), Privacy and Information Security. ## I Introduction Data mining is one of the most rapidly expanding fields in the computer industry; it deals with discovering useful and interesting patterns hidden in huge amounts of data stored in various data sources [1]. Data mining is a versatile field bringing together techniques from database technology, statistics, information retrieval (IR), artificial intelligence and machine learning, pattern recognition, neural networks, knowledge-based systems, high-performance computing, and data visualization to address data problems. It is used to extract valuable information for future prediction and improvement [2]. Data mining plays an essential role in business, financial, educational and health organizations, and it can uncover sensitive information from their data, which may cause great harm if that information becomes known to a third party. From the perspective of the organization, mining is useful for forecasting and improvement. Therefore, there is a need to prevent the disclosure of private data and to know what is considered sensitive in any given context. For this reason, many efforts have been devoted to the problem of privacy preservation in data mining [3]. Thus, several privacy-preserving techniques combined with protection-safeguarding systems have been developed. ### Concerns of Data Mining A. Privacy concern Privacy is focused on confidentiality, where information about individuals is not disclosed to others [4], hence providing less information about individual users while learning a larger quantity of statistical information. Data owners such as customers, employees, and social media users are afraid to provide their data because an unauthorized person may access their sensitive information and use it in an unethical manner which may cause them harm. Privacy is realized primarily by cryptography, anonymization, obfuscation and differential privacy. 1. Cryptography By using cryptographic algorithms, the data are transformed from one form to another; the transformed data are exact and protected. Cryptography has a limitation: it fails when multiple parties are involved. 
2. Anonymization Anonymization involves either encryption or removal of an individual's identifiable information. Limitations of anonymization are a heavy loss of information and a high possibility of linking attacks [5]. 3. Obfuscation Obfuscation is a procedure that modifies the information so as to hide it. It is a form of data refinement which transforms an individual's data into imprecise data, making it difficult to recognize. Limitations of obfuscation are the loss of the individual's information, and that while reverse-engineering an obfuscated program becomes intricate and protracted, obfuscation does not essentially make it impossible. 4. Differential Privacy Differential privacy requires that computations be insensitive to changes in any particular individual's record, thereby limiting data leaks through the results. The privacy-preserving interface guarantees explicitly safe access to the data and does not require any security expertise from the data miner. Using differential privacy techniques, we can maximize the accuracy of queries over statistical data and reduce the probability of identifying an individual's data. B. Security Concern Security means keeping unauthorized users from learning anything about the original data. Organizations hold a great deal of personal information about employees, customers, patients, etc., yet they often do not have adequate security frameworks in place to protect this data; in many situations hackers have accessed and stolen the personal information of clients. Security is guaranteed principally through access control mechanisms, for example, authentication and authorization [6]. Security alone cannot put a stop to privacy disclosure: for example, if one of the users decrypts the datasets, the entire datasets become visible, leading to a loss of privacy. ## II Privacy Preserving Data Mining Privacy-preserving data mining is the study of accomplishing data mining objectives without compromising the privacy of the data owners. Given a huge amount of data, it is possible to learn a great deal of information about individuals from a group of people's data. Privacy-preserving DM is concerned with protecting individual/sensitive data without relinquishing the utility of the original data [3]. For example, suppose several hospitals wish to identify valuable aggregate information or knowledge about a specific disease from their patients' reports, while none of the hospitals is prepared to expose patient data in light of privacy acts. Hence, they have to depend on privacy mechanisms over their distributed database to obtain the needed information. PPDM started with the work of Agrawal and Srikant, which achieved prominence in the data mining research community. ### Aim of PPDM algorithm People are well aware of privacy invasions on their sensitive data and are very reluctant to share sensitive pieces of information with others. Hence the aims of PPDM are to: * Preserve the privacy of each party's sensitive data while the parties obtain useful information from the complete dataset. * Derive valuable information from the available sanitized data. * Be robust to the several DM techniques. * Not compromise the access to, and use of, non-sensitive data. * Allow the data miner to obtain accurate mining results without being provided with the original data. ### Common Approaches to protect privacy * Restrict access to the records, for example by adding authentication such as certificates to data entries. 
* Replace identities with pseudonyms or special codes, so that sensitive information cannot be traced to an individual record. * Lindell and Pinkas (2009) introduced a concept where data is scattered among several locations and these locations cooperate to learn the global data mining results without disclosing the data at their individual locations [7]. * Simply apply data mining techniques over multiple sources independently at each place, never sharing the data, and finally combine the mined results. This approach yields good results locally but fails to give accurate results globally. ### Privacy Preserving structure: Data collected from each individual user (databases / data marts) are aggregated and placed in a common database. Then a data transformation / sanitization process such as blocking, suppression, perturbation, modification, generalization or sampling is performed, in which the data are transformed into a format suitable for analysis. Thus, sensitive data will not be exposed even to dishonest data miners. Finally, data mining algorithms are applied to the transformed database to generate the valuable information. Figure 1: Privacy Preserving structure – Trusted Process ### Privacy Attacks 1. Linking attack Identifying a person through a range of fields or characteristics, for example, the combination of postal district, age, sex, or other seemingly unimportant data. An intruder can match anonymized data with non-anonymized data in a different dataset, leading to a privacy breach. 2. Homogeneity Attack This attack exploits the situation where all of the values of a sensitive attribute within a set of k records are identical. In such a case, although the data has been k-anonymized, the sensitive value for that set of k records can be predicted. 3. Background Knowledge Attack This attack uses a relationship between one or more quasi-identifier attributes and the sensitive attribute to reduce the set of possible values for the sensitive attribute. 4. Deduction Attack The attacker can use data mining procedures, for example association rules and Bayesian reasoning, to launch an attack and illegitimately gain private information about individuals by accessing some public information or the output of some computation based on the personal data of individuals. 5. Correlation Attack An attack with background knowledge or auxiliary information on correlational data has a high chance of acquiring private data, consequently violating privacy. ## III Classification of PPDM Techniques ### Data Distribution Data distribution is a process where data are partitioned horizontally or vertically. 1. Horizontally partitioned data This involves placing different rows/tuples of the database into diverse tables. For example, people aged under 18 are stored in a Children table, while people aged 18 or over are stored in an Adults table. The two partition tables are then Children and Adults, while a view with a union might be created over both to provide a complete view of all people. 2. Vertically partitioned data This involves the creation of tables with an entity of selected attributes/columns from the original table and using additional tables to store the remaining attributes/columns of the original table [8]. 
Normalization likewise involves this splitting of columns across tables, but vertical partitioning goes further than that and partitions columns even when they are already normalized. ### Data Modification Altering the data contained in the database using various methods. Fig. 2: Privacy Preserving structure – Un-trusted Process 1. Perturbation - substituting the column data contained in the table by different data, for example substituting zero with one and one with zero; this process is said to be adding noise. 2. Blocking - substituting the column data in the table by a special symbol "?". 3. Swapping - exchanging the column data within the table. 4. Sampling - selecting only a sample of data from the data pool. 5. Encryption - converting plain data into cipher data using various cryptographic techniques. C. Data mining algorithm The third aspect of the classification of PPDM techniques is the data mining algorithm that is applied to the transformed data to obtain the useful pieces of information that were concealed beforehand. The mining algorithms include classification mining, association rule mining, clustering, Bayesian networks and so on. D. Data or Rule Hiding Data - protecting sensitive data values by concealing the data contained in the data pool. Sensitive data such as name, identity, and address from the original dataset that can be linked, directly or indirectly, to a distinct individual are hidden. Rule - protecting confidential patterns or information derived from data analysis by hiding the corresponding rules. E. Privacy Preservation This refers to the procedures that are used to protect privacy. Furthermore, to preserve privacy, data transformation ought to be done carefully in order to achieve high data utility. Privacy preservation is classified into five categories: * Anonymization * Perturbation * Cryptographic based PPDM * Randomized response based PPDM * Condensation Based **IV. PPDM TECHNIQUES** A. Anonymization Anonymization refers to classifying the sensitive information of the data owner and hiding it. Privacy is questionable if quasi-identifiers are linked to widely available data; such attacks are known as linking attacks. Linking attacks can lead to predicting an individual's data with a higher probability [9]. Common anonymization approaches are: generalization, suppression, swapping, bucketization and randomization. Fig. 3 provides a clear illustration of the anonymization technique, where the attribute Name has been removed beforehand. 1. Generalization Substitute a value by a less specific yet semantically consistent value [9, 10]. For example, the age of the cashier is 21 in the displayed table; as an outcome of generalization, the age will be generalized into 15-25, which is semantically consistent. Various forms of generalization are full-domain generalization, sibling generalization, cell generalization, sub-tree generalization, and multidimensional generalization. [Table: sample employee records with attributes Gender, Age, Department, Designation and Salary (e.g., Male, 21, Sales, Cashier, 23,430; Female, 24, Merchandise, Merchandiser, 32,056; Male, 43, Sales, Sales Associate; Female, 31, Logistics, Stocking Associate; Male, 27, Merchandise, Merchandiser; Male, 54, Marketing, Manager; Female, 37, Logistics, Stocking Associate), labelled "Original Data", shown alongside their generalized form, in which Gender is suppressed, Age is generalized to intervals such as 15-25 and 35-45, and the leading salary digits are masked, e.g. XX,430.]
2. Suppression Substitute a value with a special symbol, i.e., the value is stifled [11]. For example, the age of the cashier is 21 in the displayed table; as a result of the suppression technique, the age will be suppressed to ##, the value being jammed with the distinct signs ##. Various forms of suppression techniques are rounding, generalization, and using intervals. [Table: the same employee records after suppression, with the suppressed attribute values replaced by ## in every row.] 3. Swapping Exchange of data that might contain sensitive attributes between two groups [12]. For example, exchanging the sensitive attribute Salary of tuple X with that of tuple Y. This technique gives better outcomes and meaningful information, and is very effective for huge datasets. 
4. Bucketization Bucketization is the method of dividing the primary data table into various buckets; one of the buckets comprises just the sensitive attributes, and another bucket contains the rest of the attributes of the original data table. For example, in the displayed table Salary is assumed to be a sensitive attribute; consequently, that attribute is partitioned off and placed in a separate bucket, and the rest of the attributes, namely Age, Gender, Department, and Designation, are placed in an independent table. 5. Randomization Randomization is a technique for perturbing the data supplied to the distributed data mining process so that the true values of sensitive elements are protected from recognition [13] - [15]. Sufficient noise is added to the original records to protect them from recovery. The altered data is statistically indistinguishable from the primary data. Various forms of randomization are: adding random numbers, creating random vectors, and randomly reordering a sequence. Sweeney (2002) proposed k-anonymity, using suppression and generalization to attain k-anonymity, where each individual's record is indistinguishable from at least k-1 others. Releasing such data for analysis reduces the risk of identification when combined with publicly available data. Many innovative approaches have been proposed, such as p-sensitive k-anonymity, t-closeness, (a, k)-anonymity, M-invariance, l-diversity and personalized anonymity [26]. Limitations The accuracy of data analysis on the altered data is reduced; hence this method suffers from heavy information loss. Homogeneity and background knowledge attacks can also expose an individual's data. Two additional major downsides are: first, it could be extremely difficult for the owner of the database to figure out which of the attributes are available and which are absent in external tables; second, the k-anonymity model considers one particular mode of attack, while in real situations there is no justification for why the attacker should not try other approaches. A minimal code sketch of the anonymization operations is given after this section. B. Perturbation The actual record values are replaced with synthetic data values so that the statistical information computed from the perturbed data does not differ from the statistical information computed from the actual data. An intruder cannot accomplish a linking attack or recover sensitive facts from the obtained data. The perturbed data records do not correspond to real record owners; in this way, an attacker cannot recover sensitive data from the published information or perform the sensitive linkages. Since only the statistical properties of the records are retained, the perturbed data containing the individual records are useless to the recipient. Perturbation can be achieved by adding noise, by data swapping, and by artificial data generation. The perturbation technique treats different attributes independently. Since in the perturbation scheme only the distributions are reconstructed, as opposed to the original values, new algorithms ought to be developed which use these reconstructed distributions to perform mining on the data. Thus, for every individual data mining problem, for example association rule mining, clustering, or classification, a new distribution-based data mining algorithm ought to be developed. Limitations * Loss of hidden statistics present in multidimensional data in a distributed database system. * Original data values can't be reconstructed. 
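As a concrete illustration of the anonymization operations described above (generalization of a quasi-identifier to an interval, suppression of an identifying attribute, and partial masking), the following is a minimal sketch; the record values, interval width and masking rule are illustrative assumptions, not the paper's example table.

```python
# Minimal sketch of generalization, suppression and partial masking;
# all values and the interval width are illustrative assumptions.
records = [
    {"gender": "Male", "age": 21, "dept": "Sales", "salary": 23430},
    {"gender": "Female", "age": 24, "dept": "Merchandise", "salary": 32056},
    {"gender": "Male", "age": 43, "dept": "Sales", "salary": 42729},
]

def anonymize(rec, width=10):
    low = (rec["age"] // width) * width
    return {
        "gender": "*",                                # suppression
        "age": f"{low}-{low + width}",                # generalization
        "dept": rec["dept"],
        "salary": f"XX,{rec['salary'] % 1000:03d}",   # partial masking
    }

for rec in records:
    print(anonymize(rec))
```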
### Cryptographic based PPDM Cryptographic techniques apply when several parties work together to compute common results while avoiding the disclosure of sensitive information. The parties involved in such a task may be competitors or untrusted parties, so privacy becomes the primary concern. Cryptographic systems are ideally suited to such situations, where multiple parties work jointly to evaluate results or share non-sensitive mining results while avoiding the disclosure of sensitive information [17]. Cryptographic procedures are useful in such circumstances for two reasons: first, cryptography provides a well-defined model for privacy that includes methods for computing and proving it; second, a broad set of cryptographic algorithms is available with which to express privacy-preserving data mining computations in this domain. The data may be distributed either horizontally or vertically. Vaidya and Clifton built a Naive Bayes classifier for preserving privacy on vertically partitioned data, and proposed a technique for clustering over vertically partitioned data [18]. Each of these techniques depends on an encryption protocol known as Secure Multiparty Computation (SMC) technology. SMC defines two essential adversarial models, namely (i) the semi-honest model and (ii) the malicious model. A semi-honest adversary follows the protocol honestly but may try to infer the secret information of the other parties. In the malicious model, malicious adversaries can actively infer secret information. There exist many solutions for the semi-honest model, but for the malicious model far fewer studies have been made. Undoubtedly, this approach guarantees that the transformed data is exact and secure, but it fails when many parties are involved. Moreover, the final mining results may lead to a privacy loss of individual records. One application of cryptography is a highly secure online auction management system [6]. Merits * Can address the disadvantages of the perturbation technique mentioned above. * Altered data are accurate and protected. Limitations * It fails when many parties are involved. * It is much less efficient than other techniques. ### Randomized response based PPDM This statistical technique was established by Warner and is typically used in the context of altering data according to a probability distribution, for methods such as surveys [13] - [15]. Two models, the related-question model and the unrelated-question model, have been created as solutions to the survey problem. In the related-question model, instead of asking each respondent whether they have property A, the interviewer asks each respondent two related questions, the answers to which are opposites of one another. The collection of data in the randomized response method is carried out in two stages: * Data providers randomize their data and send the randomized data to the data receiver. * The receiver reconstructs the original distribution of the data by using a distribution reconstruction algorithm. The data received from each individual user is modified, but if the number of users is large, the aggregate information of these users can be estimated with a good level of accuracy. The noise components are obtained independently of the data. The original values cannot be recovered, yet the distribution of the original record values can be recovered, as the sketch below illustrates.
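Before formalizing this, a minimal sketch of additive randomization and distribution reconstruction may help; the noise scale and the moment-based estimator here are illustrative assumptions, not prescribed by the paper.

```python
# Minimal sketch of additive randomization (C = A + B) and aggregate
# reconstruction; noise scale and estimator are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
ages = rng.integers(18, 65, size=10_000)         # A: original sensitive values
sigma = 15.0
noise = rng.normal(0.0, sigma, size=ages.size)   # B: noise, independent of A
released = ages + noise                          # C: what the data owner sends

# The receiver cannot invert individual records, but because E[B] = 0 and
# Var[B] is known, aggregate statistics of A can be estimated from C.
est_mean = released.mean()
est_var = released.var() - sigma ** 2
print(f"true mean {ages.mean():.2f}, estimated {est_mean:.2f}")
print(f"true var {ages.var():.1f}, estimated {est_var:.1f}")
```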
Along these lines, if A is the random variable representing the distribution of the original records, B is the random variable representing the noise distribution, and C is the random variable representing the final records, then we have: \(\text{C}=\text{A}+\text{B}\), \(\text{A}=\text{C}-\text{B}\). Typically, the assumption is made that the variance of the added noise is large enough that the original record values cannot easily be guessed from the distorted data. In randomized response, the data is in such a form that the central place cannot tell, with probability better than a pre-defined threshold, whether the data from a user contains truthful information or false information. Although the information from each individual is scrambled, if the number of users is remarkably large, then the aggregate information of these users can be estimated with reasonable accuracy. Such a property is useful for decision tree classification, since decision tree classification depends on aggregate values of a data set rather than on individual data items. Advantages of the randomized response model are: * Randomized response based PPDM is extremely simple and does not need knowledge of the other records in the data. * The noise is independent of the data. * It does not require the complete dataset for perturbation. * It can be applied at data collection time. * It does not require a trusted server holding all the records to perform the anonymization procedure. * It is very basic and doesn't require knowledge of the distributions of the other records in the data. Limitations * It diminishes the utility of the primary data from the mining point of view. * High level of loss of individuals' information. * Not appropriate for multi-attribute databases. * Randomized response based PPDM considers all records equal regardless of their local density. ### Condensation Based Model The condensation based technique constructs constrained clusters in the dataset and then produces synthetic data from the statistics of the constructed clusters. It generates groups of non-homogeneous size from the data, such that it is guaranteed that each record lies in a group whose size is at least equal to its anonymity level. The condensation based technique uses a method that summarizes the data into indefinite clusters of a predefined size; for each cluster, certain statistics are maintained. Each cluster has a size no less than N, which is referred to as the level of that privacy-preserving tactic [19] - [21]. As the level increases, the degree of privacy likewise increases. At the same time, a portion of information is lost, as the approach involves the condensation of a larger number of records into a single statistical group entity. The statistics from each group are then exploited in order to generate the corresponding pseudo-data. Advantages: * This technique uses pseudo-data as opposed to modifications of the primary data, which assists in the better preservation of privacy. * This technique can be efficiently used for classification problems and for data streams, where the data is extremely dynamic. * The condensation based technique compares well with other technologies, as it uses pseudo-data rather than altered data. * There is no compelling reason to overhaul the data mining algorithms, since the pseudo-data has the same format as the original data. 
Limitations * Data mining end results become biased, since considerable information loss occurs due to the condensation of a larger number of records into a single statistical group element. Suppose that the outputs of anonymization approaches are processed by a post-processing operation, for example a mapping function, to produce outputs in another domain. Even in such a situation, the adversary is still unable to extract sensitive information, even if the adversary knows the outputs of the anonymization approaches and the outputs of the post-processing. ## VI Conclusion PPDM intends to shield the secrecy of data owners' sensitive data while the data owners obtain beneficial information from the complete dataset. Obtaining both privacy and data utility is a great challenge. If the focus is on data privacy, then data utility, and hence the performance of the mining results, is degraded; vice versa, if the focus is on data utility, then the privacy of the primary data is lost. Striking a balance between data utility and data privacy remains a major challenge, and it leads many researchers to focus on this domain.
2305.18314
Extending the theory of propagating fluctuations: the first fully relativistic treatment and analytical Fourier-Green's functions
The aperiodic variability ubiquitously observed from accreting black hole X-ray binary systems is generally analysed within the framework of the so-called ``theory of propagating fluctuations''. In this paper we derive the Fourier transforms of the Green's function solutions of the thin disc equations. These solutions suffice to describe all possible solutions through standard convolution techniques. Solutions are found for both Newtonian discs and general relativistic solutions with a vanishing ISCO stress. We use this new relativistic theory to highlight the Kerr black hole spin dependence of a number of observable variability properties of black hole discs. The phase lags, coherence, and power density spectra of Kerr discs are shown to be strong functions of black hole spin. Observations of the aperiodic variability of black hole accretion sources may now, at least in principle, offer a new avenue to directly constrain black hole spins.
Andrew Mummery
2023-05-19T08:42:19Z
http://arxiv.org/abs/2305.18314v1
Extending the theory of propagating fluctuations: the first fully relativistic treatment and analytical Fourier-Green's functions ###### Abstract The aperiodic variability ubiquitously observed from accreting black hole X-ray binary systems is generally analysed within the framework of the so-called "theory of propagating fluctuations". In this paper we derive the Fourier transforms of the Green's function solutions of the thin disc equations. These solutions suffice to describe all possible solutions through standard convolution techniques. Solutions are found for both Newtonian discs and general relativistic solutions with a vanishing ISCO stress. We use this new relativistic theory to highlight the Kerr black hole spin dependence of a number of observable variability properties of black hole discs. The phase lags, coherence, and power density spectra of Kerr discs are shown to be strong functions of black hole spin. Observations of the aperiodic variability of black hole accretion sources may now, at least in principle, offer a new avenue to directly constrain black hole spins. keywords: accretion, accretion discs -- black hole physics - X-rays: binaries ## 1 Introduction Accreting black hole systems generally exhibit pronounced temporal variability, a result of their fundamentally turbulent nature (Balbus & Hawley, 1991). The power spectral densities of the luminosity emergent from accretion discs reveal fluctuations whose root mean square variation on short timescales is found to be linearly proportional to the mean evolving over longer timescales, a property of the log-normal distribution (Uttley & McHardy, 2001; Uttley et al., 2005). Different accreting sources display a remarkable similarity in their variability properties, despite the vast range of both length and time scales involved across a broad population of sources. In any individual source variability is observed over many temporal orders of magnitude: from timescales as rapid as the local dynamical timescale of the innermost disc edge up to many orders of magnitude longer than any physical process which is involved in the direct production of the disc luminosity. The length scales spanned by different sources which show similar variability structure are vast: ranging from the compact discs in Galactic X-ray binaries (e.g., Gleissner et al., 2004), to the very large discs in active galactic nuclei (e.g., Vaughan et al., 2011). This behaviour has a robust observational grounding, and has been confirmed with observations at various different frequencies in black hole accretion disc sources, including AGN in the X-ray (Gaskell, 2004; Vaughan et al., 2011), AGN in the optical (Lyutyi & Oknyanskii, 1987), and X-ray binaries at both X-ray (Gleissner et al., 2004) and optical (Gandhi, 2009) frequencies. We also note that non black hole disc sources, e.g., cataclysmic variables (Scaringi et al., 2012) and young stellar objects (Scaringi et al., 2015), also show the same variability structure. The typical variability structure of an accreting system is the following. The observed power density spectrum has a broad (aperiodic) component which is generally well described by a twice-broken power-law. In the case of some black hole binaries, these power density spectra display a narrow feature peaking in the range \(\sim 0.1\)-\(10\) Hz, known as (type C) quasi-periodic oscillations (QPOs). 
We will discuss only the aperiodic variability of black hole sources in this paper, and not QPOs, which will have different physical origins. Observed light curves of the same sources taken in different energy bands are found to correlate with each other, but "harder" (higher energy) X-ray variability usually lags "softer" (lower energy) X-ray variability (Priedhorsky et al., 1979; Nolan et al., 1981). The magnitude of the hard-soft time-lag depends on Fourier frequency, but is typically found to be of order of 1 per cent of the variability time scale. Some sources on the other hand show negative time lags, with the soft-band variability lagging the hard-band variability (McHardy et al., 2007; Emmanoulopoulos et al., 2011; De Marco et al., 2013). This type of behaviour has been detected in both supermassive and stellar mass black holes. Finally, the variability in emission from different energy bands is found to be coherent at low frequencies, but becomes increasingly incoherent at higher frequencies (Nowak et al., 1999). This observed aperiodic variability structure is rather naturally explained by the so-called "theory of propagating fluctuations" (first put forward by Lyubarskii 1997), in which fluctuations in the disc's alpha parameter (or equivalently surface density) are excited at all radii in the accretion flow. These fluctuations then evolve throughout the disc and are observed as stochastic variability in the source's light curves. As fluctuations are excited at every disc radius, different time scales are injected into the accretion flow, corresponding to the local evolution timescales of the individual fluctuations at each distance from the central object (Lyubarskii 1997; Churazov et al. 2001; Ingram 2015). The propagating fluctuations model naturally explains, for example, the lagging of "hard" (higher energy) X-ray variability behind "softer" (lower energy) bands (Kotov et al. 2001; Ingram & van der Klis 2013). Excitations sourced in the outer, cooler, disc regions first produce softer flares before subsequently propagating inwards, sourcing harder flares in the hotter innermost disc regions. In this model, the power spectral shape of the broad band aperiodic noise depends on both the noise generating process and the response of the accretion flow. The magneto-rotational instability (MRI: Hawley & Balbus 1991; Balbus & Hawley 1998) is the underlying noise generator, which produces variability everywhere in the disc due to the interactions between magnetic fields and a differentially rotating gas. The response of an accretion flow to intrinsic fluctuations is governed by a diffusion equation (Lynden-Bell & Pringle 1974). Solving this diffusion equation for a \(\delta\)-function perturbation (i.e. calculating the Green's function) is an important step in calculating the resulting power spectrum of the mass accretion rate at radii which differ from where the noise originated. On the technical level, simplified Green's functions were used for modelling the propagating fluctuations by Lyubarskii (1997) and Kotov et al. (2001), where the fluctuations were assumed to evolve in an additive manner. The more physical model of _multiplicative_ fluctuations in the accretion flow was first considered by Ingram & van der Klis (2013) and Ingram & Done (2011), but in each of these cases only inward propagation was considered. 
However, fluctuations in (for example) an accretion flow's surface density do not simply evolve inwards towards the central object; they must conserve the flow's total angular momentum. This means that some material must propagate outwards in the flow, soaking up the angular momentum of the material propagating inwards, and fluctuations in the inner disc must therefore affect the properties of the outer disc. This important conceptual point was first analysed in detail by Mushtukov et al. (2018), who developed a framework for analysing the combined effects of inward and outward propagating fluctuations in an accretion flow. Mushtukov et al. (2018) used this new framework to show that this additional outward propagation has potentially important observational effects, including that propagating fluctuations can give rise not only to hard time lags as previously shown, but may also produce _negative_ lags (softer bands lagging harder bands) at high frequencies, a routinely observed effect which had previously been attributed to reprocessing. The Mushtukov et al. (2018) framework involves computing the Fourier transform of the Green's function solutions of the classical thin disc equations (which we shall henceforth call the Fourier-Green's functions). This Fourier-Green's integral has a long history in the literature: it was first written down in the original Lyubarskii (1997) paper, and has since reappeared in various different forms in a number of subsequent works, but has only ever been solved numerically. In this work we present two important advances in the theory of propagating fluctuations. The first is deriving the exact solution of the Fourier-Green's integral, which turns out to be surprisingly simple in its final form. With the exact analytical solutions of the Fourier integral now at hand, various properties of these solutions may be derived. In section 2, for example, we demonstrate that high frequency variability in the mass accretion rate is suppressed as \(\exp(-\Delta f^{1/2})\), where \(\Delta(x,x^{\prime})\) is a function of the magnitude of the difference between the two disc locations \(x\) and \(x^{\prime}\). The high and low frequency asymptotic behaviour of the power spectrum resulting from mass accretion rate variability is also determined, and related to the intrinsic variability in the disc surface density/alpha parameter. On a practical level, knowledge of these exact solutions will rapidly speed up, and improve the accuracy of, the process of fitting analytical models of accretion variability to observational data; the numerical cost of Fourier transforming thin disc Green's functions had previously been substantial. The second, and potentially more important, development is in presenting the first analysis of the Fourier-Green's solutions of the general relativistic thin disc equation. In this paper we solve the Fourier-Green's integral for a general Kerr metric, under the assumption that the dynamical disc stress vanishes at the ISCO. These solutions depend implicitly on the central black hole's spin through their dependence on the spacetime's ISCO radius, and for the first time the effects of the black hole's spin on the observed variability structure of an accretion flow can be examined. 
In the later sections of this paper we demonstrate that a number of observable variability properties of black hole discs are relatively strong functions of black hole spin, and that observations of the aperiodic variability of black hole accretion discs may now, at least in principle, offer a new avenue to directly determine black hole spins. The layout of this paper is the following. In section 2 we derive and analyse the formal solutions of the Fourier-Green's integral. In section 3 we specialise the analysis specifically to the solutions of the Newtonian disc equation, while in section 4 we discuss the corresponding relativistic solutions. In section 5 we introduce the Mushtukov et al. (2018) framework for relating these results to directly observable quantities, before showing in section 6 that the black hole spin imprints strong signals onto the observed aperiodic variability structure of observed light curves. We conclude in section 7, with some technical results presented in Appendices. The reader interested in the application of this analysis for observational constraints on black hole spins may wish to skip directly to section 5. ## 2 Thin disc Green's functions in the Fourier domain: general solution and properties ### The disc evolution equations The evolution of the disc surface density is described by a diffusion-type equation which fundamentally arises from the turbulent transportation of angular momentum within the disc. The evolution equation for a Newtonian theory of gravity was first analysed in detail by Lynden-Bell and Pringle (1974), while the relativistic equation was derived in Balbus (2017). We will introduce the general relativistic form of the evolution equation at this point, as it is simpler to take the Newtonian limit (\(r\gg GM/c^{2}\)) than the alternative. The coordinates used to describe the relativistic thin disc equation are the cylindrical Boyer-Lindquist representation of the Kerr metric: \(r\) (radial), \(\phi\) (azimuthal), and \(z\) (vertical). The governing equation describes the evolution of the azimuthally-averaged, height-integrated disc surface density \(\Sigma(r,t)\). The contravariant four velocity of the disc fluid is \(U^{\mu}\); the covariant counterpart is \(U_{\mu}\). The specific angular momentum corresponds to \(U_{\phi}\), a covariant quantity. There is an anomalous stress tensor present, \(W^{r}_{\phi}\), due to low level disc turbulence, which is a measure of the correlation between the fluctuations in \(U^{r}\) and \(U_{\phi}\) (Eardley & Lightman 1975, Balbus 2017). This is, as the notation suggests, a mixed tensor. \(W^{r}_{\phi}\) serves both to transport angular momentum as well as to extract the free-energy of the disc shear, which is then thermalised and radiated from the disc surface, both assumed to be local processes. Under these assumptions the governing disc equation can be expressed in the following compact form \[\frac{\partial\zeta}{\partial t}=\frac{W^{r}_{\phi}}{(U^{0})^{2}}\frac{\partial}{\partial r}\left(\frac{U^{0}}{U^{\prime}_{\phi}}\left[\frac{\partial\zeta}{\partial r}\right]\right). \tag{1}\] Here the primed notation \({}^{\prime}\) denotes an ordinary derivative with respect to \(r\), and \[\zeta\equiv\frac{r\Sigma W^{r}_{\phi}}{U^{0}}. \tag{2}\] The Newtonian limit corresponds to \(U^{0}=1,U_{\phi}^{\prime}=\sqrt{GM/4r}\). 
Naturally one must specify a functional form of the disc's turbulent stress tensor \(W^{r}_{\phi}\) to derive solutions of this equation; for the remainder of this paper we shall consider stress parameterisations of the form \[W^{r}_{\phi}=w\left(\frac{r}{r_{0}}\right)^{\mu}, \tag{3}\] which is a popular and analytically tractable choice. ### Solution of the Fourier integral The Green's function solutions of both the Newtonian (Lynden-Bell & Pringle, 1974) and general relativistic (Mummery, 2023) thin disc equations are of the general form \[G(x,x_{0},t)=\frac{q(x)}{t}\exp\left(\frac{-g(x)^{2}-g(x_{0})^{2}}{4t}\right)I_{\nu}\left(\frac{g(x)g(x_{0})}{2t}\right), \tag{4}\] where \(G(x,x_{0},t)\) describes the evolution of the variable \(\zeta\) for an initial delta-function spike located at \(t=0,x=x_{0}\). In this expression \(x\) is a radial coordinate, which is typically normalised by some characteristic scale, either the ISCO (in the relativistic case), or the initial radius \(r_{0}\) of the spike. In this expression \(I_{\nu}\) is the modified Bessel function of the first kind, of order \(\nu\). The index \(\nu\) is related to the stress parameterisation through \[\nu=1/(3-2\mu). \tag{5}\] For the particular case of the Newtonian Green's functions these functions are particularly simple: \[q(x)\propto x^{1/4},\quad g(x)\propto x^{1/4\nu}. \tag{6}\] The functions \(g(x),g(x_{0})\) as defined here have dimensions of \(\sqrt{\rm time}\). Physically, the amplitudes of these functions correspond to the (square root of the) timescale with which a perturbation at \(x\) propagates to the inner disc edge. This introduces a natural scale into these expressions, which will be discussed further in the following sections. It will be useful to note however that this solution is in fact a Laplace-mode superposition, of the form (Gradshteyn & Ryzhik, 2007; Lynden-Bell & Pringle, 1974; Balbus, 2017; Mummery, 2023) \[G(x,x_{0},t)=\int_{0}^{\infty}q(x)J_{\nu}(\sqrt{s}g(x))J_{\nu}(\sqrt{s}g(x_{0}))\exp(-st)\,{\rm d}s. \tag{7}\] The function denoted \(J_{\nu}\) is an ordinary Bessel function of the first kind. The Fourier transform of \(G(x,x_{0},t)\), denoted \(\widetilde{G}(x,x_{0},f)\), is defined by the complex integral \[\widetilde{G}(x,x_{0},f)\equiv\int_{0}^{\infty}G(x,x_{0},t)\exp(-2\pi ift)\,{\rm d}t, \tag{8}\] where we have used the fact that \(G(x,x_{0},t<0)=0\). When written in terms of the Laplace mode superposition, this integral becomes \[\widetilde{G}(x,x_{0},f)=q(x)\int_{0}^{\infty}\left[\int_{0}^{\infty}J_{\nu}(\sqrt{s}g(x))J_{\nu}(\sqrt{s}g(x_{0}))\exp(-st)\,{\rm d}s\right]\exp(-2\pi ift)\,{\rm d}t. \tag{9}\] As both integrals converge, we can swap the order of integration: \[\widetilde{G}(x,x_{0},f)=q(x)\int_{0}^{\infty}\left[\int_{0}^{\infty}\exp(-st-2\pi ift)\,{\rm d}t\right]J_{\nu}(\sqrt{s}g(x))J_{\nu}(\sqrt{s}g(x_{0}))\,{\rm d}s, \tag{10}\] which is more easily solved. Performing the \(t\) integral leaves \[\widetilde{G}(x,x_{0},f)=q(x)\int_{0}^{\infty}\frac{J_{\nu}(\sqrt{s}g(x))J_{\nu}(\sqrt{s}g(x_{0}))}{s+2\pi if}\,{\rm d}s. \tag{11}\] By making the substitution \(u=\sqrt{s}\), this integral becomes \[\widetilde{G}(x,x_{0},f)=2q(x)\int_{0}^{\infty}\frac{uJ_{\nu}(ug(x))J_{\nu}(ug(x_{0}))}{u^{2}+\beta^{2}}\,{\rm d}u, \tag{12}\] where \[\beta\equiv(1+i)\sqrt{\pi f}. \tag{13}\] When written in this form the solution of the integral is a standard result, which can be found in the text of Gradshteyn & Ryzhik (2007):
\[\widetilde{G}(x,x_{0},f)=2q(x)\left\{\begin{array}{ll}&I_{\nu}(\beta g(x))K_{\nu}(\beta g(x_{0})),\quad x<x_{0},\\ &\\ &I_{\nu}(\beta g(x_{0}))K_{\nu}(\beta g(x)),\quad x>x_{0}.\end{array}\right. \tag{14}\] In this expression \(K_{\nu}\) is the modified Bessel function of the second kind. The Green's function solutions for the mass accretion rate, denoted \(G_{\dot{M}}\), are of interest in understanding the variability properties of black hole discs, as it is often assumed that variability in the mass accretion rate is directly communicated into variability in the locally emitted flux (e.g., Lyubarskii, 1997; Ingram and van der Klis, 2013; Ingram and Done, 2012; Mushtukov et al., 2018). The mass accretion rate Green's function has the following form (we again write this generally so as to consider both the Newtonian and relativistic solutions simultaneously) \[G_{\dot{M}}(x,x_{0},t)=p(x)\frac{\partial}{\partial x}G(x,x_{0},t). \tag{15}\] In the Newtonian limit the function \(p(x)\) is simple: \[p(x)\propto x^{1/2}. \tag{16}\] In the Fourier domain \[\widetilde{G}_{\dot{M}}(x,x_{0},f) \equiv\int_{0}^{\infty}G_{\dot{M}}(x,x_{0},t)\exp(-2\pi ift)\,{\rm d}t,\] \[=\int_{0}^{\infty}p(x)\frac{\partial}{\partial x}\left[G(x,x_{0},t)\right]\exp(-2\pi ift)\,{\rm d}t,\] \[=p(x)\frac{\partial}{\partial x}\widetilde{G}(x,x_{0},f), \tag{17}\] where in going to the final line we have used the fact that the \(x\) derivative and \(t\) integral commute. We therefore have the general solution \[\frac{1}{2}\widetilde{G}_{\dot{M}}=p(x)\left\{\begin{array}{l}K_{\nu}(\beta g(x_{0}))\,\partial_{x}\left[q(x)I_{\nu}(\beta g(x))\right],\quad x<x_{0},\\ \\ I_{\nu}(\beta g(x_{0}))\,\partial_{x}\left[q(x)K_{\nu}(\beta g(x))\right],\quad x>x_{0},\end{array}\right. \tag{18}\] where we use the notation \(\partial_{x}\equiv\partial/\partial x\). As we shall demonstrate in section 3, this equation further simplifies for the particular case of the Newtonian disc equations. Before we specialise to either the Newtonian or relativistic regimes, we analyse the asymptotic properties of these Fourier-Green's functions. ### Asymptotic properties The asymptotic (\(f\to\infty\) and \(f\to 0\)) properties of \(\widetilde{G}_{\dot{M}}\) can be determined from this general formula. These asymptotic limits should be understood as the limiting behaviour at Fourier frequencies which are significantly larger (or smaller) than the characteristic accretion frequency associated with radius \(x\). The characteristic accretion frequency for a Newtonian solution with perturbation at \(r_{0}\) has the following form (Lynden-Bell & Pringle 1974) \[f_{0}=\frac{1}{t_{\rm acc}(r_{0})}=\frac{(3-2\mu)^{2}}{2}\sqrt{\frac{w^{2}}{GM_{\rm BH}r_{0}^{3}}}. \tag{19}\] (We derive the relativistic analogue of this expression in Appendix A). The amplitudes of the functions \(g(x)\) are \(1/\sqrt{f_{0}}\). #### 2.3.1 High frequency Fourier modes The large frequency limits (\(f\gg f_{0}\)) of the two Bessel functions are the following: \[\lim_{\beta\to\infty}K_{\nu}(\beta g)=\sqrt{\frac{\pi}{2\beta g}}\exp(-\beta g), \tag{20}\] and \[\lim_{\beta\to\infty}I_{\nu}(\beta g)=\sqrt{\frac{1}{2\pi\beta g}}\exp(+\beta g). \tag{21}\]
We therefore have the following simple result, valid in both of the \(x<x_{0}\) and \(x>x_{0}\) regimes: \[\widetilde{G}_{\dot{M}}(x,x_{0},f\to\infty)\sim\frac{p(x)q(x)g^{\prime}(x)}{\sqrt{g(x)g(x_{0})}}\\ \exp(-\sqrt{\pi f}(1+i)\,|g(x)-g(x_{0})|), \tag{22}\] where \(|z|\) denotes the absolute value of \(z\), and \({}^{\prime}\) denotes a derivative with respect to \(x\). We see here that high frequency Fourier modes are exponentially suppressed, with a suppression scale which depends on the inverse of the absolute magnitude of the difference between any two disc radii. Physically, this corresponds to an exponential suppression of the propagation of modes with frequencies higher than the local accretion frequency. It is interesting to note that this expression is symmetric in the sign of \(x-x_{0}\). This is an important result: at high Fourier frequencies (relative to the local accretion frequency), outward propagation of material is equally as important as the inward propagation of material. This is not true at all Fourier frequencies, as we demonstrate in future sections. The fact that the suppression of high-frequency modes is proportional to \(\exp(-\Delta f^{1/2})\) is unsurprising, and is a result of the disc angular-momentum diffusion processes. To see this explicitly, consider the simple one-dimensional diffusion equation \[\frac{\partial\psi}{\partial t}=D\frac{\partial^{2}\psi}{\partial x^{2}}. \tag{23}\] In terms of the Fourier modes (\(\psi=\int_{-\infty}^{+\infty}\widetilde{\psi}\exp(2\pi ift)\,\mathrm{d}f\)), this equation reads \[2\pi if\widetilde{\psi}=D\frac{\partial^{2}\widetilde{\psi}}{\partial x^{2}}, \tag{24}\] with a solution that is well behaved at large \(x\) of \[\widetilde{\psi}(x,f)=A\exp\left(-\sqrt{\frac{\pi f}{D}}(1+i)\,x\right). \tag{25}\] It is clear to see that a high frequency exponential \(\exp(-f^{1/2})\) suppression is a generic property of diffusive systems.

#### 2.3.2 Low frequency Fourier modes

The small frequency limit (\(f\ll f_{0}\)) of \(\widetilde{G}_{\dot{M}}\) can also be understood from the asymptotic properties of the Bessel functions \(K_{\nu}\) and \(I_{\nu}\). In addition we can understand a priori the asymptotic properties in the small frequency limit, as \(f\to 0\) corresponds physically to the integral \[\widetilde{G}_{\dot{M}}(x,x_{0},f\to 0)\to\int_{0}^{\infty}G_{\dot{M}}(x,x_{0},t)\,\mathrm{d}t. \tag{26}\] The integral over all times of the mass accretion rate at radius \(x\), initiated by a perturbation at radius \(x_{0}\), must be equal (as all of the disc material is eventually accreted) to \[\widetilde{G}_{\dot{M}}(x,x_{0},f\to 0)\to\begin{cases}-M_{d},&x<x_{0},\\ \\ 0,&x>x_{0}.\end{cases} \tag{27}\] To prove this more rigorously, consider the small frequency limits of the two Bessel functions: \[\lim_{\beta\to 0}K_{\nu}(\beta g)=\frac{\Gamma(\nu)}{2}\left(\frac{\beta g}{2}\right)^{-\nu}+\frac{\Gamma(-\nu)}{2}\left(\frac{\beta g}{2}\right)^{+\nu}+\mathcal{O}((\beta g)^{2-\nu}) \tag{28}\] and \[\lim_{\beta\to 0}I_{\nu}(\beta g)=\frac{1}{\Gamma(\nu+1)}\left(\frac{\beta g}{2}\right)^{+\nu}+\mathcal{O}((\beta g)^{2+\nu}) \tag{29}\] where \(\Gamma(n)\) is the usual gamma function. It will transpire that we are required to keep the first two terms in the expansion of \(K_{\nu}\) to compute the correct \(f\to 0\) limit of these solutions.
For \(x<x_{0}\) (inward propagating modes) we have \[\left|\widetilde{G}_{\dot{M}}(x<x_{0},f\to 0)\right|\sim\frac{p(x)}{\nu}\left(\frac{1}{g(x_{0})}\right)^{\nu}\frac{\partial}{\partial x}\left[q(x)(g(x))^{\nu}\right], \tag{30}\] i.e., a constant which is independent of \(f\). With the proper normalisation of \(q(x)\) and \(g(x)\) this constant will be equal to the disc mass. For outward propagating modes (\(x>x_{0}\)) we have \[\left|\widetilde{G}_{\dot{M}}(x>x_{0},f\to 0)\right|\sim\frac{p(x)}{\nu}\left(g(x)\right)^{\nu}\frac{\partial}{\partial x}\left[\frac{q(x)}{(g(x))^{\nu}}\right]\] \[+p(x)\frac{\Gamma(-\nu)}{\Gamma(\nu+1)}\left(\frac{g(x_{0})}{4}\right)^{\nu}|\beta|^{2\nu}\frac{\partial}{\partial x}\left[q(x)(g(x))^{\nu}\right]+\mathcal{O}(|\beta|^{2}), \tag{31}\] which means \[\left|\widetilde{G}_{\dot{M}}(x>x_{0},f\to 0)\right|\sim\begin{cases}f^{\nu},&q(x)\propto(g(x))^{\nu},&\nu<1,\\ f^{1},&q(x)\propto(g(x))^{\nu},&\nu>1,\\ f^{0},&\mathrm{otherwise}.\end{cases} \tag{32}\] Unsurprisingly, for the particular case of a Newtonian disc model \(q(x)\propto x^{1/4}\) and \(g(x)\propto x^{1/4\nu}\), and the amplitude of the outward propagating low frequency modes goes to zero as a power-law of frequency in the limit of \(f\to 0\). At low Fourier frequencies (relative to the local accretion frequency), only the inwardly propagating material significantly contributes to the local variability of the accretion rate. As conventional approaches typically neglect outward propagation, they are likely to be most accurate at low frequencies.

### Discontinuity of the mass accretion rate Fourier-Green's function at \(x=x_{0}\)

As can be seen from the above analysis, the properties of the mass accretion rate Fourier-Green's functions must be discontinuous at the location of the perturbation \(x=x_{0}\). This can be proved rather generally by using the Wronskian of the modified Bessel functions (Abramowitz & Stegun, 1965): \[\mathcal{W}\left[I_{\nu},K_{\nu}\right]\equiv I_{\nu}(z)\frac{\mathrm{d}}{\mathrm{d}z}K_{\nu}(z)-K_{\nu}(z)\frac{\mathrm{d}}{\mathrm{d}z}I_{\nu}(z)=-\frac{1}{z}, \tag{33}\] which implies a magnitude of the discontinuity of \[\widetilde{G}_{\dot{M}}(x\to x_{0},x_{0}>x,f)-\widetilde{G}_{\dot{M}}(x\to x_{0},x_{0}<x,f)\\ =p(x)q(x)\frac{\mathrm{d}}{\mathrm{d}x}\ln g(x). \tag{34}\] Physically this discontinuity results from the eventual accretion of all of the disc material through the inner disc edge. The preferred mass flow direction (inwards) fundamentally breaks the \(x<x_{0}\), \(x>x_{0}\) symmetry of the mass accretion rate Fourier-Green's functions.
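Equations (4), (8) and (14) are straightforward to verify numerically. The following minimal Python sketch (not part of the original analysis) evaluates the Fourier-Green's function with scipy's complex-argument Bessel routines and checks it against a brute-force transform of the time-domain Green's function; the Newtonian forms of eq. (6) are assumed, with all proportionality constants set to unity:

```python
import numpy as np
from scipy.special import iv, kv, ive
from scipy.integrate import trapezoid

nu = 1.0/3.0                     # stress index mu = 0, via eq. (5)
g = lambda x: x**(1.0/(4*nu))    # Newtonian forms of eq. (6), constants set to 1
q = lambda x: x**0.25

def G_time(x, x0, t):
    """Time-domain Green's function, eq. (4), in overflow-safe form:
    for real z > 0, I_nu(z) = ive(nu, z)*exp(z), so the exponentials combine."""
    a, b = g(x), g(x0)
    return q(x)/t * np.exp(-(a - b)**2/(4*t)) * ive(nu, a*b/(2*t))

def G_fourier(x, x0, f):
    """Analytic Fourier-Green's function, eq. (14); iv and kv accept
    the complex argument beta = (1 + i) sqrt(pi f) directly."""
    beta = (1 + 1j)*np.sqrt(np.pi*f)
    g_small, g_large = sorted((g(x), g(x0)))
    return 2*q(x)*iv(nu, beta*g_small)*kv(nu, beta*g_large)

# brute-force check of eq. (14) against the defining integral, eq. (8)
x, x0, f = 0.5, 1.0, 0.3
t = np.linspace(1e-4, 200.0, 400_000)
print(trapezoid(G_time(x, x0, t)*np.exp(-2j*np.pi*f*t), t))
print(G_fourier(x, x0, f))       # the two values agree closely (to ~1e-3)

# the exp(-sqrt(pi f)|g(x) - g(x0)|) suppression of high frequency modes,
# eq. (22): this rescaled amplitude flattens to a constant once f >> f_0
f_hi = np.logspace(0.5, 2.0, 6)
print(abs(G_fourier(x, x0, f_hi))*np.sqrt(f_hi)
      * np.exp(np.sqrt(np.pi*f_hi)*abs(g(x) - g(x0))))
```

The same check works for any \(\nu\); only the time-grid resolution needs care, as the time-domain Green's function decays slowly (as a power law) at late times.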
### The mass accretion rate power spectrum

As derived by Mushtukov et al. (2018) the power spectrum of the mass accretion rate at a particular radius, denoted \(S_{\dot{m}}(x,f)\), is given by \[S_{\dot{m}}(x,f)=\int_{x_{\mathrm{in}}}^{x_{\mathrm{out}}}\left(\frac{1}{x^{\prime}}\right)^{2}l(x^{\prime})\left|\widetilde{G}_{\dot{M}}(x,x^{\prime},f)\right|^{2}S_{\Sigma}(x^{\prime},f)\,\mathrm{d}x^{\prime}, \tag{35}\] where \(l(x^{\prime})\) is the radial scale over which the initial perturbations can be considered as coherent with one another, and \(S_{\Sigma}(x,f)\) is the power spectrum of the initial surface density perturbations at radius \(x\). In general it is possible that fluctuations in certain regions of the disc might be coherent over a larger radial scale than in other regions, and so in general \(l(x^{\prime})\) is a function of radius. In this work however we shall assume that all disc radii have the same coherence length. Mushtukov et al. (2018) showed numerically that at high and low frequencies the power \(P(f)=fS_{\dot{m}}(f)\) behaved like a power-law in frequency, assuming that the input surface density power spectrum \(S_{\Sigma}\) was Lorentzian: \[S_{\Sigma}(x,f)=\frac{2p}{\pi}\frac{f_{\mathrm{br}}}{f^{2}+f_{\mathrm{br}}^{2}}, \tag{36}\] where \(p\) and \(f_{\mathrm{br}}\) are the total power and break frequency (simply parameters of the model) at radius \(x\) respectively. As the focus of this work is on single epoch variability the parameter \(p\), the amplitude of the locally driven power in the surface density perturbations, is treated as a simple constant. However, in a real physical system \(p\) has an important property: it scales quadratically with the local average (background) mass accretion rate, \(p\propto\dot{M}^{2}\). This is a property of the propagating fluctuations model, which was first shown by Lyubarskii (1997), and was expanded upon by Mushtukov et al. (2018). This behaviour of \(p\) is important for understanding the variability properties of accreting sources at different stages of their evolution, as it ultimately results in the well known linear flux-rms relation (Uttley & McHardy, 2001; Uttley et al., 2005; see Mushtukov et al., 2018 for a proof of this statement). Finally, in addition to the leading order term presented here, there is formally an additional non-linear term present in equation 35, which arises from local fluctuations of the surface density superimposing on top of existing variability in the accretion flow. Mushtukov et al. (2018) found that the inclusion of these non-linear terms alters the quantitative properties of the local power spectrum at the \(1\%\) level, and does not alter the qualitative properties of the theory. The amplitude of deviation does however grow with the injected variability amplitude, and so this simplification should be kept in mind. These non-linear effects are beyond the scope of the present work, and will not be considered further. In the following sub-sections we rigorously prove various numerical results presented in Mushtukov et al. (2018).

#### 2.5.1 High frequency power spectrum

For \(f\to\infty\) the inward and outward propagating modes both have transfer functions which behave like \[\left|\widetilde{G}_{\dot{M}}(x,x_{0},f\to\infty)\right|\sim\frac{p(x)q(x)g^{\prime}(x)}{\sqrt{g(x)g(x_{0})}}\\ \exp(-\sqrt{\pi f}\left|g(x)-g(x_{0})\right|), \tag{37}\] thus \[S_{\dot{m}}(x,f\to\infty)\sim\left(\frac{p(x)q(x)g^{\prime}(x)}{\sqrt{g(x)}}\right)^{2}\int_{x_{\mathrm{in}}}^{x_{\mathrm{out}}}\left(\frac{1}{x^{\prime}}\right)^{2}\frac{l(x^{\prime})}{g(x^{\prime})}\\ \exp(-2\sqrt{\pi f}\left|g(x)-g(x^{\prime})\right|)S_{\Sigma}(x^{\prime},f)\,\mathrm{d}x^{\prime}, \tag{38}\] which, provided that the high frequency fall-off of \(S_{\Sigma}(f)\) is weaker than \(\exp(-Af^{1/2})\), is an integral which can be performed by Laplace's method. The leading order behaviour is simply \[S_{\dot{m}}(x,f\to\infty)\sim f^{-1/2}S_{\Sigma}(x,f\to\infty). \tag{39}\] This result highlights that at the highest frequencies, the observed variability is dominated by locally driven variability, with exponentially small contributions from distant disc regions. For the particular case of a Lorentzian power spectrum for \(\Sigma\), we have \[P(f\to\infty)\equiv fS_{\dot{m}}(x,f\to\infty)\sim f^{-3/2}. \tag{40}\] This exact behaviour was discovered numerically by Mushtukov et al. (2018).
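The Laplace's-method step leading to eq. (39) can be checked in isolation. In the sketch below the slowly varying parts of the integrand of eq. (38) are replaced by an arbitrary smooth stand-in function \(h(x')\) (an assumption made purely for illustration): at large \(f\) the exponential kernel acts as a nascent delta function of weight \(1/(\sqrt{\pi f}\,g'(x))\), which is the origin of the \(f^{-1/2}\) scaling.

```python
import numpy as np
from scipy.integrate import trapezoid

g  = lambda x: x**0.75            # Newtonian g(x) with nu = 1/3
gp = lambda x: 0.75*x**-0.25      # its derivative g'(x)
h  = lambda x: 1.0/x**2           # smooth stand-in for the slowly varying parts

x, f = 5.0, 1e3
xp = np.linspace(1.0, 50.0, 200_001)
exact   = trapezoid(np.exp(-2*np.sqrt(np.pi*f)*abs(g(x) - g(xp)))*h(xp), xp)
laplace = h(x)/(np.sqrt(np.pi*f)*gp(x))
print(exact/laplace)              # -> 1 as f -> infinity
```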
#### 2.5.2 Low frequency power spectrum

For \(f\to 0\) the inward propagating modes have transfer functions which become independent of frequency, \[\left|\widetilde{G}_{\dot{M}}(x<x_{0},x_{0},f\to 0)\right|\sim f^{0}, \tag{41}\] thus \[S_{\dot{m}}(x,f\to 0)\sim\int_{x}^{x_{\rm out}}F_{1}(x^{\prime},x)\,S_{\Sigma}(x^{\prime},f)\,{\rm d}x^{\prime}\\ +f^{\alpha}\int_{x_{\rm in}}^{x}F_{2}(x^{\prime},x)\,S_{\Sigma}(x^{\prime},f)\,{\rm d}x^{\prime}, \tag{42}\] where \(F_{1}(x,x^{\prime})\) and \(F_{2}(x,x^{\prime})\) are independent of frequency, and \(\alpha\geq 0\). The leading order behaviour of the mass accretion rate power spectrum is therefore given, in the low frequency limit, simply by that of the initial surface density perturbations at radii larger than \(x\) (the first integral): \[S_{\dot{m}}(x,f\to 0)\sim\int_{x}^{x_{\rm out}}F_{1}(x^{\prime},x)\,S_{\Sigma}(x^{\prime},f\to 0)\,{\rm d}x^{\prime}. \tag{43}\] As expected from the earlier analysis (section 2.3), only those perturbations initialised at larger radii \(x_{0}>x\) which then propagate inwards contribute to the power spectrum in the low frequency limit. For the particular case of a Lorentzian power spectrum for \(\Sigma\), we have \[P(f\to 0)\equiv fS_{\dot{m}}(f\to 0)\sim f^{1}, \tag{44}\] as found numerically by Mushtukov et al. (2018).

## 3 Newtonian Fourier-Green's functions

The particular Green's function solutions for a Newtonian theory of gravity have \[q(x)\propto x^{1/4},\quad p(x)\propto x^{1/2},\quad g(x)\propto x^{1/4\nu}. \tag{45}\] It transpires that the derivative in equation 18 simplifies greatly, a result of the identities \[\frac{{\rm d}}{{\rm d}z}I_{l}(z)=I_{l-1}(z)-\frac{l}{z}I_{l}(z), \tag{46}\] and \[\frac{{\rm d}}{{\rm d}z}K_{l}(z)=-K_{l-1}(z)-\frac{l}{z}K_{l}(z). \tag{47}\] We therefore have the remarkably simple result \[\widetilde{G}_{\dot{M}}=\begin{cases}+A\beta x^{(1-\nu)/4\nu}K_{\nu}(\beta x_{0}^{1/4\nu})I_{\nu-1}(\beta x^{1/4\nu}),&x<x_{0},\\ \\ -A\beta x^{(1-\nu)/4\nu}I_{\nu}(\beta x_{0}^{1/4\nu})K_{\nu-1}(\beta x^{1/4\nu}),&x>x_{0},\end{cases} \tag{48}\] where (units where \(GM=c=w=1\); see also eq. 19) \[A=x_{0}^{1/4}\epsilon M_{d},\quad\epsilon=2\nu\sqrt{2x_{0}^{\mu}},\quad\mu=\frac{3-1/\nu}{2}, \tag{49}\] and we remind the reader \[\beta\equiv(1+i)\sqrt{\pi f}. \tag{50}\] Note that Newtonian gravity is entirely scale free, and \(x\) here should be thought of as the disc radius suitably normalised by some arbitrary radial scale in the problem of interest. For relativistic systems the solution is explicitly a function of \(r/r_{g}\), where \(r_{g}=GM/c^{2}\) is the gravitational radius of the black hole. In Figure 1 we display the amplitude of the Newtonian Fourier-Green's function, as a function of Fourier frequency, for a number of different disc radii \(x\) (listed in caption), and \(x_{0}=10\). It is clear to see that the properties of the Fourier-Green's functions are as predicted by the asymptotic analysis of the previous section. At high Fourier frequencies there is an exponential suppression of the Fourier mode amplitude, with modes at disc radii \(x\) closer to \(x_{0}\) having substantially higher amplitude at high frequencies when compared to disc radii further from \(x_{0}\) (contrast the solid curves with the dot-dashed curves). At low Fourier frequencies the Fourier-Green's functions of the inward propagating modes (\(x<x_{0}\)) approach \(1\) (when normalised by the disc mass), while those of the outward propagating modes (\(x>x_{0}\)) approach zero as a power-law in frequency.
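A sketch implementing eq. (48) is given below. The dimensional constants of eq. (49) are absorbed by working in units where \(g(x)=x^{1/4\nu}\) and the total accreted mass is \(M_{d}=1\) (a normalisation choice made here purely for illustration, which fixes the prefactor to \(\beta x_{0}^{1/4}\)); the asymptotic limits of section 2.3 then provide a direct test.

```python
import numpy as np
from scipy.special import iv, kv

def Gm_newtonian(x, x0, f, nu=1.0/3.0):
    """Newtonian Fourier-Green's function of the mass accretion rate,
    eq. (48), in units with g(x) = x^(1/4 nu) and accreted mass M_d = 1."""
    beta = (1 + 1j)*np.sqrt(np.pi*f)
    gx, gx0 = x**(1/(4*nu)), x0**(1/(4*nu))
    amp = beta * x0**0.25 * x**((1 - nu)/(4*nu))
    if x < x0:    # inward propagating modes
        return  amp*kv(nu, beta*gx0)*iv(nu - 1, beta*gx)
    else:         # outward propagating modes
        return -amp*iv(nu, beta*gx0)*kv(nu - 1, beta*gx)

# the asymptotic limits of section 2.3:
print(abs(Gm_newtonian(2.0, 10.0, 1e-8)))   # -> 1: all mass accreted, eq. (27)
print(abs(Gm_newtonian(20.0, 10.0, 1e-8)))  # -> 0 as f^nu, eq. (32)
```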
This low-frequency asymptotic behaviour is further demonstrated in Figure 2, where we plot the amplitude of the Fourier-Green's functions for different indices \(\nu\), for \(x=20,x_{0}=10\). At low frequencies each amplitude approaches a power-law in frequency, with power-law index given by equation 32 (black dashed curves). In Fig. 3 we examine the effects of varying the stress index on the inward propagating Fourier-Green's functions. While less severe than for the outward propagating modes, the stress parameterisation does quantitatively affect the properties of the Fourier-Green solutions of inward propagating modes.

Figure 1: The amplitude of the Newtonian Fourier-Green’s function of the mass accretion rate. Red curves are for inward propagating modes \(x<x_{0}\), and blue curves show outward propagating modes \(x>x_{0}\). For this figure we take \(x_{0}=10\), and the inward propagating modes have \(x=1\) (dot-dashed), \(2\) (dotted), \(4\) (dashed) and \(8\) (solid), while the outward propagating modes have \(x=11\) (solid), \(18\) (dashed), \(25\) (dotted) and \(30\) (dot-dashed). The Fourier-Green’s functions are normalised so that the total accreted mass is equal to \(1\).

The angle of the Newtonian Fourier-Green's function is defined as \[\Phi_{\widetilde{G}_{\dot{M}}}=\arg\left(\widetilde{G}_{\dot{M}}\right)\equiv\tan^{-1}\left(\frac{\mathcal{I}\left[\widetilde{G}_{\dot{M}}\right]}{\mathcal{R}\left[\widetilde{G}_{\dot{M}}\right]}\right), \tag{51}\] where \(\mathcal{I}[z]\) and \(\mathcal{R}[z]\) denote the imaginary and real parts of the complex variable \(z\) respectively, and the appropriate branches of the \(\tan^{-1}\) function are chosen so that \(-\pi<\arg(z)<\pi\). The angle of these Fourier-Green's functions is related to the time lag, a more useful observable quantity, via \[t_{\rm lag}=\frac{\Phi_{\widetilde{G}_{\dot{M}}}}{2\pi f}. \tag{52}\] The angle of the Newtonian Fourier-Green's functions, as a function of disc radius \(x\), behaves qualitatively as follows. At low Fourier frequencies the angle of the Fourier-Green's functions is approximately constant, whereas at high Fourier frequencies the angle of the Fourier-Green's functions cycles through \(\pi\to-\pi\) (or \(-\pi\to\pi\) for \(x<x_{0}\)) as a function of increasing radius. The angle of the Fourier-Green's functions is discontinuous at \(x=x_{0}\). The cycling of the Fourier-Green's function angle at high frequencies is simple to understand analytically, using the results of the preceding section: \[\widetilde{G}_{\dot{M}}(x,x_{0},f\to\infty)\sim C\exp(-i\sqrt{\pi f}\left|g(x)-g(x_{0})\right|), \tag{53}\] where \(C\) is purely real, meaning \[\tan\left(\Phi_{\widetilde{G}_{\dot{M}}}\right)\to\tan\left[-\sqrt{\pi f}\left(g(x)-g(x_{0})\right)\right], \tag{54}\] resulting in a cyclic behaviour. The mass accretion rate power density spectrum \(S_{\dot{m}}(x,f)\), defined in section 2.5, is shown in Figure 4. In Figure 4 we take an accretion rate fluctuation coherence length \(l(x^{\prime})=1\), and in the Lorentzian input for \(S_{\Sigma}\) we take \(p=1\), and \(f_{\rm br}=\sqrt{GM/r^{3}}\), the Keplerian frequency at disc radius \(r\). Also displayed as black dashed curves are the high and low frequency asymptotic results derived in section 2.5. The amplitude of the accretion rate power density spectrum increases with decreasing disc radius, a result which is particularly true at high Fourier frequencies.
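A minimal sketch of the radial integral of eq. (35) is given below, reusing `Gm_newtonian` from the previous sketch. The Lorentzian input and Keplerian break frequency quoted above for Figure 4 are assumed (with \(l(x')=1\), \(p=1\) and \(GM=1\)); the integration limits and grid resolution are illustrative choices, not those of the original figures.

```python
import numpy as np
from scipy.integrate import trapezoid

def S_sigma(x, f, p=1.0):
    """Lorentzian surface density power spectrum, eq. (36), with the
    break frequency set to the local Keplerian frequency (GM = 1)."""
    f_br = x**-1.5
    return (2*p/np.pi)*f_br/(f**2 + f_br**2)

def S_mdot(x, f, x_in=1.0, x_out=100.0, n=2000):
    """Local mass accretion rate power spectrum, eq. (35), with l(x') = 1,
    reusing Gm_newtonian() from the sketch above."""
    xp = np.logspace(np.log10(x_in), np.log10(x_out), n)
    G = np.array([Gm_newtonian(x, xq, f) for xq in xp])
    return trapezoid(abs(G)**2*S_sigma(xp, f)/xp**2, xp)

# phase angle and time lag of a single Fourier-Green's function, eqs. (51)-(52)
f = 0.05
phi = np.angle(Gm_newtonian(5.0, 10.0, f))
print(S_mdot(5.0, f), phi/(2*np.pi*f))
```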
The complex cross-spectrum, which measures the correlation between variability at disc radii \(x_{1}\) and \(x_{2}\), is given by (Mushtukov et al., 2018) \[C_{\dot{m}}(x_{1},x_{2},f)=\int_{x_{\rm in}}^{x_{\rm out}} \widetilde{G}_{\dot{M}}(x_{1},x^{\prime},f)\widetilde{G}_{\dot{M}}^{\dagger}(x_{2},x^{\prime},f)\\ \left(\frac{1}{x^{\prime}}\right)^{2}l(x^{\prime})S_{\Sigma}(x^{\prime},f)\,{\rm d}x^{\prime}, \tag{55}\] where \(z^{\dagger}\) denotes the complex conjugate of \(z\). We define the coherence function \[{\rm Coh}_{\dot{m}}(x_{1},x_{2},f)\equiv\frac{\left|C_{\dot{m}}(x_{1},x_{2},f)\right|^{2}}{S_{\dot{m}}(x_{1},f)S_{\dot{m}}(x_{2},f)}, \tag{56}\] which is limited to the range \(0<{\rm Coh}_{\dot{m}}(x_{1},x_{2},f)<1\). As its name suggests, the coherence function measures the level of coherence between two variable quantities, in this case the variability at radii \(x_{1}\) and \(x_{2}\) in the disc. Note that \({\rm Coh}=1\) corresponds to fully coherent fluctuations at both radii, while \({\rm Coh}=0\) represents incoherent fluctuations.

Figure 2: The amplitude of the Newtonian Fourier-Green’s function of the mass accretion rate, produced with \(x_{0}=10\) and \(x=20\), for a variety of different indices \(\nu\).

Figure 3: The amplitude of the Newtonian Fourier-Green’s function of the mass accretion rate, produced with \(x_{0}=10\) and \(x=9\), for a variety of different indices \(\nu\). The stress index modifies the properties of the mass accretion rate Fourier-Green’s functions at intermediate frequencies.

Figure 4: The power density spectrum of the mass accretion rate multiplied by the Fourier frequency, at a number of disc radii \(x\) (denoted in legend), as a function of Fourier frequency. Displayed as black dashed curves are the high and low frequency asymptotic results derived in section 2.5. The amplitude of the accretion rate power density spectrum increases with decreasing disc radius, a result which is particularly true at high Fourier frequencies.

The coherence function is plotted as a function of Fourier frequency in Figure 5, for a number of disc radii \(x_{1}\), and \(x_{2}=50\). At high enough Fourier frequencies the coherence between any two disc radii decays exponentially. At low Fourier frequencies the coherence between disc radii tends to unity. Clearly the coherence function is a complicated function of Fourier frequency at intermediate frequency scales. In Figure 6 we plot the angle of the mass accretion rate cross spectrum between disc radii \(x_{1}\) (displayed on legend), and \(x_{2}=50\). At low Fourier frequencies disc variability at radii \(x_{1}\) leads variability at disc radii \(x_{2}>x_{1}\), whereas disc variability at radii \(x_{1}>x_{2}\) lags variability at radii \(x_{2}\). At high Fourier frequencies variability at small radii (e.g., the blue curve) can lag, rather than lead, variability at larger radii. At the highest Fourier frequencies the cross spectrum cycles from \(-\pi\rightarrow\pi\), as a function of increasing frequency. In this section we have calculated a number of the properties of the Newtonian Fourier-Green's functions, showing that their properties are exactly as predicted by the analytical analysis of the previous section. For the remainder of this paper we shall focus on the properties of relativistic discs.
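Before moving to the relativistic problem, the Newtonian cross spectrum and coherence of eqs. (55) and (56) can be sketched in the same way, reusing `Gm_newtonian` and `S_sigma` from the sketches above (the integration limits are again illustrative):

```python
import numpy as np
from scipy.integrate import trapezoid

def cross_spectrum(x1, x2, f, x_in=1.0, x_out=100.0, n=2000):
    """Mass accretion rate cross spectrum, eq. (55), with l(x') = 1."""
    xp = np.logspace(np.log10(x_in), np.log10(x_out), n)
    G1 = np.array([Gm_newtonian(x1, xq, f) for xq in xp])
    G2 = np.array([Gm_newtonian(x2, xq, f) for xq in xp])
    return trapezoid(G1*np.conj(G2)*S_sigma(xp, f)/xp**2, xp)

def coherence(x1, x2, f):
    """Coherence function, eq. (56); the auto terms are real and positive."""
    c12 = cross_spectrum(x1, x2, f)
    s1 = cross_spectrum(x1, x1, f).real
    s2 = cross_spectrum(x2, x2, f).real
    return abs(c12)**2/(s1*s2)

print(coherence(10.0, 50.0, 1e-3))   # -> ~1 at low Fourier frequencies
print(coherence(10.0, 50.0, 1.0))    # -> ~0: exponential decoherence at high f
```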
## 4 Relativistic Fourier-Green's functions

The Fourier-Green's functions of the previous section are solutions of the Newtonian disc equation. A large number of observed variable disc systems however are those discs evolving around black holes, and must therefore be described by a relativistic theory. Analytical solutions of the relativistic disc equations have recently been derived by Mummery (2023), and are discussed below.

### Relativistic Green's functions in the time domain

Balbus (2017) demonstrated that in full general relativity the governing disc equation can be expressed in the following compact form \[\frac{\partial\zeta}{\partial t}=\frac{W_{\phi}^{r}}{(U^{0})^{2}}\frac{\partial}{\partial r}\left(\frac{U^{0}}{U^{\prime}_{\phi}}\left[\frac{\partial\zeta}{\partial r}\right]\right). \tag{57}\] Here the primed notation \({}^{\prime}\) denotes an ordinary derivative with respect to \(r\), and \[\zeta\equiv\frac{r\Sigma W_{\phi}^{r}}{U^{0}}. \tag{58}\] Only two of the orbital components of the disc's flow appear in this equation. These are the time dilation factor \[U^{0}=\frac{1+a\sqrt{r_{g}/r^{3}}}{\left(1-3r_{g}/r+2a\sqrt{r_{g}/r^{3}}\right)^{1/2}}, \tag{59}\] and the circular orbit angular momentum gradient \[U^{\prime}_{\phi}=\frac{\sqrt{GM}\left(a\sqrt{r_{g}}+r^{3/2}\right)\left(r^{2}-6r_{g}r-3a^{2}+8a\sqrt{r_{g}r}\right)}{2r^{4}\left(1-3r_{g}/r+2a\sqrt{r_{g}/r^{3}}\right)^{3/2}}. \tag{60}\] In these expressions, \(a\) is the black hole's angular momentum parameter (having dimensions of length), \(M\) is the black hole's mass, \(r_{g}=GM/c^{2}\) the gravitational radius, and \(G\) and \(c\) are Newton's constant and the speed of light respectively.

Figure 5: The coherence function of the mass accretion rate, between disc radii \(x_{1}\) (displayed on legend), and \(x_{2}=50\). At high enough Fourier frequencies the coherence between any two disc radii decays exponentially. At low Fourier frequencies the coherence between disc radii tends to 1.

Figure 6: The angle of the mass accretion rate cross spectrum between disc radii \(x_{1}\) (displayed on legend), and \(x_{2}=50\). At low Fourier frequencies disc variability at radii \(x_{1}\) leads variability at disc radii \(x_{2}>x_{1}\), whereas disc variability at \(x_{1}>x_{2}\) lags variability at radii \(x_{2}\). At high Fourier frequencies variability at small radii (e.g., the blue curve) can lag, rather than lead, variability at larger radii. At the highest Fourier frequencies the cross spectrum cycles from \(-\pi\rightarrow\pi\), as a function of increasing frequency.

Figure 7: Upper: The Green’s function solution of the variable \(y\equiv r\Sigma W_{\phi}^{r}\) for a Kerr black hole with spin \(a=0\). The blue dashed curves are the numerical solutions of the full general relativistic disc equations, while the green solid curves are the analytical solution of Mummery (2023). The initial radius was \(r_{0}=50r_{g}\) and the curves are plotted at dimensionless time \(t/t_{\rm visc}=0.003,0.015,0.045,0.09,0.15,0.225,0.45\) and \(4.5\). Lower: The absolute value of the difference between the numerical and analytical Green’s function solutions. To allow a proper comparison at each time both the numerical and analytical Green’s functions are renormalised to have a peak amplitude of 1. This Figure is reproduced from Mummery (2023).

Clearly, the relativistic disc equation (57) is extremely algebraically complex, and in fact the general relativistic thin disc equation does not have exact solutions that can be written in terms of elementary functions.
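The orbital functions themselves are, however, simple to evaluate. A minimal sketch of eqs. (59) and (60) follows, in units \(G=M=c=1\) (so \(r_{g}=1\) and \(a\) is dimensionless, with \(a>0\) prograde); the ISCO radius is conveniently obtained as the root of the quadratic-in-\(r\) bracket appearing in the numerator of eq. (60).

```python
import numpy as np
from scipy.optimize import brentq

def U0(r, a):
    """Time dilation factor of a circular Kerr orbit, eq. (59);
    r in units of r_g = GM/c^2, spin a in the same units."""
    x = np.sqrt(1.0/r**3)
    return (1 + a*x)/np.sqrt(1 - 3.0/r + 2*a*x)

def dUphi_dr(r, a):
    """Angular momentum gradient of circular orbits, eq. (60), GM = c = 1."""
    num = (a + r**1.5)*(r**2 - 6*r - 3*a**2 + 8*a*np.sqrt(r))
    den = 2*r**4*(1 - 3.0/r + 2*a*np.sqrt(1.0/r**3))**1.5
    return num/den

def isco(a):
    """ISCO radius (in r_g): the root of the bracket in eq. (60), |a| < 1."""
    return brentq(lambda r: r**2 - 6*r - 3*a**2 + 8*a*np.sqrt(r), 1.0, 9.0)

print(isco(0.0), isco(0.5), isco(0.99))   # 6.0, ~4.23, ~1.45
print(U0(isco(0.0), 0.0))                 # -> sqrt(2) at the Schwarzschild ISCO
```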
However, in a recent work Mummery (2023) derived the leading order general relativistic Green's function solution, valid for the case of a vanishing stress at the innermost stable circular orbit (ISCO). The Mummery (2023) Green's function solution has the same functional form as the Newtonian solutions discussed above, but with (note that these solutions are in units where \(G=M=w=1\), see Appendix A for the relevant solutions presented in physical units) \[g(x)=\frac{x^{\alpha}}{2\alpha}\sqrt{1-\frac{2}{x}}\left[1-\frac{x^{-1}}{(\alpha-1)}{}_{2}F_{1}\left(1,\frac{3}{2}-\alpha;2-\alpha;\frac{2}{x}\right)\right]\\ +\frac{2^{\alpha-2}}{\alpha(\alpha-1)}\sqrt{\pi}\frac{\Gamma(2-\alpha)}{\Gamma(3/2-\alpha)}, \tag{61}\] \[q(x)=x^{1/4}\sqrt{x^{-\alpha}g(x)}\exp\left(\frac{1}{2x}\right)\left[1-\frac{2}{x}\right]^{5/4-3/8\alpha}, \tag{62}\] and \[p(x)=\frac{x^{1/2}\exp\left(-1/x\right)}{1-2/x}, \tag{63}\] where \[x\equiv\frac{2r}{r_{I}},\quad\alpha=\frac{1}{4\nu}, \tag{64}\] and \({}_{2}F_{1}(a,b;c,z)\) is the hypergeometric function. For a full description of the approximations employed in deriving this solution see Mummery (2023). Note that in the \(x\to\infty\) limit the above solutions revert to their Newtonian form. While the above solutions are not formally exact, in Figure 7 we plot the numerically (blue dashed curve) and analytically (green solid curve) computed \(r\Sigma W^{r}_{\phi}\) profiles, assuming an initial radius of \(r_{0}=50r_{g}\) and Kerr angular momentum parameter \(a=0\). The curves are plotted at dimensionless times \(t/t_{\rm visc}=0.003,0.015,0.045,0.09,0.15,0.225,0.45\) and \(4.5\); the curves at later times are identifiable through their decreasing peak amplitude. It is remarkable how accurately the analytical Green's function solutions described above reproduce the properties of the full numerical solutions. In Figure 8 we show the Green's function of the mass accretion rate, plotted as a function of radius, for the spin parameter \(a=0\), at a number of different dimensionless times denoted in the legend. The initial radius was taken to be \(r_{0}=25r_{g}\). A value of \(\dot{M}(r,t)<0\) denotes mass inflow (towards the ISCO), while \(\dot{M}(r,t)>0\) denotes mass outflow (some mass must move outwards within the disc so as to conserve the total angular momentum of the flow). The normalisation of the accretion rate was chosen so that the time-integrated ISCO accretion rate was equal to 1.

### Relativistic Green's functions in the frequency domain

With this solution in hand, we may write down the Fourier-Green's function solutions of the relativistic disc equation by substituting the above definitions into equation 18. The full expressions for the relativistic Fourier-Green's functions are rather complex, and are presented in Appendix A. These solutions have the same gross properties in the Fourier domain as their Newtonian analogues, but they differ in the details, as we now discuss.
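The functions of eqs. (61)–(63) can be evaluated directly with scipy's hypergeometric routine. The sketch below (with an illustrative \(\nu=1/3\), i.e. \(\alpha=3/4\)) is valid outside the ISCO, \(x>2\), and these functions simply replace their Newtonian analogues in eq. (18):

```python
import numpy as np
from scipy.special import hyp2f1, gamma

def g_rel(x, alpha):
    """Relativistic g(x), eq. (61); x = 2r/r_I > 2, alpha = 1/(4 nu),
    in units where G = M = w = 1."""
    hyp = hyp2f1(1, 1.5 - alpha, 2 - alpha, 2.0/x)
    term = (x**alpha/(2*alpha))*np.sqrt(1 - 2.0/x)*(1 - hyp/((alpha - 1)*x))
    const = 2**(alpha - 2)*np.sqrt(np.pi)*gamma(2 - alpha)/(
        alpha*(alpha - 1)*gamma(1.5 - alpha))
    return term + const

def q_rel(x, alpha):
    """Relativistic q(x), eq. (62)."""
    return (x**0.25*np.sqrt(x**-alpha*g_rel(x, alpha))*np.exp(0.5/x)
            *(1 - 2.0/x)**(1.25 - 3/(8*alpha)))

def p_rel(x):
    """Relativistic p(x), eq. (63)."""
    return x**0.5*np.exp(-1.0/x)/(1 - 2.0/x)

alpha = 0.75   # nu = 1/3
print(g_rel(1e4, alpha)/(1e4**alpha/(2*alpha)))  # -> 1: Newtonian scaling far out
```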
In Figures 9 and 10 we plot the amplitude of the general relativistic Fourier-Green's functions of the mass accretion rate, for inward (Fig. 9) and outward (Fig. 10) propagating modes. In black we plot the corresponding Newtonian Fourier-Green's solutions. We note that for inward propagating modes the relativistic Fourier-Green's modes are more strongly suppressed at high frequencies, but are larger at intermediate frequencies, than their Newtonian analogues. In contrast, for outward propagating modes the relativistic Fourier-Green's modes are more strongly suppressed at all frequencies. It is important to note that the outward propagating relativistic Fourier-Green's modes must be treated with some care. Mathematically this results from the fact that the solutions described in the preceding sub-section are not exact, but are asymptotic "leading order" solutions (see Mummery (2023) for a detailed discussion). Physically care is required because at very late times the exact numerical and analytical solutions begin to deviate, and an extremely small fraction of the initial disc mass is not accreted in these solutions. As a result \[q(x)\neq A(g(x))^{\nu} \tag{65}\] and therefore (following the reasoning outlined in section 2) \[\lim_{f\to 0}\widetilde{G}_{\dot{M}}(x>x_{0},f)\to\delta(x,x_{0})M_{d}\neq 0. \tag{66}\] The discrepancy is small, as a result of the high accuracy of these analytical solutions (Fig. 7), and typically only affects extremely small frequencies \(f/f_{0}\ll 10^{-6}\) to a small degree \[\delta\ll 10^{-4}. \tag{67}\] As such, the conclusions derived in the remainder of this paper are not noticeably quantitatively or qualitatively affected by this slight discrepancy.

Figure 8: The Green’s function solution of the mass accretion rate for a Schwarzschild black hole (\(a=0\)). The initial radius was \(r_{0}=25r_{g}\) and the curves are plotted at the dimensionless times denoted in the legend. The normalisation of the accretion rate was chosen so that the time-integrated ISCO accretion rate was equal to 1. This Figure is reproduced from Mummery (2023).
It is clear to see in Figure 12 that the relativistic and Newtonian Fourier-Green's functions result in significantly different local accretion rate power spectra. While this Figure 11: The high-frequency mode suppression function \(\Delta(r,r_{0})\equiv|g(r)-g(r_{0})|\), which suppresses modes as \(\exp(-\Delta f^{1/2})\) at high Fourier frequencies, as a function of black hole spin for \(r_{0}=20r_{g}\). We see that relativistic modes are more strongly suppressed at high Fourier frequencies than Newtonian modes, but that higher black hole spins reduces this suppression. Figure 10: The amplitude of the general relativistic Fourier-Green’s functions of the mass accretion rate (blue), for outward propagating modes \(x>x_{0}\). This calculation was for a Kerr black hole \(a=0.5\). Newtonian modes are displayed in black. For this figure we take \(r_{0}=10r_{g}\), and the outward propagating modes have \(r/r_{g}=11\) (dot-dashed), \(13\) (dotted), \(15\) (dashed) and \(17\) (solid). The Fourier-Green’s functions are normalised so that the total accreted mass is equal to \(1\). We note that the outward propagating relativistic Fourier-Green’s modes are more strongly suppressed at all frequencies, when compared to Newtonian modes. Figure 9: The amplitude of the general relativistic Fourier-Green’s functions of the mass accretion rate (red), for inward propagating modes \(x<x_{0}\). This calculation was for a Kerr black hole with \(a=0.5\). Newtonian modes are displayed in black. For this figure we take \(r_{0}=10r_{g}\), and the inward propagating modes have \(r/r_{g}=4\) (dot-dashed), \(5\) (dotted), \(7\) (dashed) and \(9\) (solid). The Fourier-Green’s functions are normalised so that the total accreted mass is equal to \(1\). We note that the relativistic Fourier-Green’s modes are more strongly suppressed at high frequencies, but are larger at intermediate frequencies. is particularly true for the case of a Newtonian disc with small inner radius \(r_{\rm in}=0.01\) (black dashed curve), this is still true for Newtonian discs with more realistic inner radii (red dashed curve). ## 5 Observable quantities The previous sections have focused on the properties of the Green's function solutions of the Newtonian and relativistic disc equations in the Fourier domain. The Fourier properties of the local mass accretion rate are of course not by themselves directly observable, and in this section we focus our discussion onto the properties of the emitted disc flux, which is a more readily observable quantity. The problem now becomes one of determining how mass accretion rate fluctuations are observed in the light curves of accreting sources. In a purely steady state flow the local energy available to be radiated is directly proportional to the local mass accretion rate, and in the literature it is generally assumed that the _local variability_ in the accretion rate is directly proportional to the variability in the local emission (e.g., Lyubarskii 1997, Ingram & van der Klis 2013, Ingram & Done 2012, Mushtukov et al, 2018). This is unlikely to hold much beyond small fluctuations in \(\dot{M}\), as the locally radiated energy in a time dependent flow is not directly proportional to the accretion rate, but instead to the energy liberated by the local disc shear. In a time dependent accretion flow the local mass accretion rate can in fact be negative (see Fig. 8), while the locally emitted flux will of course remain positive. 
In this work, in common with the literature, we will also make the assumption that the variability in the local accretion rate is directly proportional to the variability in the locally emitted photon flux. We stress however that it is important to bear in mind that this assumption can only hold for small variations in both the photon flux and mass accretion rate. The emission in a certain band (which we shall call "hard" \(h\), or "soft" \(s\)) is therefore assumed to be correlated with the local accretion rate, with some "emissivity profile" \(s(r)\) and \(h(r)\)(e.g., Lyubarskii 1997, Ingram & van der Klis 2013, Ingram & Done 2012, Mushtukov et al, 2018) which specifies the constant of proportionality between the two quantities. Much of the variable emission in a typical observation of an X-ray binary system is sourced from a hot "corona". The corona is a population of hot electrons, located close to the black hole, whose existence is inferred from the observation of a power law spectral component resulting physically from photons being Compton up-scattered as they pass through the electrons. The geometry of this corona is currently poorly understood, and could in principal be located above the disc, or as part of a hot "inner flow". The advantage of working with emissivity profiles, as opposed to any particular physical model for the emission, is that under the assumption that coronal emission also scales locally with the mass accretion rate (perhaps as a result of variability in the seed photon field), variability in both thermal and non-thermal emission components may be modelled with additional degrees of freedom. We discuss potential extensions to this analysis in section 7. It is common in the literature to assume that these emissivity profiles are given by power laws of disc radius \(r\) (although there is no compelling justification for this), and in this work we shall parameterise our emissivity profiles with the following functional form \[s(r) =s_{0}\left(\frac{r}{r_{I}}\right)^{-\gamma_{h}}\left(1-\sqrt{ \frac{r_{I}}{r}}\right), \tag{68}\] \[h(r) =h_{0}\left(\frac{r}{r_{I}}\right)^{-\gamma_{h}}\left(1-\sqrt{ \frac{r_{I}}{r}}\right). \tag{69}\] Here \(r_{I}\) is the black hole's ISCO radius, \(\gamma_{h,s}\) are phenomenological emissivity indices, and the final term \(1-\sqrt{r_{I}/r}\) enforces the vanishing ISCO stress condition used in deriving the relativistic Green's functions. The constants \(s_{0}\) and \(h_{0}\) are arbitrary scaling factors included for dimensional reasons which we shall simply set equal to 1 for the remainder of this paper. We require \(\gamma_{h}>\gamma_{s}\geq 2\), so that the hard emission is produced at radii interior to the soft emission (\(\gamma_{h}>\gamma_{s}\)) and that the flux observed in either band falls off with radius at least as quickly as the total liberated photon flux (\(\gamma_{h,s}\geq 2\)). The first requirement, that \(\gamma_{h}>\gamma_{s}\), is in effect simply the definition of the "hard" and "soft" bands: the flux from the inner regions is emitted at higher photon energies, and will thus contribute more to harder (higher energy) bands. As the disc cools at larger radii, the relative contribution of that disc region to harder bands will be suppressed more than its contribution to softer bands. In the following three subsections we shall present formal expressions for three readily observable quantities: the power density spectrum of a band, and the cross spectrum and coherence between different bands. 
### The power density spectrum of a band

The first observable quantity we consider is the power density spectrum of the flux variability in a given observing band. For simplicity we shall quote the results derived by Mushtukov et al. (2018), before discussing the assumptions inherent to the modelling. The following expression for the power density spectrum of a band (we here denote the band by \(h\), for "hard band" emission) assumes that the total luminosity available to be radiated in a given region of the accretion flow is proportional to the local mass accretion rate \(\dot{M}(r,t)\). Further assuming that the local variability of the mass accretion rate (\(\dot{m}\)) is small in comparison with the average mass accretion rate, the fluctuations of the flux in some energy band will also be proportional to \(\dot{m}(r,t)\).

Figure 12: The power density spectrum of the mass accretion rate multiplied by the Fourier frequency, at a disc radius \(r=10r_{g}\) for a number of black hole spins (denoted in legend), as a function of Fourier frequency. Displayed as black and red dashed curves are the equivalent Newtonian solutions with different “inner radii” (the radius at which the input power is set to zero, see text). Local relativistic and Newtonian accretion rate power density spectra are noticeably distinct.

Under the above assumptions Mushtukov et al. (2018) derived the following expression for the power density spectrum of band \(h\), denoted \(S_{h}(f)\): \[S_{h}(f)=\int_{\mathcal{D}}\int_{\mathcal{D}}h(r_{1})h(r_{2})C_{\dot{m}}(r_{1},r_{2},f)\,\mathrm{d}r_{1}\,\mathrm{d}r_{2}, \tag{70}\] where we use the shorthand \[\int_{\mathcal{D}}f(r)\,\mathrm{d}r\equiv\int_{r_{\mathrm{in}}}^{r_{\mathrm{out}}}f(r)\,\mathrm{d}r, \tag{71}\] to indicate an integral over the entire disc surface \(\mathcal{D}\). We remind the reader that \[C_{\dot{m}}(r_{1},r_{2},f)=\int_{\mathcal{D}}\widetilde{G}_{\dot{M}}(r_{1},r^{\prime},f)\,\widetilde{G}_{\dot{M}}^{\dagger}(r_{2},r^{\prime},f)\\ \left(\frac{1}{r^{\prime}}\right)^{2}l(r^{\prime})S_{\Sigma}(r^{\prime},f)\,\mathrm{d}r^{\prime}, \tag{72}\] and so we have in reality a triple integral to compute: \[S_{h}(f)=\int_{\mathcal{D}}\int_{\mathcal{D}}\int_{\mathcal{D}}h(r_{1})h(r_{2})\widetilde{G}_{\dot{M}}(r_{1},r^{\prime},f)\,\widetilde{G}_{\dot{M}}^{\dagger}(r_{2},r^{\prime},f)\\ \left(\frac{1}{r^{\prime}}\right)^{2}l(r^{\prime})S_{\Sigma}(r^{\prime},f)\,\mathrm{d}r^{\prime}\,\mathrm{d}r_{1}\,\mathrm{d}r_{2}. \tag{73}\] Note that, as argued in Mushtukov et al. (2018), as the amplitude of the local surface density variability scales with \(\dot{M}^{2}\), integrating \(S_{h}(f)\) over all frequencies, and then square rooting, one recovers the linear rms-flux relationship. The physical reason behind the power density spectrum being related to a _triple_ integral is the following. The flux in a given energy band is, per our assumptions, given by an integral of the local mass accretion rate, with a weighting function \(h(r)\), over the entire disc: \(F_{h}(t)\propto\int_{\mathcal{D}}h(r^{\prime})\dot{M}(r^{\prime},t)\,\mathrm{d}r^{\prime}\). The power density spectrum corresponds to the square of the Fourier-transformed flux, \(S_{h}(f)\equiv\widetilde{F}_{h}(f)\widetilde{F}_{h}^{\dagger}(f)\), which introduces the second integration.
However, variability in the accretion rate at a given radius \(r\) is caused by the integrated contributions of surface density fluctuations _at all disc radii_, with differing levels of correlation encapsulated by \(C_{\dot{m}}(r_{1},r_{2},f)\). Summing each of these individual contributions introduces the third integral over the disc surface.

### The cross spectrum between bands

Observations of an accreting system may be contemporaneously taken at numerous different photon energies (or "bands"). A natural observational question is then how strongly is the variability in the emission observed across a "soft" band correlated with the variability in the emission observed across a "hard" band. This correlation is quantified by the hard-soft cross spectrum, a quantity given explicitly by (Mushtukov et al., 2018) \[C_{h,s}(f)=\int_{\mathcal{D}}\int_{\mathcal{D}}h(r_{1})s(r_{2})C_{\dot{m}}(r_{1},r_{2},f)\,\mathrm{d}r_{1}\,\mathrm{d}r_{2}. \tag{74}\] The physical cause of this triple disc integral is identical to that of the power density spectrum. The hard-soft cross spectrum is a complex quantity, and its phase encapsulates an important physical quantity, namely the phase-lag in fluctuations in the hard band emission with respect to the soft band emission. This phase lag is explicitly equal to \[\tan\Phi_{h,s}(f)=\frac{\mathcal{I}[C_{h,s}(f)]}{\mathcal{R}[C_{h,s}(f)]}, \tag{75}\] where \(\mathcal{I}[z]\) and \(\mathcal{R}[z]\) represent the imaginary and real parts of \(z\) respectively. This quantity can also be equivalently expressed as a time lag between hard and soft fluctuations: \[t_{\mathrm{lag}\,h,s}(f)=\frac{\Phi_{h,s}(f)}{2\pi f}. \tag{76}\]

### The coherence between bands

The final observable quantity we shall discuss in this paper is the coherence of the fluctuations in hard and soft observing bands, a quantity explicitly given by \[\mathrm{Coh}_{h,s}(f)=\frac{|C_{h,s}(f)|^{2}}{S_{h}(f)S_{s}(f)}. \tag{77}\] The coherence function satisfies \(0\leq\mathrm{Coh}_{h,s}\leq 1\), where \(\mathrm{Coh}=1\) corresponds to fully coherent fluctuations in both bands, while \(\mathrm{Coh}=0\) represents incoherent fluctuations.

Figure 13: The amplitude of the general relativistic Fourier-Green’s functions of the mass accretion rate, for inward propagating modes \(r<r_{0}\), for black holes of different Kerr spin parameters. Newtonian modes are displayed in black. For this figure we take \(r_{0}=20r_{g}\), and the inward propagating modes have \(r/r_{g}=15\).

Figure 14: The amplitude of the general relativistic Fourier-Green’s functions of the mass accretion rate, for outward propagating modes \(r>r_{0}\), for black holes of different Kerr spin parameters. Newtonian modes are displayed in black. For this figure we take \(r_{0}=15r_{g}\), and the outward propagating modes have \(r/r_{g}=20\).
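All three observables can be sketched numerically by factorising the triple integrals: writing \(W_{b}(r',f)=\int b(r_{1})\widetilde{G}_{\dot{M}}(r_{1},r',f)\,\mathrm{d}r_{1}\) for a band profile \(b\), eqs. (73), (74) and (77) each reduce to a single integral over \(r'\). The sketch below makes this concrete for the Newtonian Green's functions with \(l(r')=1\) (an illustrative simplification, not the relativistic calculation used for the figures), reusing `Gm_newtonian`, `S_sigma` and `emissivity` from the earlier sketches:

```python
import numpy as np
from scipy.integrate import trapezoid

def band_cross(f, b1, b2, r_in=1.0, r_out=100.0, n=400):
    """Band cross spectrum, eq. (74); with b1 = b2 this is the band power
    of eq. (73). Factorised as C(f) = int W1 W2* l S_Sigma dr'/r'^2."""
    r = np.logspace(np.log10(r_in), np.log10(r_out), n)
    G = np.array([[Gm_newtonian(r1, rp, f) for rp in r] for r1 in r])
    W1 = trapezoid(b1(r)[:, None]*G, r, axis=0)   # flux-weighting integrals
    W2 = trapezoid(b2(r)[:, None]*G, r, axis=0)
    return trapezoid(W1*np.conj(W2)*S_sigma(r, f)/r**2, r)

f = 0.01
C  = band_cross(f, hard, soft)
Sh = band_cross(f, hard, hard).real
Ss = band_cross(f, soft, soft).real
print(np.angle(C)/(2*np.pi*f))   # hard-soft time lag, eq. (76)
print(abs(C)**2/(Sh*Ss))         # hard-soft coherence, eq. (77)
```

The factorisation reduces the cost from a cubic to a quadratic number of Green's function evaluations per frequency, which matters when scanning a frequency grid.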
15 we highlight that the black hole spin quantitatively effects the properties of the local mass accretion rate. In this section we demonstrate how this spin dependence is also present in the integrated observable properties of the photon flux emitted from the disc surface. As an explicit example of this result, see Figure 16, where we plot the power density spectrum of the hard band (upper left), the hard-soft coherence (upper right), the hard-soft phase lag (lower left) and time delay (lower right) for four different black hole spins (\(a=0\), blue; \(a=0.5\), black; \(a=+0.99\), orange; and \(a=-0.99\), green). For the time delay plot we denote by solid dots positive lags (hard lags soft), and by crosses negative lags (hard leads soft). The units of the power spectrum are arbitrary and would be set in a physical system by the input power in the surface density variability, the units of frequency and time delay are scaled to a system with black hole mass \(M_{\rm BH}=10M_{\odot}\), and disc parameters \(\alpha=H/R=0.1\). To scale these results to any other system the frequency axis should be scaled by a factor \[N=\left(\frac{\alpha}{0.1}\right)\left(\frac{H/R}{0.1}\right)^{2}\left(\frac{ 10M_{\odot}}{M_{\rm BH}}\right), \tag{78}\] while the time-lags should be scaled by \(1/N\). This particular plot has emissivity parameters \(\gamma_{s}=3\), \(\gamma_{h}=5\). For the input surface density fluctuations we take a correlation length \(l(r)=1r_{g}\), an integrated power \(p=1\), and a break frequency equal to one percent of the local Keplerian frequency \(f_{K}=\sqrt{GM/r^{3}}\) (see equation 36). It is clear from Fig. 16 that the black hole spin has a substantial effect on the observed variability properties of a black hole accretion flow. This is a result of fundamental theoretical importance, and the key observational result of this paper. As can be seen in the upper left panel of Fig. 16, the power density spectra of the accretion variability predicted by this theory are qualitatively simple. At high and low Fourier frequencies the observed power density spectra are given by simple power-laws of frequency. The precise power-law indices of the high and low frequency slopes are determined by the input surface density variability profile, as discussed and derived in section 2. It is interesting to note that the power density spectrum of a given band peaks at higher Fourier frequencies for larger black hole spins, but generally with a smaller magnitude. The upper-right panel of Fig. 16 displays the coherence of the hard-soft variability. At low Fourier frequencies the coherence function tends to a near-unity constant for each black hole spin, but at larger Fourier frequencies the coherence is in general an extremely non-trivial function frequency. It is interesting to note that at roughly the frequency at which the phase lags turn from positive to negative (lower left panel) the hard-soft variability becomes increasingly incoherent. We note two properties of the coherence as a function of black hole spin: the hard-soft variability is both inherently more coherent (larger \({\rm Co}{\rm h}_{h,s}\)), and is inherently smoother as a function of Fourier frequency, for larger black hole spins. A simple interpretation of this spin dependence of the coherence is that the emission from Kerr discs with higher spins primarily originates from a region of the disc which has a smaller radial extent when compared to lower spins. 
This is because the emission from highly-spinning Kerr systems is dominated by the hottest and very innermost regions, which are physically close together. The coherence between two radii is a strong function of their separation, with larger separations having significantly lower coherence, and smaller separations a correspondingly larger coherence. In the lower two panels we plot the phase and time lags associated with the hard and soft bands. As has been discussed by many authors, it is natural that the variability in "hard" bands lag the variability in "soft" bands (solid dots, Fig. 16), a result of the fluctuations excited in the outer disc regions propagating inwards towards the hotter inner regions where the hard flux is generated. For these Fourier frequencies the time lags are roughly given by the typical time of propagation of fluctuations from outer radii, where Figure 15: _Upper:_ The coherence function of the mass accretion rate at \(r_{1}=10r_{g}\) and \(r_{2}=15r_{g}\) for Kerr metrics with different spin parameters (denoted on legend). _Lower:_ The phase of the mass accretion rate cross spectrum between disc radii \(r_{1}=10r_{g}\) and \(r_{2}=15r_{g}\). At higher frequencies than displayed in this lower plot the phase wraps between \(-\pi\) and \(\pi\). Both the coherence and phase of the mass accretion rate are relatively sensitive to the Kerr metric spin parameter. the soft flux is predominantly produced, to inner radii, where the "hard" flux is predominantly produced. However, negative phase lags (when the soft energy band variability lags hard energy variability) are possible at high frequencies (see lower left and right panels of Fig. 16). The negative lags are a result of the fact that the variability of the mass accretion rate at the inner radii can affect the variability at the outer radii through outward propagating modes (as argued first by Mushtukov et al. 2018). These negative phase lags are only present at high Fourier frequencies, a result which can be understood with respect to the analytical analysis of section 2. At high Fourier frequencies the Fourier-Green's functions of inward and outward propagation are symmetric and therefore equally important. At low frequencies however outward propagation is suppressed (as a power-law in frequency), and inward propagation dominates resulting in positive lags. We note that the predicted negative phase lags are comparable to the observed negative lags in stellar mass black hole systems (Uttley et al. 2011; De Marco et al. 2015) and AGN (Zoghbi et al. 2010; Walton et al. 2013; Alston et al. 2014). It is interesting that the fundamental diffusive accretion process in discs can in principle play a key role in the generation of negative time lags. The precise quantitative results of Fig. 16 of course depend on the precise assumptions inherent to the modelling, namely the emissivity indices \(\gamma_{s}\) and \(\gamma_{h}\). While varying these parameters changes the numerical values of each of the observed quantities (power spectrum, coherence and phase lags), the qualitative spin dependence of the different quantities remains unchanged. This is demonstrated in Figs. 17 and 18 where we again plot the power density spectrum of the hard band (upper left), the hard-soft coherence (upper right), the hard-soft phase lag (lower left) and time delay (lower right) for four different black hole spins (\(a=0\), blue; \(a=0.5\), black; \(a=+0.99\), orange; and \(a=-0.99\), green). In Fig. 
17 we take \(\gamma_{s}=2\), \(\gamma_{h}=3.5\), while in Fig. 18 we take \(\gamma_{s}=3\), \(\gamma_{h}=8\). The following "rules of thumb" appear to describe well the spin-dependence of the observed flux variability, independent of the choice of precise emissivity profile * The magnitude of the maximum hard-soft phase lag decreases with increasing black hole spin * The magnitude of the maximum _negative_ hard-soft phase lag increases with increasing black hole spin * The frequency at which the maximum _negative_ hard-soft phase lag occurs increases with increasing black hole spin * The power density spectrum of a given band peaks at higher Fourier frequencies for larger black hole spins * The power density spectrum of a given band peaks with smaller magnitude for larger black hole spins * The hard-soft variability is inherently more coherent (larger \(\mathrm{Coh}_{h,s}\)) for larger black hole spins Figure 16: The power density spectrum of the hard band (upper left), the hard-soft coherence (upper right), the hard-soft phase lag (lower left) and time delay (lower right) for emissivity parameters \(\gamma_{s}=3\), \(\gamma_{h}=5\) and four different black hole spins (\(a=0\), blue; \(a=0.5\), black; \(a=+0.99\), orange; and \(a=-0.99\), green). For the time delay plot we denote by solid dots positive lags (hard lags soft), and by crosses negative lags (hard leads soft). The units of the power spectrum are arbitrary and would be set in a physical system by the input power in the surface density variability, the units of frequency and time delay are scaled to a system with black hole mass \(M_{\mathrm{BH}}=10M_{\odot}\), and disc parameters \(\alpha=H/R=0.1\). * The hard-soft coherence is inherently smoother as a function of Fourier frequency for larger black hole spins The reason that the frequencies at which various key observational properties occur increase with black hole spin is simply as a result of the reducing ISCO radius of the more rapidly rotating Kerr spacetime. These smaller radii have associated with them larger orbital and accretion frequencies, which are then observable in the variability signatures of these systems. These rules of thumb are robust, and are not even dependent on the chosen functional form of the emissivity profiles. In Fig. 19 we once again plot the power density spectrum of the hard band (upper left), the hard-soft coherence (upper right), the hard-soft phase lag (lower left) and time delay (lower right) for four different black hole spins (\(a=0\), blue; \(a=0.5\), black; \(a=+0.99\), orange; and \(a=-0.99\), green). In this Figure however we take emissivity profiles given by \[s(r) =s_{0}\exp\left(-\frac{r}{r_{I}}\right)\left(1-\sqrt{\frac{r_{I }}{r}}\right), \tag{79}\] \[h(r) =h_{0}\exp\left(-3\frac{r}{r_{I}}\right)\left(1-\sqrt{\frac{r_{I }}{r}}\right). \tag{80}\] This could in fact be a more physically reasonable profile, as it may more accurately describe the suppression of hard and soft emission by the Wien-tail of the local blackbody disc emission function \(F(E)\propto\exp(-E_{h,s}/kT),E_{h}>E_{s}\). In Figure 19 we once again see that the precise numerical values of each of the observable quantities depends on the precise emissivity profile chosen, but that the gross spin-dependence of the variability is qualitatively unchanged. ## 7 Future extensions to the model The relativistic variability model presented in the previous section employed a number of simplifications. 
In this section we recap and discuss these simplifications, their physical basis, and how they may be improved upon in future work. The principal simplification employed in this analysis is in relating the variability in the mass accretion rate to the variability in the observed photon field. As we have discussed, in this work we employ phenomenological emissivity profiles. The advantage of working with emissivity profiles, as opposed to any particular physical model for the emission is that, provided that both thermal and coronal emission scale locally with the mass accretion rate, variability in both thermal and non-thermal emission components may be modelled with the additional degrees of freedom provided by the emissivity profiles. However, a more detailed treatment of the local emission is of interest, and we briefly discuss a possible modelling approach be Figure 17: The power density spectrum of the hard band (upper left), the hard-soft coherence (upper right), the hard-soft phase lag (lower left) and time delay (lower right) for emissivity parameters \(\gamma_{s}=2\), \(\gamma_{h}=3.5\) and four different black hole spins (\(a=0\), blue; \(a=0.5\), black; \(a=+0.99\), orange; and \(a=-0.99\), green). For the time delay plot we denote by solid dots positive lags (hard lags soft), and by crosses negative lags (hard leads soft). The units of the power spectrum are arbitrary and would be set in a physical system by the input power in the surface density variability, the units of frequency and time delay are scaled to a system with black hole mass \(M_{\rm BH}=10M_{\odot}\), and disc parameters \(\alpha=H/R=0.1\). low. Assuming that the disc emits thermally with an effective temperature profile \(T_{\rm eff}(r)\), the flux at an observed energy \(E\) is proportional to \[F\propto\int_{\mathcal{D}}\frac{2\pi r}{\exp\left(E/kT_{\rm eff}(r)\right)-1}\, \mathrm{d}r, \tag{81}\] where this expression neglects the effects of gravitational lensing, gravitational redshifts and the Doppler boosting of radiation. The variable flux \(\delta F\) is given, assuming a small fluctuation in \(\dot{M}\) and to linear order, by \[\delta F\propto\int_{\mathcal{D}}\frac{\delta\dot{M}}{\dot{M}}\left(\frac{E}{ kT_{\rm eff}(r)}\right)\frac{2\pi r\exp\left(E/kT_{\rm eff}(r)\right)}{\left( \exp\left(E/kT_{\rm eff}(r)\right)-1\right)^{2}}\,\mathrm{d}r, \tag{82}\] where we have assumed that \(\delta T/T=\delta\dot{M}/4\dot{M}\). Therefore, to linear order, the thermal flux from an accretion flow can be treated by the same methods developed here, with an "emissivity profile" given by eq. 82. The treatment of non-thermal radiation would be more complex, but a variable input thermal flux of the form given above could in principle be propagated through a Comptonising electron population. At this level of detail however additional relativistic effects will likely become important. This would include energy-dependent delays in the propagation of photons through the spacetime of Kerr black holes, with photons emitted deeper in the gravitational well of the black hole following paths to the observer more severely warped by gravity. In addition, the effects of gravitational and Doppler energy shifting of photons will likely modify the results presented here, at the quantitative level. These higher order effects will all themselves be sensitive to the black hole's spin parameter, and a more detailed treatment of photon propagation and its effects on observed variability are of real interest. 
A further improvement of the analysis presented here would be in improving the treatment of the input surface density fluctuations. In this work we have assumed that the input surface density perturbations are described by a Lorentzian profile with phenomenological parameters \(p\) and \(f_{\rm br}\) (eq. 36). While physically motivated, it would be of interest in future works to calibrate this model input with numerical analyses of the fundamental disc equations (e.g. Hogg and Reynolds 2016, Turner and Reynolds 2021). ## 8 Conclusions In this paper we have presented two important advances in the theoretical framework for describing aperiodic variability from accreting sources (the so-called theory of propagating fluctuations). First, we present the exact analytical solutions of the Fourier integral of the Green's functions of the classical thin disc equations. With analytical solutions now at hand, various asymptotic properties of these solutions may be derived. In section 2 we demonstrated that high frequency variability in the mass accretion rate is suppressed as \(\exp(-\Delta f^{1/2})\), where \(\Delta(x,x^{\prime})\) is a function of the magnitude of the difference between the two disc locations \(x\) and \(x^{\prime}\), and corresponds physically to (the square root of) the accretion propagation time between \(x\) and \(x^{\prime}\).

Figure 18: The power density spectrum of the hard band (upper left), the hard-soft coherence (upper right), the hard-soft phase lag (lower left) and time delay (lower right) for emissivity parameters \(\gamma_{s}=3\), \(\gamma_{h}=8\) and four different black hole spins (\(a=0\), blue; \(a=0.5\), black; \(a=+0.99\), orange; and \(a=-0.99\), green). For the time delay plot we denote by solid dots positive lags (hard lags soft), and by crosses negative lags (hard leads soft). The units of the power spectrum are arbitrary and would be set in a physical system by the input power in the surface density variability; the units of frequency and time delay are scaled to a system with black hole mass \(M_{\rm BH}=10M_{\odot}\), and disc parameters \(\alpha=H/R=0.1\).

The high and low frequency asymptotic behaviour of the power spectrum of variability is also determined, and related to the intrinsic variability in the disc surface density/alpha parameter. We have demonstrated that the power spectrum of the local mass accretion rate is, at high Fourier frequencies, dominated by locally driven variability, with exponentially small contributions from distant disc regions. At high Fourier frequencies the inward and outward propagation of material are equally important, with the Fourier-Green's functions symmetric in \(x-x^{\prime}\) in this limit. At low Fourier frequencies however the variability is dominated by perturbations sourced at radii which are more distant from the central object, which then propagate inwards. Outward propagation is suppressed at low frequencies, as a power law in frequency. In addition, these exact solutions will rapidly speed up the process of fitting analytical models of accretion variability to observational data; the numerical cost of Fourier transforming thin disc Green's functions had previously been substantial. The second key development is in presenting the first analysis of the Fourier-Green's function solutions of the general relativistic thin disc equation.
In this paper we have presented the Fourier-Green's function solutions valid in a relativistic theory of gravity, under the assumption that the dynamical disc stress vanishes at the ISCO. These solutions depend implicitly on the central black hole's spin through their dependence on the spacetime's ISCO radius. We use this new theoretical development to highlight the Kerr black hole spin dependence of a number of observable variability properties of black hole discs. The power density spectrum of the hard band (upper left), the hard-soft coherence (upper right), the hard-soft phase lag (lower left) and time delay (lower right) are displayed in Figures 16, 17, 18 and 19 for four different black hole spins (\(a=0\), blue; \(a=0.5\), black; \(a=+0.99\), orange; and \(a=-0.99\), green), and a number of different parameterisations of the disc emissivity. Clearly the black hole spin imparts a strong signal onto the observable variability properties of black hole disc systems. While the precise choice of emissivity profile of the hard and soft bands quantitatively affects the system's observed variability properties, the following "rules of thumb" appear to describe well the spin-dependence of the observed flux variability, independent of the choice of precise emissivity profile: * The magnitude of the maximum hard-soft phase lag decreases with increasing black hole spin * The magnitude of the maximum _negative_ hard-soft phase lag increases with increasing black hole spin * The frequency at which the maximum _negative_ hard-soft phase lag occurs increases with increasing black hole spin * The power density spectrum of a given band peaks at higher Fourier frequencies for larger black hole spins * The power density spectrum of a given band peaks with smaller magnitude for larger black hole spins * The hard-soft variability is inherently more coherent (larger \(\mathrm{Coh}_{h,s}\)) for larger black hole spins * The hard-soft coherence is inherently smoother as a function of Fourier frequency for larger black hole spins These rules of thumb may be of use even in systems where a detailed analysis of the variability is not performed.

Figure 19: The power density spectrum of the hard band (upper left), the hard-soft coherence (upper right), the hard-soft phase lag (lower left) and time delay (lower right) for exponential emissivity profiles (see text), and four different black hole spins (\(a=0\), blue; \(a=0.5\), black; \(a=+0.99\), orange; and \(a=-0.99\), green). For the time delay plot we denote by solid dots positive lags (hard lags soft), and by crosses negative lags (hard leads soft). The units of the power spectrum are arbitrary and would be set in a physical system by the input power in the surface density variability; the units of frequency and time delay are scaled to a system with black hole mass \(M_{\rm BH}=10M_{\odot}\), and disc parameters \(\alpha=H/R=0.1\).

The results presented in this paper therefore open up the possibility of using the aperiodic variability observed from black hole accretion systems to constrain the central black hole's spin, a parameter of fundamental observational and theoretical interest. ## Acknowledgments I would like to thank Alexander Mushtukov for interesting discussions which initiated this work. I am particularly grateful to Adam Ingram for extremely illuminating discussions regarding the propagating fluctuation model. I am grateful to the reviewer, whose detailed report strengthened the analysis in a number of places.
This work was supported by a Leverhulme Trust International Professorship grant [number LIP-202-014]. For the purpose of Open Access, I have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. ## Data Accessibility Statement No observational data was used in producing this manuscript. Python scripts which compute the relativistic Fourier-Green functions and make Figures similar to those in section 6 are available at [https://github.com/andymummeryastro/GR_prop_fluc](https://github.com/andymummeryastro/GR_prop_fluc).
2306.16669
Scheduling on parallel machines with a common server in charge of loading and unloading operations
This paper addresses the scheduling problem on two identical parallel machines with a single server in charge of the loading and unloading operations of jobs. Each job has to be loaded by the server before being processed on one of the two machines and unloaded by the same server after its processing. No delay is allowed between loading and processing, and between processing and unloading. The objective function involves the minimization of the makespan. This problem, referred to as P2,S1|s_j,t_j|C_max, generalizes the classical parallel machine scheduling problem with a single server which performs only the loading (i.e., setup) operation of each job. For this NP-hard problem, no solution algorithm was proposed in the literature. Therefore, we present two mixed-integer linear programming (MILP) formulations, one with completion-time variables along with two valid inequalities and one with time-indexed variables. In addition, we propose some polynomial-time solvable cases and a tight theoretical lower bound. Moreover, we show that the minimization of the makespan is equivalent to the minimization of the total idle times on the machines. To solve large-sized instances of the problem, an efficient General Variable Neighborhood Search (GVNS) metaheuristic with two mechanisms for finding an initial solution is designed. The GVNS is evaluated by comparing its performance with the results provided by the MILPs and another metaheuristic. The results show that the average percentage deviation of GVNS from the theoretical lower bound is within 0.642%. Some managerial insights are presented and our results are compared with the related literature.
Abdelhak Elidrissi, Rachid Benmansour, Keramat Hasani, Frank Werner
2023-06-29T03:59:58Z
http://arxiv.org/abs/2306.16669v1
# Scheduling on parallel machines with a common server in charge of loading and unloading operations ###### Abstract This paper addresses the scheduling problem on two identical parallel machines with a single server in charge of the loading and unloading operations of jobs. Each job has to be loaded by the server before being processed on one of the two machines and unloaded by the same server after its processing. No delay is allowed between loading and processing, and between processing and unloading. The objective function involves the minimization of the makespan. This problem, referred to as \(P2,S1|s_{j},t_{j}|C_{max}\), generalizes the classical parallel machine scheduling problem with a single server which performs only the loading (i.e., setup) operation of each job. For this \(\mathcal{NP}\)-hard problem, no solution algorithm was proposed in the literature. Therefore, we present two mixed-integer linear programming (MILP) formulations, one with completion-time variables along with two valid inequalities and one with time-indexed variables. In addition, we propose some polynomial-time solvable cases and a tight theoretical lower bound. Moreover, we show that the minimization of the makespan is equivalent to the minimization of the total idle times on the machines. To solve large-sized instances of the problem, an efficient General Variable Neighborhood Search (GVNS) metaheuristic with two mechanisms for finding an initial solution is designed. The GVNS is evaluated by comparing its performance with the results provided by the MILPs and another metaheuristic. The results show that the average percentage deviation of GVNS from the theoretical lower bound is within \(0.642\%\). Some managerial insights are presented and our results are compared with the related literature. keywords: Parallel machine scheduling, Single server, Loading operations, Unloading operations, Mixed-integer linear program, General variable neighborhood search ## 1 Introduction and literature review The parallel machine scheduling problem with a single server (PMSSS problem) has received much attention over the last two decades. In the PMSSS problem, the server is in charge of the setup operation of the jobs. This setup operation can be defined as the time required to prepare the necessary resource (e.g., machines, people) to perform a task (e.g., job, operation) (Allahverdi and Soroush (2008); Bektur and Sarac (2019); Hamzadayi and Yildiz (2017); Kim and Lee (2012)). Indeed, in the classical parallel machine scheduling problem, it is assumed that the jobs are to be executed without prior setup. However, this assumption is not always satisfied in practice, where industrial systems are more flexible (e.g., flexible manufacturing systems). Under certain conditions, this assumption can also lead to a shortfall and/or a waste of time. In addition, in the PMSSS problem, it is assumed that after the loading and processing operations, the job is automatically removed from the machine, and no unloading operation is considered. The PMSSS problem has many industrial applications. In network computing, the network server sets up the workstations by loading the required software. In production applications, the setting up of machines involves the simultaneous use of a common resource which might be a robot or a human operator attending each setup (Bektur and Sarac (2019)).
In automated material handling systems, robotic cells or the semiconductor industry (Kim and Lee (2012)), it is necessary that a common server, for example a robot, is shared by a number of machines to carry out the machine setups. Then the job processing is executed automatically and independently by the individual machines. The literature regarding the PMSSS problem can be classified into four main categories: _i_) the first category, where only one server is used for the setup operations (Kravchenko and Werner, 1997, 1998; Hasani et al., 2014a; Kim and Lee, 2012; Hamzadayi and Yildiz, 2017; Bektur and Sarac, 2019; Elidrissi et al., 2021); _ii_) the second category, with multiple servers for unloading jobs (without considering the loading operations) (Ou et al., 2010); _iii_) the third category, where two servers are considered, the first server being used for the loading operations and the second one for the unloading operations (Jiang et al., 2017; Benmansour and Sifaleras, 2021; Elidrissi et al., 2022); _iv_) the last category, where only one server is used for both loading and unloading operations (Xie et al., 2012; Hu et al., 2013; Jiang et al., 2014, 2015, 2015). Table 1 summarizes the papers included in categories _ii_), _iii_) and _iv_). Elidrissi et al. (2021) presented a short review of the papers considering category _i_). In this paper, we address the scheduling problem with two identical parallel machines and a single server in charge of the loading and unloading operations of jobs. The objective involves the minimization of the makespan. The static version is considered, where the information about the problem is available before the scheduling starts. Following the standard three-field notation (Graham et al., 1979), the considered problem can be denoted as \(P2,S1|s_{j},t_{j}|C_{max}\), where \(P2\) represents the two identical parallel machines, \(S1\) represents the single server, \(s_{j}\) is the loading time of job \(J_{j}\), \(t_{j}\) is the unloading time of job \(J_{j}\) and \(C_{max}\) is the objective to be minimized (i.e., the makespan). For the problem involving several identical servers, Kravchenko and Werner (1998) studied the problem with \(k\geq 2\) servers in order to minimize the makespan. The authors state that the multiple servers are in charge of only the loading operation of the jobs. In addition, they showed that the problem is unary \(\mathcal{NP}\)-hard for each \(k<m\). Later, Werner and Kravchenko (2010) showed that the problem with \(k\) servers with an objective function involving the minimization of the makespan is binary \(\mathcal{NP}\)-hard. In the context of the milk run operations of a logistics company that faces limited unloading docks at the warehouse, Ou et al. (2010) studied the problem of scheduling an arbitrary number of identical parallel machines with multiple unloading servers, with an objective function involving the minimization of the total completion time. The authors showed that the shortest processing time (SPT) algorithm has a worst-case bound of \(2\) and proposed other heuristic algorithms as well as a branch-and-bound algorithm to solve the problem. Later, Jiang et al. (2017) studied the problem \(P2,S2|s_{j}=t_{j}=1|C_{max}\) with unit loading times and unloading times. They showed that the classical list scheduling (LS) and the largest processing time (LPT) heuristics have worst-case ratios of 8/5 and 6/5, respectively.
Later, Benmansour and Sifaleras (2021) suggested a mathematical programming formulation and a general variable neighborhood search (GVNS) metaheuristic for the general case of the problem \(P2,S2|s_{j},t_{j}|C_{max}\) with only two identical parallel machines. Recently, Elidrissi et al. (2022) addressed the problem \(P,S2|s_{j},t_{j}|C_{max}\) with an arbitrary number of machines. The authors considered the regular case of the problem, where \(\forall i,j\quad p_{i}<s_{j}+p_{j}+t_{j}\). They proposed two mathematical programming formulations and three versions of the GVNS metaheuristic with different mechanisms for finding an initial solution. In the scheduling literature, the problem \(P2,S1|s_{j},t_{j}|C_{max}\) involving both loading and unloading operations has attracted the attention of the researchers. Xie et al. (2012) addressed the problem \(P2,S1|s_{j},t_{j}|C_{max}\). They derived some optimal properties, and they showed that the LPT heuristic generates a tight worst-case bound of \(3/2-1/2m\). Hu et al. (2013) considered the classical algorithms LS and LPT for the problem \(P2,S1|s_{j},t_{j}|C_{max}\) where \(s_{j}=t_{j}=1\). They showed that LS and LPT generate tight worst-case ratios of 12/7 and 4/3, respectively. Jiang et al. (2014) addressed the problem \(P2,S1|pmtn,s_{j}=t_{j}=1|C_{max}\) with preemption, and unit loading and unloading times. They presented an \(O(n\log n)\) solution algorithm for the problem. Later, Jiang et al. (2015a) considered the online version of the problem \(P2,S1|s_{j},t_{j}|C_{max}\). The authors suggested an algorithm with a competitive ratio of 5/3. In another paper, Jiang et al. (2015b) studied the problem \(P2,S1|s_{j}=t_{j}=1|C_{max}\) with unit loading and unloading times. They showed that the LS and LPT algorithms have tight worst-case ratios of 12/7 and 4/3, respectively. As far as we know, no solution methods have been proposed in the literature for the problem \(P2,S1|s_{j},t_{j}|C_{max}\). Our paper aims at bridging this gap. We also compare our results with the literature regarding the problem \(P2,S2|s_{j},t_{j}|C_{max}\) involving a dedicated loading server and a dedicated unloading server. The main contributions of this paper are as follows: * To the best of our knowledge, no study proposes solution methods for the parallel machine scheduling problem with a single server in charge of the loading and unloading operations of the jobs. Our study generalizes the classical parallel machine scheduling problem with a single server by considering the unloading operations. * We present for the first time in the literature two mixed-integer linear programming formulations for the problem \(P2,S1|s_{j},t_{j}|C_{max}\). The first one is based on completion-time variables and the second one is based on time-indexed variables. Two valid inequalities are suggested to enhance the completion-time variables formulation. * We show that for the problem \(P2,S1|s_{j},t_{j}|C_{max}\), the minimization of the makespan is equivalent to the minimization of the idle times of the machines. In addition, three polynomial-time solvable cases and a tight theoretical lower bound are proposed. * We design an efficient GVNS algorithm with two mechanisms for finding an initial solution to solve large-sized instances of the problem. We provide a new data set and examine the solution quality for different problem instances. The performance of GVNS is compared with a greedy randomized adaptive search procedure (GRASP) metaheuristic.
* Some managerial insights are presented, and our results are compared with the literature regarding the problem \(P2,S2|s_{j},t_{j}|C_{max}\) involving two dedicated servers (one for the loading operations and one for the unloading operations). The rest of this paper is organized as follows. Section 2 presents a formal description of the problem. In Section 3, we present two MILP formulations along with two valid inequalities for the addressed problem. A machines idle-time property, polynomial-time solvable cases and a lower bound are presented in Section 4. In Section 5, an iterative improvement procedure and two metaheuristics are presented. Numerical experiments are discussed in Section 6. Section 7 presents some managerial insights and a comparison with the literature. Finally, concluding remarks are given in Section 8. ## 2 Definition of the problem and notation The aim of this section is to give a detailed description of the problem \(P2,S1|s_{j},t_{j}|C_{max}\). We are given a set \(M=\{1,2\}\) of two identical parallel machines that are available to process a set \(\mathcal{N}_{1}=\{J_{1},\ldots,J_{n}\}\) of \(n\) independent jobs. Each job \(J_{j}\in\mathcal{N}_{1}\) is available at the beginning of the scheduling period and has a known integer processing time \(p_{j}>0\). Before its processing, each job \(J_{j}\) has to be loaded by the single server, and the loading time is \(s_{j}>0\). After its processing, a job has to be unloaded from the machine by the same server, and the unloading time is \(t_{j}>0\). The processing operation starts immediately after the end of the loading operation, and the unloading operation starts immediately after the end of the processing operation. During the loading (resp. unloading) operation, both the machine and the server are occupied, and after loading (resp. unloading) a job, the server becomes available for loading (resp. unloading) the next job. Furthermore, there are no precedence constraints among the jobs, and preemption is not allowed. The objective is to find a feasible schedule that minimizes the makespan. The following notation is used to define this problem: _Sets_ * \(n\): number of jobs * \(M=\{1,2\}\): set of two machines * \(\mathcal{N}_{1}=\{J_{1},\ldots,J_{n}\}\): set of jobs to be processed on the machines * \(\mathcal{N}_{2}=\{J_{n+1},\ldots,J_{2n}\}\): set of loading dummy jobs to be processed on the server * \(\mathcal{N}_{3}=\{J_{2n+1},\ldots,J_{3n}\}\): set of unloading dummy jobs to be processed on the server * \(\mathcal{N}=\mathcal{N}_{1}\cup\mathcal{N}_{2}\cup\mathcal{N}_{3}\) _Parameters_ * \(s_{j}\): loading time of job \(J_{j}\) * \(p_{j}\): processing time of job \(J_{j}\) * \(t_{j}\): unloading time of job \(J_{j}\) * \(A_{j}\): length of job \(J_{j}\) (\(A_{j}=s_{j}+p_{j}+t_{j}\)) * \(B\): a large positive integer _Continuous decision variables_ * \(C_{j}\): completion time of job \(J_{j}\) * \(C_{j+n}\): completion time of the loading dummy job \(J_{j+n}\) * \(C_{j+2n}\): completion time of the unloading dummy job \(J_{j+2n}\) For the purpose of modeling, we adopt the following notation, where the parameter \(\rho\) represents the duration of the jobs and dummy jobs, either on the machine or on the server.
\[\rho_{j}=\left\{\begin{array}{ll}A_{j}&\forall j\in\mathcal{N}_{1}\\ s_{j-n}&\forall j\in\mathcal{N}_{2}\\ t_{j-2n}&\forall j\in\mathcal{N}_{3}\end{array}\right.\] ## 3 Mixed-integer linear programming (MILP) formulations MILP formulations are well studied in the literature for different scheduling problems, such as a single machine, parallel machines, a flow shop, a job shop and an open shop, etc. (see Michael (2018)). The main MILP formulations for scheduling problems can be classified according to the nature of the decision variables (Unlu and Mason, 2010; Kramer et al., 2021; Elidrissi et al., 2021). In this section, we derive two MILP formulations based on completion-time variables and time-indexed variables for the problem \(P2,S1|s_{j},t_{j}|C_{max}\). Before presenting the two MILP formulations, we define our suggested dummy-job representation. ### Dummy-job representation A dummy-job representation (see Elidrissi et al. (2021)) is used in this paper to simplify the problem and make it possible to model the problem as a relatively neat MILP model. Indeed, in our modeling, we consider the single server as the \((m+1)^{th}\) (i.e., third) machine. Each time the server is used to load (resp. unload) a job \(J_{j}\in\mathcal{N}_{1}\) on a machine \(k\in M\), a dummy job \(J_{j+n}\in\mathcal{N}_{2}\) (resp. \(J_{j+2n}\in\mathcal{N}_{3}\)) is processed on the dummy machine \((m+1)\) at the same time. This dummy job \(J_{j+n}\) (resp. \(J_{j+2n}\)) has a processing time equal to the loading time (resp. unloading time) of the job \(J_{j}\) (i.e., \(s_{j}=p_{j+n}\) and \(t_{j}=p_{j+2n}\)\(\forall j\in\{1,\ldots,n\}\)) (see Figure 1). To define the MILP formulations, we adopt the dummy-job representation. ### Formulation 1: Completion-time variables In this section, we propose a completion-time variables formulation (\(CF\)) for the problem \(P2,S1|s_{j},t_{j}|C_{max}\). A completion-time variables (or disjunctive) formulation (see Balas (1985)) has been widely used to model different scheduling problems (Baker and Keller, 2010; Keha et al., 2009; Elidrissi et al., 2022). In our formulation, we use the following decision variables. Binary decision variables: \[x_{i,k}=\left\{\begin{array}{ll}1&\mbox{ if job $J_{i}$ is processed on machine $k$}\\ 0&\mbox{otherwise}\end{array}\right.\] \[z_{i,j}=\left\{\begin{array}{ll}1&\mbox{ if job $J_{i}$ is processed before job $J_{j}$}\\ 0&\mbox{otherwise}\end{array}\right.\] \[y_{i+n,j+n}=\left\{\begin{array}{ll}1&\mbox{ if the loading dummy job $J_{i+n}$ is processed before the loading dummy}\\ &\mbox{ job $J_{j+n}$ on the server}\\ 0&\mbox{otherwise}\end{array}\right.\] \[y_{i+2n,j+2n}=\left\{\begin{array}{ll}1&\mbox{ if the unloading dummy job $J_{i+2n}$ is processed before the unloading dummy}\\ &\mbox{ job $J_{j+2n}$ on the server}\\ 0&\mbox{otherwise}\end{array}\right.\] The objective function (1) indicates that the makespan (i.e., the completion time of the last job that finishes its processing on the machines) is to be minimized. Constraint set (2) represents the restriction that the makespan of an optimal schedule is greater than or equal to the completion time of the last executed job. Constraint set (3) states that each job must be processed on exactly one machine. Constraint set (4) ensures that the completion time of each job is greater than or equal to the sum of the loading, processing and unloading times of this job. In addition, the completion time of each loading dummy job (resp.
unloading dummy job) is greater than or equal to its loading time (resp. unloading time). Constraint sets (5) and (6) indicate that no two jobs \(J_{i}\) and \(J_{j}\) scheduled on the same machine (i.e., \(x_{i,k}=x_{j,k}=1\)) can overlap in time. Constraint sets (7) and (8) state that no two dummy jobs \(J_{j+n}\) (resp. \(J_{j+2n}\)) and \(J_{i+n}\) (resp. \(J_{i+2n}\)) scheduled on the single server can overlap in time.

Figure 1: Dummy-job representation.

\[(CF) \min C_{max} \tag{1}\] \[s.t. C_{max}\geq C_{i}\quad\forall i\in\mathcal{N}_{1}\] (2) \[\sum_{k\in M}x_{i,k}=1\quad\forall i\in\mathcal{N}_{1}\] (3) \[C_{i}\geq\rho_{i}\quad\forall i\in\mathcal{N}_{1}\cup\mathcal{N}_{2}\cup\mathcal{N}_{3}\] (4) \[C_{i}+\rho_{j}\leq C_{j}+B(3-x_{i,k}-x_{j,k}-z_{i,j})\quad\forall i,j\in\mathcal{N}_{1},i\neq j\] (5) \[z_{i,j}+z_{j,i}=1\quad\forall i,j\in\mathcal{N}_{1},i\neq j\] (6) \[C_{i}+\rho_{j}\leq C_{j}+B(1-y_{i,j})\quad\forall i,j\in\mathcal{N}_{2}\cup\mathcal{N}_{3},i\neq j\] (7) \[y_{i,j}+y_{j,i}=1\quad\forall i,j\in\mathcal{N}_{2}\cup\mathcal{N}_{3},i\neq j\] (8) \[C_{i}=C_{i+n}+\rho_{i}-\rho_{i+n}\quad\forall i\in\mathcal{N}_{1}\] (9) \[C_{i}=C_{i+2n}\quad\forall i\in\mathcal{N}_{1}\] (10) \[z_{i,j}\in\{0,1\}\quad\forall i\in\mathcal{N}_{1}\] (11) \[x_{i,k}\in\{0,1\}\quad\forall i\in\mathcal{N}_{1},\forall k\in M\] (12) \[y_{i,j}\in\{0,1\}\quad\forall i,j\in\mathcal{N}_{2}\cup\mathcal{N}_{3} \tag{13}\] Constraints (9) calculate the completion time of each job \(J_{i}\): \(C_{i}\) is equal to the completion time of the loading operation, \(C_{i+n}\), plus the processing time and the unloading time of the same job (i.e., \(\rho_{i}-\rho_{i+n}\)). Finally, the completion time of the job \(J_{i}\) is equal to the completion time of the unloading operation of the same job (10). Constraint sets (11)-(13) define the variables \(z_{i,j}\), \(x_{i,k}\) and \(y_{i,j}\) as binary ones. ### Strengthening the completion-time variables formulation We present here two valid inequalities to reduce the time required to solve the problem \(P2,S1|s_{j},t_{j}|C_{max}\) with the \(CF\) formulation. **Proposition 1**.: _The following constraints are valid for the CF formulation._ \[C_{max}\geq\sum_{j\in\mathcal{N}_{1}}A_{j}x_{j,k} \forall k\in M \tag{14}\] Proof.: \(\sum_{j=1}^{n}A_{j}x_{j,k}\) represents the total workload of machine \(k\) (idle times are not counted). It is easy to see that \(C_{max}\geq\sum_{j\in\mathcal{N}_{1}}A_{j}x_{j,k}\). Hence, inequalities (14) hold. Since the two machines are identical, Constraints (15) break the symmetry among the machines. **Proposition 2**.: _The following constraints are valid for the \(CF\) formulation._ \[\sum_{j\in\mathcal{N}_{1},j<i}x_{j,k-1}\geq x_{i,k} \forall i\in\mathcal{N}_{1},\forall k\in M\setminus\{1\} \tag{15}\] Note that we refer to the formulation (1)-(13) as \(CF\), and to the formulation with the additional constraints (14) and (15), i.e., (1)-(15), as \(CF^{+}\). A computational comparison between \(CF\) and \(CF^{+}\) is conducted in Section 6. ### Formulation 2: time-indexed variables In this section, we propose a time-indexed variables formulation (\(TIF\)) for the problem \(P2,S1|s_{j},t_{j}|C_{max}\). A time-indexed variables formulation was introduced by Sousa and Wolsey (1992) for the non-preemptive single machine scheduling problem. It has been used to model different scheduling problems (see Keha et al., 2009; Baker and Keller, 2010; Unlu and Mason, 2010). This formulation is based on a time discretization.
The time is divided into periods \(1,2,3,\ldots,T\), where period \(t\) starts at time \(t-1\) and ends at time \(t\). The horizon \(T\) is an important part of the formulation, and the size of the formulation depends on its value. Any upper bound (\(UB\)) can be chosen as \(T\). However, a tighter upper bound is preferable to reduce the problem size, as the number of time points is pseudo-polynomial in the size of the input. In our formulation, we choose \(T=\sum_{j\in\mathcal{N}_{1}}(A_{j})\). The decision variables are defined as follows: \[x_{\{i,t^{\prime}\}}=\left\{\begin{array}{ll}1&\mbox{if job $i$ starts processing at time $t^{\prime}$}\\ 0&\mbox{otherwise}\end{array}\right.\] \[(TIF) \min C_{max} \tag{16}\] \[s.t. \sum_{t^{\prime}=0}^{T-\rho_{i}}(t^{\prime}+\rho_{i})x_{\{i,t^{ \prime}\}}\leq C_{max}\quad\forall i\in\mathcal{N}_{1}\] (17) \[\sum_{i\in\mathcal{N}_{1}}\sum_{s=max(0,t^{\prime}-\rho_{i}+1)}^{ t^{\prime}}x_{\{i,s\}}\leq 2\quad\forall t^{\prime}\in[0,T]\] (18) \[\sum_{i\in\mathcal{N}_{2}\cup\mathcal{N}_{3}}\sum_{s=max(0,t^{\prime}- \rho_{i}+1)}^{t^{\prime}}x_{\{i,s\}}\leq 1\quad\forall t^{\prime}\in[0,T]\] (19) \[\sum_{t^{\prime}=0}^{T-\rho_{i}}x_{\{i,t^{\prime}\}}=1\quad \forall i\in\mathcal{N}_{1}\] (20) \[\sum_{t^{\prime}=0}^{T-\rho_{i}}x_{\{i,t^{\prime}\}}=1\quad \forall i\in\mathcal{N}_{2}\] (21) \[\sum_{t^{\prime}=0}^{T-\rho_{i}}x_{\{i,t^{\prime}\}}=1\quad \forall i\in\mathcal{N}_{3}\] (22) \[x_{\{i,t^{\prime}\}}=x_{\{i+n,t^{\prime}\}}\quad\forall i\in \mathcal{N}_{1},\forall t^{\prime}\in[0,T]\] (23) \[x_{\{i,t^{\prime}\}}=x_{\{i+2n,t^{\prime}+s_{i}+p_{i}\}}\quad \forall i\in\mathcal{N}_{1},\forall t^{\prime}\in[0,T-\rho_{i}]\] (24) \[x_{\{i,t^{\prime}\}}\in\{0,1\}\quad\forall i\in\mathcal{N}_{1}, \forall t^{\prime}\in[0,T-\rho_{i}] \tag{25}\] In this formulation, the objective function (16) indicates that the makespan is to be minimized. Constraint set (17) represents the fact that the makespan of an optimal schedule is greater than or equal to the completion times of all executed jobs, where a job \(J_{i}\) that starts its loading operation at time point \(t^{\prime}\) (i.e., a job for which \(x_{i,t^{\prime}}=1\)) finishes at time \(C_{i}=t^{\prime}+\rho_{i}\). The completion time of job \(J_{i}\) is calculated as \(C_{i}=\sum_{t^{\prime}=0}^{T-\rho_{i}}(t^{\prime}+\rho_{i})x_{i,t^{\prime}}\). The set of constraints (18) specifies that at any given time, at most two jobs can be processed on all machines. Constraints (19) ensure that at any given time, at most one dummy job (loading dummy job or unloading dummy job) can be processed by the server (i.e., the dummy machine). Constraints (20) express that each job \(J_{i}\) must start at some time point \(t^{\prime}\) in the scheduling horizon, where \(t^{\prime}\leq T-s_{i}-p_{i}-t_{i}\). Constraints (21) state that each loading dummy job must start at some time point \(t^{\prime}\) on the dummy machine in the scheduling horizon, where \(t^{\prime}\leq T-s_{i}\). Constraints (22) guarantee that each unloading dummy job must start at some time point \(t^{\prime}\) on the dummy machine in the scheduling horizon, where \(t^{\prime}\leq T-t_{i}\). Constraints (23) express that the start time of the loading dummy job \(J_{i+n}\) on the dummy machine and the start time of the job \(J_{i}\) are the same (i.e., \(x_{i,t^{\prime}}=x_{i+n,t^{\prime}}\)). Constraints (24) ensure that the unloading operation of the job \(J_{i}\) starts immediately after the end of the processing operation of the same job (i.e., \(x_{i,t^{\prime}}=x_{i+2n,t^{\prime}+s_{i}+p_{i}}\)).
Finally, constraints (25) define the feasibility domain of the decision variables. ### Enhanced time-indexed formulation We show here how to reduce the number of variables and constraints required by the \(TIF\) formulation (16)-(25) and, therefore, improve its computational behavior. The size of the \(TIF\) formulation depends on the time horizon \(T\). Thus, a reduction of the length of the time horizon is necessary. To do so, we fix the value of \(T\) to the approximate makespan solution given by the GVNS metaheuristic presented in Section 5.2. It is clear that the new value of \(T\) is less than the upper bound \(UB=\sum_{j\in\mathcal{N}_{1}}(A_{j})\). A comparative study between these two values of \(T\) is conducted in the section on computational results (Section 6). We refer to \(TIF\) with the reduced value of the time horizon \(T\) as formulation \(TIF^{+}\). ## 4 Machines idle-time property, polynomial-time solvable cases and lower bounds ### Machines idle-time property In this section, we show that for the problem \(P2,S1|s_{j},t_{j}|C_{max}\), the minimization of the makespan is equivalent to the minimization of the total _idle time_ of the machines. First, we denote by \(\widehat{IT}\) the total _idle time_ of the machines. The machine _idle time_ is the time during which a machine that has just finished the unloading operation of a job is idle before it starts the loading operation of the next job (we recall that during the loading and unloading operations, both the machine and the server are occupied). Indeed, this _idle time_ is due to the unavailability of the server. Note that we include in this definition the _idle time_ on a machine after all of its processing is completed, but before the other machine completes its processing (see Koulamas (1996)). In addition, we denote by \(IT_{k}\) the total machine _idle time_ on machine \(k\). Therefore, Proposition 3 and Proposition 4 can be derived. **Proposition 3**.: _The total idle time of machine \(k\) is computed as follows:_ \[IT_{k}=C_{max}-\sum_{j\in\mathcal{N}_{1}}x_{j,k}A_{j}\quad\forall k\in M \tag{26}\] **Proposition 4**.: _The total idle time of the machines is equal to:_ \[\widehat{IT}=mC_{max}-\sum_{j\in\mathcal{N}_{1}}A_{j} \tag{27}\] Proof.: Since we have \[\sum_{k\in M}x_{j,k}=1\quad\forall j\in\mathcal{N}_{1}\] \[IT_{k}=C_{max}-\sum_{j\in\mathcal{N}_{1}}x_{j,k}(s_{j}+p_{j}+t_{j})\quad\forall k\in M\] we obtain \[\widehat{IT} =\sum_{k\in M}IT_{k}\] \[=\sum_{k\in M}\left(C_{max}-\sum_{j\in\mathcal{N}_{1}}x_{j,k}(s_{j }+p_{j}+t_{j})\right)\] \[=\sum_{k\in M}C_{max}-\sum_{k\in M}\sum_{j\in\mathcal{N}_{1}}x_{j,k}(s_{j}+p_{j}+t_{j})\] \[=\sum_{k\in M}C_{max}-\sum_{j\in\mathcal{N}_{1}}\sum_{k\in M}x_{j,k}(s_{j}+p_{j}+t_{j})\] \[=mC_{max}-\sum_{j\in\mathcal{N}_{1}}(s_{j}+p_{j}+t_{j})\] \[=mC_{max}-\sum_{j\in\mathcal{N}_{1}}A_{j}\] Therefore, for the problem \(P2,S1|s_{j},t_{j}|C_{max}\), the minimization of the makespan is equivalent to the minimization of the total _idle time_ of the machines. ### Polynomial-time solvable cases We now present some polynomial-time solvable cases for the problem \(P2,S1|s_{j},t_{j}|C_{max}\). **Proposition 5**.: _We consider a set of jobs, where \(s_{i}=p_{j}\quad\forall i,j\in\mathcal{N}_{1}\) and \(p_{i}=t_{j}\quad\forall i,j\in\mathcal{N}_{1}\). Then all permutations define an optimal schedule.
In this case, the optimal makespan is equal to the sum of all loading and unloading times of the jobs._ Proof.: We assume that the processing time of the job scheduled at position 1 is equal to the loading time of the job scheduled at position 2, and the processing time of the job scheduled at position 2 is equal to the unloading time of the job scheduled at position 1. Then, the job at position 2 will start immediately after the end of the loading operation of the job at position 1, and the unloading operation of the job at position 2 will start immediately after the end of the unloading operation of the job at position 1. Therefore, the completion time of the job at position 2 is equal to \(C_{[2]}=C_{[1]}+t_{[2]}\), and the waiting time of the server is equal to 0. Now, if we consider \(n\) jobs to be scheduled with \(s_{i}=p_{j}\)\(\forall i,j\in\mathcal{N}_{1}\) and \(p_{i}=t_{j}\)\(\forall i,j\in\mathcal{N}_{1}\), then the jobs will alternate on the two machines, and the total waiting time of the server is equal to zero (see Figure 2). Therefore, in this case all permutations represent an optimal schedule, and the optimal makespan (\(C^{*}_{max}\)) is equal to the sum of all loading and unloading times (i.e., \(C^{*}_{max}=\sum_{j\in\mathcal{N}_{1}}(s_{j}+t_{j})\)). **Proposition 6**.: _Consider a set of jobs, where \(p_{j}<s_{i}\quad\forall i,j\in\mathcal{N}_{1}\). Then all permutations define an optimal solution. In this case, the optimal makespan is equal to the sum of the lengths of all jobs (\(C^{*}_{max}=\sum_{j\in\mathcal{N}_{1}}A_{j}\))._ Proof.: We assume that the processing time of the job scheduled at position 1 is strictly less than the loading time of the job scheduled at position 2. Thus, the job at position 2 cannot be scheduled immediately after the end of the loading operation of the job scheduled at position 1 (see Figure 3a). This is because only one server is available in the system. Thus, the job at position 2 can be scheduled only after the end of the unloading operation of the job at position 1. In this case, the completion time of the job scheduled at position 2 is equal to \(C_{[2]}=C_{[1]}+A_{[2]}\). Now, if we consider \(n\) jobs with \(p_{j}<s_{i}\)\(\forall i,j\in\mathcal{N}_{1}\), then each job at position \([i]\) can start its loading operation immediately after the end of the unloading operation of the job scheduled at position \([i-1]\). Therefore, in this case all permutations represent an optimal schedule, and the optimal makespan is equal to the sum of the lengths of all jobs (i.e., \(C^{*}_{max}=\sum_{j\in\mathcal{N}_{1}}(A_{j})\)).

Figure 2: Polynomial-time solvable case 1. Figure 3: Polynomial-time solvable case 2.

**Proposition 7**.: _Consider a set of jobs, where \(s_{i}\leq p_{j}\quad\forall i,j\in\mathcal{N}_{1}\) and \(p_{i}<t_{j}\quad\forall i,j\in\mathcal{N}_{1}\). Then all permutations define an optimal solution. In this case, the optimal makespan is equal to the sum of the lengths of all jobs (\(C^{*}_{max}=\sum_{j\in\mathcal{N}_{1}}A_{j}\))._ Proof.: First, we assume that the loading time of the job to be scheduled at position 2 is less than or equal to the processing time of the job scheduled at position 1. Hence, the job to be scheduled at position 2 can start its loading operation in the interval between the end of the loading operation and the start of the unloading operation of the job at position 1.
Now, suppose that the processing time of the job at position 2 is strictly less than the unloading time of the job at position 1. Then the job to be scheduled at position 2 can only start its loading operation after the end of the unloading operation of the job at position 1 (see Figure 4b). Therefore, all permutations define an optimal schedule, and the optimal makespan is equal to the sum of the lengths of all jobs (i.e., \(C^{*}_{max}=\sum_{j\in\mathcal{N}_{1}}(A_{j})\)).

Figure 4: Polynomial-time solvable case 3.

### Lower bound We now introduce a theoretical lower bound (\(LB_{T}\)) on the optimal objective function value of the problem \(P2,S1|s_{j},t_{j}|C_{max}\), namely \(LB_{T}=\max(LB_{1},LB_{2})\), where \(LB_{1}\) and \(LB_{2}\) are given in Propositions 8 and 9, respectively. **Proposition 8**.: \[LB_{1}=\frac{\min_{j\in\mathcal{N}_{1}}s_{j}+\sum_{j\in\mathcal{N}_{1}}A_{j}+\min_{j\in\mathcal{N}_{1}}t_{j}}{2}\] _is a valid lower bound for the problem \(P2,S1|s_{j},t_{j}|C_{max}\)._ Proof.: Let \(C^{*}_{max}\) denote the objective function value of an optimal schedule of the problem \(P2,S1|s_{j},t_{j}|C_{max}\). If there is no idle time between two consecutive jobs scheduled on the same machine (i.e., the gap between the end of the unloading operation of a job and the start time of the loading operation of the next job scheduled on the same machine is equal to zero) in an optimal schedule of the problem \(P2,S1|s_{j},t_{j}|C_{max}\), then \(C^{*}_{max}\) will be equal to the sum of all loading times, processing times and unloading times plus \(\min_{j\in\mathcal{N}_{1}}t_{j}\) and \(\min_{j\in\mathcal{N}_{1}}s_{j}\), divided by the number of machines \(m=2\). Adding \(\min_{j\in\mathcal{N}_{1}}t_{j}\) and \(\min_{j\in\mathcal{N}_{1}}s_{j}\) to the sum of all loading times, processing times and unloading times constitutes the total load to be executed by the two machines. It is then sufficient to divide this load by 2 (i.e., the number of machines) to obtain the aforementioned lower bound. **Proposition 9**.: \[LB_{2}=\sum_{j\in\mathcal{N}_{1}}(A_{j}-p_{j})\] _is a valid lower bound for the problem \(P2,S1|s_{j},t_{j}|C_{max}\)._ Proof.: This lower bound can be easily derived from Proposition 4. Therefore: \[LB_{T}=\max\left(\frac{\min_{j\in\mathcal{N}_{1}}s_{j}+\sum_{j\in\mathcal{N}_{1}}A_{j}+\min_{j\in\mathcal{N}_{1}}t_{j}}{2},\sum_{j\in\mathcal{N}_{1}}(A_{j}-p_{j})\right)\] ## 5 Solution approaches This section presents the solution methods to solve large-sized instances of the problem \(P2,S1|s_{j},t_{j}|C_{max}\). First, the solution representation and an initial solution based on an iterative improvement procedure in the insertion neighborhood are presented (Section 5.1). Then, two metaheuristics, namely general variable neighborhood search (GVNS) (Section 5.2) and greedy randomized adaptive search procedures (GRASP) (Section 5.3), are proposed. The solution approaches are evaluated by extensive computational experiments described in Section 6. ### Solution representation and initial solution A solution of the problem \(P2,S1|s_{j},t_{j}|C_{max}\) can be represented as a permutation \(\Pi=\{\pi_{1},\ldots,\pi_{k},\ldots,\pi_{n}\}\) of the job set \(\mathcal{N}_{1}\), where \(\pi_{k}\) represents the job scheduled at the \(k^{th}\) position. Any permutation of jobs is feasible if a particular machine and the single server are available simultaneously; an illustrative decoding sketch is given below.
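To make the schedule semantics and the bound \(LB_{T}\) concrete, the following Python sketch is provided as an illustration only; it is not the authors' C++ implementation, and the random test instance, the function names and the greedy tie-breaking rule are our own assumptions. It places each job of the permutation at the earliest start time such that a machine is free and the single server is free during both the loading window and the unloading window (recall that loading, processing and unloading are inseparable), which matches the earliest-start rule described in the next paragraph, and it computes \(LB_{T}=\max(LB_{1},LB_{2})\) of Section 4.

```python
import random

def decode(perm, s, p, t):
    """Decode a permutation into a feasible schedule for P2,S1|s_j,t_j|C_max.

    A job occupies a machine for s+p+t contiguously, and the single server
    during [a, a+s) (loading) and [a+s+p, a+s+p+t) (unloading), where a is
    the job's start time. Returns the makespan of the decoded schedule.
    """
    machine_free = [0, 0]   # earliest availability of each machine
    server_busy = []        # (begin, end) reservations of the single server

    def server_ok(a, j):
        windows = ((a, a + s[j]), (a + s[j] + p[j], a + s[j] + p[j] + t[j]))
        return all(not (x < e and b < y)          # half-open interval overlap
                   for (b, e) in server_busy for (x, y) in windows)

    def earliest_start(j, lb):
        # The earliest feasible start is either lb itself or a point where a
        # server window's left edge coincides with the end of a reservation.
        cand = {lb}
        for (_, e) in server_busy:
            for a in (e, e - s[j] - p[j]):
                if a >= lb:
                    cand.add(a)
        return min(a for a in cand if server_ok(a, j))

    cmax = 0
    for j in perm:
        starts = [earliest_start(j, machine_free[m]) for m in (0, 1)]
        m = 0 if starts[0] <= starts[1] else 1    # greedy machine choice
        a = starts[m]
        machine_free[m] = a + s[j] + p[j] + t[j]
        server_busy.append((a, a + s[j]))
        server_busy.append((a + s[j] + p[j], machine_free[m]))
        cmax = max(cmax, machine_free[m])
    return cmax

def lb_t(s, p, t):
    """Theoretical lower bound LB_T = max(LB_1, LB_2)."""
    total_a = sum(sj + pj + tj for sj, pj, tj in zip(s, p, t))
    lb1 = (min(s) + total_a + min(t)) / 2
    lb2 = sum(sj + tj for sj, tj in zip(s, t))    # sum of (A_j - p_j)
    return max(lb1, lb2)

# Hypothetical test instance in the spirit of Section 6.1 (alpha_3 regime).
random.seed(1)
p = [random.randint(10, 100) for _ in range(8)]
s = [round(random.uniform(0.1, 0.5) * pj) for pj in p]
t = [round(random.uniform(0.1, 0.5) * pj) for pj in p]
print("C_max =", decode(list(range(8)), s, p, t), "| LB_T =", lb_t(s, p, t))
```

The key observation behind `earliest_start` is that, with integer data, the earliest feasible start of a job is either the machine release time or an instant at which one of the job's two server windows begins exactly at the end of an existing server reservation, so scanning this finite candidate set suffices.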
A job at the \(k^{th}\) position is scheduled as soon as possible on an available machine, taking into account the loading and unloading constraints of the single server. Note that in our problem the loading, processing and unloading operations are not separable. We now present an iterative improvement procedure based on the insertion neighborhood that is used as an initial solution for our suggested GVNS (Section 5.2). This procedure has been successfully used in different scheduling problems (see Ruiz and Stutzle (2007, 2008)). In each step, a job \(\pi_{k}\) is removed at random from \(\Pi\) and then inserted at all possible \(n\) positions. The procedure stops if no improvement is found. It is depicted in Algorithm 1. In Section 6, we show the benefit of using the iterative improvement procedure as a solution finding mechanism for the GVNS metaheuristic. ### General variable neighborhood search Variable Neighborhood Search (VNS) is a local-search-based metaheuristic introduced by Mladenovic and Hansen (1997). It aims to generate a solution that is a local optimum with respect to one or several neighborhood structures. VNS has been successfully applied to different scheduling problems (see Todosjevic et al. (2016); Chung et al. (2019); Elidrissi et al. (2022); Maecker et al. (2023)). It consists of three main steps: _i_) Shaking step (diversification), _ii_) Local Search step (intensification), and _iii_) Change Neighborhood step (Move or Not). We note that VNS has been less used as a solution method for the PMSSS problem. Mainly, the following metaheuristics have been applied to the PMSSS problem: _simulated annealing_ (Kim and Lee, 2012; Hasani et al., 2014, 2016; Hamzadayi and Yildiz, 2016, 2017; Bektur and Sarac, 2019); _genetic algorithm_ (Abdekhodaee et al., 2006; Huang et al., 2010; Hasani et al., 2014; Hamzadayi and Yildiz, 2017); _tabu search_ (Kim and Lee, 2012; Hasani et al., 2014; Bektur and Sarac, 2019; Alharkan et al., 2019); _ant colony optimization_ (Arnaout, 2017); _geometric particle swarm optimization_ (Alharkan et al., 2019); _iterative local search_ (Silva et al., 2019); and _worm optimization_ (Arnaout, 2021). In this section, we propose a General VNS (GVNS) which uses a variable neighborhood descent (VND) as a local search (Hansen et al., 2017). GVNS starts with an initial solution (generated by the iterative improvement procedure or randomly). Then, the shaking procedure and VND are applied to try to improve the current solution. Finally, this procedure continues until all predefined neighborhoods have been explored and a stopping criterion is met (e.g., a time limit). As far as we know, this is the first study in the literature to propose a GVNS for a parallel machine scheduling problem involving a single server. #### 5.2.1 Neighborhood structures Three neighborhood structures commonly used in the literature are adapted to the problem \(P2,S1|s_{j},t_{j}|C_{max}\). The first one is an Interchange-based neighborhood, the second one is an Insert-based neighborhood and the last one is a Reverse-based neighborhood. These structures have been widely applied to solve different scheduling problems (see Hasani et al. (2014, 2014); Alharkan et al. (2019)). The neighborhood structures are defined as follows: * Interchange(\(\Pi\)): It consists of selecting a pair of jobs and exchanging their positions.
We consider a solution of the problem denoted as \(\Pi_{s}=\{\pi_{1}^{s},\ldots,\pi_{k}^{s},\ldots,\pi_{n}^{s}\}\), and one of its neighbors \(\Pi_{t}=\{\pi_{1}^{t},\ldots,\pi_{k}^{t},\ldots,\pi_{n}^{t}\}\). We fix two different positions (\(a\neq b\)), and we exchange the jobs scheduled at these two positions (i.e., \(\pi_{a}^{t}=\pi_{b}^{s}\), \(\pi_{b}^{t}=\pi_{a}^{s}\) and \(\pi_{x}^{t}=\pi_{x}^{s}\)\(\forall x,x\neq(a,b)\)). * Reverse(\(\Pi\)): It consists of all solutions obtained from the solution \(\Pi\) by reversing a subsequence of \(\Pi\). More precisely, given two jobs \(\pi_{a}\) and \(\pi_{b}\), we construct a new sequence by first deleting the connection between \(\pi_{a}\) and its successor \(\pi_{a+1}\) and the connection between \(\pi_{b}\) and its successor \(\pi_{b+1}\). Next, we connect \(\pi_{a}\) with \(\pi_{b}\) and \(\pi_{a+1}\) with \(\pi_{b+1}\). * Insert(\(\Pi\)): It consists of all solutions obtained from the solution \(\Pi\) by removing a job and inserting it at another position in the sequence. We consider a solution \(\Pi_{s}=\{\pi_{1}^{s},\ldots,\pi_{k}^{s},\ldots,\pi_{n}^{s}\}\), and one of its neighbors \(\Pi_{t}=\{\pi_{1}^{t},\ldots,\pi_{k}^{t},\ldots,\pi_{n}^{t}\}\). If \(a<b\), then \(\pi_{a}^{t}=\pi_{b}^{s}\), \(\pi_{a+1}^{t}=\pi_{a}^{s},\ldots,\pi_{b}^{t}=\pi_{b-1}^{s}\). Otherwise, if \(b<a\), then \(\pi_{b}^{t}=\pi_{b+1}^{s},\ldots,\pi_{a-1}^{t}=\pi_{a}^{s}\), \(\pi_{a}^{t}=\pi_{b}^{s}\). #### 5.2.2 Variable neighborhood descent We present here the variable neighborhood descent procedure designed for the problem \(P2,S1|s_{j},t_{j}|C_{max}\). It uses the neighborhood structures described in Section 5.2.1. VND returns a solution which is a local optimum with respect to the Interchange, Insert and Reverse neighborhood structures. The order in which the neighborhoods are explored and the way of moving from one neighborhood to another one modify the performance of VND. For this problem, after performing preliminary experiments that are not presented here (as they concern minor parameters with respect to the overall approaches), the following settings are proposed. First, we use a basic sequential VND as a strategy to switch from one neighborhood to another one. Second, the following order of the neighborhood structures is chosen in the VND procedure: \(i)\) Interchange, \(ii)\) Insert and \(iii)\) Reverse. Finally, the first-improvement strategy (stop generating neighbors as soon as the current solution can be improved in terms of quality) turns out to be better than the best-improvement strategy (generate all the neighbors and choose the best one). The overall VND pseudo-code is presented in Algorithm 2.

```
Data: A sequence \(\Pi\)
Result: \(\Pi\)
procedure Basic_Sequential_VND()
repeat
    \(l\gets 1\);
    while \(l\leq 3\) do
        switch \(l\) do
            case \(l=1\) do \(\Pi^{t}\leftarrow\textsc{Interchange}(\Pi)\); break;
            case \(l=2\) do \(\Pi^{t}\leftarrow\textsc{Insert}(\Pi)\); break;
            case \(l=3\) do \(\Pi^{t}\leftarrow\textsc{Reverse}(\Pi)\); break;
        end switch
        \(l\gets l+1\);
        if \(C_{max}(\Pi^{t})<C_{max}(\Pi)\) then
            \(\Pi\leftarrow\Pi^{t}\);
            \(l\gets 1\);
        end if
    end while
until there is no improvement
return \(\Pi\);
```
**Algorithm 2** Sequential VND

#### 5.2.3 The proposed GVNS and shaking procedure To escape from local optima and have a chance to obtain a global optimum, we propose the following shaking procedure, depicted in Algorithm 3.
This procedure consists of generating \(k\) random jumps from the current solution \(\Pi^{\prime}\) using the neighborhood structure Reverse (i.e., \(k\) random iterations are performed in Reverse). After preliminary experiments, only one neighborhood structure is used (Reverse), and the value of the diversification factor \(k\) is chosen as 15, since this choice offers the best combination of solution quality (i.e., the quality of the obtained solution) and speed (i.e., the time to generate a feasible solution). The overall pseudo-code of GVNS as it is designed to solve the problem \(P2,S1|s_{j},t_{j}|C_{max}\) is depicted in Algorithm 4. After generating an initial solution (Step 1), a shaking procedure is applied (Step 2). Once the shaking is performed, VND (Algorithm 2) starts exploring the three proposed neighborhood structures (Step 3). Step 2 and Step 3 are repeated until a stopping criterion is met (a CPU time limit denoted as \(t_{max}\)). Since GVNS is a trajectory-based procedure, an initial solution is needed. In this paper, we compare two variants of GVNS, namely, one starting from the iterative improvement procedure, which we denote as GVNS I, and one starting from a random solution, which we denote as GVNS II. GVNS I and GVNS II are both compared in Section 6.

```
Data: \(\Pi\), \(k\): diversification parameter
Result: \(\Pi\)
procedure Shaking()
    for \(i=1\) to \(k\) do
        \(\Pi^{\prime}\): a random solution with respect to the neighborhood structure Reverse;
    end for
    \(\Pi\leftarrow\Pi^{\prime}\);
    return \(\Pi\)
```
**Algorithm 3** Shaking

```
Data: A sequence \(\Pi\) of the problem \(P2,S1|s_{j},t_{j}|C_{max}\), \(t_{max}\): time limit, \(k_{max}\)
Result: \(\Pi\), \(C_{max}(\Pi)\)
procedure GVNS()
    \(\Pi\leftarrow\) Initial_Solution();
    repeat
        \(k\gets 1\);
        repeat
            \(\Pi^{\prime}\leftarrow\) Shaking(\(\Pi,k\));
            \(\Pi^{\prime\prime}\leftarrow\) Basic_Sequential_VND(\(\Pi^{\prime}\));
            if \(C_{max}(\Pi^{\prime\prime})<C_{max}(\Pi)\) then
                \(\Pi\leftarrow\Pi^{\prime\prime}\);
                \(k\gets 1\);
            else
                \(k\gets k+1\);
            end if
        until \(k>k_{max}\)
    until \(CPU>t_{max}\)
    return \(\Pi\)
```
**Algorithm 4** General VNS for the problem \(P2,S1|s_{j},t_{j}|C_{max}\)

### Greedy randomized adaptive search procedures The greedy randomized adaptive search procedure (GRASP) is a local search metaheuristic introduced by Feo and Resende (1995). It has been suggested to solve different scheduling problems (Baez et al., 2019; Yepes-Borrero et al., 2020). Like GVNS, GRASP has two main phases: a diversification phase, which is based on a greedy randomized construction procedure, and an intensification phase, based on the use of a local search procedure. Both phases are repeated in every iteration until a stopping criterion is met (e.g., a number of iterations and/or a time limit). In this section, we propose a hybridization of the GRASP metaheuristic with the VND procedure for the problem \(P2,S1|s_{j},t_{j}|C_{max}\). Indeed, VND is used as a local search method (as it contributes to a significant improvement of the quality of the solutions in the preliminary experiments). The overall pseudo-code of our designed GRASP with VND as a local search is presented in the following (Algorithm 5). To the best of our knowledge, our paper is the first one in the literature implementing a GRASP metaheuristic for a variant of the PMSSS problem.
```
Data: A sequence \(\Pi\) of the problem \(P2,S1|s_{j},t_{j}|C_{max}\), \(t_{max}\): time limit
Result: \(\Pi\)
procedure GRASP()
    repeat
        \(\Pi^{\prime}\leftarrow\) Greedy_Randomized_Construction(\(\Pi\));
        \(\Pi^{\prime\prime}\leftarrow\) Basic_Sequential_VND(\(\Pi^{\prime}\));
        if \(C_{max}(\Pi^{\prime\prime})<C_{max}(\Pi)\) then
            \(\Pi\leftarrow\Pi^{\prime\prime}\);
        end if
    until \(CPU>t_{max}\)
    return \(\Pi\)
```
**Algorithm 5** Greedy Randomized Adaptive Search Procedures

#### 5.3.1 Greedy randomized construction The greedy randomized construction (GRC) procedure of GRASP is presented in Algorithm 6. A solution \(\Pi^{\prime}=\{\pi^{\prime}_{1},\dots,\pi^{\prime}_{k},\dots,\pi^{\prime}_{n}\}\) is generated iteratively. Indeed, at each iteration of the GRC procedure, a new job is added. First, all jobs from the initial list \(\Pi\) are inserted into the candidate list (\(CL\)). Then, the incremental cost associated with the incorporation of a job \(\pi_{s}\) from \(CL\) into the solution under construction is calculated. The incremental cost (\(IC\)) is calculated taking into account the loading and unloading constraints of the single server. Once the \(IC\) of all jobs is calculated, we choose the largest and smallest ones, which are denoted by \(\Phi_{max}\) and \(\Phi_{min}\), respectively. A restricted candidate list (\(RCL\)) is then created with the best candidate jobs that satisfy the following inequality (28). \[\Phi(\pi_{s})\leq\Phi_{min}+\alpha(\Phi_{max}-\Phi_{min})\quad\forall\pi_{s}\in CL \tag{28}\] The parameter \(\alpha\) controls the amounts of greediness and randomness in the GRC procedure. \(\alpha\) is generated uniformly at random from the interval [0, 1] (a purely random construction corresponds to \(\alpha=1\), whereas the greedy construction corresponds to \(\alpha=0\)). Hence, a job \(\pi_{s}\) from the \(RCL\) is selected and scheduled on the first available machine, taking into consideration the loading and unloading operations performed by the single server. Finally, \(\pi_{s}\) is removed from \(CL\) and added to the output solution \(\Pi^{\prime}\). The construction stops when all jobs from \(CL\) have been scheduled on the two available machines. In order to improve the solution generated by the GRC procedure, the VND procedure (Algorithm 2) with the same neighborhood structures as described in Section 5.2.2 is used. The stopping criterion of GRASP is the CPU time limit \(t_{max}\).

```
Data: A job sequence \(\Pi\)
Result: \(\Pi^{\prime}\)
procedure Greedy_Randomized_Construction()
    Initialization: \(\Pi^{\prime}\leftarrow\varnothing\), \(\alpha\in[0,1]\), \(CL\leftarrow\Pi\);
    Evaluate the incremental cost \(\Phi(\pi_{k})\) of each job \(\pi_{k}\in CL\);
    repeat
        Calculate \(\Phi_{min}=\min_{\pi_{k}\in CL}\Phi(\pi_{k})\);
        Calculate \(\Phi_{max}=\max_{\pi_{k}\in CL}\Phi(\pi_{k})\);
        \(RCL\leftarrow\{\pi_{k}\in CL\,|\,\Phi(\pi_{k})\leq\Phi_{min}+\alpha(\Phi_{max}-\Phi_{min})\}\);
        Select a job \(\pi_{s}\) from the \(RCL\) list at random;
        Schedule the selected job \(\pi_{s}\) on the first available machine at the earliest possible time;
        \(CL\gets CL\setminus\{\pi_{s}\}\);
        \(\Pi^{\prime}\leftarrow\Pi^{\prime}\cup\{\pi_{s}\}\);
        Reevaluate the incremental cost \(\Phi(\pi_{k})\) for each remaining job \(\pi_{k}\in CL\);
    until \(CL=\varnothing\)
    return \(\Pi^{\prime}\)
```
**Algorithm 6** Greedy Randomized Construction

## 6 Computational experiments This section evaluates the computational performance of the mathematical formulations and the metaheuristic approaches.
## 6 Computational experiments

This section evaluates the computational performance of the mathematical formulations and the metaheuristic approaches. First, the characteristics of the test instances are provided (Section 6.1). The performance of the mathematical formulations is presented and discussed in Section 6.2. Finally, the performance of the metaheuristic approaches is summarized and discussed in Section 6.3. The computational experiments were conducted on a personal computer Intel(R) Core(TM) i7-4600M with a 2.90 GHz CPU and 16 GB of RAM, running Windows 7. To solve the \(CF\), \(CF^{+}\), \(TIF\), and \(TIF^{+}\) formulations, we have used the Concert Technology library of CPLEX version 12.6 with default settings in C++. The time limit for solving the formulations was set to 3600 s. The metaheuristics GVNS I, GVNS II, and GRASP were implemented in the C++ language. We recall that in the proposed GVNS I, the initial solution is obtained using an iterative improvement procedure, whereas in GVNS II the initial solution is randomly generated. For the proposed GVNS I, GVNS II and GRASP, the time limit \(t_{max}\) is set to 10 seconds for small-sized instances (\(n\in\{8,10,12,25\}\)), to 100 seconds for medium-sized instances (\(n\in\{50,100\}\)), and to 300 seconds for large-sized instances (\(n\in\{250,500\}\)). Following the best practice of the related literature, the metaheuristics were executed 10 times in all experiments, except for the small-sized instances, for which one run is sufficient, and the best and average results are provided.

### Benchmark instances

To the best of our knowledge, there are no publicly available benchmark instances for the problem \(P2,S1|s_{j},t_{j}|C_{max}\) involving a single server for the loading and unloading operations. Therefore, we have generated a set of instances according to the recent literature, as proposed by Kim and Lee (2021) and Lee and Kim (2021). The instances are characterized by the following features:

* The number of jobs \(n\in\{8,10,12,25,50,100,250,500\}\).
* The integer processing times \(p_{j}\) are uniformly distributed in the interval [10, 100].
* The integer loading times \(s_{j}=\alpha\times p_{j}\) (\(\alpha\in\{\alpha_{1},\alpha_{2},\alpha_{3}\}\)), where \(\alpha\) is a coefficient randomly generated from the uniform distribution with \(\alpha_{1}\in[0.01,0.1]\), \(\alpha_{2}\in[0.1,0.2]\), and \(\alpha_{3}\in[0.1,0.5]\).
* The integer unloading times \(t_{j}=\alpha\times p_{j}\).

Note that \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\), respectively, correspond to small, moderate, and large loading/unloading times variance (see Kim and Lee (2021); Lee and Kim (2021)). For each combination of \((n,\alpha_{i})\), \(\forall i\in\{1,\ldots,3\}\), ten instances were created, resulting in a total of 240 new instances. We recall that small-sized instances are those with \(n\in\{8,10,12,25\}\), medium-sized instances are those with \(n\in\{50,100\}\), and large-sized instances are those with \(n\in\{250,500\}\).
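The following C++ fragment sketches this generation scheme. The paper states that \(p_{j}\), \(s_{j}\), and \(t_{j}\) are integers but not the exact rounding convention, nor whether \(\alpha\) is redrawn per job or shared between \(s_{j}\) and \(t_{j}\); the sketch draws one \(\alpha\) per job, uses it for both operations, and rounds to the nearest positive integer, all of which are assumptions on our part.

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

struct Instance { std::vector<int> p, s, t; };  // processing, loading, unloading

// Generate one instance in the style of Section 6.1: p_j ~ U{10..100},
// s_j = t_j = round(alpha * p_j) with alpha ~ U[aLo, aHi].
Instance generate(int n, double aLo, double aHi, std::mt19937& rng) {
    std::uniform_int_distribution<int> pd(10, 100);
    std::uniform_real_distribution<double> ad(aLo, aHi);
    Instance ins;
    for (int j = 0; j < n; ++j) {
        int p = pd(rng);
        double alpha = ad(rng);                                   // one draw per job (assumption)
        int st = std::max(1, static_cast<int>(alpha * p + 0.5));  // rounding convention (assumption)
        ins.p.push_back(p);
        ins.s.push_back(st);  // loading time s_j
        ins.t.push_back(st);  // unloading time t_j
    }
    return ins;
}

int main() {
    std::mt19937 rng(1);
    Instance ins = generate(8, 0.01, 0.1, rng);  // class alpha_1
    for (std::size_t j = 0; j < ins.p.size(); ++j)
        std::cout << "job " << j + 1 << ": p=" << ins.p[j]
                  << " s=" << ins.s[j] << " t=" << ins.t[j] << '\n';
}
```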
### Exact approaches

In Table 2, we compare the performance of the \(CF\), \(CF^{+}\), \(TIF\) and \(TIF^{+}\) formulations for small and medium-sized instances. Note that the results for large-sized instances (\(n\in\{250,500\}\)) are not reported, since none of the formulations is able to produce a feasible solution within the time limit of 3600 s. The results are provided for each number \(n\) of jobs and for each loading/unloading times variance (\(\alpha\in\{\alpha_{1},\alpha_{2},\alpha_{3}\}\)). In addition, for each formulation, the following information is given: \(i)\) the number of instances solved to optimality, \(\#opt\); \(ii)\) the average time required to obtain an optimal solution, \(t(s)\); \(iii)\) the number of instances for which only a feasible solution (without proof of optimality) was obtained, \(\#is\); and \(iv)\) the average percentage gap to optimality, \(gap_{LB}(\%)\).

| \(n\) | \(\alpha\) | \(CF\) | \(CF^{+}\) | \(TIF\) | \(TIF^{+}\) |
|---|---|---|---|---|---|
| 8 | \(\alpha_{1}\) | 10 / 6.47 / 0 [0] | 10 / 0.53 / 0 [0] | 10 / 4.35 / 0 [0] | 10 / 3.26 / 0 [0] |
| | \(\alpha_{2}\) | 10 / 5.84 / 0 [0] | 10 / 0.74 / 0 [0] | 10 / 12.81 / 0 [0] | 10 / 7.09 / 0 [0] |
| | \(\alpha_{3}\) | 10 / 4.92 / 0 [0] | 10 / 0.87 / 0 [0] | 10 / 36.86 / 0 [0] | 10 / 11.21 / 0 [0] |
| 10 | \(\alpha_{1}\) | 10 / 880.51 / 0 [0] | 10 / 6.58 / 0 [0] | 10 / 15.09 / 0 [0] | 10 / 6.81 / 0 [0] |
| | \(\alpha_{2}\) | 10 / 166.43 / 0 [0] | 10 / 40.07 / 0 [0] | 10 / 28.64 / 0 [0] | 10 / 13.83 / 0 [0] |
| | \(\alpha_{3}\) | 10 / 85.68 / 0 [0] | 10 / 10.70 / 0 [0] | 10 / 97.43 / 0 [0] | 10 / 40.66 / 0 [0] |
| 12 | \(\alpha_{1}\) | 0 / 3600 / 10 [14.74] | 8 / 1648.67 / 2 [0.29] | 10 / 51.74 / 0 [0] | 10 / 12.4 / 0 [0] |
| | \(\alpha_{2}\) | 0 / 3600 / 10 [18.34] | 8 / 1666.33 / 2 [0.18] | 10 / 160.67 / 0 [0] | 10 / 44.74 / 0 [0] |
| | \(\alpha_{3}\) | 4 / 1469.49 / 6 [20.02] | 7 / 893.33 / 3 [0.69] | 10 / 555.21 / 0 [0] | 10 / 149.91 / 0 [0] |
| 25 | \(\alpha_{1}\) | 0 / 3600 / 10 [81.39] | 0 / 3600 / 10 [0.16] | 4 / 2462.14 / 6 [37.69] | 6 / 1717.90 / 4 [24.72] |
| | \(\alpha_{2}\) | 0 / 3600 / 9 [77.37] | 0 / 3600 / 10 [0.26] | 1 / 3537 / 9 [36.89] | 1 / 2456.37 / 9 [31.23] |
| | \(\alpha_{3}\) | 0 / 3600 / 1 [66.66] | 0 / 3600 / 10 [0.91] | 0 / 3600 / 10 [42.10] | 0 / 3600 / 10 [38.26] |
| 50 | \(\alpha_{1}\) | 0 / 3600 / 10 [91.79] | 0 / 3600 / 10 [0.26] | 0 / 3600 / 10 [66.46] | 0 / 3600 / 10 [52.03] |
| | \(\alpha_{2}\) | 0 / 3600 / 9 [91.53] | 0 / 3600 / 10 [0.60] | 0 / 3600 / 10 [71.87] | 0 / 3600 / 10 [58.81] |
| | \(\alpha_{3}\) | 0 / 3600 / 10 [90.36] | 0 / 3600 / 10 [3.40] | 0 / 3600 / 10 [73.41] | 0 / 3600 / 10 [55.26] |
| 100 | \(\alpha_{1}\) | * / * / * | 0 / 3600 / 10 [0.96] | 0 / 3600 / 10 [79.63] | 0 / 3600 / 10 [94.90] |
| | \(\alpha_{2}\) | * / * / * | 0 / 3600 / 10 [3.79] | * / * / * | 0 / 3600 / 10 [99.98] |
| | \(\alpha_{3}\) | * / * / * | 0 / 3600 / 6 [15.15] | * / * / * | 0 / 3600 / 10 [100.00] |

Table 2: Comparison of \(CF\), \(CF^{+}\), \(TIF\), and \(TIF^{+}\) for \(n\in\{8,10,12,25,50,100\}\). Each cell reports \(\#opt\) / \(t(s)\) / \(\#is\) [\(gap_{LB}(\%)\)]; an asterisk means that no feasible solution was found.

The following observations can be made:

* For \(n=8\): Based on the formulations \(CF\), \(CF^{+}\), \(TIF\) and \(TIF^{+}\), CPLEX is able to find an optimal solution for any instance. It can be noticed that for the improved formulations \(CF^{+}\) and \(TIF^{+}\), CPLEX is able to produce an optimal solution in significantly less computational time in comparison with the original formulations. The best overall performance is demonstrated by \(CF^{+}\), with an average computational time required to find an optimal solution equal to 0.71 s.
* For \(n=10\): For all formulations, CPLEX is able to find an optimal solution for any instance.
The best overall performance is demonstrated by \(CF^{+}\) for \(\alpha_{1}\), \(TIF^{+}\) for \(\alpha_{2}\), and \(CF^{+}\) for \(\alpha_{3}\). Based on the formulation \(CF^{+}\), CPLEX is able to produce an optimal solution in significantly less computational time in comparison with \(CF\). Overall, the best performance is demonstrated by \(CF^{+}\), with an average computational time required to find an optimal solution equal to 19.11 s.
* For \(n=12\): \(TIF\) and \(TIF^{+}\) are the only formulations for which CPLEX is able to find an optimal solution for any instance. Based on the formulation \(CF\), CPLEX is able to produce an optimal solution for only 4 instances (among the 30 ones). In addition, based on the improved formulation \(CF^{+}\), CPLEX is able to find an optimal solution for 23 instances (among the 30 ones). Note that the improved formulation \(CF^{+}\) significantly reduced the value of \(gap_{LB}(\%)\). The best overall performance is demonstrated by \(TIF^{+}\), with an average computational time required to find an optimal solution equal to 69.01 s.
* For \(n=25\): \(TIF^{+}\) is the only formulation for which CPLEX is able to find an optimal solution for 6 instances for \(\alpha_{1}\), and for 1 instance for \(\alpha_{2}\). In addition, based on the formulations \(CF\) and \(CF^{+}\), CPLEX is able to produce at best a feasible solution for any instance. It can be noticed that the improved formulation \(CF^{+}\) significantly reduced the value of \(gap_{LB}(\%)\) as compared with the original one. The best overall performance is demonstrated by \(TIF^{+}\).
* For \(n=50\): For all formulations, CPLEX is able to find at best a feasible solution for any instance. The improved formulation \(CF^{+}\) significantly reduced the value of \(gap_{LB}(\%)\). Note that based on the formulation \(CF\), CPLEX is not able to find a feasible solution for 1 instance. The best overall performance is demonstrated by \(CF^{+}\), with small values of \(gap_{LB}(\%)\).
* For \(n=100\): Based on the formulation \(CF\), CPLEX is not able to produce a feasible solution for any instance. Based on the formulation \(CF^{+}\), CPLEX is able to find a feasible solution for 26 instances (among the 30 ones). In addition, based on the formulation \(TIF\), CPLEX is not able to find a feasible solution for any instance with \(\alpha_{2}\) and \(\alpha_{3}\). It can be noticed that the improved formulation \(CF^{+}\) significantly reduced the value of \(gap_{LB}(\%)\).

To sum up, the computational comparison of \(CF\), \(CF^{+}\), \(TIF\) and \(TIF^{+}\) shows that their performance is related to the number of jobs and to the loading/unloading times variance. In addition, thanks to the proposed strengthening constraints (14) and (15), the \(CF^{+}\) formulation produced lower bounds better than all other formulations for all instances (instances with a feasible solution). Moreover, for \(n\in\{8,10\}\), the best formulation in terms of the average computing time required to obtain an optimal solution is \(CF^{+}\). For \(n=12\), the best performance is demonstrated by \(TIF^{+}\) (since it solved all instances to optimality). For \(n=25\), the best performance is demonstrated by \(TIF^{+}\) (since it solved 7 instances among the 30 ones to optimality). For \(n\in\{50,100\}\), the best formulation in terms of the average percentage gap to optimality is \(CF^{+}\).
It turns out that the formulations \(CF^{+}\) and \(TIF^{+}\) are complementary. Subsequently, we compare only \(CF^{+}\) and \(TIF^{+}\) with the other approaches, since they produce the best results. Note that \(TIF^{+}\) was able to prove optimality within 3600 s for only 7 instances among the 30 ones with \(n=25\). Therefore, metaheuristics able to find an approximate solution in a very short computational time are needed.

### Metaheuristic approaches

#### 6.3.1 Results for small-sized instances

In Tables 3, 4, 5, we compare the performance of \(CF^{+}\), \(TIF^{+}\), GVNS I, GVNS II, and GRASP for instances with \(n\in\{8,10,12\}\), for which an optimal solution can be found within 3600 s by \(CF^{+}\) and \(TIF^{+}\). Each instance is characterized by the following information: the ID (for example, I1 denotes the first instance with \(n=8\) and \(\alpha=\alpha_{1}\)); the number \(n\) of jobs; the loading/unloading times variance; and the value of the theoretical lower bound \(LB_{T}\) computed as in Section 4.3. Next, the optimal makespan (denoted by \(C^{*}_{max}\)) obtained with the \(CF^{+}\) and \(TIF^{+}\) formulations is given. Finally, the computational time (CPU) to find an optimal solution is given for \(CF^{+}\), \(TIF^{+}\), GVNS I, GVNS II, and GRASP. Note that after every 10 instances in Tables 3, 4, 5 (for example, I1,...,I10), the average results for each column are reported (the best results are indicated in bold).

| ID | \(n\) | \(\alpha\) | \(LB_{T}\) | \(C^{*}_{max}\) | \(CF^{+}\) | \(TIF^{+}\) | GVNS I | GVNS II | GRASP |
|---|---|---|---|---|---|---|---|---|---|
| I1 | 8 | \(\alpha_{1}\) | 295 | 295 | 0.456 | 4.114 | 0.000 | 0.001 | 0.002 |
| I2 | | | 288.5 | 289 | 0.612 | 5.486 | 0.000 | 0.006 | 0.054 |
| I3 | | | 258.5 | 259 | 0.529 | 4.274 | 0.000 | 0.000 | 0.000 |
| I4 | | | 217 | 217 | 0.418 | 2.821 | 0.006 | 0.001 | 0.001 |
| I5 | | | 236.5 | 237 | 0.346 | 2.945 | 0.000 | 0.000 | 0.000 |
| I6 | | | 237 | 238 | 0.749 | 3.030 | 0.000 | 0.000 | 0.000 |
| I7 | | | 216 | 218 | 0.306 | 2.953 | 0.001 | 0.001 | 0.000 |
| I8 | | | 229.5 | 230 | 0.629 | 3.061 | 0.001 | 0.000 | 0.000 |
| I9 | | | 193 | 193 | 0.731 | 1.935 | 0.000 | 0.001 | 0.001 |
| I10 | | | 194.5 | 196 | 0.560 | 1.942 | 0.000 | 0.001 | 0.001 |
| **Avg.** | | | **236.55** | **237.27** | 0.534 | 3.256 | **0.001** | **0.001** | 0.006 |
| I11 | 8 | \(\alpha_{2}\) | 382.5 | 383 | 1.004 | 9.911 | 0.013 | 0.052 | 0.091 |
| I12 | | | 277 | 277 | 0.540 | 5.428 | 0.002 | 0.004 | 0.007 |
| I13 | | | 274.5 | 276 | 0.561 | 6.058 | 0.000 | 0.004 | 0.007 |
| I14 | | | 326.5 | 328 | 0.726 | 5.673 | 0.001 | 0.001 | 0.000 |
| I15 | | | 259.5 | 260 | 0.519 | 3.853 | 0.001 | 0.003 | 0.001 |
| I16 | | | 311.5 | 312 | 0.845 | 6.795 | 0.000 | 0.003 | 0.001 |
| I17 | | | 319 | 320 | 0.683 | 10.524 | 0.003 | 0.001 | 0.004 |
| I18 | | | 284.5 | 286 | 0.831 | 5.876 | 0.002 | 0.003 | 0.003 |
| I19 | | | 334.5 | 336 | 0.851 | 7.455 | 0.001 | 0.001 | 0.001 |
| I20 | | | 347 | 349 | 0.812 | 9.326 | 0.003 | 0.002 | 0.001 |
| **Avg.** | | | **311.65** | **312.7** | 0.737 | 7.090 | **0.003** | 0.007 | 0.011 |
| I21 | 8 | \(\alpha_{3}\) | 324 | 325 | 0.978 | 8.029 | 0.002 | 0.002 | 0.008 |
| I22 | | | 406.5 | 408 | 1.081 | 10.123 | 0.127 | 0.004 | 8.597 |
| I23 | | | 324 | 325 | 0.990 | 15.213 | 0.006 | 0.008 | 0.405 |
| I24 | | | 246.5 | 248 | 0.634 | 3.621 | 0.002 | 0.016 | 0.001 |
| I25 | | | 350.5 | 352 | 0.839 | 15.506 | 0.005 | 0.004 | 0.002 |
| I26 | | | 331.5 | 335 | 0.769 | 13.742 | 0.024 | 0.380 | 1.893 |
| I27 | | | 264.5 | 266 | 0.810 | 3.301 | 0.002 | 0.007 | 0.299 |
| I28 | | | 293 | 300 | 0.861 | 25.168 | 0.000 | 0.005 | 0.201 |
| I29 | | | 385 | 387 | 0.830 | 9.247 | 0.008 | 0.010 | 0.800 |
| I30 | | | 331 | 331 | 0.898 | 8.179 | 0.005 | 0.003 | 0.499 |
| **Avg.** | | | **325.65** | **327.70** | 0.869 | 11.213 | **0.018** | 0.044 | 1.278 |

Table 3: Comparison of \(CF^{+}\), \(TIF^{+}\), GVNS I, GVNS II, and GRASP in terms of CPU time for \(n=8\). Columns \(CF^{+}\) to GRASP report the CPU time (in seconds) needed to reach the optimal makespan \(C^{*}_{max}\).

The following observations can be made:

* In Table 3: \(CF^{+}\), \(TIF^{+}\), GVNS I, GVNS II, and GRASP are compared. It can be noticed that the three proposed metaheuristics reach an optimal solution for any instance in significantly less computational time than the two formulations. For example, for \((n=8,\alpha=\alpha_{3})\), the average computational time for \(CF^{+}\) is 0.869 s, while the average computational time for GVNS I is 0.018 s. It can be noted that the value of the theoretical lower bound \(LB_{T}\) is very tight, since the average gap between the optimal makespan and \(LB_{T}\) is equal to 0.427%. The best overall performance is demonstrated by GVNS I, with a total average computational time of 0.007 s for \(n=8\).
* In Table 4: GVNS I, GVNS II, and GRASP reach an optimal solution for any instance in significantly less computational time than the \(CF^{+}\) and \(TIF^{+}\) formulations. For example, for \((n=10,\alpha=\alpha_{1})\), the average computational time for \(TIF^{+}\) is 6.812 s, while the average computational time for GVNS II is equal to 0.003 s. The average gap between \(C^{*}_{max}\) and \(LB_{T}\) is equal to 0.176%. The best overall performance is shown by GVNS I, with a total average computational time of 0.043 s for \(n=10\).
* In Table 5: \(TIF^{+}\), GVNS I, GVNS II, and GRASP are compared (since \(CF^{+}\) is not able to produce an optimal solution for all instances). It can be noticed that GVNS I, GVNS II, and GRASP reach an optimal solution for any instance in significantly less computational time than the \(TIF^{+}\) formulation. For example, for \((n=12,\alpha=\alpha_{3})\), the average computational time for \(TIF^{+}\) is equal to 149.914 s, while the average computational time for GVNS I is 2.785 s. Again, the value of \(LB_{T}\) is very tight, since the average gap between \(C^{*}_{max}\) and \(LB_{T}\) is equal to 0.115%. The best overall performance is demonstrated by GVNS I, with a total average computational time of 0.930 s for \(n=12\).

In Table 6, we compare the performance of \(CF^{+}\), \(TIF^{+}\), GVNS I, GVNS II, and GRASP for \(n=25\). Each instance is characterized by the following information.
The ID; the number \(n\) of jobs; the loading/unloading times variance; and the value of the theoretical lower bound \(LB_{T}\). For the formulation \(CF^{+}\) (resp. \(TIF^{+}\)), the following information is given: the upper bound \(UB_{CF^{+}}\) (resp. \(UB_{TIF^{+}}\)), the lower bound \(LB_{CF^{+}}\) (resp. \(LB_{TIF^{+}}\)), the percentage gap to optimality \(Gap_{CF^{+}}(\%)\) (resp. \(Gap_{TIF^{+}}(\%)\)), and the CPU time required to prove optimality (below 3600 s). The following results are presented for GVNS I, GVNS II and GRASP: the best (resp. the average) makespan value over 10 runs, denoted as \(C_{max}^{best}\) (resp. \(C_{max}^{avg}\)). Finally, the average computational times are also provided; they are computed over the 10 runs, and the computational time of a run corresponds to the time at which the best solution is found (the best results are indicated in bold face).

Table 4: Comparison of \(CF^{+}\), \(TIF^{+}\), GVNS I, GVNS II, and GRASP in terms of CPU time for \(n=10\).

Table 6: Comparison of \(CF^{+}\), \(TIF^{+}\), GVNS I, GVNS II, and GRASP for \(n=25\).

The following observations can be made. Based on \(CF^{+}\), CPLEX is not able to produce an optimal solution for all instances. Furthermore, based on the \(TIF^{+}\) formulation, CPLEX is able to find an optimal solution for 6 instances for \((n=25,\alpha=\alpha_{1})\) (I92, I93, I95, I97, I99 and I100). For these instances, GVNS I, GVNS II, and GRASP are able to find the same optimal solution in significantly less computational time in comparison with \(TIF^{+}\). For \((n=25,\alpha=\alpha_{2})\), based on the \(TIF^{+}\) formulation, CPLEX is able to generate an optimal solution for only 1 instance (I103). GVNS I, GVNS II, and GRASP are able to find the optimal solution for instance I103 in significantly less computing time in comparison with \(TIF^{+}\). For \(\alpha_{3}\), \(CF^{+}\) produced better upper bounds than \(TIF^{+}\). On average, GVNS I, GVNS II, and GRASP are able to produce approximate solutions of better quality in comparison with the upper bounds generated by the formulations \(CF^{+}\) and \(TIF^{+}\). To sum up, for \(n=25\), GVNS I, GVNS II, and GRASP have a similar performance. In addition, the difference between \(C_{max}^{best}\) and \(C_{max}^{avg}\) is very small (often below one unit).
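The reporting protocol described above (best and average makespan over 10 runs, with the CPU time of a run taken as the moment its best solution was found) can be sketched as follows; `runOnce` is a hypothetical stand-in for one complete GVNS or GRASP run.

```cpp
#include <algorithm>
#include <iostream>
#include <limits>

// Hypothetical stand-in for one metaheuristic run: it would execute
// GVNS/GRASP until t_max and report the best makespan found together
// with the elapsed time at which that value was reached.
struct RunResult { int cmax; double timeToBest; };
RunResult runOnce(unsigned seed) { return {300 + static_cast<int>(seed % 3), 0.1 * seed}; }

int main() {
    const unsigned runs = 10;  // protocol of Section 6
    int best = std::numeric_limits<int>::max();
    double sum = 0.0, cpuSum = 0.0;
    for (unsigned r = 0; r < runs; ++r) {
        RunResult res = runOnce(r);
        sum += res.cmax;
        cpuSum += res.timeToBest;
        best = std::min(best, res.cmax);
    }
    std::cout << "C_max^best=" << best << "  C_max^avg=" << sum / runs
              << "  avg CPU(s)=" << cpuSum / runs << '\n';
}
```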
#### 6.3.2 Results for medium-sized instances

In Table A.10 in the Appendix, we compare the performance of \(CF^{+}\), \(TIF^{+}\), GVNS I, GVNS II, and GRASP for \(n=50\), and in Table A.11, we compare the performance of GVNS I, GVNS II, and GRASP only with \(CF^{+}\) (since the \(TIF^{+}\) formulation is not able to find a feasible solution for the majority of instances). Tables A.10 and A.11 have the same structure as Table 6, and the best results are indicated in bold face. The following observations can be made:

* For \(n=50\): Overall, among the 30 instances, GVNS I found 25 best solutions (83.33%), whereas GVNS II and GRASP found 23 (76.66%) and 20 (66.66%) ones, respectively. It can be noted that the difference between \(C_{max}^{best}\) and \(C_{max}^{avg}\) is on average very small for GVNS I, GVNS II and GRASP. For the instances with \(\alpha_{3}\), \(CF^{+}\) produced better upper bounds than \(TIF^{+}\). On average, GVNS I, GVNS II and GRASP are able to produce approximate solutions of better quality in comparison with the upper bounds generated by the formulations \(CF^{+}\) and \(TIF^{+}\). In addition, for all instances with \(\alpha_{1}\) and \(\alpha_{2}\), the gap between \(C_{max}^{best}\) and \(LB_{T}\) is very small for GVNS I, GVNS II and GRASP. The best performance in terms of solution quality and computational time is demonstrated by GVNS I, with 25 best solutions and an overall average computational time of 23.86 s.
* For \(n=100\): Overall, among the 30 instances, GVNS I found 24 best solutions (80%), whereas GVNS II and GRASP found only 16 (53.33%) and 11 (36.66%) ones, respectively. Based on \(CF^{+}\), CPLEX is able to produce at best a feasible solution for 26 instances (4 instances without a feasible solution). For the instances with \(\alpha_{1}\), the gap between \(C_{max}^{best}\) and \(LB_{T}\) is very small for GVNS I, GVNS II and GRASP. On average, GVNS I, GVNS II and GRASP are able to produce approximate solutions of better quality in comparison with the upper bounds generated by the formulation \(CF^{+}\). For the instances with \(\alpha_{3}\), the difference between \(C_{max}^{best}\) and \(C_{max}^{avg}\) grows significantly for GVNS I, GVNS II and GRASP. The best overall performance is demonstrated by GVNS I, with 24 best solutions and an overall average computational time of 27.31 s.

#### 6.3.3 Results for large-sized instances

Tables A.12 and A.13 describe the performance of GVNS I, GVNS II, and GRASP for large-sized instances. They have the same structure as Table A.11, and the best results are indicated in bold face. Note that no mathematical formulation is able to obtain a feasible solution within 3600 s. The following observations can be made:

* For \(n=250\): Overall, among the 30 instances, GVNS I found 27 best solutions (90%), whereas GVNS II and GRASP found only 3 (10%) and 1 (3.33%), respectively. For all instances, the difference between \(C_{max}^{best}\) and \(C_{max}^{avg}\) grows significantly for GVNS I, GVNS II and GRASP. The average computational time for GVNS I (resp. GVNS II and GRASP) is equal to 113 s (resp. 185.64 s and 154.10 s). The best overall performance is demonstrated by GVNS I, with 27 best solutions and an overall average computational time of 113 s.
* For \(n=500\): Overall, among the 30 instances, GVNS I found 29 best solutions (96.66%), whereas GVNS II and GRASP found only 1 (3.33%) and 0 (0%), respectively.
For GVNS I, GVNS II and GRASP, the difference between \(C_{max}^{best}\) and \(C_{max}^{avg}\) grows significantly, especially for the instances with \(\alpha_{3}\). The average computational time for GVNS I (resp. GVNS II and GRASP) is equal to 99.72 s (resp. 225.52 s and 218.88 s). The best overall performance is demonstrated by GVNS I, with 29 best solutions and an average computational time of 99.72 s.

To sum up, for small-sized instances, GVNS I, GVNS II and GRASP are able to produce an optimal solution for all instances (among the 120 instances) in significantly less computing time in comparison with the \(CF^{+}\) and \(TIF^{+}\) formulations. For medium and large-sized instances, among the 120 instances, GVNS I found 105 best solutions (87.50%), whereas GVNS II and GRASP found only 43 (35.83%) and 32 (26.66%) ones, respectively. This success can be explained by the quality of the initial solution, since the iterative improvement procedure contributes significantly to the minimization of the makespan. Note that the difference between \(C_{max}^{best}\) and \(C_{max}^{avg}\) for all methods grows with \(n\) and \(\alpha\). These results indicate that the instances with \(\alpha_{3}\) are more difficult to solve in comparison with \(\alpha_{1}\) and \(\alpha_{2}\), which is expected since the loading/unloading times variance is large for \(\alpha_{3}\).

#### 6.3.4 Discussion

Table 7 presents the performance of GVNS I, GVNS II, and GRASP in terms of the percentage deviation from the theoretical lower bound, \(Gap_{LB_{T}}(\%)\), according to the number of jobs and the loading/unloading variance coefficient. In order to compute each percentage deviation, two values are compared: the value of \(LB_{T}\), and the value of the best approximate makespan obtained over 10 runs by the considered method (see Equation 29).

\[Gap_{LB_{T}}(\%)=100\times\frac{C_{max}^{best}-LB_{T}}{LB_{T}} \tag{29}\]

| \(n\) | \(\alpha\) | GVNS I | GVNS II | GRASP |
|---|---|---|---|---|
| 8 | \(\alpha_{1}\) | 0.29 | 0.29 | 0.29 |
| | \(\alpha_{2}\) | 0.34 | 0.34 | 0.34 |
| | \(\alpha_{3}\) | 0.66 | 0.66 | 0.66 |
| 10 | \(\alpha_{1}\) | 0.07 | 0.07 | 0.07 |
| | \(\alpha_{2}\) | 0.19 | 0.19 | 0.19 |
| | \(\alpha_{3}\) | 0.27 | 0.27 | 0.27 |
| 12 | \(\alpha_{1}\) | 0.05 | 0.05 | 0.05 |
| | \(\alpha_{2}\) | 0.16 | 0.16 | 0.16 |
| | \(\alpha_{3}\) | 0.13 | 0.13 | 0.13 |
| 25 | \(\alpha_{1}\) | 0.02 | 0.02 | 0.02 |
| | \(\alpha_{2}\) | 0.02 | 0.02 | 0.02 |
| | \(\alpha_{3}\) | 0.54 | 0.39 | 0.57 |
| 50 | \(\alpha_{1}\) | 0.02 | 0.02 | 0.02 |
| | \(\alpha_{2}\) | 0.02 | 0.01 | 0.02 |
| | \(\alpha_{3}\) | 1.50 | 1.57 | 1.80 |
| 100 | \(\alpha_{1}\) | 0.01 | 0.01 | 0.01 |
| | \(\alpha_{2}\) | 0.14 | 0.15 | 0.17 |
| | \(\alpha_{3}\) | 2.91 | 3.80 | 3.68 |
| 250 | \(\alpha_{1}\) | 0.03 | 0.06 | 0.09 |
| | \(\alpha_{2}\) | 0.35 | 0.82 | 0.85 |
| | \(\alpha_{3}\) | 3.70 | 5.73 | 6.22 |
| 500 | \(\alpha_{1}\) | 0.02 | 0.23 | 0.21 |
| | \(\alpha_{2}\) | 0.31 | 1.37 | 1.34 |
| | \(\alpha_{3}\) | 3.66 | 7.07 | 7.31 |

Table 7: Comparison of GVNS I, GVNS II, and GRASP in terms of the percentage deviation from the theoretical lower bound for \(n\in\{8,10,12,25,50,100,250,500\}\).

The average percentage deviation from the theoretical lower bound of GVNS I (respectively GVNS II and GRASP) for the instances with \(\alpha_{1}\) and \(n\in\{8,10,12,25,50,100,250,500\}\) is equal to 0.06% (respectively 0.09% and 0.10%). For the instances with \(\alpha_{2}\), it is equal to 0.19% (respectively 0.38% and 0.39%). Finally, for the instances with \(\alpha_{3}\), it is equal to 1.67% (respectively 2.45% and 2.58%). One can see that the average percentage deviation from the theoretical lower bound increases with the loading/unloading times variance. Indeed, as shown in the preceding section, the instances with a large loading/unloading times variance are more difficult to solve than the other instances (we recall that the difference between \(C_{max}^{best}\) and \(C_{max}^{avg}\) is very small for \(\alpha_{1}\) and \(\alpha_{2}\)). The total average percentage deviation from the theoretical lower bound of GVNS I (respectively GVNS II and GRASP) over all instances is equal to 0.642% (respectively 0.98% and 1.02%). To sum up, we can observe the superiority of GVNS I over GVNS II and GRASP.
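As a quick numerical illustration of Equation (29), take instance I2 from Table 3, for which all three metaheuristics reach the optimal makespan \(C_{max}^{best}=289\) with \(LB_{T}=288.5\):

\[Gap_{LB_{T}}(\%)=100\times\frac{289-288.5}{288.5}\approx 0.17\%.\]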
Moreover, Table 8 summarizes the performance of GVNS I, GVNS II, and GRASP in terms of the percentage deviation of the solution obtained by the considered metaheuristic from the best-known solution (the best one over all the runs of all the metaheuristics), according to \(n\) and \(\alpha\). For each metaheuristic, the following features are given: the minimum value of the percentage deviation over all instances, Min; the average value of the percentage deviation over all instances, Avg; and the maximum value of the percentage deviation over all instances, Max. The last line of the table shows the total average results. The results show again that GVNS I, based on the iterative improvement procedure as its initial-solution finding mechanism, on average yielded a superior performance in terms of minimum, average and maximum gaps when compared to GVNS II and GRASP.

## 7 Managerial insights

In this section, some managerial insights are presented regarding the investigated problem \(P2,S1|s_{j},t_{j}|C_{max}\). We propose to compare our results with the ones of Benmansour and Sifaleras (2021). Indeed, Benmansour and Sifaleras (2021) suggested a MILP formulation and a GVNS metaheuristic for the problem \(P2,S2|s_{j},t_{j}|C_{max}\) involving two dedicated servers: one for the loading operations and one for the unloading operations. In the problem \(P2,S2|s_{j},t_{j}|C_{max}\), each job has to be loaded by a dedicated (loading) server and unloaded by a dedicated (unloading) server, respectively, immediately before and after being processed on one of the two machines, while in the problem \(P2,S1|s_{j},t_{j}|C_{max}\), only one resource (server) is in charge of both the loading and unloading operations. The objective of this section is to show the impact of removing the unloading server on the makespan (i.e., the single server is then in charge of both the loading and unloading operations). We propose first to improve the mathematical formulation proposed in Benmansour and Sifaleras (2021), denoted by \(MIP_{2S}\), by adding the two valid inequalities proposed in Section 3.3.
The newly obtained formulation is denoted by \(MIP_{2S}^{+}\). Therefore, the formulations \(MIP_{2S}\) and \(MIP_{2S}^{+}\) are compared with the formulations \(CF^{+}\) and \(TIF^{+}\) (since they presented a better performance than \(CF\) and \(TIF\)). In total, four mathematical formulations are obtained and compared: two formulations regarding the problem \(P2,S1|s_{j},t_{j}|C_{max}\) with one single server (\(CF^{+}\) and \(TIF^{+}\)), and two formulations regarding the problem \(P2,S2|s_{j},t_{j}|C_{max}\) with two dedicated servers (\(MIP_{2S}\) and \(MIP_{2S}^{+}\)). The computational experiments were conducted using the same computer as described in Section 6. In addition, the time limit for solving the formulations \(CF^{+}\), \(TIF^{+}\), \(MIP_{2S}\), and \(MIP_{2S}^{+}\) was set to 3600 s.

| \(n\) | \(\alpha\) | GVNS I | GVNS II | GRASP |
|---|---|---|---|---|
| 8 | \(\alpha_{1}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| | \(\alpha_{2}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| | \(\alpha_{3}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| 10 | \(\alpha_{1}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| | \(\alpha_{2}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| | \(\alpha_{3}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| 12 | \(\alpha_{1}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| | \(\alpha_{2}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| | \(\alpha_{3}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| 25 | \(\alpha_{1}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| | \(\alpha_{2}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| | \(\alpha_{3}\) | 0.00 / 0.24 / 0.72 | 0.00 / 0.10 / 0.48 | 0.00 / 0.27 / 0.99 |
| 50 | \(\alpha_{1}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| | \(\alpha_{2}\) | 0.00 / 0.01 / 0.06 | 0.00 / 0.00 / 0.00 | 0.00 / 0.01 / 0.06 |
| | \(\alpha_{3}\) | 0.00 / 0.13 / 0.61 | 0.00 / 0.20 / 0.66 | 0.00 / 0.43 / 1.26 |
| 100 | \(\alpha_{1}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00 |
| | \(\alpha_{2}\) | 0.00 / 0.04 / 0.15 | 0.00 / 0.05 / 0.17 | 0.00 / 0.08 / 0.18 |
| | \(\alpha_{3}\) | 0.00 / 0.00 / 0.00 | 0.00 / 0.87 / 1.47 | 0.16 / 0.75 / 1.29 |
| 250 | \(\alpha_{1}\) | 0.00 / 0.00 / 0.01 | 0.00 / 0.03 / 0.06 | 0.04 / 0.06 / 0.13 |
| | \(\alpha_{2}\) | 0.00 / 0.02 / 0.15 | 0.22 / 0.48 / 0.74 | 0.00 / 0.51 / 0.84 |
| | \(\alpha_{3}\) | 0.00 / 0.00 / 0.04 | 0.00 / 1.97 / 3.45 | 0.91 / 2.44 / 3.94 |
| 500 | \(\alpha_{1}\) | 0.00 / 0.00 / 0.00 | 0.14 / 0.21 / 0.29 | 0.11 / 0.19 / 0.25 |
| | \(\alpha_{2}\) | 0.00 / 0.00 / 0.00 | 0.77 / 1.06 / 1.40 | 0.63 / 1.02 / 1.27 |
| | \(\alpha_{3}\) | 0.00 / 0.10 / 0.96 | 0.00 / 3.41 / 4.32 | 1.83 / 3.64 / 4.80 |
| **Avg.** | | **0.00 / 0.02 / 0.11** | **0.05 / 0.35 / 0.54** | **0.15 / 0.39 / 0.63** |

Table 8: Percentage deviation from the best-known solution for GVNS I, GVNS II, and GRASP. Each cell reports Min / Avg / Max.
Note that the four formulations are compared using the same instances as presented in Section 6.1 with up to 25 jobs (since we can obtain a proof of optimality with at least one formulation for each problem). To solve the \(CF^{+}\), \(TIF^{+}\), \(MIP_{2S}\), and \(MIP_{2S}^{+}\) formulations, we have used the Concert Technology library of CPLEX version 12.6 with default settings in C++. In Table 9, we compare the performance of \(CF^{+}\), \(TIF^{+}\), \(MIP_{2S}\) and \(MIP_{2S}^{+}\) for \(n=8\). First, each instance is characterized by the following information: the ID; the number \(n\) of jobs; the loading/unloading variance coefficient (\(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\)). Next, for each mathematical formulation, the following features are given: \(i)\) the optimal makespan (\(C_{max}^{*}\)) and \(ii)\) the time required to find an optimal solution (CPU). Finally, the gap between the optimal makespan of the problem with one server and the optimal makespan of the problem with two servers, denoted as \(Gap_{MI}(\%)\) and calculated as in Equation (30), is presented:

\[Gap_{MI}(\%)=100\times\frac{C_{max}^{*}(\text{1 server})-C_{max}^{*}(\text{2 servers})}{C_{max}^{*}(\text{2 servers})} \tag{30}\]

The following observations can be made:

* Based on the formulations \(CF^{+}\), \(TIF^{+}\), \(MIP_{2S}\) and \(MIP_{2S}^{+}\), CPLEX is able to find an optimal solution for any instance. The average computing time for \(CF^{+}\) is 0.71 s, and the average computational time for \(MIP_{2S}^{+}\) is 0.27 s.
* All formulations are able to find the same optimal solution for 5 instances (I1, I4, I5, I7 and I24).
* The average value of \(Gap_{MI}(\%)\) is equal to 0.31% for \(\alpha_{1}\), to 0.76% for \(\alpha_{2}\), and to 1.45% for \(\alpha_{3}\). Therefore, the value of \(Gap_{MI}(\%)\) increases for a large loading/unloading times variance.
* The overall average value of \(Gap_{MI}(\%)\) over the 30 instances is equal to 0.84%. It can be noticed that the gap between the two problems is very small, and for 5 instances, the same optimal solution is obtained with only one single server.

In Tables A.14, A.15, A.16, we compare the performance of \(MIP_{2S}\), \(MIP_{2S}^{+}\), \(CF^{+}\), and \(TIF^{+}\) for \(n\in\{10,12,25\}\). First, each instance is characterized by the following information: the ID; the number \(n\) of jobs; the loading/unloading times variance coefficient (\(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\)). Then, for each formulation, the following information is given: the upper bound (\(UB_{MIP_{2S}}\), \(UB_{MIP_{2S}^{+}}\), \(UB_{CF^{+}}\), \(UB_{TIF^{+}}\)), the lower bound (\(LB_{MIP_{2S}}\), \(LB_{MIP_{2S}^{+}}\), \(LB_{CF^{+}}\), \(LB_{TIF^{+}}\)), the percentage gap to optimality (\(Gap_{MIP_{2S}}(\%)\), \(Gap_{MIP_{2S}^{+}}(\%)\), \(Gap_{CF^{+}}(\%)\), \(Gap_{TIF^{+}}(\%)\)), and the time required to prove optimality (CPU). Finally, the gap \(Gap_{MI}(\%)\) between the optimal makespan of the problem with one server and the optimal makespan of the problem with two servers is given. (In Table A.16, \(Gap_{MI}(\%)\) is not reported, since CPLEX is not able to find an optimal solution for every instance with any formulation.) The following observations can be made:

* For \(n=10\): Based on the formulations \(MIP_{2S}^{+}\), \(CF^{+}\), and \(TIF^{+}\), CPLEX is able to find an optimal solution for any instance. Based on the formulation \(MIP_{2S}\), CPLEX is able to produce an optimal solution only for 17 instances among the 30 ones.
It can be noted that with the improved formulation \(MIP_{2S}^{+}\), CPLEX is able to produce optimal solutions in less computational time in comparison with the original one. The average CPU time for \(MIP_{2S}^{+}\) is equal to 0.33 s, whereas the average CPU time for \(CF^{+}\) is equal to 19.12 s. The overall average value of \(Gap_{MI}(\%)\) over the 30 instances is equal to 0.62%.
* For \(n=12\): Based on the formulations \(MIP_{2S}^{+}\) and \(TIF^{+}\), CPLEX is able to find an optimal solution for any instance. Based on the formulation \(CF^{+}\), CPLEX is able to produce an optimal solution only for 23 instances among the 30 ones. In addition, CPLEX is not able to produce an optimal solution for any instance using the formulation \(MIP_{2S}\). It can be noted that with the improved formulation \(MIP_{2S}^{+}\), CPLEX is able to produce optimal solutions in less computational time in comparison with the original one. The average CPU time for \(MIP_{2S}^{+}\) is equal to 0.37 s, whereas the average CPU time for \(TIF^{+}\) is equal to 69.02 s. The overall average value of \(Gap_{MI}(\%)\) over the 30 instances is equal to 0.61%.
* For \(n=25\): In the case of one server, \(TIF^{+}\) is the only formulation for which CPLEX is able to produce an optimal solution, namely for 7 instances among the 30 ones. In the case of two servers, \(MIP_{2S}^{+}\) is the only formulation for which CPLEX is able to produce an optimal solution, namely for 20 instances among the 30 ones. For \(\alpha_{3}\), using the formulations \(MIP_{2S}\) and \(MIP_{2S}^{+}\), CPLEX is not able to produce a feasible solution for any instance (except one instance, I118). The overall average value of \(Gap_{MI}(\%)\) for the instances that can be solved to optimality using \(MIP_{2S}^{+}\) and \(TIF^{+}\) (I92, I93, I95, I97, I99, I100, and I103) is equal to 0.13%.
| ID | \(n\) | \(\alpha\) | \(MIP_{2S}\) | \(MIP_{2S}^{+}\) | \(CF^{+}\) | \(TIF^{+}\) | \(Gap_{MI}(\%)\) |
|---|---|---|---|---|---|---|---|
| I1 | 8 | \(\alpha_{1}\) | 295 / 2.98 | 295 / 0.33 | 295 / 0.46 | 295 / 4.11 | 0 |
| I2 | | | 288 / 2.19 | 288 / 0.29 | 289 / 0.61 | 289 / 5.49 | 0.35 |
| I3 | | | 258 / 2.66 | 258 / 0.31 | 259 / 0.53 | 259 / 4.27 | 0.39 |
| I4 | | | 217 / 2.2 | 217 / 0.36 | 217 / 0.42 | 217 / 2.82 | 0 |
| I5 | | | 237 / 2.84 | 237 / 0.28 | 237 / 0.35 | 237 / 2.95 | 0 |
| I6 | | | 236 / 3.20 | 236 / 0.30 | 238 / 0.75 | 238 / 3.03 | 0.85 |
| I7 | | | 218 / 4.10 | 218 / 0.26 | 218 / 0.31 | 218 / 2.95 | 0 |
| I8 | | | 229 / 7.32 | 229 / 0.26 | 230 / 0.63 | 230 / 3.06 | 0.44 |
| I9 | | | 192 / 1.48 | 192 / 0.34 | 193 / 0.73 | 193 / 1.94 | 0.52 |
| I10 | | | 195 / 1.82 | 195 / 0.45 | 196 / 0.56 | 196 / 1.94 | 0.51 |
| I11 | 8 | \(\alpha_{2}\) | 376 / 11.79 | 376 / 0.23 | 383 / 1.05 | 383 / 9.91 | 1.86 |
| I12 | | | 276 / 1.05 | 276 / 0.23 | 277 / 0.54 | 277 / 5.43 | 0.36 |
| I13 | | | 275 / 2.82 | 275 / 0.32 | 276 / 0.56 | 276 / 6.06 | 0.36 |
| I14 | | | 325 / 1.49 | 325 / 0.27 | 328 / 0.73 | 328 / 5.67 | 0.92 |
| I15 | | | 259 / 1.14 | 259 / 0.24 | 260 / 0.52 | 260 / 3.85 | 0.39 |
| I16 | | | 310 / 8.60 | 310 / 0.26 | 312 / 0.85 | 312 / 6.80 | 0.65 |
| I17 | | | 319 / 2.19 | 319 / 0.23 | 320 / 0.64 | 320 / 10.52 | 0.31 |
| I18 | | | 285 / 1.06 | 285 / 0.25 | 286 / 0.38 | 286 / 5.88 | 0.35 |
| I19 | | | 333 / 3.03 | 333 / 0.21 | 336 / 0.85 | 336 / 7.46 | 0.90 |
| I20 | | | 344 / 1.96 | 344 / 0.25 | 349 / 0.81 | 349 / 9.33 | 1.45 |
| I21 | 8 | \(\alpha_{3}\) | 321 / 2.27 | 321 / 0.26 | 325 / 0.98 | 325 / 8.03 | 1.25 |
| I22 | | | 397 / 4.82 | 397 / 0.30 | 408 / 1.08 | 408 / 10.12 | 2.77 |
| I23 | | | 321 / 5.72 | 321 / 0.24 | 325 / 0.99 | 325 / 15.21 | 1.25 |
| I24 | | | 248 / 2.07 | 248 / 0.22 | 248 / 0.63 | 248 / 3.62 | 0 |
| I25 | | | 345 / 1.78 | 345 / 0.24 | 352 / 0.84 | 352 / 15.51 | 2.03 |
| I26 | | | 329 / 1.53 | 329 / 0.27 | 335 / 0.77 | 335 / 13.74 | 1.82 |
| I27 | | | 262 / 1.69 | 262 / 0.29 | 266 / 0.81 | 266 / 3.30 | 1.53 |
| I28 | | | 297 / 1.03 | 297 / 0.27 | 300 / 0.86 | 300 / 25.17 | 1.01 |
| I29 | | | 381 / 1.98 | 381 / 0.31 | 387 / 0.83 | 387 / 9.25 | 1.57 |
| I30 | | | 327 / 2.43 | 327 / 0.19 | 331 / 0.90 | 331 / 8.18 | 1.22 |
| **Avg.** | | | 289.83 / 3.04 | 289.83 / 0.27 | 292.53 / 0.71 | 292.53 / 7.19 | 0.84 |

Table 9: Comparison of \(CF^{+}\) and \(TIF^{+}\) (one server) with \(MIP_{2S}\) by Benmansour and Sifaleras (2021) and \(MIP_{2S}^{+}\) (two servers) for \(n=8\). Each formulation cell reports \(C_{max}^{*}\) / CPU (s).

As shown in Table 9, and in Tables A.14, A.15, A.16 in the Appendix, the gap between the optimal makespan of the problem with one server and the optimal makespan of the problem with two servers (\(Gap_{MI}(\%)\)) is very small for almost all small-sized instances with \(n\in\{8,10,12,25\}\).
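As an illustration of Equation (30), consider instance I2 of Table 9, with \(C_{max}^{*}=289\) for one server and \(C_{max}^{*}=288\) for two servers:

\[Gap_{MI}(\%)=100\times\frac{289-288}{288}\approx 0.35\%,\]

which matches the value reported in the table.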
Therefore, the use of an extra resource (the unloading server) has a small impact on the reduction of the makespan. It can be noticed that in some cases, the same optimal makespan is obtained using only one single server. Indeed, since the loading, processing and unloading operations are non-separable, the waiting time of a machine due to the unavailability of the unloading server will always be significant. To sum up, from a managerial point of view, we recommend using only one single server for both the loading and unloading operations, which can lead to a significant reduction of costs (e.g., electricity and maintenance) and hence to a better preservation of the environment.

## 8 Conclusions

In this paper, the scheduling problem with two identical parallel machines and a single server in charge of the loading and unloading operations of jobs was addressed. Each job has to be loaded by the single server immediately before its processing. Once the processing operation is finished, each job has to be immediately unloaded by the same server. The objective function considered was the minimization of the makespan. We presented two mixed-integer linear programming (MILP) formulations to model the problem, namely a completion-time variables formulation \(CF\) and a time-indexed formulation \(TIF\), and we proposed sets of inequalities that can be used to improve the \(CF\) formulation. We also proposed some polynomial-time solvable cases and a tight lower bound. In addition, we showed that the minimization of the makespan is equivalent to the minimization of the total idle time of the machines. Since the mathematical formulations were not able to cope with the majority of instances, an efficient General Variable Neighborhood Search (GVNS) metaheuristic with two mechanisms for finding an initial solution (one with an iterative improvement procedure, GVNS I, and one with a random initial solution, GVNS II), and a Greedy Randomized Adaptive Search Procedure (GRASP) metaheuristic were designed. To validate the performance of the proposed GVNS I, GVNS II and GRASP approaches, exhaustive computational experiments on 240 instances were performed. For small-sized instances with up to 25 jobs, the GVNS I, GVNS II and GRASP algorithms outperformed the MILP formulations in terms of the computational time needed to find an optimal solution. For medium and large-sized instances, GVNS I yielded better results than the other approaches; in particular, its average percentage deviation from the theoretical lower bound was equal to 0.642%. Finally, we presented some managerial insights, and our MILP formulations were compared with the ones of Benmansour and Sifaleras (2021) regarding the problem \(P2,S2|s_{j},t_{j}|C_{max}\) involving a dedicated loading server and a dedicated unloading server. It turned out that adding an unloading server (an extra resource) contributes little to the reduction of the makespan (the average percentage deviation between the optimal makespans of the two problems is equal to 0.69% for small-sized instances with up to 12 jobs), and in some cases the same optimal solution can be obtained using only one single server. In future research, it would be interesting to adapt the presented approaches to the more general problem \(P,S1|s_{j},t_{j}|C_{max}\) involving an arbitrary number of machines. Further work could also measure the effect of the unloading server on the makespan for the problem \(P,S2|s_{j},t_{j}|C_{max}\).

## References

* Abdekhodaee et al. (2006) Abdekhodaee AH, Wirth A, Gan HS.
Scheduling two parallel machines with a single server: the general case. Computers & Operations Research 2006;33(4):994-1009.
* Alharkan et al. (2019) Alharkan I, Saleh M, Ghaleb MA, Kaid H, Farhan A, Almarfadi A. Tabu search and particle swarm optimization algorithms for two identical parallel machines scheduling problem with a single server. Journal of King Saud University-Engineering Sciences 2019.
* Allahverdi and Soroush (2008) Allahverdi A, Soroush H. The significance of reducing setup times/setup costs. European Journal of Operational Research 2008;187(3):978-84.
* Arnaout (2017) Arnaout JP. Heuristics for the two-machine scheduling problem with a single server. International Transactions in Operational Research 2017;24(6):1347-55.
* Arnaout (2021) Arnaout JP. Worm optimisation algorithm to minimise the makespan for the two-machine scheduling problem with a single server. International Journal of Operational Research 2021;41(2):270-81.
* Baez et al. (2019) Baez S, Angel-Bello F, Alvarez A, Melian-Batista B. A hybrid metaheuristic algorithm for a parallel machine scheduling problem with dependent setup times. Computers & Industrial Engineering 2019;131:295-305.
* Baker and Keller (2010) Baker KR, Keller B. Solving the single-machine sequencing problem using integer programming. Computers & Industrial Engineering 2010;59(4):730-5.
* Balas (1985) Balas E. On the facial structure of scheduling polyhedra. In: Mathematical Programming Essays in Honor of George B. Dantzig Part I. Springer; 1985. p. 179-218.
* Bektur and Sarac (2019) Bektur G, Sarac T. A mathematical model and heuristic algorithms for an unrelated parallel machine scheduling problem with sequence-dependent setup times, machine eligibility restrictions and a common server. Computers & Operations Research 2019;103:46-63.
* Benmansour and Sifaleras (2021) Benmansour R, Sifaleras A. Scheduling in parallel machines with two servers: the restrictive case. In: Variable Neighborhood Search: 8th International Conference, ICVNS 2021, Abu Dhabi, United Arab Emirates, March 21-25, 2021, Proceedings 8. Springer International Publishing; 2021. p. 71-82.
* Chung et al. (2019) Chung T, Gupta JN, Zhao H, Werner F. Minimizing the makespan on two identical parallel machines with mold constraints. Computers & Operations Research 2019;105:141-55.
* Elidrissi et al. (2021) Elidrissi A, Benmansour R, Benbrahim M, Duvivier D. Mathematical formulations for the parallel machine scheduling problem with a single server. International Journal of Production Research 2021;59(20):6166-84.
* Elidrissi et al. (2022) Elidrissi A, Benmansour R, Sifaleras A. General variable neighborhood search for the parallel machine scheduling problem with two common servers. Optimization Letters 2022:1-31.
* Feo and Resende (1995) Feo TA, Resende MG. Greedy randomized adaptive search procedures. Journal of Global Optimization 1995;6(2):109-33.
* Graham et al. (1979) Graham RL, Lawler EL, Lenstra JK, Kan AR. Optimization and approximation in deterministic sequencing and scheduling: a survey. In: Annals of Discrete Mathematics. Elsevier; volume 5; 1979. p. 287-326.
* Hamzadayi and Yildiz (2016) Hamzadayi A, Yildiz G. Event driven strategy based complete rescheduling approaches for dynamic m identical parallel machines scheduling problem with a common server. Computers & Industrial Engineering 2016;91:66-84.
* Hamzadayi and Yildiz (2017) Hamzadayi A, Yildiz G.
Modeling and solving static m identical parallel machines scheduling problem with a common server and sequence dependent setup times. Computers & Industrial Engineering 2017;106:287-98.
* Hansen et al. (2017) Hansen P, Mladenovic N, Todosijevic R, Hanafi S. Variable neighborhood search: basics and variants. EURO Journal on Computational Optimization 2017;5(3):423-54.
* Hasani et al. (2014a) Hasani K, Kravchenko SA, Werner F. Block models for scheduling jobs on two parallel machines with a single server. Computers & Operations Research 2014a;41:94-7.
* Hasani et al. (2014b) Hasani K, Kravchenko SA, Werner F. A hybridization of harmony search and simulated annealing to minimize mean flow time for the two-machine scheduling problem with a single server. International Journal of Operational Research 2014b;3(1):9-26.
* Hasani et al. (2014c) Hasani K, Kravchenko SA, Werner F. Minimising interference for scheduling two parallel machines with a single server. International Journal of Production Research 2014c;52(24):7148-58.
* Hasani et al. (2014d) Hasani K, Kravchenko SA, Werner F. Simulated annealing and genetic algorithms for the two-machine scheduling problem with a single server. International Journal of Production Research 2014d;52(13):3778-92.
* Hu et al. (2013) Hu J, Zhang Q, Dong J, Jiang Y. Parallel machine scheduling with a single server: Loading and unloading. In: International Conference on Combinatorial Optimization and Applications. Springer; 2013. p. 106-16.
* Huang et al. (2010) Huang S, Cai L, Zhang X. Parallel dedicated machine scheduling problem with sequence-dependent setups and a single server. Computers & Industrial Engineering 2010;58(1):165-74.
* Jiang et al. (2014) Jiang Y, Wang H, Zhou P. An optimal preemptive algorithm for the single-server parallel-machine scheduling with loading and unloading times. Asia-Pacific Journal of Operational Research 2014;31(05):1450039.
* Jiang et al. (2015a) Jiang Y, Yu F, Zhou P, Hu J. Online algorithms for scheduling two parallel machines with a single server. International Transactions in Operational Research 2015a;22(5):913-27.
* Jiang et al. (2015b) Jiang Y, Zhang Q, Hu J, Dong J, Ji M. Single-server parallel-machine scheduling with loading and unloading times. Journal of Combinatorial Optimization 2015b;30(2):201-13.
* Jiang et al. (2017) Jiang Y, Zhou P, Wang H, Hu J. Scheduling on two parallel machines with two dedicated servers. The ANZIAM Journal 2017;58(3-4):314-23.
* Keha et al. (2009) Keha AB, Khowala K, Fowler JW. Mixed integer programming formulations for single machine scheduling problems. Computers & Industrial Engineering 2009;56(1):357-67.
* Kim and Lee (2021) Kim HJ, Lee JH. Scheduling uniform parallel dedicated machines with job splitting, sequence-dependent setup times, and multiple servers. Computers & Operations Research 2021;126:105115.
* Kim and Lee (2012) Kim MY, Lee YH. MIP models and hybrid algorithm for minimizing the makespan of parallel machines scheduling problem with a single server. Computers & Operations Research 2012;39(11):2457-68.
* Koulamas (1996) Koulamas CP. Scheduling two parallel semiautomatic machines to minimize machine interference. Computers & Operations Research 1996;23(10):945-56.
* Kramer et al. (2021) Kramer A, Iori M, Lacomme P. Mathematical formulations for scheduling jobs on identical parallel machines with family setup times and total weighted completion time minimization. European Journal of Operational Research 2021;289(3):825-40.
* Kravchenko and Werner (1997) Kravchenko SA, Werner F.
Parallel machine scheduling problems with a single server. Mathematical and Computer Modelling 1997;26(12):1-11.
* Kravchenko and Werner (1998) Kravchenko SA, Werner F. Scheduling on parallel machines with a single and multiple servers. Otto-von-Guericke-Universitat Magdeburg 1998;30(98):1-18.
* Lee and Kim (2021) Lee JH, Kim HJ. A heuristic algorithm for identical parallel machine scheduling: splitting jobs, sequence-dependent setup times, and limited setup operators. Flexible Services and Manufacturing Journal 2021;33:992-1026.
* Maecker et al. (2023) Maecker S, Shen L, Monch L. Unrelated parallel machine scheduling with eligibility constraints and delivery times to minimize total weighted tardiness. Computers & Operations Research 2023;149:105999.
* Pinedo (2018) Pinedo ML. Scheduling: theory, algorithms, and systems. Springer; 2018.
* Mladenovic and Hansen (1997) Mladenovic N, Hansen P. Variable neighborhood search. Computers & Operations Research 1997;24(11):1097-100.
* Ou et al. (2010) Ou J, Qi X, Lee CY. Parallel machine scheduling with multiple unloading servers. Journal of Scheduling 2010;13(3):213-26.
* Ruiz and Stutzle (2007) Ruiz R, Stutzle T. A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem. European Journal of Operational Research 2007;177(3):2033-49.
* Ruiz and Stutzle (2008) Ruiz R, Stutzle T. An iterated greedy heuristic for the sequence dependent setup times flowshop problem with makespan and weighted tardiness objectives. European Journal of Operational Research 2008;187(3):1143-59.
* Silva et al. (2019) Silva JMP, Teixeira E, Subramanian A. Exact and metaheuristic approaches for identical parallel machine scheduling with a common server and sequence-dependent setup times. Journal of the Operational Research Society 2019:1-15.
* Sousa and Wolsey (1992) Sousa JP, Wolsey LA. A time indexed formulation of non-preemptive single machine scheduling problems. Mathematical Programming 1992;54(1):353-67.
* Todosijevic et al. (2016) Todosijevic R, Benmansour R, Hanafi S, Mladenovic N, Artiba A. Nested general variable neighborhood search for the periodic maintenance problem. European Journal of Operational Research 2016;252(2):385-96.
* Unlu and Mason (2010) Unlu Y, Mason SJ. Evaluation of mixed integer programming formulations for non-preemptive parallel machine scheduling problems. Computers & Industrial Engineering 2010;58(4):785-800.
* Werner and Kravchenko (2010) Werner F, Kravchenko SA. Scheduling with multiple servers. Automation and Remote Control 2010;71(10):2109-21.
* Xie et al. (2012) Xie X, Li Y, Zhou H, Zheng Y. Scheduling parallel machines with a single server. In: Proceedings of 2012 International Conference on Measurement, Information and Control. IEEE; volume 1; 2012. p. 453-6.
* Yepes-Borrero et al. (2020) Yepes-Borrero JC, Villa F, Perea F, Caballero-Villalobos JP. GRASP algorithm for the unrelated parallel machine scheduling problem with setup times and additional resources. Expert Systems with Applications 2020;141:112959.
## Appendix A Detailed results

Table A.12: Comparison of GVNS I, GVNS II, and GRASP for \(n=250\).

Table A.13: Comparison of GVNS I, GVNS II, and GRASP for \(n=500\).
2304.00672
Variation of mixed Hodge structure and its applications
We treat generalizations of Kollár's torsion-freeness, vanishing theorem, and so on, for projective morphisms between complex analytic spaces as an application of the theory of variations of mixed Hodge structure. The results will play a crucial role in the theory of minimal models for projective morphisms of complex analytic spaces. In this paper, we do not use Saito's theory of mixed Hodge modules.
Osamu Fujino, Taro Fujisawa
2023-04-03T00:51:26Z
http://arxiv.org/abs/2304.00672v2
# Variation of mixed Hodge structure and its applications ###### Abstract. We treat generalizations of Kollar's torsion-freeness, vanishing theorem, and so on, for projective morphisms between complex analytic spaces as an application of the theory of variations of mixed Hodge structure. The results will play a crucial role in the theory of minimal models for projective morphisms of complex analytic spaces. In this paper, we do not use Saito's theory of mixed Hodge modules. Key words and phrases: Hodge bundles, vanishing theorems, strict support condition, torsion-freeness, injectivity theorem, variation of mixed Hodge structure, semipositivity theorem, minimal model program

**Theorem 1.1**.: _Let \((X,D)\) be an analytic simple normal crossing pair such that \(D\) is reduced and let \(f\colon X\to Y\) be a proper morphism between complex analytic spaces. We assume that \(Y\) is a smooth complex variety, that there exists a normal crossing divisor \(\Sigma\) on \(Y\) such that every stratum of \((X,D)\) is dominant onto \(Y\), and smooth over \(Y^{*}:=Y\setminus\Sigma\), and that every stratum of \((X,D)\) is a Kahler manifold. We put \(d:=\dim X-\dim Y\), \(X^{*}:=f^{-1}(Y^{*})\), \(D^{*}:=D\cap X^{*}\), and_ \[\mathcal{V}^{k}_{Y^{*}}:=R^{k}(f|_{X^{*}\setminus D^{*}})_{!}\mathbb{R}_{X^{*}\setminus D^{*}}\otimes\mathcal{O}_{Y^{*}}\] _for every \(k\). The Hodge filtration and the weight filtration on \(\mathcal{V}^{k}_{Y^{*}}\) are denoted by \(F\) and \(L\) respectively. Moreover the lower canonical extension of \(\mathcal{V}^{k}_{Y^{*}}\) is denoted by \({}^{l}\mathcal{V}^{k}_{Y^{*}}\). The weight filtration \(L\) on \(\mathcal{V}^{k}_{Y^{*}}\) is extended to \({}^{l}\mathcal{V}^{k}_{Y^{*}}\) by \(L_{m}({}^{l}\mathcal{V}^{k}_{Y^{*}})={}^{l}L_{m}(\mathcal{V}^{k}_{Y^{*}})\) for every \(m\). Then we have the following:_ * \(((R^{k}(f|_{X^{*}\setminus D^{*}})_{!}\mathbb{R}_{X^{*}\setminus D^{*}},L[k]),(\mathcal{V}^{k}_{Y^{*}},L[k],F))\) _is a graded polarizable variation of_ \(\mathbb{R}\)_-mixed Hodge structure on_ \(Y^{*}\) _for every_ \(k\)_._ * _There exists a unique finite decreasing filtration_ \(F\) _on_ \({}^{l}\mathcal{V}^{k}_{Y^{*}}\) _such that_ * \(F^{p}({}^{l}\mathcal{V}^{k}_{Y^{*}})|_{Y^{*}}\simeq F^{p}(\mathcal{V}^{k}_{Y^{*}})\)_, and_ * \(\mathrm{Gr}^{p}_{F}\,\mathrm{Gr}^{L}_{m}({}^{l}\mathcal{V}^{k}_{Y^{*}})\) _is a locally free_ \(\mathcal{O}_{Y}\)_-module of finite rank_ _for every_ \(k,m,p\)_._ * \(R^{d-i}f_{*}\mathcal{O}_{X}(-D)\) _is isomorphic to_ \[\mathrm{Gr}^{0}_{F}({}^{l}\mathcal{V}^{d-i}_{Y^{*}})=F^{0}({}^{l}\mathcal{V}^{d-i}_{Y^{*}})/F^{1}({}^{l}\mathcal{V}^{d-i}_{Y^{*}})\] _for every_ \(i\)_. In particular,_ \(R^{d-i}f_{*}\mathcal{O}_{X}(-D)\) _is locally free for every_ \(i\)_._ * \(R^{i}f_{*}\omega_{X/Y}(D)\) _is isomorphic to_ \[\big{(}\mathrm{Gr}^{0}_{F}({}^{l}\mathcal{V}^{d-i}_{Y^{*}})\big{)}^{*}=\mathcal{H}om_{\mathcal{O}_{Y}}(\mathrm{Gr}^{0}_{F}({}^{l}\mathcal{V}^{d-i}_{Y^{*}}),\mathcal{O}_{Y})\] _for every_ \(i\)_. In particular,_ \(R^{i}f_{*}\omega_{X/Y}(D)\) _is locally free for every_ \(i\)_._ For the precise definition of upper and lower canonical extensions in Theorem 1.1, see [12, Remark 7.4]. In Theorem 1.1, \(X\) may be reducible, and we are mainly interested in the case where \(X\) is reducible. By Theorem 1.1, we can use the Fujita-Zucker-Kawamata semipositivity theorem in the complex analytic setting. **Theorem 1.2** (Semipositivity).: _In Theorem 1.1, we further assume that every local monodromy on the local system \(R^{d-i}(f|_{X^{*}\setminus D^{*}})_{!}\mathbb{R}_{X^{*}\setminus D^{*}}\) around \(\Sigma\) is unipotent. Let \(\varphi\colon V\to Y\) be any morphism from a projective variety \(V\). Then \(\varphi^{*}R^{i}f_{*}\omega_{X/Y}(D)\) is a nef locally free sheaf on \(V\)._ In order to prove Theorem 1.1, we will establish: **Theorem 1.3** (Weight spectral sequence).: _Let \((X,D)\) be an analytic simple normal crossing pair such that \(D\) is reduced and let \(f\colon X\to Y\) be a proper morphism between complex analytic spaces. We assume that \(Y\) is a smooth complex variety and that there exists a normal crossing divisor \(\Sigma\) on \(Y\) such that every stratum of \((X,D)\) is dominant onto \(Y\), and smooth over \(Y\setminus\Sigma\). 
If we assume that every stratum of \((X,D)\) is a Kahler manifold in addition, then we have a spectral sequence:_ \[E^{p,q}_{1}=\bigoplus_{S}R^{q}f_{*}\mathcal{O}_{S}\Rightarrow R^{p+q}f_{*} \mathcal{O}_{X}(-D),\] _where \(S\) runs through all \((\dim X-p)\)-dimensional strata of \((X,D)\), such that it degenerates at \(E_{2}\) and its \(E_{1}\)-differential \(d_{1}\) splits._ By combining Theorem 1.3 with Takegoshi's results (see [1]), we can prove: **Theorem 1.4** (Torsion-freeness and vanishing theorem).: _Let \((X,D)\) be an analytic simple normal crossing pair such that \(D\) is reduced and let \(f\colon X\to Y\) be a projective morphism between complex analytic spaces. We assume that \(Y\) is a complex variety and that every stratum of \((X,D)\) is dominant onto \(Y\). Then we have the following properties._ * \((\)_Torsion-freeness\()._ \(R^{q}f_{*}\omega_{X}(D)\) _is a torsion-free sheaf for every_ \(q\) 2. (_Vanishing theorem_)_. Let \(\pi\colon Y\to Z\) be a projective morphism between complex analytic spaces and let \(\mathcal{A}\) be a \(\pi\)-ample line bundle on \(Y\). Then \[R^{p}\pi_{*}\left(\mathcal{A}\otimes R^{q}f_{*}\omega_{X}(D)\right)=0\] holds for every \(p>0\) and every \(q\)._ Of course, Theorem 1.4 is a generalization of Kollar's torsion-freeness and vanishing theorem (see [Ko1]) for reducible complex analytic spaces. We make a remark on the relationship between [FF1] and this paper. **Remark 1.5**.: In [FF1], we have already treated Theorems 1.1 and 1.4 when \(X\) and \(Y\) are algebraic and \(f\colon X\to Y\) is projective. Roughly speaking, in [FF1, SS6], we first establish Theorem 1.4 when \(X\) is quasi-projective and \(f\colon X\to Y\) is algebraic. Then, by using it, we prove Theorem 1.1 under the assumption that \(X\) and \(Y\) are algebraic and \(f\colon X\to Y\) is projective in [FF1, SS7]. When \(X\) is quasi-projective, we can use the theory of mixed Hodge structures. Hence we can obtain desired vanishing theorems and torsion-freeness without using the theory of variations of mixed Hodge structure (for the details, see [Fn3, Chapter 5]). In this paper, we will directly prove Theorems 1.1 and 1.3 with the aid of some results established for Kahler manifolds (see [T]). Then, we will prove Theorem 1.4 as an application. Theorem 1.3 is new even when \(X\) and \(Y\) are algebraic and \(f\colon X\to Y\) is projective. By using Theorem 1.4, we have: **Theorem 1.6** (see [Fn9, Theorem 3.1]).: _Let \((X,D)\) be an analytic simple normal crossing pair such that \(D\) is reduced and let \(f\colon X\to Y\) be a projective morphism between complex analytic spaces. Then we have the following properties._ 1. (_Strict support condition_)_. Every associated subvariety of_ \(R^{q}f_{*}\omega_{X}(D)\) _is the_ \(f\)_-image of some stratum of_ \((X,D)\) _for every_ \(q\)_._ 2. (_Vanishing theorem_)_. Let_ \(\pi\colon Y\to Z\) _be a projective morphism between complex analytic spaces and let_ \(\mathcal{A}\) _be a_ \(\pi\)_-ample line bundle on_ \(Y\)_. Then_ \[R^{p}\pi_{*}\left(\mathcal{A}\otimes R^{q}f_{*}\omega_{X}(D)\right)=0\] _holds for every_ \(p>0\) _and every_ \(q\)_._ 3. (_Injectivity theorem_)_. Let_ \(\mathcal{L}\) _be an_ \(f\)_-semiample line bundle on_ \(X\)_. Let_ \(s\) _be a nonzero element of_ \(H^{0}(X,\mathcal{L}^{\otimes k})\) _for some nonnegative integer_ \(k\) _such that the zero locus of_ \(s\) _does not contain any strata of_ \((X,D)\)_. 
Then, for every_ \(q\)_, the map_ \[\times s\colon R^{q}f_{*}\left(\omega_{X}(D)\otimes\mathcal{L}^{\otimes l} \right)\to R^{q}f_{*}\left(\omega_{X}(D)\otimes\mathcal{L}^{\otimes k+l}\right)\] _induced by_ \(\otimes s\) _is injective for every positive integer_ \(l\)_._ Note that Theorem 1.6 was first obtained in [Fn9, Theorem 3.1] under a weaker assumption that \(f\colon X\to Y\) is Kahler by using Saito's theory of mixed Hodge modules. Theorems 1.7 and 1.8 are the main results of [Fn9]. Although they may look artificial and technical, they are very useful and indispensable for the study of varieties and pairs whose singularities are worse than kawamata log terminal (see [A], [Fn3, Chapter 6], [Fn6], [Fn7], [Fn10], [Fn11], and so on). In [Fn9], we showed that Theorems 1.7 and 1.8 follow from Theorem 1.6 (i) and (ii). Note that Theorem 1.6 (iii) is an easy consequence of Theorem 1.6 (i) and (ii). Hence this paper gives an approach to Theorems 1.7 and 1.8 without using Saito's theory of mixed Hodge modules. **Theorem 1.7** (see [10, Theorem 1.1]).: _Let \((X,\Delta)\) be an analytic simple normal crossing pair such that \(\Delta\) is a boundary \(\mathbb{R}\)-divisor on \(X\). Let \(f\colon X\to Y\) be a projective morphism to a complex analytic space \(Y\) and let \(\mathcal{L}\) be a line bundle on \(X\). Let \(q\) be an arbitrary nonnegative integer. Then we have the following properties._ * \((\)_Strict support condition\()\). If_ \(\mathcal{L}-(\omega_{X}+\Delta)\) _is_ \(f\)_-semiample, then every associated subvariety of_ \(R^{q}f_{*}\mathcal{L}\) _is the_ \(f\)_-image of some stratum of_ \((X,\Delta)\)_._ * \((\)_Vanishing theorem\(). If_ \(\mathcal{L}-(\omega_{X}+\Delta)\sim_{\mathbb{R}}f^{*}\mathcal{H}\) _holds for some_ \(\pi\)_-ample_ \(\mathbb{R}\)_-line bundle_ \(\mathcal{H}\) _on_ \(Y\)_, where_ \(\pi\colon Y\to Z\) _is a projective morphism to a complex analytic space_ \(Z\)_, then we have_ \(R^{p}\pi_{*}R^{q}f_{*}\mathcal{L}=0\) _for every_ \(p>0\)_._ **Theorem 1.8** (Vanishing theorem of Reid-Fukuda type, see [10, Theorem 1.2]).: _Let \((X,\Delta)\) be an analytic simple normal crossing pair such that \(\Delta\) is a boundary \(\mathbb{R}\)-divisor on \(X\). Let \(f\colon X\to Y\) and \(\pi\colon Y\to Z\) be projective morphisms between complex analytic spaces and let \(\mathcal{L}\) be a line bundle on \(X\). If \(\mathcal{L}-(\omega_{X}+\Delta)\sim_{\mathbb{R}}f^{*}\mathcal{H}\) holds such that \(\mathcal{H}\) is an \(\mathbb{R}\)-line bundle, which is nef and log big over \(Z\) with respect to \(f\colon(X,\Delta)\to Y\), on \(Y\), then \(R^{p}\pi_{*}R^{q}f_{*}\mathcal{L}=0\) holds for every \(p>0\) and every \(q\)._ In this paper, we do not prove Theorems 1.7 and 1.8. For the details of Theorems 1.7 and 1.8, see [10]. Although the motivation of the first author is obviously the minimal model theory for projective morphisms between complex analytic spaces, we do not treat the minimal model program in this paper. We recommend that the interested reader looks at [10], [10], [11], and so on. Theorems 1.7 and 1.8 have already played a crucial role in [11] and [11], where we established the fundamental theorems of the theory of minimal models for projective morphisms between complex analytic spaces. Anyway, by this paper, [11] and [11] become free from Saito's theory of mixed Hodge modules. The relationship between [10] and this paper is as follows. **Remark 1.9**.: In [11, Corollary 1 and 4.7. 
Remark] (see [10, Theorem 2.6]), we constructed a weight spectral sequence of mixed Hodge modules. It is much more general than Theorem 1.3 in some sense. By combining it with Takegoshi's results (see [T]), we proved Theorems 1.6, 1.7, 1.8, and so on, in [10]. From the Hodge theoretic viewpoint, one of the main ingredients of this paper is Steenbrink's result obtained in [12] and [13]. We look at the organization of this paper. In Section 2, we will briefly explain basic definitions and results necessary for this paper. In Subsection 2.1, we will explain some useful lemmas on analytic simple normal crossing pairs. In Subsection 2.2, we will briefly review Kollar's package in the complex analytic setting. Section 3 is the main part of this paper, where we will prove Theorems 1.1 and 1.3. We will also see that a generalization of the Fujita-Zucker-Kawamata semipositivity theorem holds in the complex analytic setting (see Theorem 1.2). In Section 4, we will prove Theorem 1.4. In Section 5, we will prove Theorem 1.6. Section 6 is a supplementary section, where we will explain a new construction of the rational structure for the cohomological \(\mathbb{Q}\)-mixed Hodge complex in [13]. We hope that it will help the reader understand [13] and [13]. **Acknowledgments.** The authors thank Yuta Kusakabe very much for answering their questions. The first author was partially supported by JSPS KAKENHI Grant Numbers JP19H01787, JP20H00111, JP21H00974, JP21H04994. The second author was partially supported by JSPS KAKENHI Grant Number JP20K03542. In this paper, every complex analytic space is assumed to be _Hausdorff_ and _second-countable_. Note that an irreducible and reduced complex analytic space is called a _complex_ variety_. We will freely use the basic results on complex analytic geometry in [BS] and [Fi]. ## 2. Preliminaries In this section, we will collect some basic definitions. Let us start with the definition of _analytic simple normal crossing pairs_. **Definition 2.1** (Analytic simple normal crossing pairs).: Let \(X\) be a simple normal crossing divisor on a smooth complex analytic space \(M\) and let \(B\) be an \(\mathbb{R}\)-divisor on \(M\) such that the support of \(B+X\) is a simple normal crossing divisor on \(M\) and that \(B\) and \(X\) have no common irreducible components. Then we put \(D:=B|_{X}\) and consider the pair \((X,D)\). We call \((X,D)\) an _analytic globally embedded simple normal crossing pair_ and \(M\) the _ambient space_ of \((X,D)\). If the pair \((X,D)\) is locally isomorphic to an analytic globally embedded simple normal crossing pair at any point of \(X\) and the irreducible components of \(X\) and \(D\) are all smooth, then \((X,D)\) is called an _analytic simple normal crossing pair_. When \((X,D)\) is an analytic simple normal crossing pair, \(X\) has an invertible dualizing sheaf \(\omega_{X}\). We usually use the symbol \(K_{X}\) as a formal divisor class with an isomorphism \(\mathcal{O}_{X}(K_{X})\simeq\omega_{X}\) if there is no danger of confusion. We note that we can not always define \(K_{X}\) globally with \(\mathcal{O}_{X}(K_{X})\simeq\omega_{X}\). In general, it only exists locally on \(X\). The notion of _strata_ plays a crucial role. **Definition 2.2** (Strata).: Let \((X,D)\) be an analytic simple normal crossing pair as in Definition 2.1. Let \(\nu\colon X^{\nu}\to X\) be the normalization. 
We put \[K_{X^{\nu}}+\Theta=\nu^{*}(K_{X}+D).\] This means that \(\Theta\) is the union of \(\nu_{*}^{-1}D\) and the inverse image of the singular locus of \(X\). We note that \(X^{\nu}\) is smooth and the support of \(\Theta\) is a simple normal crossing divisor on \(X^{\nu}\). If \(W\) is an irreducible component of \(X\) or the \(\nu\)-image of some log canonical center of \((X^{\nu},\Theta)\), then \(W\) is called a _stratum_ of \((X,D)\). **Remark 2.3**.: In this paper, \(D\) is always assumed to be reduced. Hence, \(\Theta\) in Definition 2.2 is a reduced simple normal crossing divisor on \(X^{\nu}\). We do not need \(\mathbb{Q}\)-divisors nor \(\mathbb{R}\)-divisors in this paper. We recall Siu's theorem on complex analytic sheaves, which is a special case of [Si, Theorem 4]. We need it for Theorem 1.6 (i) and Theorem 1.7 (i). **Theorem 2.4**.: _Let \(\mathcal{F}\) be a coherent sheaf on a complex analytic space \(X\). Then there exists a locally finite family \(\{Y_{i}\}_{i\in I}\) of complex analytic subvarieties of \(X\) such that_ \[\operatorname{Ass}_{\mathcal{O}_{X,x}}(\mathcal{F}_{x})=\{\mathfrak{p}_{x,1 },\dots,\mathfrak{p}_{x,r(x)}\}\] _holds for every point \(x\in X\), where \(\mathfrak{p}_{x,1},\dots,\mathfrak{p}_{x,r(x)}\) are the prime ideals of \(\mathcal{O}_{X,x}\) associated to the irreducible components of the germs \(Y_{i,x}\) of \(Y_{i}\) at \(x\) with \(x\in Y_{i}\). We note that each \(Y_{i}\) is called an associated subvariety of \(\mathcal{F}\)._ **Definition 2.5** (Relatively nef, ample, and big line bundles).: Let \(f\colon X\to Y\) be a projective morphism of complex analytic spaces and let \(\mathcal{L}\) be a line bundle on \(X\). Then we say that * \(\mathcal{L}\) is \(f\)_-nef_ if \(\mathcal{L}\cdot C\geq 0\) holds for every curve \(C\) on \(X\) such that \(f(C)\) is a point, and * \(\mathcal{L}\) is _\(f\)-ample_ if \(\mathcal{L}|_{f^{-1}(y)}\) is ample in the usual sense for every \(y\in Y\). We further assume that \(f\colon X\to Y\) is a projective surjective morphism of complex varieties. Then we say that * \(\mathcal{L}\) is _\(f\)-big_ if there exists some positive real number \(c\) such that \(\operatorname{rank}f_{*}\mathcal{L}^{\otimes m}>c\cdot m^{d}\) holds for \(m\gg 0\), where \(d=\dim X-\dim Y\). We need the notion of _nef locally free sheaves_ in Theorem 1.2. **Definition 2.6** (Nef locally free sheaves).: Let \(\mathcal{E}\) be a locally free sheaf of finite rank on a projective variety \(V\). If \(\mathcal{O}_{\mathbb{P}_{V}(\mathcal{E})}(1)\) is nef, that is, \(\mathcal{O}_{\mathbb{P}_{V}(\mathcal{E})}(1)\cdot C\geq 0\) holds for every curve \(C\) on \(\mathbb{P}_{V}(\mathcal{E})\), then \(\mathcal{E}\) is called a _nef_ locally free sheaf on \(V\). A nef locally free sheaf is sometimes called a _semipositive vector bundle_ or a _semipositive locally free sheaf_ in the literature. ### Lemmas on analytic simple normal crossing pairs In this subsection, we will collect some useful lemmas on analytic simple normal crossing pairs. We will repeatedly use these lemmas in subsequent sections. **Lemma 2.7** (see [10, Lemmas 2.13 and 2.15]).: _Let \((X,D)\) and \((X^{\prime},D^{\prime})\) be simple normal crossing pairs such that \(D\) and \(D^{\prime}\) are reduced. Let \(g\colon X^{\prime}\to X\) be a projective bimeromorphic morphism. Assume that there exists a Zariski open subset \(U\) of \(X\) such that \(g\colon U^{\prime}:=g^{-1}(U)\to U\) is an isomorphism and that \(U\)_(_resp. 
\(U^{\prime}\)_) _intersects every stratum of \((X,D)\)_(_resp. \((X^{\prime},D^{\prime})\)_)_. Then \(R^{i}g_{*}\mathcal{O}_{X^{\prime}}=0\) and \(R^{i}g_{*}\mathcal{O}_{X^{\prime}}(K_{X^{\prime}}+D^{\prime})=0\) for every \(i>0\), and \(g_{*}\mathcal{O}_{X^{\prime}}\simeq\mathcal{O}_{X}\) and \(g_{*}\mathcal{O}_{X^{\prime}}(K_{X^{\prime}}+D^{\prime})\simeq\mathcal{O}_{X}( K_{X}+D)\) hold._ Proof.: By [10, Lemma 2.15], we have \(R^{i}g_{*}\mathcal{O}_{X^{\prime}}=0\) for every \(i>0\) and \(g_{*}\mathcal{O}_{X^{\prime}}\simeq\mathcal{O}_{X}\). Since \(D\) and \(D^{\prime}\) are reduced, we can easily check that \[K_{X^{\prime}}+D^{\prime}=g^{*}(K_{X}+D)+E \tag{2.1}\] holds for some effective \(g\)-exceptional Cartier divisor \(E\) on \(X^{\prime}\) and that \(D^{\prime}=g_{*}^{-1}D\) holds. By (2.1), we have \(g_{*}\mathcal{O}_{X^{\prime}}(K_{X^{\prime}}+D^{\prime})\simeq\mathcal{O}_{X} (K_{X}+D)\). By [10, Lemma 2.13], we obtain \(R^{i}g_{*}\mathcal{O}_{X^{\prime}}(K_{X^{\prime}}+D^{\prime})=0\) for every \(i>0\). We finish the proof. **Lemma 2.8** (see [10, Lemma 5.1]).: _Let \((X,D)\) be an analytic simple normal crossing pair such that \(D\) is reduced and let \(f\colon X\to Y\) be a projective morphism between complex analytic spaces. Let \(L\) be a Cartier divisor on \(X\). We take an arbitrary point \(P\in Y\). Then, after shrinking \(Y\) around \(P\) suitably, we can construct the following commutative diagram:_ _such that_ * \(\iota_{Y}\colon Y\hookrightarrow\Delta^{m}\) _is a closed embedding into a polydisc_ \(\Delta^{m}\) _with_ \(\iota_{Y}(P)=0\in\Delta^{m}\)_,_ * \((Z,D_{Z})\) _is an analytic globally embedded simple normal crossing pair such that_ \(D_{Z}\) _is reduced,_ * \(M\) _is the ambient space of_ \((Z,D_{Z})\) _and is projective over_ \(\Delta^{m}\) _._ * _there exists a Cartier divisor_ \(L_{Z}\) _on_ \(Z\) _satisfying_ \[L_{Z}-(K_{Z}+D_{Z})=p^{*}(L-(K_{X}+D)),\] \(p_{*}\mathcal{O}_{Z}(L_{Z})\simeq\mathcal{O}_{X}(L)\)_, and_ \(R^{i}p_{*}\mathcal{O}_{Z}(L_{Z})=0\) _for every_ \(i>0\)_,_ * \(p(W)\) _is a stratum of_ \((X,D)\) _for every stratum_ \(W\) _of_ \((Z,D_{Z})\)_,_ * _there exists a Zariski open subset_ \(U\) _of_ \(X\)_, which intersects every stratum of_ \(X\)_, such that_ \(p\) _is an isomorphism over_ \(U\)_,_ * \(p\) _maps every stratum of_ \(Z\) _bimeromorphically onto some stratum of_ \(X\)_, and_ * _for any stratum_ \(S\) _of_ \((X,D)\)_, there exists a stratum_ \(W\) _of_ \((Z,D_{Z})\) _such that_ \(S=p(W)\)_._ Proof.: The proof of [10, Lemma 5.1], where we allow \(D\) to be a boundary \(\mathbb{R}\)-divisor, works without any modifications. **Lemma 2.9** (see [10, Lemma 2.17]).: _Let \((X,D)\) be an analytic globally embedded simple normal crossing pair such that \(D\) is reduced and let \(M\) be the ambient space of \((X,D)\). Let \(\sigma\colon M^{\prime}\to M\) be the blow-up along \(C\) and let \(X^{\prime}\) denote the reduced structure of the total transform of \(X\) on \(M^{\prime}\). We put_ \[K_{X^{\prime}}+D^{\prime}:=g^{*}(K_{X}+D),\] _where \(g:=\sigma|_{X^{\prime}}\). 
Then we have the following properties:_ * \((X^{\prime},D^{\prime})\) _is an analytic globally embedded simple normal crossing pair such that_ \(D^{\prime}\) _is reduced,_ * \(M^{\prime}\) _is the ambient space of_ \((X^{\prime},D^{\prime})\)_,_ * \(g_{*}\mathcal{O}_{X^{\prime}}\simeq\mathcal{O}_{X}\) _holds and_ \(R^{i}g_{*}\mathcal{O}_{X^{\prime}}=0\) _for every_ \(i>0\)_,_ * _the strata of_ \((X,D)\) _are exactly the images of the strata of_ \((X^{\prime},D^{\prime})\)_, and_ * \(\sigma^{-1}(C)\) _is a maximal_ (_with respect to the inclusion_) _stratum of_ \((X^{\prime},D^{\prime})\)_, that is,_ \(\sigma^{-1}(C)\) _is an irreducible component of_ \(X^{\prime}\)_._ Proof.: The proof of [10, Lemma 2.17], where we allow \(D\) to be a boundary \(\mathbb{R}\)-divisor, works without any modifications. ### Complex analytic generalization of Kollar's package Here, let us briefly review Kollar's package (see [11] and [11]) in the complex analytic setting. We recommend that the interested reader looks at [11, Chapter V. 3.7. Theorem] and [10]. Theorem 2.10 is a variant of Takegoshi's vanishing theorem (see [10, Theorem IV Relative vanishing Theorem]). We note that it is well known when \(f\colon X\to Y\) is a projective morphism of algebraic varieties. **Theorem 2.10** (Vanishing theorem).: _Let \(f\colon X\to Y\) and \(\pi\colon Y\to Z\) be projective surjective morphisms between complex varieties such that \(X\) is smooth. Let \(\mathcal{M}\) be a line bundle on \(Y\). Assume that \(\mathcal{M}\) is \(\pi\)-nef and \(\pi\)-big over \(Z\). Then_ \[R^{p}\pi_{*}\left(\mathcal{M}\otimes R^{q}f_{*}\omega_{X}\right)=0 \tag{2.2}\] _holds for every \(p>0\) and every \(q\). In particular, if further \(\pi\) is bimeromorphic, then_ \[R^{p}\pi_{*}R^{q}f_{*}\omega_{X}=0 \tag{2.3}\] _holds for every \(p>0\) and every \(q\)._ Proof.: The vanishing theorem (2.2) is more or less well known to the experts. For the details, see, for example, [Fn2, Corollary 1.5]. Note that (2.3) is a special case of (2.2). This is because the trivial line bundle on \(Y\) is \(\pi\)-nef and \(\pi\)-big when \(\pi\) is bimeromorphic. Lemma 2.11 is an easy consequence of Theorem 2.10. **Lemma 2.11**.: _Let \(f_{i}\colon X_{i}\to Y\) be a projective surjective morphism of complex varieties such that \(X_{i}\) is smooth for every \(1\leq i\leq k\). Let \(\pi\colon Y\to Z\) be a projective bimeromorphic morphism between complex varieties. We put_ \[\mathcal{F}:=\bigoplus_{i=1}^{k}R^{q_{i}}f_{i*}\omega_{X_{i}},\] _where \(q_{i}\) is some nonnegative integer for every \(i\). Let \(\mathcal{G}\) be a coherent sheaf on \(Y\). Assume that \(\mathcal{G}\) is a direct summand of \(\mathcal{F}\). Then \(R^{p}\pi_{*}\mathcal{G}=0\) holds for every \(p>0\). In particular, \(\pi_{*}\mathcal{G}\) is a direct summand of_ \[\pi_{*}\mathcal{F}=\bigoplus_{i=1}^{k}\pi_{*}R^{q_{i}}f_{i*}\omega_{X_{i}} \simeq\bigoplus_{i=1}^{k}R^{q_{i}}(\pi\circ f_{i})_{*}\omega_{X_{i}}.\] Proof.: It is sufficient to prove that \(R^{p}\pi_{*}R^{q_{i}}f_{i*}\omega_{X_{i}}=0\) holds for every \(p>0\). Hence, this lemma is an easy consequence of Theorem 2.10. Theorem 2.12 below is a special case of Takegoshi's torsion-freeness (see [T, Theorem II Torsion freeness Theorem]). When \(f\colon X\to Y\) is a projective surjective morphism between projective varieties, it is nothing but Kollar's famous torsion-freeness (see [Ko1, Theorem 2.1 (i)]). 
**Theorem 2.12** (Torsion-freeness).: _Let \(f\colon X\to Y\) be a projective surjective morphism of complex varieties such that \(X\) is smooth. Then \(R^{q}f_{*}\omega_{X}\) is torsion-free for every \(q\)._ When \(f\colon X\to Y\) is algebraic, Theorem 2.13 below was first obtained independently by Kollar (see [Ko2, Theorem 2.6]) and Nakayama (see [N2, Theorem 1]). When \(f\colon X\to Y\) is a projective morphism of smooth complex varieties, it was obtained by Moriwaki (see [Mo, Theorem (2.4)]). **Theorem 2.13** (Hodge filtration, see [T, Theorem V Local freeness Theorem (ii)] and [N3, Chapter V, 3.7. Theorem (4)]).: _Let \(f\colon X\to Y\) be a proper surjective morphism between smooth complex varieties and let \(\Sigma\) be a normal crossing divisor on \(Y\) such that \(f\) is smooth over \(Y^{*}:=Y\setminus\Sigma\). We assume that \(X\) is a Kahler manifold. Then \(R^{q}f_{*}\omega_{X/Y}\) is locally free and is characterized as the upper canonical extension of the corresponding bottom Hodge filtration on \(Y^{*}\) for every \(q\)._ We make a remark on the proof of Theorem 2.13. **Remark 2.14**.: One of the main ingredients of [N2] is Steenbrink's result established in [St1] and [St2] (see [N2, Theorem 3]). Although it was explicitly stated only for projective morphisms, it also holds for proper morphisms from Kahler manifolds (see Remark 3.4 below). Hence the argument in [N2] works for Kahler manifolds with the aid of [T]. We recommend that the interested reader looks at [N1, Conjectures 7.2 and 7.3] and [N2]. ## 3. On variations of mixed Hodge structure In this section, we will prove Theorems 1.1, 1.2, and 1.3. Our approach to Theorem 1.1 (ii)-(iv) here is different from [FF1] (see also [Fn5, Section 13]) because we do not assume that \((X,D)\) is projective over \(Y\) in this section. We use the terminologies in [FF1, Section 4]. Let us start with the proof of Theorem 1.1 (i). _Proof of Theorem 1.1_ (i). The proof is almost the same as the proof of Theorem 4.15 of [FF1]. Here we briefly recall several constructions and results in [FF1, Section 4], which is necessary for the proof of Theorem 1.1 (ii)-(iv) and Theorem 1.3. Let \(f\colon(X,D)\to Y\) be as in Theorems 1.1 and 1.3. Let \[X=\bigcup_{i\in I}X_{i}\quad\text{and}\quad D=\bigcup_{\lambda\in\Lambda}D_{\lambda}\] be the irreducible decompositions of \(X\) and \(D\), respectively. Fixing orders \(<\) on \(\Lambda\) and \(I\), we put \[D_{k}\cap X_{l}=\coprod_{\begin{subarray}{c}\lambda_{0}<\lambda_{1}<\dots< \lambda_{k}\\ i_{0}<i_{1}<\dots<i_{l}\end{subarray}}D_{\lambda_{0}}\cap D_{\lambda_{1}} \cap\dots\cap D_{\lambda_{k}}\cap X_{i_{0}}\cap X_{i_{1}}\cap\dots\cap X_{i_{ l}}\] for \(k,l\geq 0\) (see [FF1, 4.14]). Here we use the convention \[D_{k} =D_{k}\cap X_{-1}=\coprod_{\lambda_{0}<\lambda_{1}<\dots<\lambda _{k}}D_{\lambda_{0}}\cap D_{\lambda_{1}}\cap\dots\cap D_{\lambda_{k}}\] \[X_{l} =D_{-1}\cap X_{l}=\coprod_{i_{0}<i_{1}<\dots<i_{l}}X_{i_{0}}\cap X _{i_{1}}\cap\dots\cap X_{i_{l}}\] for \(k,l\geq 0\). By setting \[(X,D)_{n}:=(D\cap X)_{n}\setminus D_{n}=\coprod_{\begin{subarray}{c}k+l+1=n \\ l\geq 0\end{subarray}}D_{k}\cap X_{l},\] we obtain an augmented semisimplicial variety \(\varepsilon\colon(X,D)_{\bullet}\to X\). Note that \((X,D)_{n}\) is the disjoint union of all the strata of \((X,D)\) of dimension \(\dim X-n\) for all \(n\in\mathbb{Z}_{\geq 0}\). We set \(f_{n}:=f\varepsilon_{n}\colon(X,D)_{n}\to Y\) for every \(n\). Then \(f_{n}\) is smooth over \(Y^{*}=Y\setminus\Sigma\). 
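As a quick illustration of this indexing, consider a toy configuration which is not taken from [FF1] (the names \(X_{1}\), \(X_{2}\), \(D_{\lambda}\) are ad hoc): let \(X=X_{1}\cup X_{2}\) have two smooth components meeting along \(C:=X_{1}\cap X_{2}\), and let \(D=D_{\lambda}\) be a single smooth component lying on \(X_{1}\) and disjoint from \(C\). The strata of \((X,D)\) are \(X_{1}\), \(X_{2}\), \(D_{\lambda}\), and \(C\), and the recipe above gives \[(X,D)_{0}=X_{1}\sqcup X_{2},\qquad(X,D)_{1}=D_{\lambda}\sqcup C,\qquad(X,D)_{n}=\emptyset\quad(n\geq 2),\] in accordance with the description of \((X,D)_{n}\) as the disjoint union of the strata of dimension \(\dim X-n\). For this configuration the \(E_{1}\)-page of the spectral sequence in Theorem 1.3 has only two nonzero columns, namely \(E_{1}^{0,q}=R^{q}f_{*}\mathcal{O}_{X_{1}}\oplus R^{q}f_{*}\mathcal{O}_{X_{2}}\) and \(E_{1}^{1,q}=R^{q}f_{*}\mathcal{O}_{D_{\lambda}}\oplus R^{q}f_{*}\mathcal{O}_{C}\), with \(d_{1}\) induced by the restriction maps.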
Then the complex \(\varepsilon_{*}\mathbb{R}_{(X,D)_{\bullet}}\) is given by \[(\varepsilon_{*}\mathbb{R}_{(X,D)_{\bullet}})^{n}=(\varepsilon_{n})_{*} \mathbb{R}_{(X,D)_{n}}=\bigoplus_{l\geq 0}\mathbb{R}_{D_{n-l-1}\cap X_{l}}\] with the Cech type morphism \(\delta\) as the differential. Note that this complex is the single complex associated to the double complex obtained by deleting the first vertical column of the double complex in [FF1, p.626, 4.14], and by replacing \(\mathbb{Q}\) with \(\mathbb{R}\). Then we have quasi-isomorphisms \[i_{!}\mathbb{R}_{X\setminus D}\xrightarrow{\sim}(0\to\mathbb{R}_{X}\to \mathbb{R}_{D_{0}}\xrightarrow{\delta}\mathbb{R}_{D_{1}}\xrightarrow{\delta} \dots)\xrightarrow{\sim}\varepsilon_{*}\mathbb{R}_{(X,D)_{\bullet}}\] from the double complex in [FF1] mentioned above, where \(i\) denotes the open immersion \(X\setminus D\hookrightarrow X\). By setting \[L_{m}(\varepsilon_{*}\mathbb{R}_{(X,D)_{\bullet}})^{n}=\begin{cases}0&n<-m\\ (\varepsilon_{n})_{*}\mathbb{R}_{(X,D)_{n}}&n\geq-m\end{cases}\] a finite increasing filtration \(L\) is defined on \(\varepsilon_{*}{\mathbb{R}}_{(X,D)_{\bullet}}\). We have the relative de Rham complex \(\Omega_{(X,D)_{\bullet}/Y}\) for the morphism \(f\varepsilon\colon(X,D)_{\bullet}\to Y\). Then the complex \(\varepsilon_{*}\Omega_{(X,D)_{\bullet}/Y}\) is given by \[(\varepsilon_{*}\Omega_{(X,D)_{\bullet}/Y})^{n}=\bigoplus_{k\geq 0}(\varepsilon_ {k})_{*}\Omega_{(X,D)_{k}/Y}^{n-k}\] with the differential \(\delta+(-1)^{k}d\) on \((\varepsilon_{k})_{*}\Omega_{(X,D)_{k}/Y}^{n-k}\), where \(\delta\) denotes the Cech type morphism for \((X,D)_{\bullet}\) and \(d\) denotes the differential of the relative de Rham complex \(\Omega_{(X,D)_{n}/Y}\). By setting \[L_{m}(\varepsilon_{*}\Omega_{(X,D)_{\bullet}/Y})^{n}=\bigoplus_ {k\geq-m}(\varepsilon_{k})_{*}\Omega_{(X,D)_{k}/Y}^{n-k}\] \[F^{p}(\varepsilon_{*}\Omega_{(X,D)_{\bullet}/Y})^{n}=\bigoplus_ {0\leq k\leq n-p}(\varepsilon_{k})_{*}\Omega_{(X,D)_{k}/Y}^{n-k},\] a finite increasing filtration \(L\) and a finite decreasing filtration \(F\) on \(\varepsilon_{*}\Omega_{(X,D)_{\bullet}/Y}\) are defined. The canonical morphism \({\mathbb{R}}_{(X,D)_{n}}\to{\mathcal{O}}_{(X,D)_{n}}\) induces a morphism of complexes \(\iota\colon\varepsilon_{*}{\mathbb{R}}_{(X,D)_{\bullet}}\to\varepsilon_{*} \Omega_{(X,D)_{\bullet}/Y}\). By setting \[K =((K_{\mathbb{R}},L),(K_{\mathcal{O}},L,F),\alpha)\] \[=((Rf_{*}\varepsilon_{*}{\mathbb{R}}_{(X,D)_{\bullet}},L),(Rf_{* }\varepsilon_{*}\Omega_{(X,D)_{\bullet}/Y},L,F),Rf_{*}\iota)\] (see [14, 4.1]), we obtain a triple \(K\) consisting of * a complex of \({\mathbb{R}}\)-sheaves \(K_{\mathbb{R}}\) on \(Y\) equipped with a finite increasing filtration \(L\), * a complex of \({\mathcal{O}}_{Y}\)-modules \(K_{\mathcal{O}}\) on \(Y\) equipped with a finite increasing filtration \(L\) and a finite decreasing filtration \(F\), * a morphism of filtered complexes of \({\mathbb{R}}\)-sheaves \(\alpha\colon(K_{\mathbb{R}},L)\to(K_{\mathcal{O}},L)\) satisfying the following: 1. There exists a quasi-isomorphism \(R(f|_{X\setminus D})!{\mathbb{R}}_{X\setminus D}\simeq K_{\mathbb{R}}\). 2. There exists a quasi-isomorphism \(\operatorname{Gr}_{F}^{p}K_{\mathcal{O}}\simeq Rf_{*}\varepsilon_{*}\Omega_{( X,D)_{\bullet}/Y}^{p}[-p]\). for every \(p\). In particular, \(Rf_{*}{\mathcal{O}}_{X}(-D)\simeq\operatorname{Gr}_{F}^{0}K_{\mathcal{O}}\). 3. 
For every \(m\in{\mathbb{Z}}\), \[\operatorname{Gr}_{m}^{L}K =(\operatorname{Gr}_{m}^{L}K_{\mathbb{R}},(\operatorname{Gr}_{m}^ {L}K_{\mathcal{O}},F),\operatorname{Gr}_{m}^{L}\alpha)\] \[\simeq\bigoplus_{S}(R(f_{S})_{*}{\mathbb{R}}_{S}[m],(R(f_{S})_{*} \Omega_{S/Y}[m],F),R(f_{S})_{*}\iota_{S}[m]),\] where \(S\) runs through all \((\dim X+m)\)-dimensional strata of \((X,D)\) and \(\iota_{S}\) is the composite \({\mathbb{R}}_{S}\hookrightarrow{\mathbb{C}}_{S}\to\Omega_{S/Y}\). We consider a triple, consisting of the spectral sequences and a morphism between them, \[\begin{split} E_{r}^{p,q}(K,L)=&(E_{r}^{p,q}(K_{ \mathbb{R}},L),(E_{r}^{p,q}(K_{\mathcal{O}},L),F),E_{r}^{p,q}(\alpha))\\ &\Rightarrow E_{\infty}^{p,q}(K,L)=(E_{\infty}^{p,q}(K_{\mathbb{R }},L),(E_{\infty}^{p,q}(K_{\mathcal{O}},L),F),E_{\infty}^{p,q}(\alpha)),\end{split} \tag{3.2}\] where \(F\) on \(E_{r}^{p,q}(K_{\mathcal{O}},L)\) denotes the inductive filtration (la filtration recurrente in [1, (1.3.11)]) and \(F\) on \(E_{\infty}^{p,q}(K_{\mathcal{O}},L)\) is the filtration induced from \(F\) on \(H^{p+q}(K_{\mathcal{O}})\) via the isomorphism \(E_{\infty}^{p,q}(K_{\mathcal{O}},L)\simeq\operatorname{Gr}_{-p}^{L}H^{p+q}(K_ {\mathcal{O}})\). The morphism of \(E_{r}\)-terms is denoted by \[d_{r}^{p,q}(K,L)=(d_{r}^{p,q}(K_{\mathbb{R}},L),d_{r}^{p,q}(K_{\mathcal{O}},L)) \colon E_{r}^{p,q}(K,L)\to E_{r}^{p+r,q-r+1}(K,L).\] Since every stratum \(S\) is a Kahler manifold and \(f_{S}\) is smooth over \(Y^{*}\), the isomorphism in (3.1.3) implies the following: 1. \(E_{r}^{p,q}(K,L)|_{Y^{*}}\) is a polarizable variation of \(\mathbb{R}\)-Hodge structure of weight \(q\) for all \(p,q\) and \(r\geq 1\). 2. The spectral sequence (3.2) degenerates at \(E_{2}\)-terms on \(Y^{*}\), in other words, \(d_{r}^{p,q}(K,L)|_{Y^{*}}=0\) for all \(p,q\) and \(r\geq 2\). 3. \((E_{2}^{p,q}(K_{\mathcal{O}},L),F)|_{Y^{*}}\simeq(E_{\infty}^{p,q}(K_{ \mathcal{O}},L),F)|_{Y^{*}}\) for all \(p,q\). 4. \(((H^{k}(K_{\mathbb{R}}),L[k]),(H^{k}(K_{\mathcal{O}}),L[k],F),H^{k}(\alpha))|_{Y ^{*}}\) is a graded polarizable variation of \(\mathbb{R}\)-mixed Hodge structure on \(Y^{*}\) for all \(k\). 5. \(\operatorname{Gr}_{F}^{a}H^{k}(K_{\mathcal{O}})|_{Y^{*}}\simeq H^{k}( \operatorname{Gr}_{F}^{a}K_{\mathcal{O}})|_{Y^{*}}\) for all \(a,k\). 6. \(\operatorname{Gr}_{F}^{a}E_{r}^{p,q}(K_{\mathcal{O}},L)|_{Y^{*}}\simeq E_{r}^ {p,q}(\operatorname{Gr}_{F}^{a}K_{\mathcal{O}},L)|_{Y^{*}}\) for all \(a,p,q\) and \(r\geq 0\). The proof of these properties are left to the reader (cf. [D2, Scholie (8.1.9) and Proposition (7.2.8)]). By (3.1.1), we have \(R^{k}(f|_{X^{*}\setminus D^{*}})_{!}\mathbb{R}_{X\setminus D}\simeq H^{k}(K_ {\mathbb{R}})|_{Y^{*}}\), which implies \(\mathcal{V}_{Y^{*}}^{k}\simeq H^{k}(K_{\mathcal{O}})|_{Y^{*}}\) for all \(k\). By using these isomorphism, we introduce filtrations \(L\) on \(R^{k}(f|_{X^{*}\setminus D^{*}})_{!}\mathbb{R}_{X\setminus D}\) and \(\mathcal{V}_{Y^{*}}^{k}\), \(F\) on \(\mathcal{V}_{Y^{*}}^{k}\), and obtain a graded polarizable variation of \(\mathbb{R}\)-mixed Hodge structure \[((R^{k}(f|_{X^{*}\setminus D^{*}})_{!}\mathbb{R}_{X\setminus D},L[k]),( \mathcal{V}_{Y^{*}}^{k},L[k],F))\] on \(Y^{*}\) as desired. Here we note that we have an isomorphism \[\operatorname{Gr}_{F}^{0}\mathcal{V}_{Y^{*}}^{k}\simeq R^{k}f_{*}\mathcal{O} _{X}(-D)|_{Y^{*}} \tag{3.4}\] for every \(k\) by (3.1.2) and (3.3.5). Next, we will prove Theorem 1.3. Proof of Theorem 1.3.: We use the notations and terminologies in the proof of Theorem 1.1 (i). 
We will prove that the spectral sequence \[E_{r}^{p,q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)\Rightarrow E^{p+q}( \operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L) \tag{3.5}\] associated to the filtered complex \((\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)\) satisfies the desired properties. The morphisms of \(E_{r}\)-terms are denoted by \[d_{r}^{p,q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)\colon E_{r}^{p,q}( \operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)\to E_{r}^{p+r,q-r+1}( \operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L).\] By (3.1.3), the spectral sequence (3.5) satisfies \[\begin{split} E_{1}^{p,q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O }},L)&\simeq H^{p+q}(\operatorname{Gr}_{-p}^{L}\operatorname{Gr}_ {F}^{0}K_{\mathcal{O}})\\ &\simeq H^{p+q}(\operatorname{Gr}_{F}^{0}\operatorname{Gr}_{-p}^{L }K_{\mathcal{O}})\simeq\bigoplus_{S}R^{q}(f_{S})_{*}\mathcal{O}_{S},\end{split} \tag{3.6}\] where \(S\) runs through all \((\dim X-p)\)-dimensional strata of \((X,D)\), and \[E^{p+q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)\simeq H^{p+q}( \operatorname{Gr}_{F}^{0}K_{\mathcal{O}})\simeq R^{p+q}f_{*}\mathcal{O}_{X}(-D).\] Thus it suffices to prove that (3.5) degenerates at \(E_{2}\)-terms and \(d_{1}^{p,q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)\) split for all \(p,q\). We consider the spectral sequence (3.2) again. Note that we have \[\operatorname{Gr}_{F}^{0}d_{r}^{p,q}(K_{\mathcal{O}},L)\big{|}_{Y^{*}}=d_{r}^ {p,q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)\big{|}_{Y^{*}} \tag{3.7}\] for all \(p,q,r\) under the isomorphism in (3.3.6). In the abelian category of the polarizable variations of \(\mathbb{R}\)-Hodge structure of weight \(q\) on \(Y^{*}\), we temporarily set \[I^{p,q} =(I^{p,q}_{\mathbb{R}},(I^{p,q}_{\mathcal{O}},F))\] \[=\text{Image}(E^{p,q}_{1}(K,L)|_{Y^{*}}\to E^{p+1,q}_{1}(K,L)|_{Y^{*}}) \subset E^{p+1,q}_{1}(K,L)|_{Y^{*}}\] for \(p,q\in\mathbb{Z}\). Because the category of the polarizable variations of \(\mathbb{R}\)-Hodge structure of weight \(q\) is semisimple, we have a direct sum decomposition \[E^{p,q}_{1}(K,L)|_{Y^{*}}\simeq E^{p,q}_{2}(K,L)|_{Y^{*}}\oplus I^{p-1,q}\oplus I ^{p,q}\] as polarizable variations of \(\mathbb{R}\)-Hodge structure, under which \(d^{p,q}_{1}(K,L)|_{Y^{*}}\) is identified with the composite of the natural morphisms \(E^{p,q}_{1}(K,L)|_{Y^{*}}\to I^{p,q}\) and \(I^{p,q}\hookrightarrow E^{p+1,q}_{1}(K,L)|_{Y^{*}}\) for all \(p,q\). In particular, we have \[(E^{p,q}_{1}(K_{\mathcal{O}},L),F)|_{Y^{*}}\simeq(E^{p,q}_{2}(K_{\mathcal{O}},L),F)|_{Y^{*}}\oplus(I^{p-1,q}_{\mathcal{O}},F)\oplus(I^{p,q}_{\mathcal{O}},F) \tag{3.8}\] as filtered \(\mathcal{O}_{Y^{*}}\)-modules. Moreover, we consider the lower canonical extensions of \[E^{p,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}},I^{p,q}_{\mathcal{O}},E^{p,q}_{2}(K_{ \mathcal{O}},L)|_{Y^{*}}\] for all \(p,q\) and denote them by \[{}^{l}E^{p,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}},{}^{l}I^{p,q}_{\mathcal{O}},{}^ {l}E^{p,q}_{2}(K_{\mathcal{O}},L)|_{Y^{*}}\] respectively. The filtrations \(F\) on \(E^{p,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}}\), \(I^{p,q}_{\mathcal{O}}\), and \(E^{p,q}_{2}(K_{\mathcal{O}},L)|_{Y^{*}}\) can be uniquely extended to the filtrations on their lower canonical extensions by applying Schmid's nilpotent orbit theorem (see [Sc, (4.12)]). Here we emphasize that \(F\) on these lower canonical extensions are the filtrations by subbundles. 
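For orientation, we recall what these canonical extensions look like in the simplest possible case; the following rank-one computation is only an illustration, with ad hoc notation, and the precise definitions are the ones referred to after Theorem 1.1. Let \(j\colon\Delta^{*}\hookrightarrow\Delta\) be the inclusion of the punctured unit disc and let \(\mathcal{V}=\mathcal{O}_{\Delta^{*}}e\) be a flat bundle of rank one with \(\nabla e=\alpha\frac{dt}{t}\otimes e\) for some \(\alpha\in\mathbb{R}\). For every \(k\in\mathbb{Z}\) we have \[\nabla(t^{k}e)=(\alpha+k)\frac{dt}{t}\otimes t^{k}e,\] so that \(\mathcal{O}_{\Delta}\cdot t^{k}e\subset j_{*}\mathcal{V}\) is a locally free extension of \(\mathcal{V}\) carrying a logarithmic connection whose residue at the origin is \(\alpha+k\). Requiring this residue to lie in a fixed half-open unit interval, for example \((0,1]\) or \([0,1)\), determines \(k\) uniquely; the lower and the upper canonical extensions correspond to two such normalizations, and they differ precisely when \(\alpha\in\mathbb{Z}\), that is, when the local monodromy \(e^{-2\pi i\alpha}\) is trivial.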
Then the isomorphism (3.8) is extended to an isomorphism \[({}^{l}E^{p,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}},F)\simeq({}^{l}E^{p,q}_{2}(K_{ \mathcal{O}},L)|_{Y^{*}},F)\oplus({}^{l}I^{p-1,q}_{\mathcal{O}},F)\oplus({}^{ l}I^{p,q}_{\mathcal{O}},F) \tag{3.9}\] by the uniqueness properties of the lower canonical extensions and of the filtrations by subbundles (cf. [FF1, Corollary 5.2]). Under the identification (3.9), the composite of the morphisms \(({}^{l}E^{p,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}},F)\to({}^{l}I^{p,q}_{\mathcal{O }},F)\) and \(({}^{l}I^{p,q}_{\mathcal{O}},F)\hookrightarrow({}^{l}E^{p+1,q}_{1}(K_{\mathcal{ O}},L)|_{Y^{*}},F)\) gives us the morphism \[{}^{l}d^{p,q}_{1}\colon({}^{l}E^{p,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}},F)\to({ }^{l}E^{p+1,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}},F)\] with the property \(({}^{l}d^{p,q}_{1})|_{Y^{*}}=d^{p,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}}\) for all \(p,q\). By (3.6) and \[E^{p,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}}\simeq\bigoplus_{S}R^{q}(f_{S})_{*} \Omega_{S/Y}|_{Y^{*}}\simeq\bigoplus_{S}(R^{q}(f_{S})_{*}\mathbb{R}_{S})|_{Y^ {*}}\otimes\mathcal{O}_{Y^{*}},\] where \(S\) runs through all \((\dim X-p)\)-dimensional strata of \((X,D)\) as before, we have the isomorphism \[\text{Gr}^{0}_{F}({}^{l}E^{p,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}})\simeq E^{p,q} _{1}(\text{Gr}^{0}_{F}\,K_{\mathcal{O}},L), \tag{3.10}\] whose restriction to \(Y^{*}\) coincides with the canonical isomorphism in (3.3.6), by the dual of Theorem 2.13. In particular, \(E^{p,q}_{1}(\text{Gr}^{p}_{F}\,K_{\mathcal{O}},L)\) is a locally free \(\mathcal{O}_{Y}\)-module of finite rank for all \(p,q\). Under the identification (3.10), we have \[\text{Gr}^{0}_{F}({}^{l}d^{p,q}_{1})=d^{p,q}_{1}(\text{Gr}^{0}_{F}\,K_{ \mathcal{O}},L) \tag{3.11}\] by Lemma 3.1 below, because \[\text{Gr}^{0}_{F}({}^{l}d^{p,q}_{1})|_{Y^{*}}=\text{Gr}^{0}_{F}\,d^{p,q}_{1}(K_ {\mathcal{O}},L)|_{Y^{*}}=d^{p,q}_{1}(\text{Gr}^{0}_{F}\,K_{\mathcal{O}},L)|_{Y^ {*}}\] under the isomorphism in (3.3.6) by (3.7) and because \[E_{1}^{p,q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L),\operatorname{Gr}_{F}^{0 }({}^{l}E_{1}^{p,q}(K_{\mathcal{O}},L)|_{Y^{*}})\] are locally free \(\mathcal{O}_{Y}\)-modules of finite rank for all \(p,q\). By (3.11), and the decomposition (3.9), the morphism \(d_{1}^{p,q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)\) splits and \[E_{2}^{p,q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)\simeq\operatorname{Gr }_{F}^{0}({}^{l}E_{2}^{p,q}(K_{\mathcal{O}},L)|_{Y^{*}})\] for all \(p,q\). In particular, \(E_{2}^{p,q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)\) is a locally free \(\mathcal{O}_{Y}\)-module of finite rank for all \(p,q\). Since \[d_{r}^{p,q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)|_{Y^{*}}=\operatorname {Gr}_{F}^{0}d_{r}^{p,q}(K_{\mathcal{O}},L)|_{Y^{*}}=0\] for \(r\geq 2\) by (3.7) and (3.3.2), we inductively obtain \[d_{r}^{p,q}(\operatorname{Gr}_{F}^{0}K_{\mathcal{O}},L)=0\] for \(r\geq 2\) by using Lemma 3.1 below. In other words, the spectral sequence (3.5) degenerates at \(E_{2}\)-terms. The following elementary lemma, used in the proof above, will be constantly used in this section. **Lemma 3.1**.: _Let \(\mathcal{F}\) and \(\mathcal{G}\) be locally free \(\mathcal{O}_{Y}\)-modules of finite rank on \(Y\) and \(\varphi,\psi\colon\mathcal{F}\to\mathcal{G}\) morphisms of \(\mathcal{O}_{Y}\)-modules. If \(\varphi|_{Y^{*}}=\psi|_{Y^{*}}\), then \(\varphi=\psi\). In particular, if \(\varphi|_{Y^{*}}=0\) then \(\varphi=0\)._ Proof.: It is obvious. 
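For the reader's convenience we spell out the one-line reason: \(\varphi-\psi\) is a global section of the locally free sheaf \(\mathcal{H}om_{\mathcal{O}_{Y}}(\mathcal{F},\mathcal{G})\), and a holomorphic section of a locally free sheaf on the complex variety \(Y\) which vanishes on the dense open subset \(Y^{*}=Y\setminus\Sigma\) vanishes identically by the identity theorem.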
In order to prove Theorem 1.1 (ii)-(iv), we recall results in [St1] and [St2] in a slightly generalized form. **Definition 3.2**.: Let \(f\colon X\to Y\) be a surjective morphism of smooth complex varieties and \(\Sigma\) a simple normal crossing divisor on \(Y\). We assume that \(E=(f^{*}\Sigma)_{\operatorname{red}}\) is a simple normal crossing divisor on \(X\). For such \(f\), we set \[\Omega^{1}_{X/Y}(\log E)=\operatorname{Coker}(f^{*}\Omega^{1}_{Y}(\log\Sigma) \to\Omega^{1}_{X}(\log E))\] and \[\Omega^{p}_{X/Y}(\log E)=\bigwedge^{p}\Omega^{1}_{X/Y}(\log E)\] for every \(p\). An \(f^{-1}\mathcal{O}_{Y}\)-differential \(d\colon\Omega^{p}_{X/Y}(\log E)\to\Omega^{p+1}_{X/Y}(\log E)\) can be uniquely defined by the commutative diagram \[\begin{CD}\Omega^{p}_{X}(\log E)@>{}>{}>\Omega^{p}_{X/Y}(\log E)\\ @V{d}V{}V@V{}V{d}V\\ \Omega^{p+1}_{X}(\log E)@>{}>{}>\Omega^{p+1}_{X/Y}(\log E),\end{CD}\] where the horizontal arrows are the canonical surjections induced from the surjection \(\Omega^{1}_{X}(\log E)\to\Omega^{1}_{X/Y}(\log E)\). Thus we obtain a complex of \(f^{-1}\mathcal{O}_{Y}\)-modules \(\Omega_{X/Y}(\log E)\), which is called the relative log de Rham complex of \(f\). **Lemma 3.3**.: _Let \(f\colon X\to Y\) be a proper surjective morphism from a Kahler manifold \(X\) to a smooth complex variety \(Y\). Assume that there exists a smooth divisor \(\Sigma\) on \(Y\) such that_ (3.12.1) \(f\) _is smooth over \(Y^{*}=Y\setminus\Sigma\),_ (3.12.2) \(E=(f^{*}\Sigma)_{\rm red}\) _is a simple normal crossing divisor on_ \(X\) _having finitely many irreducible components, and_ (3.12.3) \(\Omega^{1}_{X/Y}(\log E)\) _is a locally free_ \(\mathcal{O}_{X}\)_-module of finite rank._ _Then we have_ \[R^{k}f_{*}\Omega_{X/Y}(\log E)\simeq{}^{l}(R^{k}f_{*}\Omega_{X/Y}(\log E)|_{Y^ {*}})\simeq{}^{l}(\mathcal{O}_{Y^{*}}\otimes(R^{k}f_{*}\mathbb{C}_{\mathbb{X}} )|_{Y^{*}})\] _for all \(k\), where \({}^{l}(\cdot)\) stands for the lower canonical extension as before. In particular, \(R^{k}f_{*}\Omega_{X/Y}(\log E)\) is a locally free \(\mathcal{O}_{Y}\)-module of finite rank for all \(k\). Moreover, \(R^{k}f_{*}\Omega^{p}_{X/Y}(\log E)\) is also a locally free \(\mathcal{O}_{Y}\)-module of finite rank, and the stupid filtration \((\)filtration bete in [D1, (1.4.7)]\()\)\(F\) on \(\Omega_{X/Y}(\log E)\) induces the natural exact sequence_ \[0\to R^{k}f_{*}F^{p+1}\Omega_{X/Y}(\log E)\to R^{k}f_{*}F^{p}\Omega_{X/Y}( \log E)\to R^{k}f_{*}\Omega^{p}_{X/Y}(\log E)\to 0 \tag{3.13}\] _for all \(k,p\)._ Proof.: We may assume \(Y=\Delta^{k}\) with the coordinates \(t_{1},\ldots,t_{k}\) and \(\Sigma=\{t_{1}=0\}\). For any \(x\in E\), we can take local coordinates \(x_{1},\ldots,x_{n}\) centered at \(x\) on \(X\) with \[f^{*}t_{1}=x_{1}^{a_{1}}\cdots x_{l}^{a_{l}}\] for some \(a_{1},\ldots,a_{l}\in\mathbb{Z}_{>0}\) by (3.12.2). We set \(f_{i}=f^{*}t_{i}\) for \(i=2,\ldots,k\). On the other hand, we have the canonical exact sequence \[0\to f^{*}\Omega^{1}_{Y}(\log\Sigma)_{x}\otimes\mathbb{C}(x)\to\Omega^{1}_{X} (\log E)_{x}\otimes\mathbb{C}(x)\to\Omega^{1}_{X/Y}(\log E)_{x}\otimes\mathbb{ C}(x)\to 0, \tag{3.14}\] where \(\mathbb{C}(x)\) denotes the residue field at \(x\), because \(\Omega^{1}_{X/Y}(\log E)\) is a locally free \(\mathcal{O}_{X}\)-module of rank \(\dim X-\dim Y\) by (3.12.1) and (3.12.3). 
Under the isomorphisms \[\Omega^{1}_{Y}(\log\Sigma)\simeq\mathcal{O}_{Y}\frac{dt_{1}}{t_{1}}\oplus( \bigoplus_{i=2}^{k}\mathcal{O}_{Y}dt_{i}),\] \[\Omega^{1}_{X}(\log E)\simeq(\bigoplus_{i=1}^{l}\mathcal{O}_{X}\frac{dx_{i}}{ x_{i}})\oplus(\bigoplus_{i=l+1}^{n}\mathcal{O}_{X}dx_{i})\] the morphism \(f^{*}\Omega^{1}_{Y}(\log\Sigma)_{x}\otimes\mathbb{C}(x)\to\Omega^{1}_{X}(\log E )_{x}\otimes\mathbb{C}(x)\) is represented by the matrix \[\left(\begin{array}{cccc}a_{1}&\ldots&a_{l}&0&\ldots&0\\ \hline 0&\ldots&0&\\ \vdots&\ddots&\vdots&\dfrac{\partial f_{i}}{\partial x_{j}}(0)\\ 0&\ldots&0&\end{array}\right) \tag{3.15}\] where \(i\) and \(j\) run through \(2,\ldots,k\) and \(l+1,\ldots,n\) respectively. The exactness of (3.14) implies that the matrix (3.15) is of rank \(k\), and then we may assume \[\operatorname{rank}\left(\frac{\partial f_{i}}{\partial x_{j}}(0)\right)_{2 \leq i\leq k,l+1\leq j\leq l+k-1}=k-1\] by changing the order of \(x_{l+1},\ldots,x_{n}\). Replacing \(x_{l+1},\ldots,x_{l+k-1}\) by \(f_{2},\ldots,f_{k}\), we obtain a new local coordinates \((x_{1},\ldots,x_{n})\) at \(x\), under which the morphism \(f\) is given in the form \[t_{1}=x_{1}^{a_{1}}\cdots x_{l}^{a_{l}},t_{2}=x_{l+1},\ldots,t_{k}=x_{l+k-1} \tag{3.16}\] around \(x\). We set \(f_{s}\colon X_{s}\to\Delta=\Delta\times\{s\}\) by the Cartesian square \[\begin{CD}X_{s}@>{}>{}>X\\ @V{f_{s}}V{}V@V{}V{f}V\\ \Delta@>{}>{}>Y\end{CD}\] for any \(s=(t_{2},\dots,t_{k})\in\Delta^{k-1}\). Then \(X_{s}\) is smooth, \(f_{s}\) is smooth over \(\Delta^{*}=\Delta\setminus\{0\}\) and \(\operatorname{Supp}f_{s}^{-1}(0)\) is a simple normal crossing divisor on \(X_{s}\) by the local description (3.16). Hence \(R^{k}(f_{s})_{*}\Omega_{X_{s}/\Delta}(\log(E\cap X_{s}))\) and \(R^{k}(f_{s})_{*}\Omega_{X_{s}/\Delta}^{p}(\log(E\cap X_{s}))\) are locally free of finite rank for every \(k,p\) by [St1, (2.18) Theorem] and by [St2, (2.11) Theorem]. Therefore \(R^{k}f_{*}\Omega_{X/Y}(\log E)\) and \(R^{k}f_{*}\Omega_{X/Y}^{p}(\log E)\) are locally free \(\mathcal{O}_{Y}\)-modules of finite rank for all \(k,p\) by the base change theorem. Once we know that \(R^{k}f_{*}\Omega_{X/Y}(\log E)\) is locally free, it is the lower canonical extension of its restriction to \(Y^{*}=Y\setminus\Sigma\) by [St1, (2.20) Proposition]. Next, we consider the spectral sequence \[E_{r}^{p,q}(Rf_{*}\Omega_{X/Y}(\log E),F)\Rightarrow E^{p+q}(Rf_{*}\Omega_{X/Y }(\log E))=R^{p+q}f_{*}\Omega_{X/Y}(\log E) \tag{3.17}\] and denote the morphism of \(E_{r}\)-terms by \[d_{r}^{p,q}\colon E_{r}^{p,q}(Rf_{*}\Omega_{X/Y}(\log E),F)\to E_{r}^{p+r,q-r+ 1}(Rf_{*}\Omega_{X/Y}(\log E),F)\] for a while. Then \(d_{r}^{p,q}|_{Y^{*}}=0\) for all \(p,q\) and \(r\geq 1\) because the restriction of this spectral sequence to \(Y^{*}\) degenerates at \(E_{1}\)-terms. Since \[E_{1}^{p,q}(Rf_{*}\Omega_{X/Y}(\log E),F)\simeq R^{q}f_{*}\Omega_{X/Y}^{p}( \log E)\] is a locally free \(\mathcal{O}_{Y}\)-module of finite rank for all \(p,q\), we have \(d_{1}^{p,q}=0\) for all \(p,q\) by Lemma 3.1. This implies that \[E_{2}^{p,q}(Rf_{*}\Omega_{X/Y}(\log E),F)\simeq E_{1}^{p,q}(Rf_{*}\Omega_{X/Y} (\log E),F)\] is locally free for all \(p,q\) and that \(d_{2}^{p,q}=0\) for all \(p,q\) by Lemma 3.1 again. Inductively, we obtain \(d_{r}^{p,q}=0\) for all \(p,q\) and \(r\geq 1\). Thus the spectral sequence (3.17) degenerates at \(E_{1}\)-terms, or equivalently, (3.13) is exact. **Remark 3.4**.: In [St2], \(f_{s}\) is assumed to be a projective morphism. 
However, we can check that the proof of (2.11) Theorem in [St2] is also valid to a proper morphism from a Kahler manifold by using results in [PS, I.2.5 Almost Kahler \(V\)-manifolds]. See also Theorem 6.9 below. **Corollary 3.5**.: _In the situation of Lemma 3.3, we have the canonical isomorphisms_ \[R^{k}f_{*}F^{p}\Omega_{X/Y}(\log E)\simeq F^{p}R^{k}f_{*}\Omega _{X/Y}(\log E),\] \[R^{k}f_{*}\Omega_{X/Y}^{p}(\log E)\simeq\operatorname{Gr}_{F}^{ p}R^{k}f_{*}\Omega_{X/Y}(\log E)\] _for all \(k,p\). In particular, \(F^{p}R^{k}f_{*}\Omega_{X/Y}(\log E)\) is a subbundle of \(R^{k}f_{*}\Omega_{X/Y}(\log E)\)._ **Lemma 3.6**.: _Let \(f\colon X\to Y\) be a proper surjective morphism between smooth complex varieties. Assume that there exists a smooth divisor \(\Sigma\) such that_ * \(f\) _is smooth over_ \(Y^{*}=Y\setminus\Sigma\)_, and_ * \(E=(f^{*}\Sigma)_{\operatorname{red}}\) _is a simple normal crossing divisor on_ \(X\) _having finitely many irreducible components._ _Then there exists a closed analytic subset \(\Sigma_{0}\subset\Sigma\) with \(\dim\Sigma_{0}\leq\dim Y-2\), such that \(\Omega_{X/Y}^{1}(\log E)\) is locally free on \(f^{-1}(Y\setminus\Sigma_{0})\)._ Proof.: We may assume that \(\Sigma\) is irreducible. Let \(E=\sum_{i=1}^{N}E_{i}\) be the irreducible decomposition of \(E\). For a nonempty subset \(I\subset\{1,\dots,N\}\), we set \(E_{I}=\bigcap_{i\in I}E_{i}\), which is a smooth closed subvariety of \(X\). If \(f(E_{I})\neq\Sigma\), we set \(\Sigma_{I}=f(E_{I})\), which is a closed analytic subset of \(\Sigma\). If \(f(E_{I})=\Sigma\), then there exists a closed analytic subset \(\Sigma_{I}\subsetneq\Sigma\) such that \(f|_{E_{I}}\colon E_{I}\to\Sigma\) is smooth over \(\Sigma\setminus\Sigma_{I}\). We are going to check that the closed analytic subset \[\Sigma_{0}:=\bigcup_{\emptyset\neq I\subset\{1,\dots,N\}}\Sigma_{I}\] satisfies the desired property. We have \(\Sigma_{0}\neq\Sigma\), by definition. Therefore \(\dim\Sigma_{0}\leq\dim Y-2\) because \(\Sigma\) is irreducible. Then, it suffices to prove that \(\Omega^{1}_{X/Y}(\log E)\) is locally free on \(f^{-1}(Y\setminus\Sigma_{0})\). A point \(x\in E\cap f^{-1}(Y\setminus\Sigma_{0})\) defines a nonempty subset \(I\subset\{1,\dots,l\}\) by \(I=\{i\mid x\in E_{i}\}\). Then \(x\in E_{I}\) and \(f(E_{I})=\Sigma\). Take local coordinates \(x_{1},\dots,x_{n}\) and \(t_{1},\dots,t_{k}\) centered at \(x\) and \(f(x)\) on \(X\) and \(Y\) respectively, satisfying the following conditions: * \(\Sigma=\{t_{1}=0\}\) on \(Y\), and * \(f^{*}t_{1}=x_{1}^{a_{1}}\cdots x_{l}^{a_{l}}\) for some \(a_{1},\dots,a_{l}\in\mathbb{Z}_{>0}\). We set \(f_{i}=f^{*}t_{i}\) for \(i=2,\dots,k\). Then \(E_{I}=\{x_{1}=\cdots=x_{l}=0\}\) and the morphism \((f|_{E_{I}})^{*}\Omega^{1}_{\Sigma}\to\Omega^{1}_{E_{I}}\) is represented by the matrix \[\left(\frac{\partial f_{i}}{\partial x_{j}}(0,\dots,0,x_{l+1},\dots,x_{n}) \right)_{2\leq i\leq k,l+1\leq j\leq n}\] via the isomorphisms \((f|_{E_{I}})^{*}\Omega^{1}_{\Sigma}\simeq\bigoplus_{j=2}^{k}\mathcal{O}_{E_{I }}f^{*}dt_{j}\) and \(\Omega^{1}_{E_{I}}\simeq\bigoplus_{i=l+1}^{n}\mathcal{O}_{E_{I}}dx_{i}\). Since \(x\in f^{-1}(\Sigma\setminus\Sigma_{I})\), the morphism \(f|_{E_{I}}\) is smooth at \(x\). Then \[\operatorname{rank}\left(\frac{\partial f_{i}}{\partial x_{j}}(0)\right)_{2 \leq i\leq k,l+1\leq j\leq n}=k-1,\] which implies that the matrix (3.15) in the proof of Lemma 3.3 is of rank \(k\). 
Therefore the canonical morphism \(f^{*}\Omega^{1}_{Y}(\log\Sigma)_{x}\otimes\mathbb{C}(x)\to\Omega^{1}_{X}(\log E )_{x}\otimes\mathbb{C}(x)\) is injective, by which we conclude that \(\Omega^{1}_{X/Y}(\log E)\) is locally free around \(x\). **3.7**.: For the moment, we assume that there exist another semisimplicial variety \(Z_{\bullet}\) and a morphism of semisimplicial varieties \(\sigma\colon Z_{\bullet}\to(X,D)_{\bullet}\) satisfying the conditions * \(Z_{n}\) is smooth and Kahler, * \(\sigma_{n}\colon Z_{n}\to(X,D)_{n}\) is a projective surjective morphism, * for \(g_{n}:=f_{n}\sigma_{n}=f\varepsilon_{n}\sigma_{n}\colon Z_{n}\to Y\), the divisor \(E_{n}:=(g_{n}^{*}\Sigma)_{\text{red}}\) is a simple normal crossing divisor on \(Z_{n}\) having finitely many irreducible components, and * \(\sigma_{n}\colon Z_{n}\to(X,D)_{n}\) is isomorphic over \(Y^{*}\) for every \(n\in\mathbb{Z}_{\geq 0}\). We obtain an augmentation \(\eta\colon Z_{\bullet}\to X\) by setting \(\eta=\varepsilon\sigma\). The relative log de Rham complex of \(Z_{n}\) over \(Y\) is denoted by \(\Omega_{Z_{n}/Y}(\log E_{n})\). Then \(\{\Omega_{Z_{n}/Y}(\log E_{n})\}_{n\in\mathbb{Z}_{\geq 0}}\) forms a complex on the semisimplicial variety \(Z_{\bullet}\). For an augmentation of a semisimplicial variety, we can define the direct image functor as in [10, 4.1, 4.2] (for the detail, see e.g. [4, 5.1, 5.2], [11, 1.2]). The complex \(R\varepsilon_{*}\Omega_{(X,D)_{\bullet}}\) is isomorphic to \(\varepsilon_{*}\Omega_{(X,D)_{\bullet}}\) defined in the proof of Theorem 1.1 (i) in the derived category because \(\varepsilon_{n}\colon(X,D)_{n}\to X\) is a finite morphism for all \(n\). On the other hand, we obtain a complex \(R\eta_{*}\Omega_{Z_{\bullet}/Y}(\log E_{\bullet})\) on \(X\). Here, we briefly recall the definitions of this complex, of the finite increasing filtration \(L\), and of the finite decreasing filtration \(F\) on it. First, the complex \(R\eta_{*}\Omega_{Z_{\bullet}/Y}(\log E_{\bullet})\) is given as the total single complex associated to the double complex \[\begin{CD}&\vdots\\ @V{}V{}V@V{}V{}V\\ \cdots@>{}>{}>(R(\eta_{p})_{*}\Omega_{Z_{p}/Y}(\log E_{p}))^{q}@>{\delta}>{}>(R( \eta_{p+1})_{*}\Omega_{Z_{p+1}/Y}(\log E_{p+1}))^{q}@>{}>{}>\cdots\\ @V{(-1)^{p}d}V{}V@V{}V{(-1)^{p+1}d}V\\ \cdots@>{}>{}>(R(\eta_{p})_{*}\Omega_{Z_{p}/Y}(\log E_{p}))^{q+1}@>{\delta}>{}>(R( \eta_{p+1})_{*}\Omega_{Z_{p+1}/Y}(\log E_{p+1}))^{q+1}@>{}>{}>\cdots\\ @V{}V{}V@V{}V{}V\\ \vdots\end{CD}\] that is, \[(R\eta_{*}\Omega_{Z_{\bullet}/Y}(\log E_{\bullet}))^{n}=\bigoplus_{p}(R(\eta_{ p})_{*}\Omega_{Z_{p}/Y}(\log E_{p}))^{n-p},\] where \(R(\eta_{p})_{*}\Omega_{Z_{p}/Y}(\log E_{p})\) is regarded as _a genuine complex_ on \(X\) by using the Godement resolutions (cf. [10, 4.1]). The filtrations \(L\) and \(F\) are defined by \[L_{m}(R\eta_{*}\Omega_{Z_{\bullet}/Y}(\log E_{\bullet}))^{n}= \bigoplus_{p\geq-m}(R(\eta_{p})_{*}\Omega_{Z_{p}/Y}(\log E_{p}))^{n-p},\] \[F^{r}(R\eta_{*}\Omega_{Z_{\bullet}/Y}(\log E_{\bullet}))^{n}= \bigoplus_{p}F^{r}(R(\eta_{p})_{*}\Omega_{Z_{p}/Y}(\log E_{p}))^{n-p}\] for all \(m,n,r\). Therefore we have \[(\operatorname{Gr}_{m}^{L}R\eta_{*}\Omega_{Z_{\bullet}/Y}(\log E_{\bullet}),F) \simeq(R(\eta_{-m})_{*}\Omega_{Z_{-m}/Y}(\log E_{-m})[m],F) \tag{3.18}\] in the derived category. Similarly, we have a filtered complex \((R\eta_{*}\mathbb{R}_{Z_{\bullet}},L)\) on \(X\). 
The composite of the canonical morphisms \(\mathbb{R}_{Z_{\bullet}}\to\mathbb{C}_{Z_{\bullet}}\to\Omega_{Z_{\bullet}/Y}( \log E_{\bullet})\) induces a morphism of filtered complexes \((R\eta_{*}\mathbb{R}_{Z_{\bullet}},L)\to(R\eta_{*}\Omega_{Z_{\bullet}/Y}(\log E _{\bullet}),L)\), which is denoted by \(\iota\). From the morphism \(\sigma\colon Z_{\bullet}\to(X,D)_{\bullet}\), we obtain a morphism of bifiltered complexes \[(\varepsilon_{*}\Omega_{(X,D)_{\bullet}/Y},L,F)\to(R\eta_{*}\Omega_{Z_{ \bullet}/Y}(\log E_{\bullet}),L,F),\] which induces a morphism \[\begin{split}\operatorname{Gr}_{m}^{L}\operatorname{Gr}_{F}^{0} \varepsilon_{*}\Omega_{(X,D)_{\bullet}/Y}&\simeq(\varepsilon_{-m })_{*}\mathcal{O}_{(X,D)_{-m}}\\ &\to R(\eta_{-m})_{*}\mathcal{O}_{Z_{-m}}\simeq\operatorname{Gr} _{m}^{L}\operatorname{Gr}_{F}^{0}R\eta_{*}\Omega_{Z_{\bullet}/Y}(\log E_{ \bullet})\end{split} \tag{3.19}\] for all \(m\). Because \(\sigma_{n}\) induces the isomorphism \(\mathcal{O}_{(X,D)_{n}}\xrightarrow{\sim}R(\sigma_{n})_{*}\mathcal{O}_{Z_{n}}\) for all \(n\), we have the isomorphisms \[(\varepsilon_{-m})_{*}\mathcal{O}_{(X,D)_{-m}}\simeq R(\varepsilon_{-m})_{*} \mathcal{O}_{(X,D)_{-m}}\simeq R(\varepsilon_{-m})_{*}R(\sigma_{-m})_{*} \mathcal{O}_{Z_{-m}}\simeq R(\eta_{-m})_{*}\mathcal{O}_{Z_{-m}}\] for all \(m\). Therefore the morphism (3.19) is an isomorphism for all \(m\) in the derived category, which implies \[(\operatorname{Gr}_{F}^{0}\varepsilon_{*}\Omega_{(X,D)_{\bullet}/Y},L)\simeq( \operatorname{Gr}_{F}^{0}R\eta_{*}\Omega_{Z_{\bullet}/Y}(\log E_{\bullet}),L) \tag{3.20}\] in the filtered derived category. Now, we complete the proof of Theorem 1.1. Proof of Theorem 1.1 (ii)-(iv).: First, we prove (ii). The uniqueness of the filtration \(F\) on \({}^{l}{\mathcal{V}}^{k}_{Y^{*}}\) follows from [11, Corollary 5.2]. Therefore we may work locally on \(Y\). Then after shrinking \(Y\) to a relatively compact open subset, we can take \(Z_{\bullet}\) and \(\sigma_{\bullet}\colon Z_{\bullet}\to(X,D)_{\bullet}\) in 3.7 by the theorem of resolution of singularities (see [10, Section 13]). By Lemma 3.6, there exists a closed analytic subset \(\Sigma_{0}\subset\Sigma\) with \(\dim\Sigma_{0}\leq\dim Y-2\) such that \(\Sigma\setminus\Sigma_{0}\) is a smooth divisor in \(Y\setminus\Sigma_{0}\), and that \(\Omega^{1}_{Z_{n}/Y}(\log E_{n})\) is locally free over \(g_{n}^{-1}(Y\setminus\Sigma_{0})\) for all \(n\in{\mathbb{Z}}_{\geq 0}\). By setting \(Y_{0}:=Y\setminus\Sigma_{0}\), we trivially have \(Y^{*}\subset Y_{0}\subset Y\). Now we set \[K(\log)=Rf_{*}R\eta_{*}\Omega_{Z_{\bullet}/Y}(\log E_{\bullet})\] equipped with the induced filtrations \(L\) and \(F\). Then we have \[(K(\log),L,F)|_{Y^{*}}\simeq(K_{\mathcal{O}},L,F)|_{Y^{*}} \tag{3.21}\] because \(\sigma_{\bullet}\) is isomorphic over \(Y^{*}\). We consider the spectral sequence \[E^{p,q}_{r}(K(\log),L)\Rightarrow E^{p+q}(K(\log),L)\] equipped with the inductive filtration \(F\) on \(E^{p,q}_{r}(K(\log),L)\) and denote the morphisms of \(E_{r}\)-terms by \(d^{p,q}_{r}(K(\log),L)\). Then \(d^{p,q}_{r}(K(\log),L)|_{Y^{*}}=0\) for all \(p,q\) and \(r\geq 2\) by (3.21) and (3.3.2). By the exactness of (3.13) over \(Y_{0}\), the morphism \(d^{p,q}_{0}(K(\log),L)|_{Y_{0}}\) is strictly compatible with the filtration \(F\) on \(E^{p,q}_{0}(K(\log),L)|_{Y_{0}}\) for all \(p,q\). 
We have \[(E^{p,q}_{1}(K(\log),L),F)\simeq(R^{q}(g_{p})_{*}\Omega_{Z_{p}/Y}(\log E_{p}),F)\] by (3.18), and then \[(E^{p,q}_{1}(K(\log),L),F)|_{Y_{0}}\simeq(^{l}E^{p,q}_{1}(K(\log),L)|_{Y^{*}}, F)|_{Y_{0}}\simeq(^{l}E^{p,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}},F)|_{Y_{0}}\] by (3.21), Lemma 3.3, and the uniqueness of the filtrations in [11, Corollary 5.2]. Under these isomorphisms, \[d^{p,q}_{1}(K(\log),L)|_{Y_{0}}=(^{l}d ^{p,q}_{1})|_{Y_{0}}\] by Lemma 3.1 because \[d^{p,q}_{1}(K(\log),L)|_{Y^{*}}=d^{p,q}_{1}(K_{\mathcal{O}},L)|_{Y^{*}}=(^{l}d ^{p,q}_{1})|_{Y^{*}}\] by (3.21). Therefore \(d^{p,q}_{1}(K(\log),L)|_{Y_{0}}\) is strictly compatible with \(F\) and \[(E^{p,q}_{2}(K(\log),L),F)|_{Y_{0}}\simeq(^{l}E^{p,q}_{2}(K_{\mathcal{O}},L)|_{ Y^{*}},F)|_{Y_{0}} \tag{3.22}\] for all \(p,q\) by the decomposition (3.9). Because \(d^{p,q}_{r}(K(\log),L)|_{Y^{*}}=0\) for \(r\geq 2\), we obtain \(d^{p,q}_{r}(K(\log),L)|_{Y_{0}}=0\) for \(r\geq 2\) inductively by using Lemma 3.1 as before. Thus \(d^{p,q}_{r}(K(\log),L)|_{Y_{0}}\) is strictly compatible with \(F\) for all \(r\geq 0\). Then the lemma on two filtrations (see e.g. [10, 7.2], [12, Theorem 3.12]) implies \[(E^{p,q}_{2}(K(\log),L),F)|_{Y_{0}}\simeq(\operatorname{Gr}^{L}_{-p}H^{p+q} (K(\log)),F)|_{Y_{0}} \tag{3.23}\] and \[H^{k}(\operatorname{Gr}^{r}_{F}K(\log))|_{Y_{0}}\simeq\operatorname{Gr}^{r}_{F}H ^{k}(K(\log))|_{Y_{0}} \tag{3.24}\] for all \(k,p,q,r\). Hence \(\operatorname{Gr}^{p}_{F}\operatorname{Gr}^{L}_{m}H^{k}(K(\log))|_{Y_{0}}\) is a locally free \({\mathcal{O}}_{Y_{0}}\)-module of finite rank and \(\operatorname{Gr}^{L}_{m}H^{k}(K(\log))|_{Y_{0}}\) is the lower canonical extension of \[\operatorname{Gr}^{L}_{m}H^{k}(K(\log))|_{Y^{*}}\simeq\operatorname{Gr}^{L}_{m }H^{k}(K_{\mathcal{O}})|_{Y^{*}}\simeq\operatorname{Gr}^{L}_{m}{\mathcal{V}}^{k}_ {Y^{*}}\] for all \(k,m,p\) by (3.22) and (3.23). Therefore \(H^{k}(K(\log))|_{Y_{0}}\) is the lower canonical extension of \[H^{k}(K(\log))|_{Y^{*}}\simeq H^{k}(K_{\mathcal{O}})|_{Y^{*}}\simeq{\mathcal{V}} ^{k}_{Y^{*}}\] for all \(k\). Thus we obtain \[(H^{k}(K(\log)),L)|_{Y_{0}}\simeq({}^{l}{\mathcal{V}}^{k}_{Y^{*}},L)|_{Y_{0}} \tag{3.25}\] as filtered \({\mathcal{O}}_{Y_{0}}\)-modules by the uniqueness of the lower canonical extensions and of the filtrations by subbundles. Via the isomorphism above, we obtain a filtration \(F\) on \({}^{l}{\mathcal{V}}^{k}_{Y^{*}}|_{Y_{0}}\) satisfying the two conditions in Theorem 1.1 (ii) on \(Y_{0}\). Then Lemma 1.11.2 in [Ka] together with Schmid's nilpotent orbit theorem (see [Sc, (4.12)]) for each \(\operatorname{Gr}^{L}_{m}{\mathcal{V}}^{k}_{Y^{*}}\) implies the conclusion of Theorem 1.1 (ii) on the whole \(Y\). Next, we will prove (iii). We return to the spectral sequence (3.5). As already mentioned in the proof of Theorem 1.3, \(E_{2}^{p,q}(\operatorname{Gr}^{0}_{F}K_{\mathcal{O}},L)\) is a locally free \({\mathcal{O}}_{Y}\)-module of finite rank for all \(p,q\). Because the spectral sequence (3.5) degenerates at \(E_{2}\)-terms by Theorem 1.3, we have \[\operatorname{Gr}^{L}_{m}R^{k}f_{*}{\mathcal{O}}_{X}(-D)\simeq E_{\infty}^{- m,k+m}(\operatorname{Gr}^{0}_{F}K_{\mathcal{O}},L)\simeq E_{2}^{-m,k+m}( \operatorname{Gr}^{0}_{F}K_{\mathcal{O}},L)\] for all \(m,k\). Thus \(\operatorname{Gr}^{L}_{m}R^{k}f_{*}{\mathcal{O}}_{X}(-D)\) is locally free of finite rank for all \(k,m\), and then so is \(R^{k}f_{*}{\mathcal{O}}_{X}(-D)\). 
Now it suffices to prove that the isomorphism (3.4) can be extended to an isomorphism \[\operatorname{Gr}^{0}_{F}({}^{l}{\mathcal{V}}^{k}_{Y^{*}})\simeq R^{k}f_{*}{ \mathcal{O}}_{X}(-D) \tag{3.26}\] for every \(k\). The extension above is unique by Lemma 3.1 because \(\operatorname{Gr}^{0}_{F}({}^{l}{\mathcal{V}}^{k}_{Y^{*}})\) is also a locally free \({\mathcal{O}}_{Y}\)-module of finite rank by Theorem 1.1 (ii). Therefore we may work in the situation 3.7 as above. Then we already have the isomorphisms \[\operatorname{Gr}^{0}_{F}({}^{l}{\mathcal{V}}^{k}_{Y^{*}})|_{Y_{0}}\simeq \operatorname{Gr}^{0}_{F}H^{k}(K(\log))|_{Y_{0}}\simeq H^{k}(\operatorname{Gr} ^{0}_{F}K(\log))|_{Y_{0}} \tag{3.27}\] by (3.24) and (3.25). On the other hand, \[\operatorname{Gr}^{0}_{F}K(\log)\simeq Rf_{*}\operatorname{Gr}^{0}_{F}R\eta_{* }\Omega_{Z_{\bullet}/Y}(\log E_{\bullet})\simeq Rf_{*}\operatorname{Gr}^{0}_{F }\varepsilon_{*}\Omega_{(X,D)_{\bullet}/Y}\simeq\operatorname{Gr}^{0}_{F}K_{ \mathcal{O}} \tag{3.28}\] by (3.20). Therefore we have \[\operatorname{Gr}^{0}_{F}({}^{l}{\mathcal{V}}^{k}_{Y^{*}})|_{Y_{0}}\simeq R^{ k}f_{*}{\mathcal{O}}_{X}(-D)|_{Y_{0}} \tag{3.29}\] by (3.27), (3.28) and (3.1.2), which gives an extension of the isomorphism (3.4) over \(Y_{0}\). Then the isomorphism (3.29) can be extended to the desired isomorphism (3.26) on the whole \(Y\) because \(\dim\Sigma_{0}\leq\dim Y-2\) and because the both sides of (3.26) are locally free of finite rank on \(Y\). By Grothendieck duality (see [RRV]), we obtain (iv) from (iii). The following theorem is an easy consequence of the proof of Theorem 1.3. We will use it in the proof of Theorem 1.4. **Theorem 3.8**.: _In Theorem 1.1, for every \(i\), there exists a finite filtration of locally free sheaves_ \[0={\mathcal{E}}^{i}_{0}\subset{\mathcal{E}}^{i}_{1}\subset\cdots\subset{ \mathcal{E}}^{i}_{l_{i}}=R^{i}f_{*}\omega_{X/Y}(D)\] _such that_ \[{\mathcal{E}}^{i}_{j+1}/{\mathcal{E}}^{i}_{j}\] _is isomorphic to a direct summand of_ \[\bigoplus_{\text{\rm finite}}R^{\alpha}f_{*}\omega_{S_{\beta}/Y},\] _where \(\alpha\) is a nonnegative integer and \(S_{\beta}\) is a stratum of \((X,D)\), for every \(j\)._ Proof.: By Theorem 1.3, there exists a finite filtration of locally free sheaves \[0=\mathcal{F}_{0}^{d-i}\subset\mathcal{F}_{1}^{d-i}\subset\cdots\subset\mathcal{ F}_{l_{i}}^{d-i}=R^{d-i}f_{*}\mathcal{O}_{X}(-D)\] such that \[\mathcal{F}_{j+1}^{d-i}/\mathcal{F}_{j}^{d-i}\] is isomorphic to a direct summand of \[\bigoplus_{\text{finite}}R^{d-i}f_{*}\mathcal{O}_{S_{\beta}},\] where \(S_{\beta}\) is a stratum of \((X,D)\), for every \(j\). We put \[\mathcal{E}_{j}^{i}:=\mathcal{H}om_{\mathcal{O}_{Y}}(\mathcal{O}_{Y}/\mathcal{ F}_{l_{i}-j}^{d-i},\mathcal{O}_{Y})\] for every \(j\). Then, by Grothendieck duality (see [RRV]), we obtain a desired filtration of \(R^{i}f_{*}\omega_{X/Y}(D)\). We close this section with the proof of Theorem 1.2. Proof of Theorem 1.2.: This theorem is obvious by Theorem 1.1 (iv) and the Fujita-Zucker-Kawamata semipositivity theorem. For the details of the Fujita-Zucker-Kawamata semipositivity theorem, see, for example, [FF1, Section 5], [FFS, Corollary 2], [FF2], and so on. We note that Theorems 1.1 and 1.2 have already played a crucial role when \(f\colon(X,D)\to Y\) is algebraic. We recommend that the interested reader looks at [Fn4], [Fn5], [Fn6], [Fn7], [FFL], [FH], and so on. ## 4. Proof of Theorem 1.4 In this section, we will prove Theorem 1.4 by using Theorem 3.8. 
In Section 5, we will see that Theorem 1.6 follows from Theorem 1.4. Proof of Theorem 1.4.: In Step 1 and Step 2, we will prove (i) and (ii), respectively. **Step 1.** In this step, we will prove (i). We take an arbitrary point \(P\in Y\). It is sufficient to prove (i) around \(P\). By Lemma 2.8, we may assume that \((X,D)\) is an analytic globally embedded simple normal crossing pair and that there exists the following commutative diagram: where \(M\) is the ambient space of \((X,D)\), such that \(q_{M}\) is projective and \(\iota_{Y}(P)=0\in\Delta^{m}\). By taking a suitable resolution of singularities of \(Y\) (see [BM, Sections 12 and 13]), there exist a projective bimeromorphic morphism \(\psi\colon Y^{\prime}\to Y\) from a smooth complex variety \(Y^{\prime}\) and a simple normal crossing divisor \(\Sigma^{\prime}\) on \(Y^{\prime}\) such that every stratum of \((X,D)\) is smooth over \(Y\setminus\psi(\Sigma^{\prime})\). Then, by taking a suitable resolution of singularities of \(M\) (see [BM, Sections 12 and 13]) and applying Lemma 2.7, we may assume that \[f^{\prime}\colon X\stackrel{{ f}}{{\longrightarrow}}Y\stackrel{{ \psi^{-1}}}{{\dashrightarrow}}Y^{\prime}\] is a projective morphism. Hence we have the following commutative diagram: such that every stratum of \((X,D)\) is smooth over \(Y^{\prime}\setminus\Sigma^{\prime}\). By Theorem 3.8, \(R^{q}f^{\prime}_{*}\omega_{X/Y^{\prime}}(D)\) is locally free and has a finite filtration as in Theorem 3.8. By Lemma 2.11, we see that \(R^{q}f_{*}\omega_{X}(D)=\psi_{*}R^{q}f^{\prime}_{*}\omega_{X}(D)\) is torsion-free. This is what we wanted. **Step 2.** In this step, we will prove (ii). We take an arbitrary point \(P\in Z\). It is sufficient to prove (ii) around \(P\). As in Step 1, after shrinking \(Z\) suitably, by Lemma 2.8, a suitable resolution of singularities (see [BM, Sections 12 and 13]), and Lemma 2.7, we may assume that there exists the following commutative diagram: such that \(\iota_{Z}(P)=0\in\Delta^{m}\). By Theorem 3.8 and Lemma 2.11, we can reduce the problem to the case where \(X\) is smooth and \(D=0\). In that case, the desired vanishing theorem follows from Theorem 2.10. We finish the proof of Theorem 1.4. **Remark 4.1**.: By the above proof, we see that Theorem 1.4 (ii) holds under a weaker assumption that \(\mathcal{A}\) is \(\pi\)-nef and \(\pi\)-big over \(Z\) (see Theorem 2.10). ## 5. Proof of Theorem 1.6 In this section, we will prove Theorem 1.6 by using Theorem 1.4. As we mentioned before, Theorem 1.6 (iii) is an easy consequence of Theorem 1.6 (i) and (ii). Proof of Theorem 1.6.: In Step 1, we will prove Theorem 1.6 (i). Then, in Steps 2 and 3, we will prove Theorem 1.6 (ii) and (iii), respectively. **Step 1.** In this step, we will prove Theorem 1.6 (i). By replacing \(Y\) with \(f(X)\), we may assume that \(f(X)=Y\). Let \(P\in Y\) be an arbitrary point. It is sufficient to prove the statement after shrinking \(Y\) around \(P\) suitably. By Lemma 2.8, we may assume that \((X,D)\) is an analytic globally embedded simple normal crossing pair and that there exists the following commutative diagram: where \(M\) is the ambient space of \((X,D)\), such that \(q_{M}\) is projective and \(\iota_{Y}(P)=0\in\Delta^{m}\). By using Lemma 2.9 finitely many times, we can decompose \(X=X^{\prime}+X^{\prime\prime}\) as follows: \(X^{\prime}\) is the union of all strata of \((X,D)\) that are not mapped onto irreducible components of \(Y=f(X)\), and \(X^{\prime\prime}=X-X^{\prime}\). 
We put \[K_{X^{\prime}}+D_{X^{\prime}}:=(K_{X}+D)|_{X^{\prime}}\] and \[K_{X^{\prime\prime}}+D_{X^{\prime\prime}}:=(K_{X}+D)|_{X^{\prime\prime}}-X^{ \prime}|_{X^{\prime\prime}}.\] We note that \((X^{\prime\prime},D_{X^{\prime\prime}})\) is an analytic globally embedded simple normal crossing pair such that \(D_{X^{\prime\prime}}\) is reduced and that every stratum of \((X^{\prime\prime},D_{X^{\prime\prime}})\) is mapped onto some irreducible component of \(Y\). We consider the following short exact sequence: \[0\to\mathcal{O}_{X^{\prime\prime}}(K_{X^{\prime\prime}}+D_{X^{\prime\prime}}) \to\mathcal{O}_{X}(K_{X}+D)\to\mathcal{O}_{X^{\prime}}(K_{X^{\prime}}+D_{X^{ \prime}})\to 0.\] By Theorem 1.4 (i), every associated subvariety of \(R^{q}f_{*}\mathcal{O}_{X^{\prime\prime}}(K_{X^{\prime\prime}}+D_{X^{\prime \prime}})\) is an irreducible component of \(Y\) for every \(q\). Note that every associated subvariety of \(R^{q}f_{*}\mathcal{O}_{X^{\prime}}(K_{X^{\prime}}+D_{X^{\prime}})\) is contained in \(f(X^{\prime})\) for every \(q\). Thus, the connecting homomorphisms \[\delta\colon R^{q}f_{*}\mathcal{O}_{X^{\prime}}(K_{X^{\prime}}+D_{X^{\prime}} )\to R^{q+1}f_{*}\mathcal{O}_{X^{\prime\prime}}(K_{X^{\prime\prime}}+D_{X^{ \prime\prime}})\] are zero for all \(q\). Hence we obtain the following short exact sequence \[0\to R^{q}f_{*}\mathcal{O}_{X^{\prime\prime}}(K_{X^{\prime\prime}}+D_{X^{ \prime\prime}})\to R^{q}f_{*}\mathcal{O}_{X}(K_{X}+D)\to R^{q}f_{*} \mathcal{O}_{X^{\prime}}(K_{X^{\prime}}+D_{X^{\prime}})\to 0 \tag{5.1}\] for every \(q\). By induction on \(\dim f(X)\), every associated subvariety of \(R^{q}f_{*}\mathcal{O}_{X^{\prime}}(K_{X^{\prime}}+D_{X^{\prime}})\) is the \(f\)-image of some stratum of \((X^{\prime},D_{X^{\prime}})\) for every \(q\). Therefore, every associated subvariety of \(R^{q}f_{*}\mathcal{O}_{X}(K_{X}+D)\) is the \(f\)-image of some stratum of \((X,D)\) for every \(q\) by (5.1). **Step 2.** In this step, we will prove Theorem 1.6 (ii). We may assume that \(f(X)=Y\) and \(\pi\circ f(X)=Z\). Let \(P\in Z\) be an arbitrary point. It is sufficient to prove the desired vanishing theorem after shrinking \(Z\) around \(P\) suitably. As in Step 1, by Lemma 2.8, we have the following commutative diagram: where \(M\) is the ambient space of \((X,D)\), such that \(q_{M}\) is projective and \(\iota_{Z}(P)=0\in\Delta^{m}\). By the same argument as in Step 1, we obtain \[0\to R^{q}f_{*}\mathcal{O}_{X^{\prime\prime}}(K_{X^{\prime\prime}}+D_{X^{ \prime\prime}})\to R^{q}f_{*}\mathcal{O}_{X}(K_{X}+D)\to R^{q}f_{*} \mathcal{O}_{X^{\prime}}(K_{X^{\prime}}+D_{X^{\prime}})\to 0\] for every \(q\). By applying Theorem 1.4 (ii) to every connected component of \(X^{\prime\prime}\), we see that \[R^{p}\pi_{*}\left(\mathcal{A}\otimes R^{q}f_{*}\mathcal{O}_{X^{\prime\prime}}( K_{X^{\prime\prime}}+D_{X^{\prime\prime}})\right)=0\] holds for every \(p>0\). By induction on \(\dim f(X)\), we obtain \[R^{p}\pi_{*}\left(\mathcal{A}\otimes R^{q}f_{*}\mathcal{O}_{X^{\prime}}(K_{X^{ \prime}}+D_{X^{\prime}})\right)=0\] for every \(p>0\). This implies \[R^{p}\pi_{*}\left(\mathcal{A}\otimes R^{q}f_{*}\mathcal{O}_{X}(K_{X}+D)\right)=0\] for every \(p>0\). This is what we wanted. **Step 3.** In this step, we will prove Theorem 1.6 (iii). Since we have already proved the strict support condition (see (i)) and the vanishing theorem (see (ii)) in Steps 1 and 2, respectively, the proof of [10, Theorem 3.1 (iii)] works. Hence we obtain the desired injectivity in (iii). We finish the proof of Theorem 1.6. 
**Remark 5.1**.: Theorem 1.6 (ii) holds under a weaker assumption that \(\mathcal{A}\) is nef and log big over \(Z\) with respect to \(f\colon(X,D)\to Y\). We can easily check it by the above proof of Theorem 1.6 (ii) and Remark 4.1. We do not discuss the details here because we already know a more general statement, that is, the vanishing theorem of Reid-Fukuda type (see Theorem 1.8). ## 6. Supplement to [11] In this section, we give a remark on the construction of the cohomological \(\mathbb{Q}\)-mixed Hodge complex \(((A_{\mathbb{Q}},W),(A_{\mathbb{C}},W,F))\) in [11, p.536]. More precisely, we will present a new construction of \((A_{\mathbb{Q}},W)\) here. In the context of log geometry, such a construction originated in [11] and has been used in other articles (e.g. [FN], [12] and so on). For the case of a semistable reduction, a new construction of \((A_{\mathbb{Q}},W)\), which is similar to [11], is given in [13, 11.2.6 The Rational Structure]. (For the case of a semistable morphism over the polydisc, see e.g. [12].) Here we will see that the construction in [12] works in the situation of [11]. **6.1**.: Let \(f\colon X\to\Delta\) be a proper surjective morphism from a smooth complex variety \(X\) to the unit disc \(\Delta\) satisfying the conditions * \(f\) is smooth over \(\Delta^{*}=\Delta\setminus\{0\}\), and * \(\operatorname{Supp}f^{-1}(0)\) is a simple normal crossing divisor on \(X\) as in [11, (2.1) Notations]. Note that \(f^{-1}(0)\) is _not_ assumed to be reduced. We fix \(N\in\mathbb{Z}_{>0}\), which is a multiple of all the multiplicities of the irreducible components of \(\operatorname{Supp}f^{-1}(0)\), and consider the morphism \(\sigma\colon\Delta\to\Delta\) given by \(\sigma(t)=t^{N}\). We define \(\widetilde{X},\pi\) and \(\widetilde{f}\) by the commutative diagram where \(\nu\) is the normalization. We set \(E=\operatorname{Supp}\widetilde{f}^{-1}(0)\), which is an effective Cartier divisor on \(\widetilde{X}\). The irreducible decomposition of \(E\) is written as \(E=\bigcup_{i=1}^{l}E_{i}\). The closed immersion \(E_{i}\hookrightarrow\widetilde{X}\) is denoted by \(a_{i}\). **6.2**.: We recall the local description of \(\widetilde{X}\) and \(\widetilde{f}\) given in the proof of [11, (2.2) Lemma]. For any point of \(\widetilde{X}\), there exist an open neighborhood \(\widetilde{U}\) in \(\widetilde{X}\), \(d_{1},\dots,d_{k}\in\mathbb{Z}_{>0}\) with \(\gcd(d_{1},\dots,d_{k})=1\), and \(e\in\mathbb{Z}_{>0}\cap(\bigcap_{i=1}^{k}d_{i}\mathbb{Z})\) with \(N\in e\mathbb{Z}\) such that \(\widetilde{U}\) and \(\widetilde{f}|_{\widetilde{U}}\) are described by using \(d_{1},\dots,d_{k},e\) as follows. By setting \(c_{i}:=e/d_{i}\in\mathbb{Z}_{>0}\) and \(G:=\bigoplus_{i=1}^{k}\mathbb{Z}/c_{i}\mathbb{Z}\), the kernel of the morphism \[G=\bigoplus_{i=1}^{k}\mathbb{Z}/c_{i}\mathbb{Z}\ni(b_{1},\dots,b_{k})\mapsto\sum_{ i=1}^{k}d_{i}b_{i}\in\mathbb{Z}/e\mathbb{Z}\] is denoted by \(H\). The finite abelian group \(G\) acts on the polydisc \(\Delta^{n}\) by \[(b_{1},\dots,b_{k})\cdot y_{i}=\begin{cases}\exp(2\pi\sqrt{-1}b_{i}/c_{i})y_{i }&\text{ for }1\leq i\leq k\\ y_{i}&\text{ for }k+1\leq i\leq n,\end{cases}\] where \((y_{1},\dots,y_{n})\) is the coordinate of \(\Delta^{n}\). Then \(\widetilde{U}\simeq\Delta^{n}/H\) and \(\widetilde{f}^{*}t=y_{1}\cdots y_{k}\), where \(t\) is the coordinate of \(\Delta\). Note that \(y_{1}\cdots y_{k}\) is \(H\)-invariant. 
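For instance, in a toy case (ours, not taken from the references cited above): let \(k=2\), \(d_{1}=d_{2}=1\) and \(e=N=2\). Then \(c_{1}=c_{2}=2\), \(G=\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}\), and \(H\) is the kernel of \((b_{1},b_{2})\mapsto b_{1}+b_{2}\in\mathbb{Z}/2\mathbb{Z}\), that is, \(H=\{(0,0),(1,1)\}\simeq\mathbb{Z}/2\mathbb{Z}\) acting on \(\Delta^{2}\) by \((y_{1},y_{2})\mapsto(-y_{1},-y_{2})\). The quotient \(\Delta^{2}/H\) is an ordinary double point, on which \(y_{1}y_{2}\) is indeed invariant; this also illustrates why \(\widetilde{X}\) is in general only a \(V\)-manifold. 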
Moreover, \(U=\pi(\widetilde{U})\) is an open subset of \(X\), and we also have \(U\simeq\Delta^{n}/G\) and \(f^{*}t=(y_{1}\cdots y_{k})^{N}\). Here we note that \((y_{1}\cdots y_{k})^{N}\) is \(G\)-invariant because \(N\in e\mathbb{Z}\). The \(G\)-invariant functions \(y_{1}^{c_{1}},\dots,y_{k}^{c_{k}},y_{k+1},\dots,y_{n}\) give us a coordinate on \(U\). From the local description above, \(\widetilde{X}\) is trivially a \(V\)-manifold. We can easily see that \(E_{i}\) is a reduced Cartier divisor on \(\widetilde{X}\setminus\bigcup_{j\neq i}E_{j}\). Moreover, \(E_{i}\) is locally irreducible at any point because \(\pi(E_{i})\) is an irreducible component of \(\operatorname{Supp}f^{-1}(0)\) and because \(\operatorname{Supp}f^{-1}(0)\) is a simple normal crossing divisor on \(X\). **6.3**.: In the situation 6.1, the log structure on \(\widetilde{X}\) associated to the effective divisor \(E\) is denoted by \(\mathcal{M}\), that is, \[\mathcal{M}:=\mathcal{O}_{\widetilde{X}}\cap j_{*}\mathcal{O}_{\widetilde{X} \setminus E}^{*}\] in \(j_{*}\mathcal{O}_{\widetilde{X}\setminus E}\), where \(j\) denotes the open immersion \(\widetilde{X}\setminus E\hookrightarrow\widetilde{X}\). The abelian sheaf associated to the monoid sheaf \(\mathcal{M}\) is denoted by \(\mathcal{M}^{\operatorname{gp}}\). By using the fact that \(E_{i}\) is locally irreducible, a morphism of monoid sheaves \(\mathcal{M}\to(a_{i})_{*}\mathbb{N}_{E_{i}}\) can be defined by \[\mathcal{M}=\mathcal{O}_{\widetilde{X}}\cap j_{*}\mathcal{O}_{\widetilde{X} \setminus E}^{*}\ni a\mapsto\operatorname{ord}_{E_{i}}(a)\in(a_{i})_{*} \mathbb{N}_{E_{i}} \tag{6.1}\] for any \(i\), where \(\operatorname{ord}_{E_{i}}\) denotes the vanishing order of a holomorphic function on \(\widetilde{X}\) along the divisor \(E_{i}\). The direct sum of the morphisms (6.1) for all \(i\) induces a morphism \[\mathcal{M}^{\operatorname{gp}}\to\bigoplus_{i=1}^{l}(a_{i})_{*}\mathbb{Z}_{E _{i}}, \tag{6.2}\] which fits in an exact sequence \[0\to\mathcal{O}_{\widetilde{X}}^{*}\to\mathcal{M}^{\operatorname{gp}}\to \bigoplus_{i=1}^{l}(a_{i})_{*}\mathbb{Z}_{E_{i}} \tag{6.3}\] by definition. The following is a key lemma for the construction of \((A_{\mathbb{Q}},W)\). **Lemma 6.4**.: _We obtain the exact sequence_ \[0\to\mathcal{O}_{\widetilde{X}}^{*}\otimes_{\mathbb{Z}}\mathbb{Q}\to\mathcal{ M}^{\operatorname{gp}}\otimes_{\mathbb{Z}}\mathbb{Q}\to\bigoplus_{i=1}^{l}(a_{i})_{*} \mathbb{Q}_{E_{i}}\to 0\] _by tensoring (6.3) with \(\mathbb{Q}\)._ Proof.: We may work in the local situation described in 6.2. Since \(y_{i}^{c_{i}}\) is \(H\)-invariant, it gives us a holomorphic function on \(\widetilde{U}\) for \(i=1,\dots,k\). We may assume that \(E_{i}=\operatorname{Supp}\{y_{i}^{c_{i}}=0\}\) for \(1\leq i\leq k\) and \(E_{i}\cap\widetilde{U}=\emptyset\) for \(k+1\leq i\leq l\) by changing the indices. Because \(E_{i}\) is the zero set of \(\widetilde{f}^{*}t=y_{1}\cdots y_{k}\) on \(\widetilde{U}\setminus\bigcup_{j\neq i}(E_{j}\cap\widetilde{U})\), the image of \(y_{i}^{c_{i}}\in\mathcal{M}\subset\mathcal{M}^{\operatorname{gp}}\) by the morphism (6.2) is \((0,\dots,0,c_{i},0,\dots,0)\in\bigoplus_{j=1}^{l}(a_{j})_{*}\mathbb{Z}_{E_{j}}\), where \(c_{i}\) is on the \(i\)-th entry. Thus we obtain the conclusion. **6.5**.: We briefly recall the constructions of the Koszul complexes and related objects in [Fs2]. For the detail, see [Fs2, Sections 1 and 2] (cf. [I], [St3] and so on). 
A morphism of abelian sheaves \(\mathbf{e}\colon\mathcal{O}_{\widetilde{X}}\to\mathcal{M}^{\operatorname{gp}}\) is defined as the composite of the exponential map \[\mathcal{O}_{\widetilde{X}}\ni a\mapsto e^{2\pi\sqrt{-1}a}\in\mathcal{O}_{ \widetilde{X}}^{*}\] and the inclusion \(\mathcal{O}_{\widetilde{X}}^{*}\hookrightarrow\mathcal{M}^{\operatorname{gp}}\). From the morphism \(\mathbf{e}\otimes\operatorname{id}\colon\mathcal{O}_{\widetilde{X}}\simeq \mathcal{O}_{\widetilde{X}}\otimes\mathbb{Q}\to\mathcal{M}^{\operatorname{gp}} \otimes\mathbb{Q}\), \(1\in\Gamma(X,\mathbb{Q})\) which is a global section of the kernel of \(\mathbf{e}\otimes\operatorname{id}\), and a subsheaf \(\mathcal{O}_{\widetilde{X}}^{*}\otimes\mathbb{Q}\subset\mathcal{M}^{ \operatorname{gp}}\otimes\mathbb{Q}\), we obtain a complex of \(\mathbb{Q}\)-sheaves on \(\widetilde{X}\) \[\operatorname{Kos}(\mathcal{M}):=\operatorname{Kos}(\mathbf{e}\otimes \operatorname{id};\infty;1)\] equipped with a finite increasing filtration \(W:=W(\mathcal{O}_{\widetilde{X}}^{*}\otimes\mathbb{Q})\) as in [Fs2, Definition 1.8]. By replacing \(\mathcal{M}^{\operatorname{gp}}\) by \(\mathcal{O}_{\widetilde{X}}^{*}\), we obtain a complex, denoted by \(\operatorname{Kos}(\mathcal{O}_{\widetilde{X}}^{*})\), by the same way as above. The global section \(\widetilde{f}^{*}t\in\Gamma(\widetilde{X},\mathcal{M})\) defines a morphism of complexes \[(\widetilde{f}^{*}t)\wedge\colon\operatorname{Kos}(\mathcal{M})\to \operatorname{Kos}(\mathcal{M})[1],\] which sends \(W_{m}\operatorname{Kos}(\mathcal{M})^{n}\) to \(W_{m+1}\operatorname{Kos}(\mathcal{M})^{n+1}\) as in [Fs2, (1.11) and (1.12)]. On the other hand, we have a morphism of complexes of \(\mathbb{Q}\)-sheaves \[\psi\colon\operatorname{Kos}(\mathcal{M})\to\widetilde{\Omega}_{\widetilde{X} }(\log E)\] as in [Fs2, (2.4)], which preserves the filtration \(W\) on the both sides. Moreover, it can be checked easily from the definition that the diagram \[\begin{CD}\operatorname{Kos}(\mathcal{M})@>{\psi}>{}>\widetilde{\Omega}_{ \widetilde{X}}(\log E)\\ @V{(\widetilde{f}^{*}t)\wedge}V{}V@V{}V{\theta\wedge}V\\ \operatorname{Kos}(\mathcal{M})[1]@>{}>{(2\pi\sqrt{-1})\psi}>\widetilde{\Omega}_{ \widetilde{X}}(\log E)[1]\end{CD} \tag{6.4}\] is commutative, where \(\theta=\widetilde{f}^{*}(dt/t)\in\widetilde{\Omega}_{\widetilde{X}}^{1}(\log E)\). For \(\operatorname{Kos}(\mathcal{M})\) and \(\psi\) above, we have the following lemmas. **Lemma 6.6**.: _In the situation above, we set_ \[E^{(k)}=\coprod_{1\leq i_{1}<\cdots<i_{k}\leq l}E_{i_{1}}\cap\cdots\cap E_{i_{ k}}\] _for \(k\in\mathbb{Z}_{>0}\). Moreover, we set \(E^{(0)}=\widetilde{X}\). The natural morphism \(E^{(k)}\to\widetilde{X}\) is denoted by \(a^{(k)}\) for \(k\in\mathbb{Z}_{\geq 0}\). Then we have an isomorphism_ \[(a^{(m)})_{*}\mathbb{Q}_{E^{(m)}}[-m]\simeq\operatorname{Gr}_{m}^{W} \operatorname{Kos}(\mathcal{M})\] _for all \(m\in\mathbb{Z}\)._ Proof.: We have an isomorphism \[\bigwedge^{m}(\mathcal{M}^{\mathrm{gp}}\otimes\mathbb{Q}/\mathcal{O}^{*}_{\widetilde {X}}\otimes\mathbb{Q})\otimes\operatorname{Kos}(\mathcal{O}^{*}_{\widetilde{X}}) [-m]\simeq\operatorname{Gr}^{W}_{m}\operatorname{Kos}(\mathcal{M})\] by [10, Proposition 1.10], and a quasi-isomorphism \(\mathbb{Q}_{\widetilde{X}}\to\operatorname{Kos}(\mathcal{O}^{*}_{\widetilde{X}})\) by [10, Corollary 1.15]. Therefore we obtain the conclusion by Lemma 6.4. 
**Lemma 6.7**.: _In the situation above, we have the commutative diagram_ \[\begin{CD}(a^{(m)})_{*}\mathbb{Q}_{E^{(m)}}[-m]@>{(2\pi\sqrt{-1})^{-m}}>{}>(a ^{(m)})_{*}\widetilde{\Omega}_{E^{(m)}}[-m]\\ @V{\simeq}V{}V@V{}V{\simeq}V\\ \operatorname{Gr}^{W}_{m}\operatorname{Kos}(\mathcal{M})@>{}>{\operatorname{ Gr}^{W}_{m}\widetilde{\Omega}_{\widetilde{X}}(\log E)}\end{CD} \tag{6.5}\] _where \(\iota\) is the natural morphism induced from the inclusion \(\mathbb{Q}\to\mathcal{O}_{E^{(m)}}\), the left vertical arrow is the isomorphism in Lemma 6.6, and the right vertical arrow is the residue isomorphism in [10, (1.18) Definition and (1.19) Lemma]. In particular, the morphism_ \[\operatorname{Kos}(\mathcal{M})\otimes\mathbb{C}\to\widetilde{\Omega}_{ \widetilde{X}}(\log E)\] _induced by \(\psi\) is a filtered quasi-isomorphism with respect to \(W\) on the both sides._ Proof.: The commutativity of the diagram (6.5) can be checked by the direct computation from the definition of \(\psi\) (cf. [10, (2.4)]). Then the latter conclusion follows from [10, (1.9) Corollary]. Once we obtain these two lemmas, it is more or less clear that the construction, parallel to \(A_{\mathbb{C}}\) in [10, (4.14) and (4.17)] and [10, (2.8)], works for \(A_{\mathbb{Q}}\). **Definition 6.8**.: In the situation 6.1, a filtered complex of \(\mathbb{Q}\)-sheaves \((A_{\mathbb{Q}},W)\) on \(\widetilde{X}\) is defined by \[A_{\mathbb{Q}}^{n}:=\bigoplus_{q\geq 0}\operatorname{Kos}(\mathcal{M} )^{n+1}/W_{q}\operatorname{Kos}(\mathcal{M})^{n+1}\] \[W_{m}A_{\mathbb{Q}}^{n}:=\bigoplus_{q\geq 0}W_{m+2q+1} \operatorname{Kos}(\mathcal{M})^{n+1}/W_{q}\operatorname{Kos}(\mathcal{M})^{n +1}\] with the differential \(-d-(\widetilde{f}^{*}t)\wedge\), where \(d\) denotes the differential of the complex \(\operatorname{Kos}(\mathcal{M})\). The direct sum of the morphisms of \(\mathbb{Q}\)-sheaves \[(2\pi\sqrt{-1})^{q+1}\psi\colon\operatorname{Kos}(\mathcal{M})^{n+1}/W_{q} \operatorname{Kos}(\mathcal{M})^{n+1}\to\widetilde{\Omega}^{n+1}_{\widetilde{X }}(\log E)/W_{q}\widetilde{\Omega}^{n+1}_{\widetilde{X}}(\log E)\] gives us a morphism of \(\mathbb{Q}\)-sheaves \[A_{\mathbb{Q}}^{n}=\bigoplus_{q\geq 0}\operatorname{Kos}(\mathcal{M})^{n+1}/W_{q }\operatorname{Kos}(\mathcal{M})^{n+1}\to\bigoplus_{q\geq 0}\widetilde{ \Omega}^{n+1}_{\widetilde{X}}(\log E)/W_{q}\widetilde{\Omega}^{n+1}_{ \widetilde{X}}(\log E)=A_{\mathbb{C}}^{n}\] which is compatible with the differentials by the commutativity of the diagram (6.4). Thus we obtain a morphism of filtered complexes of \(\mathbb{Q}\)-sheaves \(\alpha\colon(A_{\mathbb{Q}},W)\to(A_{\mathbb{C}},W)\). The Hodge filtration \(F\) on \(A_{\mathbb{C}}\) is defined by \[F^{p}A_{\mathbb{C}}^{n}:=\bigoplus_{0\leq q\leq n-p}\widetilde{\Omega}^{n+1}_{ \widetilde{X}}(\log E)/W_{q}\widetilde{\Omega}^{n+1}_{\widetilde{X}}(\log E)\] as in [10, (4.17)]. **Theorem 6.9** (cf. [St2, (2.8)]).: _Let \(f\colon X\to\Delta\) be as in 6.1. If we assume that \(X\) is Kahler, then \(((A_{\mathbb{Q}},W),(A_{\mathbb{C}},W,F),\alpha)\) is a cohomological \(\mathbb{Q}\)-mixed Hodge complex on \(E\)._ Proof.: By Lemmas 6.6 and 6.7, \((\operatorname{Gr}_{m}^{W}A_{\mathbb{Q}},(\operatorname{Gr}_{m}^{W}A_{ \mathbb{C}},F),\operatorname{Gr}_{m}^{W}\alpha)\) is identified with the direct sum of the direct images of \[(\mathbb{Q}(-m-q)[-m-2q],(\widetilde{\Omega}_{E^{(m+2q+1)}}[-m-2q],F[-m-q]))\] by the finite morphism \(a^{(m+2q+1)}\) for all \(q\geq\max(0,-m)\). 
Since \(\widetilde{X}\) is an almost Kahler \(V\)-manifold as in [PS, I.2.5] by the assumption that \(X\) is Kahler, we obtain the conclusion by Theorem 2.43 of [PS].
2305.17027
Robotic vectorial field alignment for spin-based quantum sensors
Developing practical quantum technologies will require the exquisite manipulation of fragile systems in a robust and repeatable way. As quantum technologies move towards real world applications, from biological sensing to communication in space, increasing experimental complexity introduces constraints that can be alleviated by the introduction of new technologies. Robotics has shown tremendous progress in realising increasingly smart, autonomous and highly dexterous machines. Here, we demonstrate that a robotic arm equipped with a magnet can sensitise an NV centre quantum magnetometer in challenging conditions unachievable with standard techniques. We generate vector magnetic field with $1^\circ$ angular and 0.1 mT amplitude accuracy and determine the orientation of a single stochastically-aligned spin-based sensor in a constrained physical environment. Our work opens up the prospect of integrating robotics across many quantum degrees of freedom in constrained settings, allowing for increased prototyping speed, control, and robustness in quantum technology applications.
Joe A. Smith, Dandan Zhang, Krishna C. Balram
2023-05-26T15:36:24Z
http://arxiv.org/abs/2305.17027v2
# Robotic vectorial field alignment for spin-based quantum sensors ###### Abstract Developing practical quantum technologies will require the exquisite manipulation of fragile systems in a robust and repeatable way. As quantum technologies move towards real world applications, from biological sensing to communication in space, increasing experimental complexity introduces constraints that can be alleviated by the introduction of new technologies. Robotics has shown tremendous progress in realising increasingly smart, autonomous and highly dexterous machines. Here, we demonstrate that a robotic arm equipped with a magnet can sensitise an NV centre quantum magnetometer in challenging conditions unachievable with standard techniques. We generate vector magnetic field with \(1^{\circ}\) angular and 0.1 mT amplitude accuracy and determine the orientation of a single stochastically-aligned spin-based sensor in a constrained physical environment. Our work opens up the prospect of integrating robotics across many quantum degrees of freedom in constrained settings, allowing for increased prototyping speed, control, and robustness in quantum technology applications. Experiments designed to exploit quantum technologies for applications can be extremely challenging. Fragile quantum states must be delicately manipulated, whilst minimising sources of decoherence, in order to preserve a quantum advantage. This often necessitates cutting-edge experimental physics, including precise and complex optical assemblies [1; 2], strong vector magnetic fields [3], high-speed microwave delivery [4], and compatibility with extremely low temperature environments [5]. Emerging quantum technologies based on hybrid quantum systems [6] combine research from two or more experimental settings: such as coupling spins in silicon to superconducting resonators and qubits [7; 8], interfacing remote NV centres in diamond with photonic qubits [9], and using nanomechanics to interface with spins [10] or superconducting qubits [11]. As these proof-of-principle devices become more sophisticated and start to scale in size and complexity, established lab infrastructure such as translation stages and solenoid coils will no longer provide the flexibility, speed, and precision to meet these constrained [12] and sometimes competing experimental requirements. In contrast, the field of robotics has long adapted to operate robots in challenging conditions, such as at the microscale [13] or in very low temperature environments [14]. Robotics can provide more flexible and adaptable approaches than traditional methods, which would speed up the deployment of quantum technology across applications. With sophisticated software stacks and well-developed open-source hardware, the deployment of robotics in a diverse range of experimental settings in the chemical and biological sciences has become increasingly feasible [15; 16]. Here, we introduce and validate the idea of a robot-assisted quantum technology. Specifically, we employ a robotic arm to hold a strong permanent magnet for meeting a requirement in spin-based sensing: aligning an external magnetic field along the magnetic dipole axis of an arbitrarily oriented spin system (Fig. 1A). 
We demonstrate that this method has significant advantages where traditional techniques for generating vector fields, such as mounting the magnet on a fixed axis translation stage, or using a 3-axis Helmholtz coil, are infeasible owing to the tight physical constraints of the surrounding optomechanical apparatus. While this work focuses on a specific use case for robotics in quantum technology, the methods developed here can be easily adapted and extended to other experimental settings. Figure 1: **Experimental setup and working principles.** **A.** Placing a permanent magnet near the NV centre magnetometer produces a magnetic field of a known orientation, defined along its axis (field lines in white). **B.** One use case here is to change the spin resonance of the magnetometer, to operate at its most sensitive regime (linear with respect to detected field) away from the zero-field splitting (marked in grey). As observed, field along the NV centre \(B_{z}\) affects this response, whereas transverse components only contribute unwanted performance degradation. The field \(B_{ext}\) should therefore be approximately aligned to the NV centre magnetic dipole orientation \(B_{z}\). **C.** The 6 DoF robot is used to orient the magnet in complex surroundings. The robot base is located at the world origin (x-axis indicated in red, y-axis indicated in green, z-axis in blue). The Tool Centre Point (TCP axis marked) is translated along the x-axis of the end effector axis (marked) to set the required field strength. The TCP coordinate \((x,y,z,\alpha_{y},\alpha_{z})\) is then set to the NV centre position and rotated around the \(y\) and \(z\) axis, i.e. varying \(\alpha_{y}\) and \(\alpha_{z}\) of the TCP, to form a defined vector from the end effector to the TCP (shown in yellow). The robot positions at a range of different \((\alpha_{y},\alpha_{z})\) are shown in the inset diagrams. Through this method the highly-dexterous robot can create fields with arbitrary field strengths and orientations, and align the TCP axis with the NV axis to produce the desired \(B_{z}\). **Problem statement and requirements** Spin-based magnetometers operate by mapping local perturbations in their environment to shifts in the transition (magnetic resonance absorption) frequency of the spin system [17]. The NV centre in diamond is the prototypical solid state quantum sensor on account of its optically accessible spin state, which allows manipulation and readout of the spin at room temperature (Optically Detected Magnetic Resonance, or ODMR). NV centre magnetometers have rapidly advanced over the past decade and have reached maturity as quantum magnetometers, with nanotesla (nT) sensitivities at nano-scale resolutions [18; 19]. As magnetic dipole-dipole interactions are weak and confined to the near field, near-surface NV centres are required to image fields from individual spins [20]. Nanoscale inclusions of diamond, or nanodiamond, are used to host the spin-probe in hot and wet biochemical surroundings in applications such as protein [21; 22] or cell detection [23; 24]. Nanodiamonds typically contribute an additional energy term \(\Pi\) to the NV centre Hamiltonian \(H\), from lattice strain and local charges [25; 26]: \[H=DS_{z}^{2}+\Pi\left(S_{x}^{2}-S_{y}^{2}\right)+\gamma\mathbf{B}_{\perp} \cdot\mathbf{S}_{\perp}+\gamma B_{z}S_{z}, \tag{1}\] where \(\mathbf{B}_{\perp}=(B_{x},B_{y})\) and \(\mathbf{S}_{\perp}=(S_{x},S_{y})\) are the transverse magnetic field and Pauli spin terms, with \(z\) defined as the axis comprising the NV centre along the diamond lattice. In Fig. 1B, the energy term \(\Pi\) leads to a frequency splitting of size \(2\Pi\) (shown in grey), making the NV centre transition frequencies robust to magnetic field fluctuations to first order. A bias field \(B_{z}\) is thus required to bring the NV centre into the regime (\(B_{z}\gg\Pi/\gamma\)) where the transitions are linearly dependent on the magnetic field, which corresponds to the highest sensitivity. Given nanodiamonds typically display \(\Pi\sim 10\) MHz [27], this requires a moderate \(B_{z}\) magnetic field of 5 mT aligned to the NV axis. A misaligned magnetic field (with a residual \(B_{x}\) or \(B_{y}\) component) would lead to a mixing of the energy eigenstates, which would result in a reduction of both the fluorescence and contrast (SNR) of the spin-dependent optical readout [28]. Magnetic fields of 5 mT significantly degrade the spin coherence time (\(T_{2}\)) of the NV centre when misaligned by \(5^{\circ}\) as they cause nearby nuclear spins to precess [29; 30]. 
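To make the alignment requirement concrete, the following is a minimal numerical sketch of Eq. (1): it diagonalises the spin-1 Hamiltonian and returns the two ODMR transition frequencies. The spin-matrix conventions and the helper name `odmr_transitions` are ours, and the values of \(D\), \(\Pi\) and \(\gamma\) are representative numbers from the text, so this is an illustration rather than the authors' code.

```python
import numpy as np

# Spin-1 operators in the m_s = +1, 0, -1 basis (hbar = 1).
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

D, Pi = 2870.4, 1.8515  # zero-field splitting and strain/charge term (MHz)
gamma = 28.0            # NV gyromagnetic ratio, approximately 28 MHz/mT

def odmr_transitions(Bx, By, Bz):
    """Two spin transition frequencies (MHz) of Eq. (1), field in mT."""
    H = (D * Sz @ Sz + Pi * (Sx @ Sx - Sy @ Sy)
         + gamma * (Bx * Sx + By * Sy + Bz * Sz))
    E = np.sort(np.linalg.eigvalsh(H))
    return E[1] - E[0], E[2] - E[0]

print(odmr_transitions(0, 0, 0))    # zero field: split by 2*Pi ~ 3.7 MHz about D
print(odmr_transitions(0, 0, 1.8))  # aligned ~1.8 mT: ~100 MHz Zeeman splitting
print(odmr_transitions(1.8, 0, 0))  # transverse field: resonances barely shift
```

The last two lines illustrate the asymmetry exploited in Fig. 1B: an axial field opens a splitting linear in \(B_{z}\), while a transverse field of the same magnitude leaves the resonances almost unchanged and instead only mixes the eigenstates.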
To date, the established method to align a static magnetic field to an arbitrarily oriented spin is using three perpendicular wire coils [31; 32; 33] or sets of coils in the Helmholtz configuration [27; 34; 35]. To produce appreciable field strengths (\(\approx 10\) mT), coils comprise hundreds of wire turns. The wire gauge is chosen to balance wire packing and current density. Coils operate hot, which has adverse implications for sensitive samples, such as in biosensing [36]. The configuration is convenient for producing vector magnetic fields, after calibration, as the current in each coil can be ratioed to produce a desired orientation. The constraint of requiring coils at three axes around the sample severely restricts optical or mechanical degrees of freedom. An alternative method is to place and orient a strong neodymium permanent magnet (NdFeB) in the vicinity of the sample. The advantage here is that the small magnet can produce much larger field strengths than the coil. It is less restrictive in physical footprint, so it can be combined with optical assemblies and cryogenics. The magnets are aligned using linear [37; 38; 39] or rotational translation stages [40; 41]. The physical limitations of these stages preclude NV centres of certain orientations from being aligned [41]. Typically, the magnet is positioned once and is aligned along a set diamond crystalline axis because the calibration process is cumbersome. This precludes the use of nanodiamonds where each site may have a random dipole orientation [42], eliminating important applications that involve inspecting spatially separated regions or tracking dynamic events in liquid environments. We propose to combine the convenience and control of coils with the small footprint and strength of a permanent magnet in developing a robotically controlled vectorial field alignment system. Our approach has the following advantages: (1) **Increased precision and control**. The robot manipulates the magnet with a high degree of accuracy, ensuring precise alignment of the generated magnetic field. For our application, this means better than 5\({}^{\circ}\) accuracy [29]. (2) **Fast alignment**: by employing a robot to move and position the magnet across optimal trajectories, alignment should be more efficient than manual techniques. 
(3) **Long-term stability**: employing closed-loop feedback with sufficient torque against gravity will maintain position securely for extended periods, ensuring stable alignment during experiments. (4) **Enhanced reproducibility**: A robust algorithm can align and realign the magnet between sample exchange or across multiple sites of interrogation. The robot consistently produces a given orientation field for different sample geometries. (5) **Scalability**: the robot has an adaptable routine that is able to suit a wide range of experimental configurations and constraints. One can also easily extend to scenarios where two fields with specific orientations need to be simultaneously applied, for instance an in-plane and out-of-plane field. For positioning of a magnet at a desired location in 3D space with respect to a point of interest, and allowing for rotation about two axes to achieve magnetic field orientation, we require a robot with at least five degrees of freedom. The robot must be capable of handling a moderate payload in order to carry enough magnet mass to produce an appreciable field (10 mT) at a distance. It should also be readily available, with a well-developed software interface, and be economical to meet the requirements of use outside the robotics community. In the following section we will evaluate this robot for the described task. ### Results **Workspace analysis** Workspace analysis is essential for robot control and application, as it evaluates the space the robot can access and manipulate with its end-effector, constrained by the robot's kinematic configuration. This analysis helps to identify the robot's suitability for specific tasks and environments. Key aspects include the reachable workspace volume (total 3D space the robot's end-effector can reach), workspace boundaries (limits of reachable space) and dexterity within this workspace (ability to precisely orient the end-effector) [43]. In this section, we perform workspace analysis on a magnet-carrying robotic arm to evaluate its performance in generating vector magnetic fields. The robotic arm consists of a set of rigid bodies called links, connected by joints, with each joint driven by a motor actuator. An end-effector, in this case a permanent magnet, is attached to the end link. The arm is an open chain robot, with the position and orientation of the end-effector uniquely determined from the joint positions. The common configuration comprises six joints, providing six Degrees of Freedom (DoF). For our experiments, we use a _Niryo NED 2_ robot owing to its well-documented open source stack, and ready availability. The arm has a moderate payload of 300 g, and is thus capable of lifting 40 cm\({}^{3}\) NdFeB, which can generate a surface magnetic field of \(\approx 800\) mT (see Supplementary Fig. 1). For ease of adoption, we select a cylindrical magnetic source with a radial hole, through which it can be fixed by a screw to the tool shaft. Shown in Fig. 1C, the robot is first set up by translating the Tool Center Point (TCP) along the x-axis of its end-effector, coaxial with the magnetisation axis of the magnet (Fig. 1A). This translation sets the distance, and hence strength, of the magnet, from the point of interrogation. To create a set vector field, the robot uses the Robot Operating System (ROS) kinematic processor [44] to position its joints, in order to compute a desired pose. The robot pose comprises the location and orientation of the TCP relative to a global coordinate frame. 
Rigid robots possess six state variables \((x,y,z,\alpha_{x},\alpha_{y},\alpha_{z})\), where the latter three coordinates are angles of rotation about the x, y and z axis respectively. The inverse kinematics problem is to find the joint position given a desired pose. In Supplementary Table 1, we give the Denavit-Hartenberg (D-H) representation for the kinematics of this robot. In principle, by fixing \(x\), \(y\), and \(z\) at the NV centre location, the vector orientation of an applied magnetic field can be modified by varying \(\alpha_{y}\) and \(\alpha_{z}\) of the pose. In our scheme, the cylindrical magnet is symmetric about \(\alpha_{x}\), so this degree of freedom is left unused. In Fig. 1C, we simulate in RViz, a visualization software for ROS, that the robot is sufficiently dexterous in positioning its joints to achieve a range of orientations, whereby the magnet is rotated around a stationary point with varying \(\alpha_{y}\) and \(\alpha_{z}\). In Supplementary Fig. 2, we calculate the full workspace volume and dexterity within this volume. **Magnetic vector reconstruction** The goal in controlling the pose angle is to create a desired magnetic vector field at a given sample location. To experimentally verify this, we position a 3-axis Hall sensor at the point of interest in order to measure the field generated by the robot. We set the robot approximately collinear with the sensor axis, observed using a camera with a zoom lens. In Fig. 2A, we see the effect of adding magnets to the structure up to 70 % of the payload by setting the robot along an arc trajectory from horizontal to vertical, rotating the desired pose from \(\alpha_{y}=0\) to \(\alpha_{y}=\pi/2\) with the distance between the sample and magnet surface fixed. Commonly in robotics, camera data is processed to extract information on the desired pose [13]. Here, the 3-axis Hall sensor provides rich additional vector information, which, coupled with the known dependence of the magnetic field on position, allows the desired pose to be measured with higher precision than is visually observable. We fit the data with a closed form expression of the magnetic field observed from the cylindrical magnet [45], using the pose variables as fitting parameters. We fit a constant \(15^{\circ}\) offset in \(\alpha_{y}\) and observe this in the \(x-z\) crossing point of the sensor and the robot, which for an aligned system would occur at \(45^{\circ}\) (marked by a dashed line in Fig. 2A). Additionally, for this trajectory, we would expect no \(B_{y}\) field to be measured. The non-zero \(B_{y}\) component is well fit to a varying non-zero \(\alpha_{z}\) occurring when each magnet is added. This offset results in a non-linear relation between the number of magnets added and the observed strength. With an initial calibration trajectory to record this magnetic field information, fine alignment can be achieved either by physically adjusting the robot or by modifying the coordinate frame to correct for the observed misalignment error. Following this initial trajectory, in Fig. 2B we observe that by scanning through a dictionary of poses, varying only \(\alpha_{y}\) and \(\alpha_{z}\), we are able to traverse a set of \(B_{abs}\mathbf{\hat{n}}(\alpha_{y},\alpha_{z})\) points on the sphere, where \(B_{abs}\) is an approximately constant scalar and \(\mathbf{\hat{n}}\) is the unit normal vector. In the image plots, we see the measured \(B_{x}\), \(B_{y}\), and \(B_{z}\) over each pose \(\alpha_{y},\alpha_{z}\) compared to the designed field. 
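As a rough illustration of the designed directions scanned in Fig. 2B, the sketch below maps a pose \((\alpha_{y},\alpha_{z})\) to a unit field vector and scores a Hall-sensor reading against it. It assumes the field at the TCP points along the magnet's symmetry (end-effector \(x\)) axis and that \(\alpha_{y}\), \(\alpha_{z}\) act as rotations about the fixed world \(y\) and \(z\) axes; both conventions, and the example reading, are our assumptions rather than details taken from the paper.

```python
import numpy as np

def design_direction(alpha_y, alpha_z):
    """Designed unit field vector for a pose: the end-effector x-axis
    rotated by alpha_y about the world y-axis, then alpha_z about z."""
    ry = np.array([[np.cos(alpha_y), 0, np.sin(alpha_y)],
                   [0, 1, 0],
                   [-np.sin(alpha_y), 0, np.cos(alpha_y)]])
    rz = np.array([[np.cos(alpha_z), -np.sin(alpha_z), 0],
                   [np.sin(alpha_z), np.cos(alpha_z), 0],
                   [0, 0, 1]])
    return rz @ ry @ np.array([1.0, 0.0, 0.0])

def angular_error_deg(B_measured, alpha_y, alpha_z):
    """Angle between a Hall-sensor reading and the designed direction."""
    n = np.asarray(B_measured, dtype=float)
    n = n / np.linalg.norm(n)
    c = np.clip(n @ design_direction(alpha_y, alpha_z), -1.0, 1.0)
    return np.degrees(np.arccos(c))

# Example: a hypothetical reading (mT) against the pose (45 deg, 90 deg).
print(angular_error_deg([0.1, 7.0, -7.1], np.radians(45), np.radians(90)))
```

Averaging such per-pose errors over the scanned dictionary is one way to arrive at summary figures like the mean angular error quoted below.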
There is a small percentage of white pixels representing poses within the workspace that were unachievable by the kinematic processor. The robot scans in a meander, alternating \(+z,-z\), and artefacts of this are seen through scan lines in the measured data. Overall, we measure high angular accuracy with a mean error of \(2.9^{\circ}\) and mode error of \(2.3^{\circ}\) and confirm that the robotic arm is able to produce desired field orientations with a high accuracy. Figure 2: **Robot arm generates arbitrary vector magnetic fields.** **A.** A permanent magnet of varying mass is placed in the tool (left panel). A Hall sensor measures the \(x-z\) trajectory of the field produced by the arm (right panel). The trajectory is well-fitted using a model of the field generated by the cylindrical magnet, noting a \(15^{\circ}\) offset in the \(x\)-\(z\) crossing from the expected \(45^{\circ}\) (shown with dotted line) and a varying non-zero offset in \(y\), with this offset resulting in a non-linear trend in field registered with increasing magnetic mass. This initial measurement and model can be used for fine alignment calibration. **B.** The arm can create a field over the full \(x-y-z\) sphere segment (one-eighth) with \(3^{\circ}\) accuracy. White pixels in the image plots (circled in red) indicate the few unreachable positions. **C.** The distance \(r\) from the end effector to the Tool Center Point (TCP) produces a field strength fall off in \(B_{x}\) proportional to \(1/r^{3}\) (top panel), from which points (shown by vertical lines) can then be sampled (middle panel) to create a linear field response with high accuracy (bottom panel). **Field amplitude control** For a set vector orientation and magnet mass, some ODMR applications require tuning of the magnetic field amplitude, for instance so that the spin resonance frequency matches a microwave resonator [46, 47]. The field amplitude can be controlled by tuning the distance between the magnet and the sample position. However, the magnetic field fall-off with distance \(r\) is highly non-linear, characterised by the Biot-Savart \(1/r^{3}\) relation. In addition, the robotic arm performs non-linear displacement, requiring dual movement of two rotational joints per linear step of the end effector. We observe in Fig. 2C that the displacement of the magnet away from the Hall sensor is sufficiently linear to produce a \(1/x^{3}\) response in \(B_{x}\). Because the magnetic field generated by the permanent magnet is large, it can be positioned sufficiently far away from the sensor so that the 10 mm \(1/r^{3}\) trajectory can be subsampled within the 0.5 mm resolution of the robot (lines shown in the top panel) to create a desired response \(B(r)\). In the middle panel we observe that we can create a linear field response between 0 and 10 mT through this method. In the bottom panel, we observe the error in this sampling technique is typically lower than 0.1 mT. However, as expected, this error increases for close distances to the sensor, as the available resolution to subsample the \(1/r^{3}\) field diminishes. 
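A minimal sketch of this subsampling idea, assuming a pure \(B(r)=A/r^{3}\) fall-off calibrated from a single reference measurement and the 0.5 mm positioning resolution quoted above; the reference values and function names are illustrative only.

```python
import numpy as np

def calibrate_A(r_ref_mm, B_ref_mT):
    """Single-point calibration of the fall-off B(r) = A / r**3."""
    return B_ref_mT * r_ref_mm**3

def standoff_for_field(B_target_mT, A, resolution_mm=0.5):
    """Magnet-sample distance giving B_target, snapped to robot resolution."""
    r = (A / B_target_mT) ** (1.0 / 3.0)
    return resolution_mm * round(r / resolution_mm)

A = calibrate_A(r_ref_mm=20.0, B_ref_mT=10.0)  # illustrative reference point
for B in np.linspace(1.0, 10.0, 10):           # desired linear field ramp (mT)
    r = standoff_for_field(B, A)
    print(f"target {B:5.2f} mT -> r = {r:5.1f} mm, achieved {A / r**3:5.2f} mT")
```

Because the snapping error in \(r\) is fixed while \(dB/dr\propto 1/r^{4}\) grows at short range, the amplitude error of this scheme increases as the magnet approaches the sensor, mirroring the trend in the bottom panel of Fig. 2C.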
**Collision-free motion planning** With operation validated in an unconstrained environment, we move to navigating the robot around complicated lab infrastructure. By evaluating intersections with its environment, the robot is able to compute collisions in the ROS simulation using LBKPiece from the Open Motion Planning Library to traverse a tree of possible trajectories to achieve a given pose goal [48]. We take two experimental setups in our lab, a cryostat with an optical window and a scanning stage confocal microscope (see Fig. 3A), and add their spatial meshes in simulation to the robot environment. For these complex geometries, it would not be possible to position 3-axis Helmholtz coils for magnetic field alignment due to the competing requirements for optical access to the sample and the need to move the sample in three dimensions. Single microcoils or a permanent magnet mounted on a stage would have limitations in terms of achievable proximity to the sample. With the available kinematics of the 6 DoF robotic arm, the position of the magnet with respect to the sample is far less constrained. In Fig. 3A, using LBKPiece, we simulate that a chosen subset of poses can be traversed with access to the top and back side of the cryostat, oriented around the TCP located at the sample mount, generating a \(-B_{x},+B_{y},-B_{z}\) sphere segment (one-eighth) without collisions. However, we observe that for the scanning stage confocal microscope, only a subset of any sphere segment is achievable. Poses that cannot be accessed without collision are shown with red dots and form a significant part of the subset. From this simulation, it is evident that there would be limited success of the robot in the magnetisation-axis-aligned configuration described in Fig. 1C. **Designing collision-free field vectors** An important consideration at this point is that the set of poses in this configuration only makes up a small subset of the possible joint configurations of the robot, and therefore possible magnetic field vectors. By moving the TCP defined in Fig. 1C from the NV centre to the magnet, we give free control on its orientation and position, with access to the fringing fields of the magnetic source. Our hypothesis is that there exists a set of collision-free poses that would produce a full set of magnetic vectors. This idea makes use of the magnetic inverse problem in field sensing: even if a pose cannot be reached, the desired field can be obtained because there is a non-unique mapping between the field and the pose [49]. Our algorithm is laid out in Fig. 3B. Firstly, the unreachable set of poses in the constrained environment are found. For each such pose (Fig. 3B(i)), the TCP is translated along \(x\) to the magnet centre and the magnet is then linearly translated in either \(y\) or \(z\) to a new reachable pose (Fig. 3B(ii)). Next, the new pose is rotated in \(\alpha_{y}\) or \(\alpha_{z}\) to obtain the same vector field orientation as the original pose (Fig. 3B(iii)). Finally, the magnet is translated in \(x\) to recover the original magnitude (Fig. 3B(iv)). To calculate the vector rotation in Fig. 3B(iii), we can approximate the magnet with a dipole, for which the inverse magnetostatic expression is known [49]. For a powerful permanent magnet, the arm can be withdrawn to sufficient distances so that this dipole approximation becomes valid. The orientation of a unit dipole \(\vec{m}\) at a vector \(\vec{r}\) to create a field \(\vec{B}\) at the sensor location is given by: \[\vec{m}=\frac{6\pi}{\mu_{0}}(\vec{B}\cdot\vec{r})\,|\vec{r}|\,\vec{r}-\frac{ 4\pi}{\mu_{0}}\,|\vec{r}|^{3}\,\vec{B}, \tag{2}\] where \(\mu_{0}\) is the vacuum permeability. 
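The inversion in Eq. (2) admits a simple round-trip check: compute the dipole required for a target field, then evaluate the forward point-dipole field it produces at the sensor. The sketch below does this, together with the Gaussian-kernel similarity defined further on in Eq. (3); the numerical values and function names are illustrative and this is not the authors' implementation.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def dipole_field(m, r):
    """Field (T) at displacement r (m) from a point dipole m (A m^2)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi * rn**3) * (3 * np.dot(m, rhat) * rhat - m)

def required_dipole(B, r):
    """Eq. (2): dipole at displacement r producing field B at the sensor."""
    rn = np.linalg.norm(r)
    return (6 * np.pi / MU0) * np.dot(B, r) * rn * r \
         - (4 * np.pi / MU0) * rn**3 * B

def similarity(B1, B2, d=3.0):
    """Eq. (3): Gaussian-kernel similarity between two field vectors (mT)."""
    diff = np.asarray(B2, dtype=float) - np.asarray(B1, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2 * d**2))

# Round trip: the inverted dipole reproduces the target field at the sensor.
r = np.array([0.05, 0.02, -0.03])      # magnet displacement (m), illustrative
B_target = np.array([0.0, 0.0, 5e-3])  # 5 mT along z, in tesla
m = required_dipole(B_target, r)
B_back = dipole_field(m, r)
print(B_back * 1e3, similarity(B_target * 1e3, B_back * 1e3))
```

Substituting Eq. (2) into the forward dipole expression cancels the terms along \(\hat{r}\) exactly, so `B_back` equals `B_target` up to floating-point error and the similarity evaluates to 1.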
In Fig. 3C, we model the \(z\)-displaced magnet and find that the field observed at the sample location (green dot) has a significantly modified orientation (first panel). We can use Eq. (2) to calculate the dipole orientation \(\vec{m}\) that produces \(\vec{B}\) (middle panel). We then rotate the magnet to be coaxial with the calculated dipole orientation and recover the desired field vector \(\vec{B}\) with high accuracy, minimising \(B_{y}\) and \(B_{z}\) (last panel). In Fig. 3D, we show that in the physical experiment the off-axial field component is indeed minimised when set to the calculated \(26^{\circ}\) angle, and that this algorithm succeeds within the pose resolution limit of the robot.

As well as correcting the orientation, for some applications it is important to maintain the field amplitude. This final step in Fig. 3B(iv) is achieved by using the known \(1/r^{3}\) Biot-Savart relation of Fig. 2C to scale the amplitude, translating the magnet in \(x\). To capture both the orientation and amplitude, we define a Gaussian kernel similarity function between the target field vector \(B_{1}\) and the replacement vector \(B_{2}\), as this is well bounded between 0 (least similar) and 1 (most similar):

\[S=\exp\biggl{\{}-\frac{||B_{2}-B_{1}||^{2}}{2d^{2}}\biggr{\}} \tag{3}\]

with \(d=3\). We experimentally implement each step of the algorithm in Fig. 3E, and see that the final vector achieves a high similarity to the target vector, with \(S=0.95\). This can also be seen by comparing the histograms of the measured field components in Fig. 3E(i) and Fig. 3E(iv). Here, the final amplitude-correcting step maintains the desired field orientation.

Figure 3: **Motion planning in experimental settings.** **A.** We model two experimental settings, a cryostat with optical access and a confocal microscope. Plotted are the simulated collisions with the robot (in red) and avoided collisions (in blue) over the fixed-position, varied \((\alpha_{y},\alpha_{z})\) poses. For the confocal, we observe a limited reachable workspace subset. **B.** We develop an algorithm to replace these unreachable poses with collision-free poses. The procedure follows: (i) a desired field vector is measured in a forbidden position, (ii) displacement puts the magnet in an allowed position, (iii) angular orientation sets the correct field vector, (iv) a further displacement corrects the field magnitude. **C.** Using a dipole source to calculate the rotation, we can deterministically rotate the magnet and recover the \(\hat{B}\) vector (title bar) at the observer (green dot). **D.** The designed rotation matches experiment in minimising the transverse field at \(26^{\circ}\). **E.** Measuring with the 3D Hall sensor at each stage following the panels in B, the final vector well matches the initial vector (quantified by the similarity function \(S\) defined in the text).

We have evidenced that by using this algorithm, it is possible to systematically replace unreachable poses with reachable poses producing the same field vector. The full dexterity offered by the robotic links, combined with the inverse problem of magnetostatics, makes this system a powerful tool for setting arbitrary-strength magnetic field vectors in highly constrained environments.
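The similarity metric of Eq. (3) is likewise a one-liner; a minimal sketch with illustrative field vectors:

```python
import numpy as np

def similarity(B1, B2, d=3.0):
    # Gaussian-kernel similarity of Eq. (3): 1 for identical vectors,
    # falling towards 0 as the field mismatch grows (fields in mT).
    diff = np.asarray(B2, float) - np.asarray(B1, float)
    return np.exp(-np.dot(diff, diff) / (2 * d**2))

print(similarity([5.0, 0.0, 0.0], [4.6, 0.3, -0.2]))  # ~0.98
```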
**Experimental setting of an NV centre confocal microscope** Following validation with the Hall sensor, we now move to a full experimental setting in order to evaluate the performance of the robot for aligning a spin-based quantum sensor. We see in Fig. 4A and Fig. 4B that this requires navigating a highly complex environment with many sensitive optical and mechanical instruments. In the technique described in the previous section, the collision-free positions found in simulation can be downloaded to the physical robot. This requires a fine alignment between the simulation and the real-world setup, which could be achieved using the approach described in Fig. 2A. In a highly constrained environment, we find sensor-driven fine angular alignment difficult to achieve, as the initial calibration trajectories contain poses that cannot be verified as collision-free until this alignment has succeeded.

A fast and pragmatic approach in an experimental setting is to _teach_ the robot a set of collision-free poses. We do this by switching the torque to each motor off momentarily, allowing the joints to move freely. Now the operator can grasp the magnet and guide the robotic arm to a desired location within the geometry, avoiding collisions. Whilst doing this, the user can monitor the magnetic field produced at the sample location with the Hall sensor. When a desired field is registered, the torque and closed-loop feedback can be switched on, locking the magnet at its set position. The corresponding coordinates (either the pose or the joint angles) can be registered along with the measured magnetic field. Through this method, the user can hunt for and find locations where, for instance, \(z\) is the dominant field component. These poses can be used to gather information to calibrate the source to its experimental surroundings. We can both compare the taught poses with the simulation to calculate collisions and, with the dipole field model, locally modify the taught position to achieve desired magnetic field vectors.

As a proof-of-principle experiment, we teach the robot a trajectory across the confocal microscope shown in Fig. 4A. We can then demonstrate that traversing this trajectory repeatedly affects the NV centre spin-based sensor. In Fig. 4C, we use the confocal microscope to locate a collection of milled solid immersion lenses in a polycrystalline diamond sample [50], observing a bright NV centre in the central lens (Fig. 4D). With the robot arm at its home position, 10 cm away from the sample, we perform ODMR on this NV centre (see Methods for experimental details). As defined in Eq. (1), we observe a small 3.7030 MHz splitting due to an intrinsic \(\Pi=1.8515\) MHz, about the central \(D=2.8704\) GHz, shown in Fig. 4E. On moving the TCP near to the NV centre site, with the robot end effector in the vicinity of the experimental setup, we observe a large 100 MHz splitting in the ODMR spectra (green plot) from the presence of the permanent magnet. Typically, the NV centre must be optically aligned to the photon detector within 500 nm using a piezo stage, and the ability to engage and disengage the robot whilst performing ODMR indicates it is suitable for use in this highly sensitive experiment. This is especially useful for ODMR, where collecting each spectrum takes on the order of minutes.
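In software, the teach step amounts to toggling the arm's learning mode and logging poses against Hall-sensor readings. The sketch below assumes the pyniryo-style calls `set_learning_mode`/`get_pose` (the paper drives the arm through the PyNiryo2 wrapper, where equivalent calls live under `robot.arm`) and a hypothetical `read_hall_sensor` helper, so the names should be checked against the installed APIs.

```python
from pyniryo import NiryoRobot

def read_hall_sensor():
    # Hypothetical helper: return (Bx, By, Bz) in mT from the 3D Hall sensor.
    raise NotImplementedError

robot = NiryoRobot("10.10.10.10")      # placeholder robot IP address
taught = []

input("Learning mode ON - guide the magnet by hand, then press Enter...")
robot.set_learning_mode(True)          # torque off: joints move freely
input("Press Enter to lock the current pose...")
robot.set_learning_mode(False)         # torque on: pose is held

pose = robot.get_pose()                # registered end-effector pose
field = read_hall_sensor()             # field measured at the sample
taught.append((pose, field))           # calibration pair for later reuse
```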
## II Discussion

Our results show that an industrially designed robotic arm can be adapted to operate around sensitive optomechanical samples and setups. The presented modality produces stable and controllable magnetic fields that are capable of manipulating and aligning a single solid-state quantum spin sensor. This is an important step in the use of robotics to replace axial stages and bulky field coils in experimental physics and in developing quantum technologies, where we have evidenced the benefit of the innate flexibility and configurability of robotic arms in highly constrained environments.

The next step for this work is to generate on-demand magnetic fields using a sophisticated algorithm that maps the traversable space given geometrical parameters, making use of the collision-free techniques described. With this, a set of control points can be found, considering application-specific criteria such as the field magnitude, linearisation, or the time taken to move between points. Robots, unlike solenoid coils, produce minimal local heat. This makes them suited to sensitive samples, and algorithms could be designed for tracking quantum sensors in motion under cell uptake, a difficult task where the spin sensor orientation changes over time [52, 53]. For further flexibility, the cylindrical magnet could be replaced with a rectangular magnet fixed perpendicular to its magnetic axis, with the unused roll degree of freedom in the robotic wrist providing rapid field orientation.

Figure 4: **Robot-assisted magnetometry.** **A.** Image of the confocal microscope setup showing the robotic arm in position. **B.** Optically accessed cryostat with the robotic arm in position. **C.** Confocal image of an NV centre located in a diamond lens (mounted in the setup shown in A). **D.** Photoluminescence scan in \(z\) of the above. **E.** Optically detected magnetic resonance (ODMR) showing the zero-field splitting of the associated spin in blue when the robot arm is approximately 10 cm from the sensor, and a strong 100 MHz splitting in green when the robot arm is proximal to the spin sensor. **F.** Fitted peak resonances for a robot trajectory in \(5^{\circ}\) increments (large circles) and \(2^{\circ}\) increments (small circles), indicating movement is stable and repeatable. **G.** Hall sensor data show the \(B_{x}\), \(B_{y}\) crossover in the trajectory. Normalising the splitting between resonances by the \(B\) magnitude reveals the \(\sigma_{z}\) dependence in Trajectory 1 (black) and Trajectory 2 (blue), indicative of angular alignment.

Beyond an off-the-shelf design, an application-specific robot could further maximise efficiency, precision, and control. This could have a larger payload whilst having a smaller form factor, for instance. We can extend this to the use of multiple robots to generate gradient magnetic fields. As well as a range of solid-state sensors, the alignment of atoms and ions in cold and vacuum environments can be explored with these form factors. In addition, the robot-driven orientation presented can be extended to aligning quantum objects with a range of parameters, including electric and light fields. Here, the end effector would be an electrode or, in optics, a laser or mirror surface. Following this proof-of-principle work, the adaptability of robots in combination with sophisticated software could provide ruggedness for alignment in demanding real-world environments where quantum technologies are emerging, such as point-to-point Quantum Key Distribution (QKD) [54] and quantum rangefinding [55].

## III Methods

**Magnetic field modelling** Magnetic field calculations in this work are performed using the closed-form expressions presented in the Magpylib package [45]. The hollow cylindrical magnet is modelled by subtracting an inner cylindrical magnetic source of opposite magnetisation from the outer cylindrical magnet source.
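A minimal sketch of this subtraction construction, written against the Magpylib 4.x API (positions in mm, fields in mT); the magnet dimensions and magnetisation below are placeholders rather than the experimental values:

```python
import magpylib as magpy

# Hollow cylinder = outer cylinder minus an inner cylinder of opposite
# magnetisation, combined in a Collection whose fields superpose.
outer = magpy.magnet.Cylinder(magnetization=(0, 0, 1000), dimension=(20, 10))
inner = magpy.magnet.Cylinder(magnetization=(0, 0, -1000), dimension=(6, 10))
hollow = magpy.Collection(outer, inner)

# Field (in mT) at an observer point 50 mm above the magnet centre.
print(hollow.getB((0, 0, 50)))
```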
**Robotic modelling** The Niryo NED 2 robot geometry is specified in the Unified Robot Description Format (URDF). Here, the end-effector geometry file specified in the URDF is replaced with a geometry of the magnet tool. For the collision-free motion-path finding, the experimental setups are modelled in FreeCAD and replace the geometry file of the robot base. The robot is simulated in a ROS environment and controlled using the Python wrapper PyNiryo2.

**Experimental setup** The magnetic field measurements in this work are made using the Infineon TLE493D-P2B6MS2GO 3D magnetic sensor, fitted on a compact platform mount or in the described confocal microscope. For the ODMR measurements, the NV centre is excited by a CW 532 nm laser (gem 532; Laser Quantum). A confocal microscope is used to image the collected count rate. Using a 0.9 NA microscope objective, the excitation beam is tightly focused on the sample, producing a nearly diffraction-limited spot (\(<\) 1 \(\upmu\)m diameter). The NV centre PL is collected through the same lens and separated from the excitation path by a dichroic mirror before detection by single-photon avalanche diodes (SPADs) (SPCM-AQRH-12-FC; PerkinElmer). By scanning the position of the sample, a map of the detected count rate is generated, from which the position of the NV centre and its maximum count rate can be found. ODMR was performed under CW excitation using a Rohde & Schwarz SMB100A microwave source driving a custom loop-antenna PCB on which the sample is mounted.

**Spin sensor modelling** The spin transitions presented in Fig. 1B are calculated by solving for the Hamiltonian eigenstates in QuTiP [56]. Other calculations use the NV spin-energy characteristic polynomial presented in Balasubramanian _et al._, where the polar angle \(\theta\) between the field and the NV centre is found using the solution given in that work [51]. The polynomial can be solved and least-squares fitted to the experimental data to find the NV orientation \((\alpha_{y}^{\mathrm{NV}},\alpha_{z}^{\mathrm{NV}})\) [33]:

\[x^{3}-\left(\frac{D^{2}}{3}+\Pi^{2}+\beta^{2}\right)x-\frac{\beta^{2}}{2}D\cos 2\gamma-\frac{D}{6}\left(4\Pi^{2}+\beta^{2}\right)+\frac{2D^{3}}{27}=0, \tag{4}\]

where \(\gamma=\arccos\left(\left|\cos\left(\alpha_{z}^{\mathrm{B}}-\alpha_{z}^{\mathrm{NV}}\right)\cos\left(\alpha_{y}^{\mathrm{B}}-\alpha_{y}^{\mathrm{NV}}\right)\right|\right)\), with known \(\alpha_{y}^{\mathrm{B}}\) and \(\alpha_{z}^{\mathrm{B}}\) set by the robot, \(D\) and \(\Pi\) fitted from the zero-field data, and \(\beta=\gamma_{e}|B|\), where \(\gamma_{e}\) is the gyromagnetic ratio and \(|B|\) is the external magnetic field amplitude. In the trajectories presented, \(|B|\), and hence the separation between resonances \(\nu(i)\), is not conserved. For this fit we must obtain a constant \(|B|\), so we first normalise using the field-magnitude data. For this, we sub-sample the higher-resolution Hall data to reduce noise (see Supplementary Fig. 3) and obtain normalised splittings \(\nu_{n}(i)\) for each measurement point \(i\), where

\[\nu_{n}(i)=\frac{\nu(i)}{|B_{\mathrm{Hall}}(i)|}\max|B_{\mathrm{Hall}}| \tag{5}\]

and we leave the non-physical \(B\) in the characteristic equation as the third free parameter in the fit to this data.
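Equation (4) is a depressed cubic in the (trace-shifted) spin eigenenergies, so the two ODMR resonances follow from its roots. A minimal sketch, assuming the transitions are the differences between the lowest root and the two upper roots (the common trace shift cancels in the differences); the parameter values in the call are the zero-field values quoted in the text, used here only as a consistency check:

```python
import numpy as np

def odmr_resonances(D, Pi, B, gamma_angle, gyro=28.024e9):
    """Solve the characteristic cubic of Eq. (4) and return the two
    ODMR transition frequencies (Hz).

    D, Pi in Hz; B in T; gamma_angle is the angle gamma of Eq. (4);
    gyro is the NV gyromagnetic ratio in Hz/T.
    """
    beta = gyro * B
    p = -(D**2 / 3 + Pi**2 + beta**2)
    q = (-(beta**2 / 2) * D * np.cos(2 * gamma_angle)
         - (D / 6) * (4 * Pi**2 + beta**2)
         + 2 * D**3 / 27)
    roots = np.sort(np.roots([1.0, 0.0, p, q]).real)
    # Transitions from the lowest (ms=0-like) level to the two upper levels.
    return roots[1] - roots[0], roots[2] - roots[0]

# Zero-field check with the fitted D and Pi from the text:
print(odmr_resonances(D=2.8704e9, Pi=1.8515e6, B=0.0, gamma_angle=0.0))
# -> approximately (2.8685e9, 2.8722e9): a 2*Pi = 3.703 MHz splitting
```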
## Acknowledgements

We thank Jorge Monroy-Ruz for building the NV centre confocal microscope used in this experiment. We thank him, Hao-Cheng Weng, Wyatt Vine and John G. Rarity for useful discussions. We acknowledge funding support from the Engineering and Physical Sciences Research Council (EPSRC) grant QC:SCALE EP/W006685/1.
2307.06753
Cramer Type Distances for Learning Gaussian Mixture Models by Gradient Descent
The learning of Gaussian Mixture Models (also referred to simply as GMMs) plays an important role in machine learning. Known for their expressiveness and interpretability, Gaussian mixture models have a wide range of applications, from statistics and computer vision to distributional reinforcement learning. However, as of today, few known algorithms can fit or learn these models, some of which include Expectation-Maximization algorithms and Sliced Wasserstein Distance. Even fewer algorithms are compatible with gradient descent, the common learning process for neural networks. In this paper, we derive a closed formula for the distance between two GMMs in the univariate, one-dimensional case, then propose a distance function called Sliced Cramér 2-distance for learning general multivariate GMMs. Our approach has several advantages over many previous methods. First, it has a closed-form expression for the univariate case and is easy to compute and implement using common machine learning libraries (e.g., PyTorch and TensorFlow). Second, it is compatible with gradient descent, which enables us to integrate GMMs with neural networks seamlessly. Third, it can fit a GMM not only to a set of data points, but also to another GMM directly, without sampling from the target model. And fourth, it has some theoretical guarantees like global gradient boundedness and unbiased sampling gradient. These features are especially useful for distributional reinforcement learning and Deep Q Networks, where the goal is to learn a distribution over future rewards. We will also construct a Gaussian Mixture Distributional Deep Q Network as a toy example to demonstrate its effectiveness. Compared with previous models, this model is parameter efficient in terms of representing a distribution and possesses better interpretability.
Ruichong Zhang
2023-07-13T13:43:02Z
http://arxiv.org/abs/2307.06753v1
# Cramer Type Distances for Learning Gaussian Mixture Models by Gradient Descent

###### Abstract

The learning of Gaussian Mixture Models (also referred to simply as GMMs) plays an important role in machine learning. Known for their expressiveness and interpretability, Gaussian mixture models have a wide range of applications, from statistics and computer vision to distributional reinforcement learning. However, as of today, few known algorithms can fit or learn these models, some of which include Expectation-Maximization algorithms and Sliced Wasserstein Distance. Even fewer algorithms are compatible with gradient descent, the common learning process for neural networks. In this paper, we derive a closed formula for the distance between two GMMs in the univariate, one-dimensional case, then propose a distance function called Sliced Cramer 2-distance for learning general multivariate GMMs. Our approach has several advantages over many previous methods. First, it has a closed-form expression for the univariate case and is easy to compute and implement using common machine learning libraries (e.g., PyTorch and TensorFlow). Second, it is compatible with gradient descent, which enables us to integrate GMMs with neural networks seamlessly. Third, it can fit a GMM not only to a set of data points, but also to another GMM directly, without sampling from the target model. And fourth, it has some theoretical guarantees like global gradient boundedness and unbiased sampling gradient. These features are especially useful for distributional reinforcement learning and Deep Q Networks, where the goal is to learn a distribution over future rewards. We will also construct a Gaussian Mixture Distributional Deep Q Network as a toy example to demonstrate its effectiveness. Compared with previous models, this model is parameter efficient in terms of representing a distribution and possesses better interpretability.

## 1 Introduction

Gaussian Mixture Models, also known as Mixtures of Gaussians, sometimes abbreviated as GMMs or MoGs, are renowned for their expressiveness and interpretability, and are applied in fields like signal processing [3], generative adversarial nets [2], distributional reinforcement learning [21], autoencoders for image generation [4] and much more. The learning or fitting of GMMs, i.e., estimating the parameters of a GMM given the data distribution, has long been a major concern in the field of machine learning. The most famous approaches include the Expectation-Maximization (EM) algorithm, which is equivalent to minimizing the Negative Log Likelihood loss, but might suffer heavily from the local optima problem [5, 14] and is hard to combine with neural networks; and gradient descent based methods, like the sliced Wasserstein distance [19] or the Wasserstein-Fischer-Rao gradient flow [18], which generally perform better than expectation- or likelihood-based iteration algorithms while being compatible with neural network learning.

The Cramer 2-distance [29], also known as the \(L^{2}\) distance between the cumulative distribution functions of two univariate random variables, is used to fit probability distributions and is also applicable to distributional reinforcement learning [15].
As an alternative to the Wasserstein distance, it is known to enjoy certain key properties, like an unbiased sampling gradient and contraction under the distributional Bellman operator [20]. The Sliced Cramer 2-distance, also known as the Cramér-Wold distance [16, 17], is considered a natural generalization of the Cramer 2-distance to random vectors or distributions in higher-dimensional spaces. Guaranteed by the Cramér-Wold theorem, it is calculated by taking projections of the distributions along all unit vectors on the sphere and integrating the 1D Cramer distances of the projected distributions. A closed-form formula for the Sliced Cramer 2-distance between spherical (isotropic) Gaussians has been proposed, using hypergeometric functions [4]. Although these Cramer type distances have been applied to GMM learning, the main purpose of our work is somewhat different. Our work mainly focuses on the following points:

* Derive a closed formula for the Cramer 2-distance for univariate (1D) GMM learning, which is accessible directly through common machine learning libraries.
* Use the Sliced Cramer 2-distance for general multivariate GMM learning, applicable to general mixtures of anisotropic Gaussians.
* Offer detailed formula derivations and proofs, including avoidance of gradient explosion and unbiased sampling gradients.
* Conduct some basic experiments to demonstrate the feasibility of our approaches.

## 2 Preliminaries About GMMs

In this section, we will go over some definitions that are crucial to our formulation of the theory, as well as the related previous works.

### Multivariate Gaussian Distribution

The Gaussian distribution is of central importance in the theory of probability and statistics. It is known from the central limit theorem that, in most situations, the standardised sample mean of independent, identically distributed random variables tends to a Gaussian distribution. Let \(m\in\mathbb{N}^{+}\) be a positive integer. In all cases below, we denote by \(m\) the dimension number. A multivariate Gaussian distribution (also called a Gaussian random vector, or \(m\)-dimensional Gaussian distribution) in \(\mathbb{R}^{m}\) is defined as \(\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\), where \(\boldsymbol{\mu}\in\mathbb{R}^{m}\) is a vector and \(\boldsymbol{\Sigma}\in M_{m}(\mathbb{R})\) is a positive-definite matrix. The probability density function (PDF) is

\[\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})(\boldsymbol{x})=\frac{1}{(2\pi)^{m/2}\sqrt{\det(\boldsymbol{\Sigma})}}\exp\left(-\frac{(\boldsymbol{x}-\boldsymbol{\mu})^{\mathsf{T}}\boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})}{2}\right)\]

This Gaussian distribution is called spherical, or isotropic, if \(\boldsymbol{\Sigma}\) is a multiple of the identity matrix \(\boldsymbol{I}_{m}\), and anisotropic otherwise. When \(m=1\), we obtain the univariate case:

\[\mathcal{N}(\mu,\sigma^{2})(x)=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)\]

which has expectation \(\mu\) and standard deviation \(\sigma\). When \(\sigma=0\), the distribution degenerates to a single-point distribution. All univariate Gaussians are isotropic. A property of the multivariate Gaussian distribution is that its inner product with a fixed vector is a univariate Gaussian random variable [8]. For a general multivariate Gaussian distribution, \(\boldsymbol{\Sigma}\in M_{m}(\mathbb{R})\) is not guaranteed to be strictly positive definite (i.e., we may have \(\text{rank}(\boldsymbol{\Sigma})<m\)).
Thus, the probability density function may fail to exist in the usual sense. However, the projection of \(\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\) along a given unit vector \(\boldsymbol{a}\) exists and is still a Gaussian distribution. If \(\boldsymbol{X}\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\) is a Gaussian random vector, the expectation and variance of \(\langle\boldsymbol{X},\boldsymbol{a}\rangle=\boldsymbol{X}^{\mathsf{T}}\boldsymbol{a}\) are respectively \(\boldsymbol{\mu}^{\mathsf{T}}\boldsymbol{a}\) and \(\boldsymbol{a}^{\mathsf{T}}\boldsymbol{\Sigma}\boldsymbol{a}\); in other words, \(\boldsymbol{X}^{\mathsf{T}}\boldsymbol{a}\sim\mathcal{N}(\boldsymbol{\mu}^{\mathsf{T}}\boldsymbol{a},\boldsymbol{a}^{\mathsf{T}}\boldsymbol{\Sigma}\boldsymbol{a})\).

### Gaussian Mixture Model

A Gaussian mixture model (GMM) in \(\mathbb{R}^{m}\) is defined as the tuple \(G=(\{p_{j}\}_{j},\{\boldsymbol{\mu}_{j}\}_{j},\{\boldsymbol{\Sigma}_{j}\}_{j})\), where \(j=1,2,\cdots,n\), \(p_{j}\geq 0\) and \(\sum_{j=1}^{n}p_{j}=1\), \(\boldsymbol{\mu}_{j}\in\mathbb{R}^{m}\), and the \(\boldsymbol{\Sigma}_{j}\in M_{m}(\mathbb{R})\) are positive-definite matrices. Under this notation, \(n\) is called the component number, and the parameters \(\{p_{j}\}_{j},\{\boldsymbol{\mu}_{j}\}_{j},\{\boldsymbol{\Sigma}_{j}\}_{j}\) are respectively called the mixing coefficients (fractionals), means, and covariances of the Gaussian components. The PDF of \(G\) is obtained by summing over all components:

\[\mathrm{PDF}(G)(\boldsymbol{x})=\sum_{j=1}^{n}\frac{p_{j}}{(2\pi)^{m/2}\sqrt{\det(\boldsymbol{\Sigma}_{j})}}\exp\left(-\frac{(\boldsymbol{x}-\boldsymbol{\mu}_{j})^{\mathsf{T}}\boldsymbol{\Sigma}_{j}^{-1}(\boldsymbol{x}-\boldsymbol{\mu}_{j})}{2}\right)\]

Here is another, more intuitive way of describing a Gaussian mixture model [1]. Let \(c\) be a categorical random variable of \(n\) categories, with probability \(p_{j}\) of being in the \(j\)-th category, i.e., \(\mathbb{P}[c=j]=p_{j}\). If \(\mathbf{X}\sim G\), then the conditional distribution of \(\mathbf{X}\) when \(c=j\), denoted by \(P(\mathbf{X}|c=j)\), is

\[P(\mathbf{X}|c=j)\sim\mathcal{N}(\boldsymbol{\mu}_{j},\boldsymbol{\Sigma}_{j})\]

The expectation of \(\mathbf{X}\) is easily computed as \(\mathbb{E}[\mathbf{X}]=\sum_{j=1}^{n}p_{j}\boldsymbol{\mu}_{j}\). The projection of \(\mathbf{X}\) along a unit vector \(\boldsymbol{a}\) is also a random variable that follows a Gaussian mixture distribution, namely \(\langle\mathbf{X},\boldsymbol{a}\rangle=\mathbf{X}^{\mathsf{T}}\boldsymbol{a}\sim\sum_{j=1}^{n}p_{j}\mathcal{N}(\boldsymbol{\mu}_{j}^{\mathsf{T}}\boldsymbol{a},\boldsymbol{a}^{\mathsf{T}}\boldsymbol{\Sigma}_{j}\boldsymbol{a})\), with expectation \(\mathbb{E}[\langle\mathbf{X},\boldsymbol{a}\rangle]=\sum_{j=1}^{n}p_{j}\boldsymbol{\mu}_{j}^{\mathsf{T}}\boldsymbol{a}\).

### The Expressiveness of GMMs

Although the Gaussian distribution is common in a variety of situations, there are some data distributions that differ significantly from the Gaussian distribution. Therefore, more expressive models are required to describe the real data distribution. In this part, the expressiveness of GMMs is characterized by the theorems below [31, 3].

**Theorem 1**.: _Gaussian mixtures are universal approximators: they can approximate any distribution in distribution._
_Namely, if \(A\) is the distribution of a random variable \(X\), then there exists a sequence of GMMs \(\{G_{q}\}_{q}\) \((q\in\mathbb{N})\) such that_

\[\{G_{q}\}\to A,\quad\text{in distribution.}\]

Proof.: The proof can be found on pages 6-7 of [3].

**Theorem 2**.: _Gaussian mixtures are uniquely identified by their distributions. If \(G=(\{p_{j}\}_{j},\{\boldsymbol{\mu}_{j}\}_{j},\{\boldsymbol{\Sigma}_{j}\}_{j})\) and \(G^{\prime}=(\{p^{\prime}_{k}\}_{k},\{\boldsymbol{\mu}^{\prime}_{k}\}_{k},\{\boldsymbol{\Sigma}^{\prime}_{k}\}_{k})\) are two GMMs with the same distribution, then their parameters are equal in the sense that they differ by a permutation. In other words, if \(G\) and \(G^{\prime}\) are two GMMs with different sets of parameters, then \(G\) and \(G^{\prime}\) are distinguishable by distribution._

Proof.: The proof can be found on pages 7-8 of [3], or in the Appendix of [18].

### Learning Gaussian Mixture Models

The commonly used methods for learning Gaussian mixtures can be roughly divided into two categories, namely iterative methods and gradient descent methods. Each method has its unique advantages and defects. Below is a list of some renowned methods for Gaussian mixture learning.

#### 2.4.1 The Expectation-Maximization and K-means Algorithms

The Expectation-Maximization (EM) algorithm and the K-means algorithm are iterative methods that iterate over the parameters of a GMM \(G\) to fit \(G\) to a distribution of data points, of which the EM algorithm is the most widely used. The classical EM algorithm contains 2 important steps, the Expectation (E) step and the Maximization (M) step. Each of these steps updates a part of the parameters. The two steps are performed alternately until convergence is reached [1]. The K-means algorithm is very similar to the Expectation-Maximization algorithm, except that it uses _hard assignments_, which means that every point is assigned to only one Gaussian component [7].

However, these iteration-based approaches also have their drawbacks. For example, the Expectation-Maximization algorithm is known to suffer from the local optima problem. Under certain initializations, the EM algorithm might perform badly, converging to a bad local optimum [9]. Also, if the parameters of the GMM \(G\) are not explicitly given, for example when the parameters are produced as the output of a neural network, these methods will not work directly.

#### 2.4.2 Gradient Descent Based Algorithms

There is a family of algorithms that fit GMMs by gradient descent. Generally speaking, the principal goal of gradient descent is to search for the optimal set of parameters \(\boldsymbol{\theta}\) such that a certain loss function \(L(\boldsymbol{\theta})\) attains its minimum. If \(L\) is sufficiently differentiable, this is usually done by gradient descent (and its variations) over \(L\). There are multiple gradient descent optimization algorithms, such as SGD, RMSProp or Adam, that achieve this goal in slightly different manners [30]. However, the most crucial part is the design of the loss function to be optimized. A good design of the loss function is key to successfully learning or fitting GMMs. One of the most commonly used loss functions, the _Negative Log Likelihood (NLL) Loss_, is defined as \(L=-\log(H)\), where \(H\) is the likelihood function. The term "negative log" comes directly from the formula. Since \(-\log(x)\) is monotonically decreasing for \(x>0\), minimizing the NLL loss is equivalent to maximizing the likelihood \(H\).
Given \(G=(\{p_{j}\}_{j},\{\boldsymbol{\mu}_{j}\}_{j},\{\boldsymbol{\Sigma}_{j}\}_{j})\ (j=1,2,\cdots,n)\) and \(X=\{\boldsymbol{x}_{i}\}_{i}\ (i=1,2,\cdots,k)\), the likelihood \(H\) is defined as follows:

\[H=\prod_{i=1}^{k}\left(\sum_{j=1}^{n}p_{j}\mathcal{N}(\boldsymbol{\mu}_{j},\boldsymbol{\Sigma}_{j})(\boldsymbol{x}_{i})\right)\]

Therefore, \(L\) is obtained by

\[L=-\log(H)=-\sum_{i=1}^{k}\log\left(\sum_{j=1}^{n}p_{j}\mathcal{N}(\boldsymbol{\mu}_{j},\boldsymbol{\Sigma}_{j})(\boldsymbol{x}_{i})\right)\]

There are also other gradient descent methods, such as the sliced Wasserstein distance [19] or the Wasserstein-Fischer-Rao gradient flow [18]. Generally, some drawbacks of gradient descent for learning Gaussian mixture models are:

* **Local optima:** The loss functions may have multiple local maxima or minima. Gradient descent may get stuck in a poor solution that is not the global minimum. To date, no loss function has theoretical guarantees of fitting GMMs to the global optimum. To deal with this drawback, one may need to try multiple different initial values for the parameters or use some global optimization methods.
* **Numerical instability:** Some loss functions suffer from heavy numerical instability. For example, the negative log likelihood loss for GMMs computes the exponential function in the Gaussian density, which might cause overflow or underflow errors when the initialization is far from the data points, or when the covariance matrices are ill-conditioned.
* **Slow convergence:** Generally speaking, gradient descent based methods are slower than iteration-based methods. To avoid missing the optimum, the learning rate should be set small enough; therefore many more iterations are required to reach the optimum. In addition, the gradient computation is another time-consuming step in gradient descent.

## 3 Cramer Type Distances

Below we introduce the theory of the Cramer type distances. Note: unless explicitly stated, we do not distinguish between a random variable and a probability distribution in the following, since these distances are defined solely over distributions, and each random variable has a distribution.

### The \(L^{p}_{\text{CDF}}\) Class and the \(l_{p}\)-Distance

Let \(p\in[1,\infty)\) be a positive number. The \(l_{p}\)-distance [20] between two probability distributions \(P,Q\) on \(\mathbb{R}\) is defined as

\[l_{p}(P,Q)=\left(\int_{-\infty}^{\infty}|\mathrm{CDF}(P)-\mathrm{CDF}(Q)|^{p}\mathrm{d}x\right)^{1/p}\]

where \(\mathrm{CDF}\) denotes the cumulative distribution function. Before we dive deeper into this section, we should check whether this distance is well-defined. The question is: on which space is the \(l_{p}\)-distance well-defined? We know that a CDF function \(F\) on \(\mathbb{R}\) is right-continuous and non-decreasing, with the limit conditions

\[\lim_{x\rightarrow-\infty}F(x)=0,\quad\lim_{x\rightarrow\infty}F(x)=1\]

We can write this as a set:

\[\mathbf{CDF}=\left\{F:\mathbb{R}\rightarrow[0,1]:F\text{ right-continuous and non-decreasing},\lim_{x\rightarrow-\infty}F(x)=0,\lim_{x\rightarrow\infty}F(x)=1\right\}\]

Since a CDF uniquely defines a distribution, we will not distinguish between a CDF and its corresponding distribution either, unless explicitly stated. Let

\[H(x)=\begin{cases}0,&x<0\\ 1,&x\geq 0\end{cases}\]

be the Heaviside function, which, according to the definitions above, is a CDF function. In fact, \(H\) is the CDF of the degenerate distribution at \(0\).
By now, we can define the function class \(L^{p}_{\text{CDF}}\):

\[L^{p}_{\text{CDF}}=\{F\in\mathbf{CDF}:|F-H|\in L^{p}(\mathbb{R})\}\]

Not all CDF functions belong to this class, though. Nonetheless, this is a sufficiently large class that contains the CDFs of most distributions, including the Bernoulli distribution, the uniform distribution, and the Gaussian distribution. We have the following lemma:

**Lemma 1**.: _The space \((L^{p}_{\text{CDF}},l_{p})\) is a complete metric space that is closed under weighted averages. In other words, it is a convex set._

For the proof, please see Appendix A. The \(l_{p}\)-distance, especially for \(p=2\), has many intriguing properties. When \(p=2\), the distance is called the Cramer 2-distance, denoted by \(C_{2}\). It has unbiased sample gradients and a contraction property [20]. In the following, we will mainly focus on the Cramer 2-distance of Gaussian distributions and Gaussian mixtures. From now on, we denote the cumulative distribution function of the standard normal distribution by

\[\Phi(x)=\int_{-\infty}^{x}\frac{\exp\left(-\frac{y^{2}}{2}\right)}{\sqrt{2\pi}}\mathrm{d}y\]

Then we define the cumulative distribution function \(\Phi_{\mu,\sigma^{2}}\) of the normal distribution \(\mathcal{N}_{\mu,\sigma^{2}}\): \(\Phi_{\mu,\sigma^{2}}(x):=\Phi((x-\mu)/\sigma)\), and \(\Phi^{c}_{\mu,\sigma^{2}}(x):=1-\Phi((x-\mu)/\sigma)\). By definition, \(\Phi_{0,1}(x)=\Phi(x)\). The following lemma might be useful:

**Lemma 2**.: _GMMs are dense in \(L^{2}_{\text{CDF}}\)._

See Appendix A for the proof.

### A Heuristic Computation

Suppose that we want to compute the Cramer \(2\)-distance between two Gaussian distributions, \(\mathcal{N}_{m,s^{2}}\) and \(\mathcal{N}_{0,1}\). We have

\[\begin{aligned}\int_{-\infty}^{\infty}|\Phi_{m,s^{2}}(x)-\Phi(x)|^{2}\mathrm{d}x&=\int_{-\infty}^{\infty}\Phi_{m,s^{2}}(x)(1-\Phi(x))\mathrm{d}x+\int_{-\infty}^{\infty}(1-\Phi_{m,s^{2}}(x))\Phi(x)\mathrm{d}x\\&\quad-\int_{-\infty}^{\infty}\Phi(x)(1-\Phi(x))\mathrm{d}x-\int_{-\infty}^{\infty}\Phi_{m,s^{2}}(x)(1-\Phi_{m,s^{2}}(x))\mathrm{d}x\end{aligned}\]

For simplicity, we only compute the term \(\int_{-\infty}^{\infty}(1-\Phi_{m,s^{2}}(x))\Phi(x)\mathrm{d}x\), which provides enough information to derive the other three terms by analogy. We take the derivative with respect to \(m\) twice:

\[\begin{aligned}\frac{\partial^{2}}{\partial m^{2}}\int_{-\infty}^{\infty}(1-\Phi_{m,s^{2}}(x))\Phi(x)\mathrm{d}x&=\int_{-\infty}^{\infty}\frac{\partial^{2}}{\partial m^{2}}\left(1-\Phi\left(\frac{x-m}{s}\right)\right)\Phi(x)\mathrm{d}x\\&=\int_{-\infty}^{\infty}\frac{1}{s}\frac{\partial}{\partial m}\Phi^{\prime}\left(\frac{x-m}{s}\right)\Phi(x)\mathrm{d}x\\&=\int_{-\infty}^{\infty}-\frac{1}{s^{2}}\Phi^{\prime\prime}\left(\frac{x-m}{s}\right)\Phi(x)\mathrm{d}x\\&=\int_{-\infty}^{\infty}\frac{1}{s}\Phi^{\prime}\left(\frac{x-m}{s}\right)\Phi^{\prime}(x)\mathrm{d}x\quad\text{(integration by parts)}\\&=\frac{1}{2\pi s}\int_{-\infty}^{\infty}\exp\left(-\frac{(s^{2}+1)\left(x-\frac{m}{s^{2}+1}\right)^{2}+\frac{m^{2}s^{2}}{s^{2}+1}}{2s^{2}}\right)\mathrm{d}x\\&=\frac{\sqrt{\frac{2\pi s^{2}}{s^{2}+1}}}{2\pi s}\exp\left(-\frac{m^{2}}{2(s^{2}+1)}\right)\\&=\frac{1}{\sqrt{2\pi(s^{2}+1)}}\exp\left(-\frac{m^{2}}{2(s^{2}+1)}\right)\end{aligned}\]

Integrating back over \(m\):

\[\frac{\partial}{\partial m}\int_{-\infty}^{\infty}(1-\Phi_{m,s^{2}}(x))\Phi(x)\mathrm{d}x=\Phi_{0,s^{2}+1}(m)+C\]

where \(C=0\) by taking the limit as \(m\to-\infty\).
Integrating again:

\[\int_{-\infty}^{\infty}(1-\Phi_{m,s^{2}}(x))\Phi(x)\mathrm{d}x=\Phi_{0,s^{2}+1}^{(-1)}(m)+C_{1}=\sqrt{s^{2}+1}\cdot\Phi^{(-1)}\left(\frac{m}{\sqrt{s^{2}+1}}\right)+C_{1}\]

where \(\Phi^{(-1)}\) denotes the antiderivative of \(\Phi\). It is easy to verify (although it may not be known to all) by integration by parts that

\[\Phi^{(-1)}(x)=x\Phi(x)+\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^{2}}{2}\right)+C_{0}\]

In our case, \(C_{0}=C_{1}=0\) by taking \(m\to-\infty\). In conclusion,

\[\int_{-\infty}^{\infty}(1-\Phi_{m,s^{2}}(x))\Phi(x)\mathrm{d}x=\sqrt{s^{2}+1}\cdot U\left(\frac{m}{\sqrt{s^{2}+1}}\right)\]

where

\[U(x)=x\Phi(x)+\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^{2}}{2}\right)=\mathrm{GELU}(x)+\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^{2}}{2}\right)\]

Here, \(\mathrm{GELU}(x)=x\Phi(x)\) denotes the Gaussian Error Linear Unit function [12]. Note that the function \(U(x)\) here is exactly the antiderivative of \(\Phi(x)\), i.e., \(U^{\prime}(x)=\Phi(x)\). Then, we can compute the integral \(\int_{-\infty}^{\infty}(1-\Phi_{m_{1},s_{1}^{2}}(x))\Phi_{m_{2},s_{2}^{2}}(x)\mathrm{d}x\) by a change of variables:

\[\begin{aligned}\int_{-\infty}^{\infty}(1-\Phi_{m_{1},s_{1}^{2}}(x))\Phi_{m_{2},s_{2}^{2}}(x)\mathrm{d}x&=\int_{-\infty}^{\infty}(1-\Phi_{m_{1}-m_{2},s_{1}^{2}}(y))\Phi_{0,s_{2}^{2}}(y)\mathrm{d}y\\&=s_{2}\int_{-\infty}^{\infty}(1-\Phi_{(m_{1}-m_{2})/s_{2},s_{1}^{2}/s_{2}^{2}}(y))\Phi_{0,1}(y)\mathrm{d}y\\&=s_{2}\sqrt{\frac{s_{1}^{2}}{s_{2}^{2}}+1}\cdot U\left(\frac{\frac{m_{1}-m_{2}}{s_{2}}}{\sqrt{\frac{s_{1}^{2}}{s_{2}^{2}}+1}}\right)\\&=\sqrt{s_{1}^{2}+s_{2}^{2}}\cdot U\left(\frac{m_{1}-m_{2}}{\sqrt{s_{1}^{2}+s_{2}^{2}}}\right)\end{aligned}\]

For \(s_{2}=0\), we can just take the limit

\[\lim_{s_{2}\to 0}\sqrt{s_{1}^{2}+s_{2}^{2}}\cdot U\left(\frac{m_{1}-m_{2}}{\sqrt{s_{1}^{2}+s_{2}^{2}}}\right)=s_{1}\cdot U\left(\frac{m_{1}-m_{2}}{s_{1}}\right)\]

### The Closed Formula for the Cramer 2-Distance of 1D GMMs

The main result of this article is the full parametric expression for the Cramer 2-distance of two univariate Gaussian mixtures. This function is of central importance in this study and is used multiple times in the subsequent analysis and experiments. Consider two univariate Gaussian mixture distributions \(G_{1}=(\{p_{j}\}_{j},\ \{\mu_{j}\}_{j},\ \{\sigma_{j}^{2}\}_{j})\)\((j=1,2,\cdots,n)\) and \(G_{2}=(\{p_{k}^{\prime}\}_{k},\ \{\mu_{k}^{\prime}\}_{k},\ \{\sigma_{k}^{\prime 2}\}_{k})\)\((k=1,2,\cdots,n^{\prime})\).
The Cramer 2-distance is defined as

\[C_{2}(G_{1},G_{2})=\left(\int_{-\infty}^{\infty}|\mathrm{CDF}(G_{1})(x)-\mathrm{CDF}(G_{2})(x)|^{2}\mathrm{d}x\right)^{1/2}\]

The CDFs (cumulative distribution functions) of \(G_{1}\) and \(G_{2}\) are, respectively,

\[\mathrm{CDF}(G_{1})(x)=\sum_{j=1}^{n}p_{j}\Phi_{\mu_{j},\sigma_{j}^{2}}(x),\qquad\mathrm{CDF}(G_{2})(x)=\sum_{k=1}^{n^{\prime}}p_{k}^{\prime}\Phi_{\mu_{k}^{\prime},\sigma_{k}^{\prime 2}}(x)\]

Now we can derive the formula:

\[\begin{aligned}C_{2}^{2}(G_{1},G_{2})&=\int_{-\infty}^{\infty}|\mathrm{CDF}(G_{1})(x)-\mathrm{CDF}(G_{2})(x)|^{2}\mathrm{d}x\\&=\int_{-\infty}^{\infty}(\mathrm{CDF}(G_{1})(x)-\mathrm{CDF}(G_{2})(x))\bigl((1-\mathrm{CDF}(G_{2})(x))-(1-\mathrm{CDF}(G_{1})(x))\bigr)\mathrm{d}x\\&=\int_{-\infty}^{\infty}\mathrm{CDF}(G_{1})(x)(1-\mathrm{CDF}(G_{2})(x))\mathrm{d}x+\int_{-\infty}^{\infty}\mathrm{CDF}(G_{2})(x)(1-\mathrm{CDF}(G_{1})(x))\mathrm{d}x\\&\quad-\int_{-\infty}^{\infty}\mathrm{CDF}(G_{1})(x)(1-\mathrm{CDF}(G_{1})(x))\mathrm{d}x-\int_{-\infty}^{\infty}\mathrm{CDF}(G_{2})(x)(1-\mathrm{CDF}(G_{2})(x))\mathrm{d}x\\&=\int_{-\infty}^{\infty}\sum_{j=1}^{n}\sum_{k=1}^{n^{\prime}}\left(p_{j}\Phi_{\mu_{j},\sigma_{j}^{2}}(x)\,p_{k}^{\prime}\Phi^{c}_{\mu_{k}^{\prime},\sigma_{k}^{\prime 2}}(x)\right)\mathrm{d}x+\cdots\quad\text{(by analogy)}\\&=\sum_{j=1}^{n}\sum_{k=1}^{n^{\prime}}\left(p_{j}p_{k}^{\prime}\sqrt{\sigma_{j}^{2}+\sigma_{k}^{\prime 2}}\cdot U\left(\frac{\mu_{j}-\mu_{k}^{\prime}}{\sqrt{\sigma_{j}^{2}+\sigma_{k}^{\prime 2}}}\right)\right)+\cdots\end{aligned}\]

We write the full formula below in case the analogy is unclear:

\[\begin{aligned}C_{2}^{2}(G_{1},G_{2})&=\sum_{j=1}^{n}\sum_{k=1}^{n^{\prime}}\left(p_{j}p_{k}^{\prime}\sqrt{\sigma_{j}^{2}+\sigma_{k}^{\prime 2}}\cdot U\left(\frac{\mu_{j}-\mu_{k}^{\prime}}{\sqrt{\sigma_{j}^{2}+\sigma_{k}^{\prime 2}}}\right)\right)+\sum_{j=1}^{n}\sum_{k=1}^{n^{\prime}}\left(p_{j}p_{k}^{\prime}\sqrt{\sigma_{j}^{2}+\sigma_{k}^{\prime 2}}\cdot U\left(\frac{\mu_{k}^{\prime}-\mu_{j}}{\sqrt{\sigma_{j}^{2}+\sigma_{k}^{\prime 2}}}\right)\right)\\&\quad-\sum_{j=1}^{n}\sum_{k=1}^{n}\left(p_{j}p_{k}\sqrt{\sigma_{j}^{2}+\sigma_{k}^{2}}\cdot U\left(\frac{\mu_{j}-\mu_{k}}{\sqrt{\sigma_{j}^{2}+\sigma_{k}^{2}}}\right)\right)-\sum_{j=1}^{n^{\prime}}\sum_{k=1}^{n^{\prime}}\left(p_{j}^{\prime}p_{k}^{\prime}\sqrt{\sigma_{j}^{\prime 2}+\sigma_{k}^{\prime 2}}\cdot U\left(\frac{\mu_{j}^{\prime}-\mu_{k}^{\prime}}{\sqrt{\sigma_{j}^{\prime 2}+\sigma_{k}^{\prime 2}}}\right)\right)\end{aligned} \tag{1}\]

In fact, we have a more symmetric form. If we denote \(V(x)=(U(x)+U(-x))/2\) for \(x\in\mathbb{R}\), we have

\[\begin{aligned}C_{2}^{2}(G_{1},G_{2})&=2\sum_{j=1}^{n}\sum_{k=1}^{n^{\prime}}\left(p_{j}p_{k}^{\prime}\sqrt{\sigma_{j}^{2}+\sigma_{k}^{\prime 2}}\cdot V\left(\frac{\mu_{j}-\mu_{k}^{\prime}}{\sqrt{\sigma_{j}^{2}+\sigma_{k}^{\prime 2}}}\right)\right)\\&\quad-\sum_{j=1}^{n}\sum_{k=1}^{n}\left(p_{j}p_{k}\sqrt{\sigma_{j}^{2}+\sigma_{k}^{2}}\cdot V\left(\frac{\mu_{j}-\mu_{k}}{\sqrt{\sigma_{j}^{2}+\sigma_{k}^{2}}}\right)\right)-\sum_{j=1}^{n^{\prime}}\sum_{k=1}^{n^{\prime}}\left(p_{j}^{\prime}p_{k}^{\prime}\sqrt{\sigma_{j}^{\prime 2}+\sigma_{k}^{\prime 2}}\cdot V\left(\frac{\mu_{j}^{\prime}-\mu_{k}^{\prime}}{\sqrt{\sigma_{j}^{\prime 2}+\sigma_{k}^{\prime 2}}}\right)\right)\end{aligned} \tag{2}\]

which saves about \(1/4\) of the computation. Although the functions \(U\) and \(V\) are not elementary functions (the Gaussian Error Linear Unit function itself is not elementary), they are provided by common machine learning libraries such as PyTorch [13].
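As an illustration, here is a minimal PyTorch sketch of Eq. (2) (distinct from the authors' Appendix B implementation). It uses the exact erf-based \(\Phi\) rather than the tanh approximation, and a small clamp guards the \(\sigma_{j}=\sigma_{k}^{\prime}=0\) corner, where the limit of \(s\,V(\Delta\mu/s)\) is taken:

```python
import math
import torch

def _Phi(x):
    # Standard normal CDF, computed exactly via erf.
    return 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def _V(x):
    # V(x) = (U(x) + U(-x)) / 2 simplifies to x*(Phi(x) - 1/2) + phi(x).
    return x * (_Phi(x) - 0.5) + torch.exp(-0.5 * x**2) / math.sqrt(2.0 * math.pi)

def _cross(p1, mu1, sig1, p2, mu2, sig2):
    # sum_{j,k} p_j p'_k * s_{jk} * V((mu_j - mu'_k) / s_{jk}),
    # with s_{jk} = sqrt(sig_j^2 + sig'_k^2).
    s = torch.sqrt(sig1[:, None]**2 + sig2[None, :]**2).clamp_min(1e-12)
    dmu = mu1[:, None] - mu2[None, :]
    return (p1[:, None] * p2[None, :] * s * _V(dmu / s)).sum()

def cramer2_sq(p1, mu1, sig1, p2, mu2, sig2):
    """Squared Cramer 2-distance between two 1D GMMs, following Eq. (2)."""
    return (2.0 * _cross(p1, mu1, sig1, p2, mu2, sig2)
            - _cross(p1, mu1, sig1, p1, mu1, sig1)
            - _cross(p2, mu2, sig2, p2, mu2, sig2))
```

Every operation is differentiable, so the loss can be backpropagated through the mixture parameters directly.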
So it is a good idea to implement such a function directly and perform gradient descent over it. The example implementation can be found in Appendix B. The following theorem ensures the gradient stability of the Cramer 2-distance:

**Theorem 3**.: _Suppose that \(G_{1}=(\{p_{j}\}_{j},\ \{\mu_{j}\}_{j},\ \{\sigma_{j}^{2}\}_{j})\ (j=1,2,\cdots,n)\) is the online distribution to be trained, and \(G_{2}=(\{p_{k}^{\prime}\}_{k},\ \{\mu_{k}^{\prime}\}_{k},\ \{\sigma_{k}^{\prime 2}\}_{k})\ (k=1,2,\cdots,n^{\prime})\) is the target distribution. The loss function is \(L=C_{2}^{2}(G_{1},G_{2})\). Then for any \(j=1,2,\cdots,n\), we have_

\[\left|\frac{\partial L}{\partial\mu_{j}}\right|\leq 4,\quad\left|\frac{\partial L}{\partial\sigma_{j}}\right|\leq 4.\]

_In other words, the loss \(L\) is globally Lipschitz in \(\{\mu_{j}\}\) and \(\{\sigma_{j}\}\)._

The proof can be found in Appendix A. Remark: the GELU function has a well-known approximate form [13],

\[\mathrm{GELU}(x)\approx\frac{x}{2}\left(1+\tanh\left(\sqrt{\frac{2}{\pi}}\left(x+0.044715x^{3}\right)\right)\right)\]

We do not use this form in any of our experiments, because we want an accurate computation of the loss values and gradients.

### Sliced Cramer 2-Distance for the Multivariate Case

This section is a natural generalization of the formula in the univariate case, similar to [19] and [16]. Let \(\mathbf{X}\) and \(\mathbf{Y}\) be random vectors in \(\mathbb{R}^{m}\). The Sliced Cramer 2-distance (also called the Cramér-Wold distance) for \(\mathbf{X}\) and \(\mathbf{Y}\) can be defined as follows:

\[S_{2}^{2}(\mathbf{X},\mathbf{Y}):=\int_{\boldsymbol{\nu}\in\mathbb{S}^{m-1}}C_{2}^{2}(\langle\mathbf{X},\boldsymbol{\nu}\rangle,\langle\mathbf{Y},\boldsymbol{\nu}\rangle)\mathrm{d}\boldsymbol{\nu}\]

where \(\langle\_,\boldsymbol{\nu}\rangle\) denotes the projection onto the direction of \(\boldsymbol{\nu}\). For simplicity of calculation, we uniformly and independently sample \(t\) unit vectors \(\{\boldsymbol{\nu}_{i}\}\ (i=1,2,\cdots,t)\) from the sphere \(\mathbb{S}^{m-1}\subset\mathbb{R}^{m}\). Then we approximate \(S_{2}^{2}\), up to a normalisation constant, by

\[S_{2}^{2}(\mathbf{X},\mathbf{Y})\approx\sum_{i=1}^{t}C_{2}^{2}(\langle\mathbf{X},\boldsymbol{\nu}_{i}\rangle,\langle\mathbf{Y},\boldsymbol{\nu}_{i}\rangle)\]

Note that if \(\mathbf{X}\sim G=(\{p_{j}\}_{j},\{\boldsymbol{\mu}_{j}\}_{j},\{\boldsymbol{\Sigma}_{j}\}_{j})\ (j=1,2,\cdots,n)\) is a multivariate GMM, then \(\langle\mathbf{X},\boldsymbol{\nu}\rangle\) yields a univariate GMM by projection onto the direction of the unit vector \(\boldsymbol{\nu}\):

\[\langle\mathbf{X},\boldsymbol{\nu}\rangle\sim G_{\boldsymbol{\nu}}=\left(\{p_{j}\}_{j},\{\boldsymbol{\mu}_{j}^{\mathsf{T}}\boldsymbol{\nu}\}_{j},\{\boldsymbol{\nu}^{\mathsf{T}}\boldsymbol{\Sigma}_{j}\boldsymbol{\nu}\}_{j}\right).\]

Here is a figure that demonstrates how this formula works.

Figure 1: Demonstration of the sliced Cramér 2-distance.

Again, we confirm that this is a well-defined distance.

**Theorem 4**.: _The function_

\[S_{2}(\mathbf{X},\mathbf{Y})=\sqrt{\int_{\boldsymbol{\nu}\in\mathbb{S}^{m-1}}C_{2}^{2}(\langle\mathbf{X},\boldsymbol{\nu}\rangle,\langle\mathbf{Y},\boldsymbol{\nu}\rangle)\mathrm{d}\boldsymbol{\nu}}\]

_defines a distance between two distributions._

Proof.: The proof of symmetry and the triangle inequality is direct.
To prove positivity, one needs to show that

\[\mathrm{CDF}(\langle\mathbf{X},\boldsymbol{\nu}\rangle)=\mathrm{CDF}(\langle\mathbf{Y},\boldsymbol{\nu}\rangle)\;(\forall\boldsymbol{\nu})\implies\mathbf{X}\sim\mathbf{Y}.\]

The left side implies

\[\langle\mathbf{X},\boldsymbol{\nu}\rangle\sim\langle\mathbf{Y},\boldsymbol{\nu}\rangle\;(\forall\boldsymbol{\nu}),\;(\text{as distributions})\]

which is the Cramér-Wold theorem [29, 6] and can be proved using the fact that the Radon transform admits an inverse.

We show that the Sliced Cramer 2-distance inherits some key properties from the univariate Cramer 2-distance. These results apply in a general sense, not just to GMMs.

**Theorem 5**.: _The Sliced Cramer 2-loss enjoys the following properties in general:_

* _Independent sum: For two random vectors_ \(\mathbf{X}\)_,_ \(\mathbf{Y}\)_, and a random vector_ \(\mathbf{A}\) _independent of both_ \(\mathbf{X}\) _and_ \(\mathbf{Y}\)_,_ \[S_{2}^{2}(\mathbf{A}+\mathbf{X},\mathbf{A}+\mathbf{Y})\leq S_{2}^{2}(\mathbf{X},\mathbf{Y})\]
* _Scaling property: For two random vectors_ \(\mathbf{X}\)_,_ \(\mathbf{Y}\)_, and_ \(c>0\)_,_ \[S_{2}^{2}(c\mathbf{X},c\mathbf{Y})=cS_{2}^{2}(\mathbf{X},\mathbf{Y})\]
* _Unbiased sampling gradients: Given_ \(\mathcal{X}=\mathbf{X}_{1},\cdots,\mathbf{X}_{r}\) _sampled from a distribution_ \(P\)_, the empirical distribution_ \(\hat{P}=\frac{1}{r}(\delta_{\mathbf{X}_{1}}+\cdots+\delta_{\mathbf{X}_{r}})\)_, and a distribution_ \(G_{\theta}\) _induced by parameter_ \(\theta\)_,_ \[\mathbb{E}_{\mathcal{X}\sim P}\left[\nabla_{\theta}S_{2}^{2}(G_{\theta},\hat{P})\right]=\nabla_{\theta}S_{2}^{2}(G_{\theta},P)\] _Moreover, if_ \(\boldsymbol{\nu}\) _is a random unit vector uniformly distributed on_ \(\mathbb{S}^{m-1}\)_, we have_ \[B_{m-1}\cdot\mathbb{E}_{\boldsymbol{\nu}}\mathbb{E}_{\mathcal{X}\sim P}\left[\nabla_{\theta}C_{2}^{2}(\langle G_{\theta},\boldsymbol{\nu}\rangle,\langle\hat{P},\boldsymbol{\nu}\rangle)\right]=\nabla_{\theta}S_{2}^{2}(G_{\theta},P)\] _where_ \(B_{m-1}=2\pi^{m/2}/\Gamma(m/2)\) _is the hypersurface area of_ \(\mathbb{S}^{m-1}\)_._

Just like in the univariate case, we have a gradient boundedness theorem for the Sliced Cramer 2-loss for multivariate GMMs as well:

**Theorem 6**.: _Suppose that \(G_{1}=(\{p_{j}\}_{j},\ \{\boldsymbol{\mu}_{j}\}_{j},\ \{\boldsymbol{\Sigma}_{j}\}_{j})\ (j=1,2,\cdots,n)\) is the online distribution to be trained, and \(G_{2}=(\{p^{\prime}_{k}\}_{k},\ \{\boldsymbol{\mu}^{\prime}_{k}\}_{k},\ \{\boldsymbol{\Sigma}^{\prime}_{k}\}_{k})\ (k=1,2,\cdots,n^{\prime})\) is the target distribution, with loss function \(L=S_{2}^{2}(G_{1},G_{2})\). Then for any \(j=1,2,\cdots,n\), we have_

\[\left|\nabla_{\boldsymbol{\mu}_{j}}L\right|\leq 4B_{m-1}\]

_and if we obtain \(\boldsymbol{\Sigma}_{j}\) as \(\mathbf{S}^{\mathsf{T}}_{j}\mathbf{S}_{j}\), where \(\mathbf{S}_{j}\) is a learnable matrix, then_

\[\left|\nabla_{\mathbf{S}_{j}}L\right|\leq 4B_{m-1}\]

_where \(B_{m-1}=2\pi^{m/2}/\Gamma(m/2)\)._

Although we have tried to derive a full parametric form for a distance of general multivariate GMMs, we did not succeed, owing to the intrinsic complexity of the formula. Yet, our approach still offers unbiased gradient guarantees, anisotropic Gaussian support, and a simpler implementation compared to [4].

## 4 Experiments and Results

In order to demonstrate the feasibility and effectiveness of learning GMMs by gradient descent over the (Sliced) Cramer 2-distance, we have conducted experiments for both the univariate and the multivariate case.
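The multivariate experiments rest on a direct implementation of the sliced loss. A minimal Monte-Carlo sketch is below; it assumes the `cramer2_sq` function from the sketch in Section 3.3 and estimates \(S_{2}^{2}\) up to the constant \(B_{m-1}\):

```python
import torch

def sliced_cramer2_sq(p1, mu1, cov1, p2, mu2, cov2, t=32):
    """Monte-Carlo estimate of S_2^2 (up to the constant B_{m-1}).

    mu*: (n, m) component means; cov*: (n, m, m) covariance matrices.
    Relies on cramer2_sq from the univariate sketch above.
    """
    m = mu1.shape[1]
    total = mu1.new_zeros(())
    for _ in range(t):
        nu = torch.randn(m, dtype=mu1.dtype)
        nu = nu / nu.norm()                       # uniform direction on S^{m-1}
        # Projected 1D mixture parameters: means mu_j^T nu, variances nu^T Sigma_j nu.
        s1 = torch.einsum('i,nij,j->n', nu, cov1, nu).clamp_min(0).sqrt()
        s2 = torch.einsum('i,nij,j->n', nu, cov2, nu).clamp_min(0).sqrt()
        total = total + cramer2_sq(p1, mu1 @ nu, s1, p2, mu2 @ nu, s2)
    return total / t
```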
### Distributional Q-Learning

Distributional Q-Learning [27, 23, 10] is a model-free reinforcement learning algorithm which learns the distribution of the returns given a state-action pair, rather than only the expectation of the outcome. If we denote by \((s,a)\) the state-action pair, by \(R(s,a)\) the reward over \((s,a)\), by \((S^{\prime},A^{\prime})\) the subsequent state-action pair, and by \(Z\) the distribution of returns, then the distributional Bellman operator can be written as

\[Z(s,a)\leftarrow R(s,a)+\gamma Z(S^{\prime},A^{\prime})\ \text{(as distributions)}\]

Distributional returns contain more information than scalar returns, including expectations, variances, higher moments and risks. This allows the agent to capture the risk preferences of the policy, and can thus improve the stability and performance of deep neural network agents. Here are some famous examples of distributional Q-learning:

* **C51** (Categorical 51) [23]: This method discretizes the return distribution into 51 equally spaced atoms (deltas) at fixed points on the interval \([-10,10]\), and learns a categorical distribution over them. It uses a projection operator to update the distribution parameters based on the Bellman equation, and greatly outperforms DQN on the Atari57 benchmark.
* **QR-DQN** (Quantile Regression Deep Q Network) [24]: This method discretizes the return distribution into \(N\) atoms with fixed probabilities but adjustable positions (called quantiles), and it improved further upon C51.
* **FQF** (Fully parameterized Quantile Function) [25]: This method discretizes the return distribution into \(N\) atoms with both adjustable probabilities (given by a fractional proposal network) and adjustable positions. The parameters are updated via the 1-Wasserstein distance. FQF improved even further upon QR-DQN.

All these methods use a mixture of delta (degenerate) distributions, whose CDFs are not continuous and show "zig-zags" in their plots. However, considering the expressiveness of GMMs, it is entirely possible to learn a mixture of Gaussians towards the distribution. Given the continuity and smoothness of the CDF, such a model could be capable of capturing fine-grained details of the distribution with fewer parameters.

It is worth noting that we are not the first to propose such an idea. In [21], a Gaussian mixture deep Q network is learned, but the loss function used is the _Jensen-Tsallis distance_, which is the \(L^{2}\) difference of two _probability density functions (PDFs)_, not _cumulative distribution functions (CDFs)_. We are also not the first to apply the Cramer distance to distributional reinforcement learning: the Cramer distance has been successfully tested on a Quantile Regression DQN, which improves over the original QR-DQN [15]. But now, thanks to the formula for the Cramer 2-distance between two GMMs derived earlier, it is feasible to combine the two techniques, yielding a promising architecture.

To test the effectiveness, we designed a distributional DQN as a simple 3-layer fully connected network. The input size is the observation space dimension, followed by 2 hidden layers of size 128, and the output has 3 parts: fractions \(\{p_{j}\}\), means \(\{\mu_{j}\}\) and standard deviations \(\{\sigma_{j}\}\). The total output dimension is 3 * Number_of_mixtures * Action_dimension. The network architecture is the same as in [21], but the loss function is our own. Without sufficient computational resources, we only tested on Gymnasium LunarLander-v2 [28]. This is because this environment possesses some intrinsic randomness, such as the shape of the terrain.
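A minimal PyTorch sketch consistent with the head just described (sizes taken from Table 1 below); the exact layer ordering in [21] may differ, so this is an illustration rather than a reproduction:

```python
import torch
import torch.nn as nn

class GMMDQN(nn.Module):
    """3-layer fully connected net outputting (p, mu, sigma) per action."""

    def __init__(self, obs_dim=8, n_actions=4, n_mix=3, hidden=128):
        super().__init__()
        self.n_actions, self.n_mix = n_actions, n_mix
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * n_mix * n_actions),  # 3 parts per mixture
        )

    def forward(self, obs):
        out = self.body(obs).view(-1, self.n_actions, 3, self.n_mix)
        p = torch.softmax(out[..., 0, :], dim=-1)  # mixing fractions
        mu = out[..., 1, :]                         # component means
        sigma = out[..., 2, :]                      # stds (penalised if < 0)
        return p, mu, sigma
```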
Some hyperparameters are listed in Table 1.

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Hidden layer count | 2 | Hidden layer size | 128 |
| Discount rate (\(\gamma\)) | 0.99 | Number of mixtures | 3 |
| Observation dimension | 8 | Action dimension | 4 |
| Batch size | 64 | Target update in frames | 200 |
| Main learning rate | 5e-5 | Fractional proposal part learning rate | 5e-9 |
| Optimizer | Lion [22] | Replay capacity | 1e+5 |

Table 1: Hyperparameters

We use the Double DQN [26], which consists of an online network for training and action selection, and a target network for the estimation of the Q-value distribution. The main motivation is that Double DQN is a practical solution to address overestimation of the mean \(\{\mu_{j}\}\) and standard deviation \(\{\sigma_{j}\}\) parts at little cost. Note that the network with parameters \(\theta\) returns a univariate Gaussian mixture distribution \(Z_{\theta}(S,A)\). Therefore, the loss function (Double DQN) can be written as:

\[L=\frac{1}{\text{batch\_size}}\sum_{(S,A,R,S^{\prime})\in\text{Batch}}C_{2}^{2}\left(Z_{\theta_{\text{online}}}(S,A),\ R+\gamma Z_{\theta_{\text{target}}}\left(S^{\prime},\arg\max_{a\in\mathcal{A}}\mathbb{E}[Z_{\theta_{\text{online}}}(S^{\prime},a)]\right)\right)\]

The algorithm is shown below. The rest of the training procedure is the same as for Double DQN.

```
Algorithm 1: Computation of Cramer 2-loss of GMM DQN (Double DQN version)
 1: procedure Cramer2Loss
 2:   Randomly sample a batch of (S, A, R, S') from the replay memory
 3:   L <- 0
 4:   for all (S, A, R, S') in batch do
 5:     // Input distribution
 6:     ({p_j}, {mu_j}, {sigma_j}) <- Z_theta_online(S, A)
 7:     // Selection of action
 8:     for all a in the action set do
 9:       ({p_aj}, {mu_aj}, {sigma_aj}) <- Z_theta_online(S', a)
10:       q_a <- sum_{j=1}^{n} p_aj * mu_aj
11:     a0 <- argmax_a q_a
12:     // Target distribution
13:     ({p'_j}, {mu'_j}, {sigma'_j}) <- Z_theta_target(S', a0)
14:     for j = 1 to n do
15:       mu'_j <- R + gamma * mu'_j
16:       sigma'_j <- gamma * sigma'_j
17:     // Compute loss according to the previous formula
18:     L <- L + C2^2(({p_j}, {mu_j}, {sigma_j}), ({p'_j}, {mu'_j}, {sigma'_j}))
19:   L <- L / batch_size
20:   return L
```

Another important factor to consider is the set of restrictions on the \(\{p_{j}\}\) and \(\{\sigma_{j}\}\) parts. We use a Softmax function to obtain the fractional part \(\{p_{j}\}\), and set a small learning rate (5e-9) for this part to prevent it from degenerating. For the standard deviation part \(\{\sigma_{j}\}\), we must prevent the values from becoming negative, which would lose their mathematical meaning and affect both performance and interpretability. In our experiments, this is done by adding a large penalty term over the negative parts of \(\{\sigma_{j}\}\):

\[L\leftarrow L+10\sum_{j=1}^{n}\mathrm{ReLU}(-\sigma_{j})\]

The coefficient \(10\) is enough, due to our previous Theorem 3. We achieved a score of \(279\pm 22\) in LunarLander-v2. The figures below illustrate the behavior of the agent and the corresponding distributions in a \(313\)-point perfect landing.
The result shows that the agent is able to learn complex distributions as well as to evaluate and distinguish between different actions.

### Multivariate GMM Learning

From our earlier discussion of the Sliced Cramer 2-distance, it is theoretically feasible to learn a general multivariate GMM towards another target GMM. Specifically, a set of \(n^{\prime}\) data points can be considered as a mixture of \(n^{\prime}\) degenerate Gaussians. The algorithm, especially the procedure for the loss computation, is shown in the following pseudo-code:

```
Algorithm 2: Computation of Sliced Cramer 2-loss of multivariate GMMs
 1: procedure SlicedCramer2Loss
 2:   Input GMM:  G  = ({p_j},  {mu_j},  {Sigma_j})   (j = 1, 2, ..., n)
 3:   Target GMM: G' = ({p'_k}, {mu'_k}, {Sigma'_k})  (k = 1, 2, ..., n')
 4:   // If we fit a GMM towards a set of n' points, then G' = ({1/n'}, {x_k}, {0})
 5:   Number of projections (slices): t
 6:   Uniformly sample nu_1, nu_2, ..., nu_t in S^{m-1}
 7:   L <- 0
 8:   for i = 1 to t do
 9:     // Projection onto nu_i
10:     G_{nu_i}  <- ({p_j},  {mu_j^T nu_i},  {nu_i^T Sigma_j  nu_i})
11:     G'_{nu_i} <- ({p'_k}, {mu'_k^T nu_i}, {nu_i^T Sigma'_k nu_i})
12:     L <- L + C2^2(G_{nu_i}, G'_{nu_i})
13:   return L
```

To demonstrate its feasibility, we fit a multivariate GMM to a fixed data distribution using the algorithm above. We tested it on a small dataset (the same dataset as in [19], available at the GitHub repository [11]) with 850 points (\(n^{\prime}=850\)) on a plane (dimension \(m=2\)), forming a rectangle, a circle, and a line attached to them. The GMM contains 10 mixtures (\(n=10\)). We ran this experiment across 3 different random seeds: 123, 456 and 789.

Figure 2: DQN experiment results.

For \(G=(\{p_{j}\}_{j},\{\boldsymbol{\mu}_{j}\}_{j},\{\boldsymbol{\Sigma}_{j}\}_{j})\), considering the restrictions on the parameters, we obtain them separately, with different learning rates, as follows (a code sketch of this setup follows the list):

* Fractional part \(\{p_{j}\}_{j}\): by applying a Softmax function to \(n\) parameters, we obtain an \(n\)-category distribution. The learning rate for this part is set to 5e-6; we keep it small in order to prevent the distribution from degenerating.
* Mean part \(\{\boldsymbol{\mu}_{j}\}\): this part is learned directly as \(n\) \(m\)-dimensional vectors. The learning rate for this part is set to 2e-2.
* Covariance part \(\{\boldsymbol{\Sigma}_{j}\}_{j}\): via \(\boldsymbol{\Sigma}_{j}=\mathbf{S}_{j}^{\mathsf{T}}\mathbf{S}_{j}\), where \(\mathbf{S}_{j}\in M_{m}(\mathbb{R})\) is a learnable matrix, in order to ensure positive (semi-)definiteness. The learning rate for this part is set to 3e-3.

Again, we use the Lion (Evolved Sign Momentum) optimizer [22] because it is easy to understand and implement.
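A minimal sketch of this parameterisation with per-part learning rates, using PyTorch parameter groups. Plain SGD stands in for Lion (which is not in `torch.optim`) to keep the sketch self-contained; the target-mixture names are placeholders:

```python
import torch

# Softmax fractions, free means, and Sigma_j = S_j^T S_j (PSD by construction).
n, m = 10, 2
logits = torch.zeros(n, requires_grad=True)           # -> p via softmax
mu = torch.randn(n, m, requires_grad=True)            # component means
S = torch.stack([torch.eye(m)] * n).requires_grad_()  # Cholesky-like factors

opt = torch.optim.SGD([
    {"params": [logits], "lr": 5e-6},   # small, to avoid degeneration
    {"params": [mu],     "lr": 2e-2},
    {"params": [S],      "lr": 3e-3},
])

def gmm_params():
    return torch.softmax(logits, dim=0), mu, S.transpose(1, 2) @ S

# One gradient step against a target GMM (p2, mu2, cov2), using the
# sliced_cramer2_sq sketch from Section 3.4:
# loss = sliced_cramer2_sq(*gmm_params(), p2, mu2, cov2, t=7)
# opt.zero_grad(); loss.backward(); opt.step()
```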
Due to the particular shape of \(\mathbb{S}^{1}\) (which is a circle), we are able to equidistantly sample \(\mathbf{\nu}_{1},\mathbf{\nu}_{2},\cdots\mathbf{\nu}_{t}\) to obtain a better estimation of the Sliced Cramer 2-distance. In this experiment, we set \(t=7\), so that \(\mathbf{\nu}_{1},\mathbf{\nu}_{2},\cdots\mathbf{\nu}_{7}\) form a heptagon. We also show that our algorithm surpasses the existing gradient descent algorithm, which descends over the Negative Log Likelihood loss. The meaning of each column is explained here: * Init: The initial GMM, without any learning. * SC2: By gradient descent over the Sliced Cramer 2-loss for 1200 steps. Learning rates are set to 5e-6, 2e-2, 3e-3 respectively for the \(\{p_{j}\}_{j},\{\mathbf{\mu}_{j}\}_{j},\{\mathbf{\Sigma}_{j}\}_{j}\) parts. * NLL: By gradient descent over the Negative Log Likelihood loss for 1200 steps. The learning rates are the same as for SC2. During this experiment, overflows and underflows were encountered, indicating that this method is numerically unstable. * SC2+NLL: By gradient descent over the Sliced Cramer 2-loss for 1200 steps, then gradient descent over the Negative Log Likelihood loss for another 200 steps. The learning rates do not change. Figure 3: Results. The blue points are the data. Each red ellipse denotes a Gaussian component, whose boundary is the contour of 2 standard deviations. As shown in the figure, pure gradient descent over the Negative Log Likelihood suffers from problems like local minima, degeneration, and instability. Gradient descent over our Sliced Cramer 2-loss is generally stable and consistent, yet there is room for improvement, since slight overestimations of the \(\{\mathbf{\Sigma}_{j}\}\) part are encountered. The best results overall are obtained by "fine-tuning" the results with the NLL loss after the SC2 step, where the overestimations are addressed. As can be seen from the figure, the Sliced Cramer 2-loss is much more stable than the Negative Log Likelihood loss. Therefore, we recommend only performing the SC2 step, since there is only a slight difference in the results, but the NLL loss is at high risk of instability. It is usually not worth the risk. ## 5 Conclusion We have proposed a closed formula for the Cramer 2-loss in the context of univariate GMM learning, as well as the Sliced Cramer 2-loss for multivariate GMM learning. Our new methods offer several advantages over previous approaches. Firstly, our methods, based solely on gradient descent, are particularly beneficial in cases where GMM learning is combined with neural networks. This compatibility allows for easy integration with deep learning libraries and facilitates applications such as training neural networks that output GMMs. Secondly, our approaches eliminate the need for sampling the target model. By using a loss function between two models, we can directly learn a GMM towards another model, making it possible to apply our methods to tasks like model compression. This expands the range of potential applications and simplifies the learning process. Additionally, our algorithms come with theoretical guarantees. The loss function is proved to be globally Lipschitz in the mean and standard deviation components, preventing gradient explosion, and the sampling gradients are unbiased. These theoretical foundations guarantee that our approach can perform well in various scenarios. While these are general advantages, there are also more specific advantages in the one-dimensional, univariate case.
For one thing, the closed-form solution computable by deep learning libraries allows for precise computation of the loss and facilitates the study of its properties. Moreover, our algorithm is directly applicable to Distributional Q-learning, providing both theoretical guarantees and practical convenience. It is parameter-efficient because only a few Gaussian mixtures are required to accurately approximate the continuous and smooth real distribution of \(Q\) values commonly encountered in practice. Furthermore, our approach enhances interpretability. It completely avoids issues like "zig-zags" (discontinuities) and "crossings" (violations of the monotonicity of the CDF) in the distribution function of QR-DQN and FQF. This enables straightforward computation of Quantiles, Expectiles [35], and Conditional-Value-at-Risks (CVaRs) [36]. Figure 4: Comparison of the two loss functions over steps. In summary, our proposed methods provide novel solutions for GMM learning and offer significant advantages, including compatibility with gradient descent, direct learning without sampling, theoretical guarantees, closed-form solutions in the one-dimensional case, applicability in Distributional Q-learning, parameter efficiency, and improved interpretability. ## 6 Future work In terms of future work, there are several areas that are worth exploring. Firstly, conducting more experiments would provide valuable insights. This work primarily focuses on the theoretical foundations and feasibility of our approaches, so only a few simple experiments have been done. It would be beneficial for researchers with access to ample computational resources to test our methods on a larger scale, such as the Atari57 benchmark. Another area of future research involves investigating the numerical stability of the loss function. Although our experiments are not heavily affected by numerical instability issues, it is possible that our algorithms may encounter them, such as _catastrophic cancellations_ [34]. This concern arises from subtracting nearly equal terms in our formula, resulting in a loss of precision. In our experiments in float64, two almost equal terms of about \(30\) are subtracted, yielding a loss of about \(0.003\), which loses approximately \(15\) bits of precision. Further study could be conducted to see whether and how this issue affects performance, and how it could be mitigated. Additionally, considering the frequent computation of the loss function, it is recommended to optimize the code. One potential optimization strategy is implementing the computation using CUDA or other techniques to make use of parallel processing capabilities and enhance efficiency. Should you consider integrating this algorithm into your own work, we have the following suggestions: 1. Experiment with different learning rates for different parameter sets. It is suggested to set a learning rate for the fractional part, \(\{p_{j}\}\), at most 1/1,000 of the learning rate for \(\{\boldsymbol{\mu}_{j}\}\). Differentiation in learning rates helps achieve a balanced optimization process and avoids degeneration of the distribution, since gradient stability is guaranteed for the \(\{\boldsymbol{\mu}_{j}\}\) and \(\{\boldsymbol{\Sigma}_{j}\}\) components but not for the \(\{p_{j}\}\) components. 2. Use higher-precision floating point numbers. We suggest at least float32 or even float64, to prevent potential problems of catastrophic cancellation.
It is also good practice to use higher-precision floating-point types to improve the accuracy and stability of computations (a short demonstration of the cancellation effect follows after this list). 3. When necessary, combine our methods with other techniques, such as the Expectation-Maximization (EM) algorithm, or gradient descent over the Negative Log Likelihood loss or Kullback-Leibler divergence, to further improve results. This combination might help resolve slight overestimations of the \(\{\boldsymbol{\Sigma}_{j}\}\) component. By incorporating these suggestions, you might enhance the effectiveness of this algorithm when applying it in your projects.
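To illustrate the catastrophic-cancellation concern raised in the future-work discussion, here is a small back-of-the-envelope demonstration; the numbers are illustrative stand-ins for the float64 terms mentioned above, and the bit count it prints is on the order of the figure cited there.

```python
import math

a = 30.0
b = 30.0 - 0.003     # two nearly equal terms, as in the float64 example above
diff = a - b         # ~0.003: exact here, but any rounding error of relative
                     # size 2**-52 in a or b is amplified by a factor a / diff
bits_cancelled = math.log2(a / diff)
print(f"diff = {diff:.6f}, roughly {bits_cancelled:.0f} bits of significance cancelled")
```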
2305.08107
Privacy-Preserving Taxi-Demand Prediction Using Federated Learning
Taxi-demand prediction is an important application of machine learning that enables taxi-providing facilities to optimize their operations and city planners to improve transportation infrastructure and services. However, the use of sensitive data in these systems raises concerns about privacy and security. In this paper, we propose the use of federated learning for taxi-demand prediction that allows multiple parties to train a machine learning model on their own data while keeping the data private and secure. This can enable organizations to build models on data they otherwise would not be able to access. Evaluation with real-world data collected from 16 taxi service providers in Japan over a period of six months showed that the proposed system can predict the demand level accurately within 1\% error compared to a single model trained with integrated data.
Yumeki Goto, Tomoya Matsumoto, Hamada Rizk, Naoto Yanai, Hirozumi Yamaguchi
2023-05-14T08:56:03Z
http://arxiv.org/abs/2305.08107v2
# Privacy-Preserving Taxi-Demand Prediction Using Federated Learning ###### Abstract Taxi-demand prediction is an important application of machine learning that enables taxi-providing facilities to optimize their operations and city planners to improve transportation infrastructure and services. However, the use of sensitive data in these systems raises concerns about privacy and security. In this paper, we propose the use of federated learning for taxi-demand prediction that allows multiple parties to train a machine learning model on their own data while keeping the data private and secure. This can enable organizations to build models on data they otherwise would not be able to access. Evaluation with real-world data collected from 16 taxi service providers in Japan over a period of six months showed that the proposed system can predict the demand level accurately within 1% error compared to a single model trained with integrated data. Taxi demand, federated learning, trajectory generation, transportation system ## I Introduction The utilization of spatio-temporal location data has immense potential to enhance the availability and quality of various services, especially through data-driven approaches, which can train intelligent models in different domains, such as transportation, urban planning, and emergency management. One such service, taxi transportation, is a critical component of modern urban transportation systems, providing convenient and efficient transportation to a wide range of passengers. However, there is often a mismatch between the supply of taxis and passenger demand, leading to decreased profits for taxi providers due to increased cruising times, fuel consumption, and longer wait times for customers. To address this issue, taxi-demand prediction systems have been proposed that utilize data-driven approaches to predict taxi demand and optimize dispatch processes [1, 2]. Machine or deep learning models are trained with real customer mobility data to forecast future taxi demand in a specific geographic area. This training data includes pickup and drop-off locations, routes taken, and timing information of customers. However, sharing such trajectory data raises significant privacy concerns as it could reveal intimate personal details, such as individuals' whereabouts, movement patterns, and even their religious, political, or sexual convictions, through the prediction of Points of Interest (POI) using mapping data and coordinates. Moreover, facilities may have different legal and regulatory requirements that they need to comply with. These requirements can vary between countries and regions and need to be considered when working with data from different facilities. Various privacy-preserving methods [3, 4, 5, 6, 7, 8, 9] have been proposed to address privacy concerns associated with personal data. These methods aim to protect the privacy of individuals by anonymizing the data before sharing it. Differential privacy is a method that introduces randomness into data, making it difficult for an attacker to determine the identity of individuals [6]. K-anonymity groups individuals into groups with similar characteristics, making it difficult to determine the identity of any individual [3]. L-diversity and t-closeness are other privacy-preserving methods that generalize data to prevent sensitive information disclosure [10, 11, 12]. Secure computation allows for the computation of a function on private data without revealing it [13, 14, 15].
While these methods can protect privacy, they can also result in a loss of data quality and quantity, negatively impacting the performance of the service (e.g., the prediction accuracy of taxi demand). Thus, it is important to weigh the trade-off between privacy and performance when choosing a privacy-preserving method. In this paper, we propose a novel taxi-demand prediction system that prioritizes customer privacy and builds the model without necessitating data sharing. This is achieved by employing federated learning, which allows multiple parties to train a machine learning model on their own data while keeping the data private and secure. In the context of taxi-demand prediction, this is useful because it allows multiple facilities (e.g., taxi service providers) to collaborate on building a demand prediction model without sharing their proprietary data with each other. This can lead to more accurate predictions, as the model is able to learn from a larger and more diverse dataset. However, the application of federated learning in this context faces a generalization problem, as the local models are trained with absolute latitude-longitude values associated with each facility's data. The use of absolute latitude-longitude values may exhibit _region-dependent_ characteristics that affect the generalization ability and convergence of the global prediction model. To address this challenge, the system incorporates a number of techniques to encode the absolute latitude-longitude values into a region-independent space, making the model more versatile and applicable to different geographical areas. The proposed system was subjected to a rigorous evaluation using real-world data gathered from 16 taxi service providers in Japan. The data was collected over a six-month period and employed to evaluate the system's effectiveness in maintaining prediction performance while preserving passenger privacy. The results obtained from the evaluation confirm that the proposed system, which utilizes federated learning and associated modules, achieves a comparable accuracy level with a negligible reduction of less than 1% in accuracy compared to non-federated learning approaches that require sharing of customer data among facilities. The rest of the paper is organized as follows: Section II reviews related work. Section III explains our federated learning system for taxi-demand prediction in detail. Section IV discusses evaluations of the system. Finally, conclusions are presented in Section V. ## II Related Work This section describes taxi-demand prediction and privacy-preserving machine learning, including federated learning on spatiotemporal data and several privacy-preserving notions, as related work. ### _Taxi-Demand Prediction_ The prediction of taxi demand has recently garnered considerable attention, owing to the abundance of large-scale spatiotemporal data that facilitates the training of deep neural networks, such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. Recent studies have leveraged both spatial and temporal characteristics to predict taxi demand with greater accuracy. For example, [16] employs a CNN to capture spatial features and an LSTM to capture temporal features, resulting in improved accuracy compared to methods that only consider semantic, spatial, or temporal information.
[17] recognizes the existence of spatiotemporal correlations between pick-up and drop-off locations and proposes a taxi-demand prediction model using multitask learning, which predicts both pick-up and drop-off locations as interrelated tasks. This approach leads to more accurate prediction results. Other studies have focused on accounting for the heterogeneity of taxi demand across regions. [18] clusters taxi-demand data and trains region-specific models to predict demand, taking into account the unique distribution and temporal variability of demand in each region. While these machine learning-based methods have shown promising results when applied to spatiotemporal data, they do not consider the privacy threats associated with sharing users' data, even when anonymized. The methods proposed in [19, 20] represent groundbreaking approaches to sharing synthetic versions of data by utilizing generative adversarial networks, thereby enabling secure data publication. _In contrast, our proposed system evaluates the accuracy of taxi-demand prediction while preserving privacy. The system uses federated learning to avoid sharing sensitive customer data._ ### _Privacy-Preserving Machine Learning_ The main motivation for federated learning on spatiotemporal data is privacy preservation on heterogeneous data [21], which may cause a model-drift problem for conventional training algorithms. Federated learning on spatiotemporal data is often discussed in concrete application environments, e.g., urban [22, 23], renewable energy [24], and robotics [25]. In this paper, we discuss taxi-demand prediction as an application environment different from the above existing works. The most popular approach for privacy-preserving machine learning is differential privacy [26], which provides theoretical security. Differential privacy is used for gradient computation [27], and it can theoretically prevent data recovery [28]. There are results on differential privacy in federated learning [29] and developments of libraries [30, 31, 32]. However, differential privacy deteriorates accuracy significantly. Another approach for providing privacy is to satisfy definitions such as k-anonymity [33, 34] and l-diversity [35]. K-anonymity requires each record to share the same values with at least k-1 other records in the dataset, while l-diversity requires each equivalence class to contain at least l distinct sensitive values. Similar to differential privacy, the accuracies of machine learning models based on these notions deteriorate [36, 37]. ## III The System Details This section describes the proposed system in detail. The virtual gridding module and its resultant taxi-demand prediction model are first described. Then, the federated learning is described. Fig. 1: An example of how the taxi demand is biased toward no or low level in an area of one facility. X-Y are the lat-long values and the boxes represent the number of taxi requests in this spot at a specific time. ### _The Virtual Gridding Module_ The Virtual Gridding module is a crucial component that operates during both the online and offline phases of the system. In the offline phase, the module processes historical trajectory data to construct a comprehensive demand profile for the city. This profile is then used to train the machine learning models that power the demand prediction functionality of the system. The module achieves this by transforming the raw trajectory data collected from taxi drivers into a more manageable and interpretable format.
To accomplish this, the module creates a virtual grid, dividing the city map into evenly spaced grid cells that correspond to specific locations. By tracking the number of pick-up and drop-off events within each cell during a specified time-slot, the module accurately calculates the total demand events for each area. This approach enables the system to provide a high-level overview of the taxi demand across various regions of the city. The resulting demand patterns can then be leveraged to train machine learning models for predicting the number of demand events accurately in different cells. Furthermore, the grid-based visualization of the demand patterns can be used to identify areas of high or low demand quickly. During the online phase, this module converts any latitude and longitude coordinate into the corresponding grid cell in real-time. This cell ID can be fed into the trained demand prediction model to make accurate real-time predictions, ensuring that the system has access to the most recent demand information. ### _Taxi-Demand Prediction Model_ This module is responsible for leveraging the input features (\(c\)) to train a deep prediction model and find its optimal parameters. The trained model is used during the online phase by the _Demand Predictor_ module to provide an estimate of the taxi demand. A deep fully-connected neural network is adopted here due to its representational ability, which allows the learning of complex patterns. #### Iii-B1 The Network Architecture Fig. 2 shows our deep network structure. We construct a deep fully connected neural network consisting of cascaded hidden layers of nonlinear processing neurons. Specifically, we use the hyperbolic tangent function (tanh) as the activation function for the hidden layers due to its non-linearity, differentiability (i.e., having stronger gradients and avoiding bias in the gradients), and consideration of negative and positive inputs [38]. The input layer of the network is the cell id and the timestamp. The output layer consists of a number of neurons corresponding to the number of taxi-demand levels in the data. This network is trained to operate as a multinomial (multi-class) classifier by leveraging a softmax activation function in the output layer. This leads to a probability distribution over the demand levels given a spatiotemporal input (cell location and time). More formally, for the input feature vector \(c_{i}=(c_{i1},c_{i2},..c_{ik})\) of length \(k\), the corresponding discrete outputs (i.e., logits) are \(a_{i}=(a_{i1},a_{i2},..,a_{in})\), which capture the score for each demand level among the \(n\) possible taxi-demand levels to be the estimated level. The softmax function converts the logit score \(a_{ij}\) (for sample \(i\) to be at demand level \(j\)) into a probability as: \[p(a_{ij})=\frac{e^{a_{ij}}}{\sum_{q=1}^{n}e^{a_{iq}}} \tag{1}\]
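A minimal PyTorch sketch of such a classifier is given below; the hidden-layer sizes and input encoding are illustrative assumptions, while the tanh activations, the softmax output over the demand levels, and the four classes used in the evaluation follow the description in this section.

```python
import torch
import torch.nn as nn

class DemandNet(nn.Module):
    """Sketch of the fully connected taxi-demand classifier described above.

    Input features: grid-cell id and timestamp encoding; output: logits over
    the n taxi-demand levels (n = 4 in the experiments below).
    """
    def __init__(self, in_dim: int = 2, hidden: int = 64, n_levels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),   # tanh hidden activations
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_levels),            # logits a_{i1}, ..., a_{in}
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The softmax of Eq. (1) is folded into nn.CrossEntropyLoss in training
        return self.net(x)

model = DemandNet()
loss_fn = nn.CrossEntropyLoss()          # cross-entropy of Eqs. (2)-(3)
optimizer = torch.optim.Adam(model.parameters())
```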
#### Iii-B2 Training During the training phase, the ground-truth probability label vector of demand \(P(a_{i})=[p(a_{i1}),p(a_{i2})...p(a_{in})]\) is formalized using one-hot-encoding. This encoding has a probability of one for the correct demand level and zeros for the others. The model is trained using Adaptive Moment Estimation (the Adam optimizer [39]) to minimize the average cross-entropy between the estimated output probability distribution \(P(a_{i})\) and the one-hot-encoded vector \(g_{i}\). The loss function is defined as follows: \[\mathcal{L}=\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}D(P(a_{i}),g_{i}) \tag{2}\] where \(P(a_{i})\) is obtained using the softmax function, \(g_{i}\) is the one-hot encoded vector of the \(i^{th}\) sample, \(N_{s}\) is the number of samples available for training, and \(D(P(a_{i}),g_{i})\) is the cross-entropy distance function defined as: \[D(P(a_{i}),g_{i})=-\sum_{j=1}^{n}g_{ij}log(P(a_{ij})) \tag{3}\] ### _Federated Learning_ #### Iii-C1 Our Approach Federated learning is a distributed machine learning technique that enables multiple clients to train a model collaboratively without sharing their private data with a central server. In this study, we use the Federated Averaging (FedAvg) algorithm for our federated learning of taxi-demand prediction. FedAvg, proposed by McMahan et al. [40], is a widely used framework for federated learning due to its simplicity and scalability. Fig. 2: Neural network structure for the taxi-demand prediction model. The FedAvg algorithm works as follows: At the beginning of each round, the central server selects a subset of clients to participate in the training process. The server sends the current global model to the selected clients, and each client trains the model using their local data. Specifically, each client updates the model by computing the gradients of their local loss function and performing a gradient descent step. This local update is given by: \[w_{t+1}^{k}\gets w_{t}-\eta g_{k} \tag{4}\] where \(w_{t}\) and \(w_{t+1}^{k}\) are the model parameters at round \(t\) and \(t+1\) respectively, \(k\) is the client ID, \(\eta\) is the learning rate, and \(g_{k}\) is the gradient of the local loss function with respect to the model parameters. After the local updates are completed, each client sends their updated model to the central server. The server then averages all the received models to obtain a new global model. The global update is given by: \[w_{t+1}\leftarrow\sum_{k=1}^{K}\frac{n_{k}}{n}w_{t+1}^{k} \tag{5}\] where \(n_{k}\) is the number of data samples held by client \(k\), \(n\) is the total number of data samples in the system, and \(K\) is the total number of clients participating in the training process. The FedAvg algorithm repeats the above process for a specified number of rounds until convergence. The global model of the server is the final output. Fig. 3 shows the overview of our federated learning approach for taxi-demand prediction. Each client represents a specific facility and has access to its own private data. We implemented our approach using PyTorch, a popular machine learning framework, and Flower [41], a federated learning framework compatible with PyTorch. Our approach involves the following steps, sketched in code below: 1. The central server sends the current global model to a subset of clients. 2. Each client trains the model using their local data, and updates the model using the FedAvg algorithm. 3. Each client sends their updated model back to the central server. 4. The central server averages all the received models to obtain a new global model. 5. The above steps are repeated until convergence.
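The following is a minimal sketch of one FedAvg communication round implementing Eqs. (4)-(5); the `clients` structure and `local_train` callables are hypothetical stand-ins for the Flower client logic, and floating-point model parameters are assumed (true for the network sketched earlier).

```python
import copy

def fedavg_round(global_model, clients):
    """One FedAvg round: local updates, then sample-size-weighted averaging.

    `clients` is a list of (n_k, local_train) pairs, where n_k is the number
    of samples held by client k and local_train performs the local gradient
    steps of Eq. (4) in place.
    """
    n = sum(n_k for n_k, _ in clients)
    avg_state = None
    for n_k, local_train in clients:
        local = copy.deepcopy(global_model)     # server sends w_t to client k
        local_train(local)                      # w_{t+1}^k <- w_t - eta * g_k
        state = local.state_dict()
        if avg_state is None:
            avg_state = {key: (n_k / n) * val for key, val in state.items()}
        else:
            for key in avg_state:
                avg_state[key] += (n_k / n) * state[key]
    global_model.load_state_dict(avg_state)     # Eq. (5): w_{t+1} = sum_k (n_k/n) w_{t+1}^k
    return global_model
```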
## IV Evaluation This section describes the experimental evaluations. First, data collection is described. Then, the evaluations of the taxi-demand prediction model described in Section III and of privacy are presented. ### _Data Collection and Setting_ #### Iv-A1 Data Collection We gathered real-world data from \(16\) service facilities in Japan over a period of six months. The collected data includes (1) vehicle information and their trajectories (including idle time), and (2) spatiotemporal data of each customer's pickup and drop-off event for each vehicle. The system determined the trajectory of each customer's trip by merging the two datasets using the vehicle ID and time as the key factors. This resulted in 15,178 trips, with taxi demands ranging from 0 to 20, calculated using a grid size of 1 km and a time slot of 1 hour. The trajectory data was obtained through GPS for latitude and longitude, with data acquisition intervals of approximately every 5 seconds, with some missing data. To determine the locations of pickup and drop-off events, we used data on vehicle positions during the 45 seconds before and after the event, if available. If the data was not present, the event was omitted from the evaluation data. The number of demands with determined locations and times was 10,327. #### Iv-A2 Experimental Setting We describe each setting below. Data Splitting: In the following experiments, we split the entire data into three subsets, i.e., 64% for training data, 16% for validation data, and 20% for test data. The training data is utilized for training the model, the validation data is for early stopping of the training, and the test data is for computing the evaluation metrics described later. In the case of federated learning, each facility has the training and validation data, and a central server has the test data. We then utilize the split dataset for two models, i.e., a single model and federated learning. Each model is trained in the same setting as in Section IV-B. Fig. 3: Overview of our federated learning of taxi-demand prediction. \begin{table} \begin{tabular}{c|c} \hline Criteria & Value (**bold** default) \\ \hline _Number of prediction classes_ & **4** \\ _Number of global epochs_ & **300** \\ _Patience of early stopping_ & 10, **30**, \(\infty\) \\ _Number of facilities_ & 4, 8, **16** \\ _Number of local epochs_ & **1** \\ \hline \end{tabular} \end{table} TABLE I: Hyperparameters of experiment settings. Metrics: We focus on two metrics, i.e., accuracy and balanced accuracy [42], for taxi-demand prediction evaluation. Since the gathered data described above are class-imbalanced, evaluating only the conventional accuracy of prediction results is insufficient. Therefore, we adopt balanced accuracy, which is the average of the accuracy over all the classes. We utilize the existing implementations of the scikit-learn library for the above metrics. Hyperparameters: Hyperparameters in the experiments are shown in Table I, where the four prediction classes are defined as _non_, _low_, _med_, and _high_. We also set the 'margin' described in Section III-C as 1.
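For reference, the two metrics can be computed directly with scikit-learn, as stated above; the labels below are toy values chosen only to show how class imbalance separates the two scores.

```python
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Toy, class-imbalanced labels over the demand levels {non, low, med, high} -> 0..3
y_true = [0, 0, 0, 0, 0, 1, 2, 3]
y_pred = [0, 0, 0, 0, 2, 1, 1, 3]

print(accuracy_score(y_true, y_pred))            # 0.75, inflated by the majority class
print(balanced_accuracy_score(y_true, y_pred))   # 0.70, the mean of per-class recalls
```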
### _Evaluation of Taxi-Demand Prediction_ Figure 4 and Figure 5 illustrate the comparison between the single model and the proposed federated learning approach; they show that the accuracy and balanced accuracy of federated learning are slightly lower than those of the single model, by 0.096 and 0.310, respectively. However, it is essential to highlight that federated learning enables privacy preservation by training the model on decentralized data without compromising the security of the data. This aspect is particularly important for commercial applications, e.g., taxi-demand prediction based on customers' data. Therefore, the slight tradeoff between accuracy and privacy in federated learning is a reasonable compromise, and it makes this approach a practical and promising solution for privacy-sensitive scenarios. Specifically, federated learning ensures compliance with privacy regulations such as the General Data Protection Regulation (GDPR) by keeping the data local and not transmitting it to a central server. Figure 6 shows the effect of the patience parameter that controls early stopping. This parameter represents the number of epochs required before terminating the training process when no performance improvement is obtained. According to the figure, the system appears to reach the optimal model with as few as 10 epochs of patience. ## V Conclusion In this paper, we presented a novel approach to privacy-preserving taxi-demand prediction using federated learning. Our proposed system leverages the FedAvg federated learning technique to train a taxi-demand prediction model without compromising the privacy and security of customer data owned by taxi-providing facilities. By enabling facilities to build models on data they would otherwise be unable to access, our approach offers significant benefits in terms of data availability. To evaluate the effectiveness of our proposed system, we conducted experiments using real-world data collected from 16 taxi service providers in Japan over a period of six months. The results demonstrated that the system accurately predicts demand levels with less than a 1% decrease in accuracy compared to classical solutions. Fig. 4: Results of taxi-demand prediction for single model and federated learning. Fig. 5: Effect of changing the number of facilities (nodes) in federated learning. Fig. 6: Results for different patience values in federated learning. ## Acknowledgment This work was supported by JST, CREST Grant JPMJCR21M5, Japan, and JSPS, KAKENHI Grant 22K12011, and NVIDIA award.
2304.10461
Reducing Aggregate Electric Vehicle Battery Capacity through Sharing
Meeting growing demand for automotive battery resources is predicted to be costly from both economic and environmental perspectives. To minimize these costs, battery resources should be deployed as efficiently as possible. A potential source of inefficiency in battery deployment is the fact that the batteries of personal vehicles are typically much larger than needed to meet most daily mobility needs. In this paper, we consider whether battery resources can be used more efficiently in a setting where drivers, in addition to having personal vehicle batteries, have access to a shared battery resource. More precisely, we consider the problem of minimizing aggregate battery capacity in settings with and without a shared resource, subject to the requirement that driver commuting needs are met with high reliability. To assess the potential for reductions in deployed battery capacity with the addition of a shared resource, we quantify the difference in deployed battery capacity with and without a shared resource in a case study using real-world longitudinal mobility data from Puget Sound, Washington. We find that giving drivers access to a shared battery resource can substantially reduce deployed battery capacity. Furthermore, relative reductions in battery capacity increase with the number of drivers and the level of reliability desired.
Polina Alexeenko, Vasileios Charisopoulos
2023-04-20T17:07:39Z
http://arxiv.org/abs/2304.10461v2
# Reducing Aggregate Electric Vehicle Battery Capacity through Sharing ###### Abstract Meeting growing demand for automotive battery resources is predicted to be costly from both economic and environmental perspectives. To minimize these costs, battery resources should be deployed as efficiently as possible. A potential source of inefficiency in battery deployment is the fact that the batteries of personal vehicles are typically much larger than needed to meet most daily mobility needs. In this paper, we consider whether battery resources can be used more efficiently in a setting where drivers, in addition to having personal vehicle batteries, have access to a shared battery resource. More precisely, we consider the problem of minimizing aggregate battery capacity in settings with and without a shared resource, subject to the requirement that driver commuting needs are met with high reliability. To assess the potential for reductions in deployed battery capacity with the addition of a shared resource, we quantify the difference in deployed battery capacity with and without a shared resource in a case study using real-world longitudinal mobility data from Puget Sound, Washington. We find that giving drivers access to a shared battery resource can substantially reduce deployed battery capacity. Furthermore, relative reductions in battery capacity increase with the number of drivers and the level of reliability desired. ## I Introduction Meeting the growing demand for electric vehicle (EV) batteries is predicted to be costly from both economic and environmental perspectives [1, 2]. In 2015, global automotive battery production was less than \(40\) GWh/year. By 2020, battery production had increased four-fold to over 150 GWh/year [3]. Increases in battery production are driven by multiple factors. First, the number of electric vehicles (EVs) on the road is growing due to a combination of government incentives, increasing environmental concerns, and decreasing vehicle costs. Second, vehicle battery sizes are themselves becoming larger: the average range of a new BEV increased by 43% between 2015 and 2020, from 124 miles to 218 miles [3]. Demand for large battery capacities stems, in part, from _range anxiety_, the fear that an EV's battery capacity will be insufficient to satisfy a typical driver's mobility needs. That is, because of the sparsity of EV charging stations and the long charging times associated with most stations, drivers tend to choose their battery sizes to minimize the likelihood of exhausting their battery mid-trip. As a result, typical commuting distances of EV owners tend to be small relative to battery size [4, 5]. For example, a study of the driving behavior of several hundred drivers in the United States over the course of a year found that 75% of drivers traveled _fewer than 100 miles daily_ during 96% of the year [6]. Instead of sizing batteries to meet drivers' infrequent long-distance travel needs, EV drivers can use _range extenders_. Range extenders (REs) are auxiliary devices which provide additional energy to the EV to supplement its battery and increase range [7]. REs can be used to reduce vehicle battery sizes while ensuring that both long and short distance commuting needs are met. It is important to note that although REs can offset personal vehicle battery sizes, their impact on the system as a whole (e.g., in terms of environmental or economic benefits) can be difficult to measure because of the diversity of RE technologies available.
For example, range extender powertrains vary widely, including internal combustion engines, hydrogen fuel cells, and gas turbines [7, 8, 9]. Of particular relevance to this paper are range extenders consisting of a trailer-mounted battery which can be plugged into the EV through its charging port [10]. Because these REs use the same powertrain as the vehicles themselves, their capacity can be compared directly with personal vehicle battery capacity. When personal vehicle batteries are sized to meet typical commuting needs and when driver commuting distances are not strongly correlated, the probability that many drivers will simultaneously require range extension is low. This suggests that a modestly sized RE resource could be shared across drivers without compromising reliability. Furthermore, providing drivers with a shared battery resource (in addition to personal batteries) could reduce the total (i.e., personal and shared RE) amount of battery capacity deployed relative to a setting where drivers only have personal batteries. The focus of this paper is to quantify the potential reduction in total battery capacity achievable through the introduction of a shared RE resource. In particular, we consider the problem of determining the amount of battery capacity required to meet driver commuting needs with high probability in settings with and without a shared resource and show that the presence of a shared resource can reduce total battery capacity without reducing reliability. The concept of using a shared resource to reduce system-wide risk is related to the concept of _diversification_ in financial risk management. Diversification refers to the phenomenon where financial assets with varying risk profiles are combined into a portfolio whose aggregate level of risk is lower than the sum of its component assets [11, 12]. In the context of power systems, risk aggregation (with the objective of improving system robustness) has received considerable attention, e.g., in the context of wind generation [13, 14], photovoltaics [15, 16], and mini-grids [17, 18]. Closest in spirit to our work is that of Abdolmaleki et al., which considers increasing vehicle range using a network of vehicles able to share power through wireless transfer [19]. Although the authors mention the potential of their proposed methodology to reduce battery capacity, the discussion is brief and focuses on quantifying reliability under a specific alternative battery capacity size. This differs from the more general framework presented in our paper, which aims to characterize the capacity-reliability tradeoff for a wider range of capacity sizes. ### _Contributions_ In this paper, we explore how the presence of a shared battery resource can reduce the total amount of battery capacity deployed across a system of EV drivers while guaranteeing that every driver's mobility needs are met with high reliability. We formulate the battery capacity planning problem as a chance-constrained optimization problem and derive a conservative approximation of this problem. Our approximation offers two key advantages over the original problem. First, the constraint function is convex in the decision variables and thus amenable to solution using scenario approximation. Second, while the original problem involves a number of constraints equal to the number of drivers in the system, our reformulation has only a single constraint. 
To demonstrate the practical utility of the proposed framework, we assess the potential for capacity reduction through sharing using real-world mobility data from Puget Sound, Washington. Our empirical results suggest that access to a shared resource can significantly reduce the required battery capacity, and that the potential for reduction increases with desired reliability levels and the number of drivers in the system. ### _Notation and organization_ We briefly describe notation used throughout the paper. Vectors are denoted using lowercase letters in boldface, such as \(\mathbf{x}\). We write \((x)_{+}:=\max\left\{x,0\right\}\) for the positive part of a scalar. We write \(\Pr(\mathcal{E})\) for the probability of an event \(\mathcal{E}\), and \(\mathbf{E}\left[X\right]\) for the expectation of random variable \(X\). Finally, given a positive integer \(N\), we write \([N]\) for the set \(\left\{1,2,\ldots,N\right\}\). The remainder of the paper is organized as follows. Section II introduces the system model and presents the optimization problems in the shared and non-shared settings. Section III presents the conservative reformulation of the battery capacity planning problem and discusses approximate solutions to the reformulated problem. Section IV presents results from an empirical study on real-world commuting data, and Section V concludes and discusses potential future directions. ## II Formulation ### _System model_ We consider a setting in which a central decision maker selects a set of electric vehicle battery capacities to serve the mobility needs of a group of EV drivers. The decision maker's objective is to minimize the total amount of EV battery capacity deployed (in order to minimize the economic or environmental costs of battery capacity production) while ensuring that each driver's daily energy requirements are satisfied with high probability. We index the set of drivers by \(i\in[N]:=\left\{1,\ldots,N\right\}\) and denote their personal battery capacity by \(x_{i}\). Each driver's daily energy requirements are distributed according to a probability distribution \(\mathbb{P}_{i}\), and they require that the amount of battery capacity available to them exceed their daily energy requirement with probability at least \(\alpha_{i}\in(0,1)\). In the setting without a shared resource, each driver's personal battery capacity must exceed their daily energy requirement with high probability, and the decision maker selects capacities according to the solution of the following optimization problem: \[\underset{\mathbf{x}}{\text{minimize}} \sum_{i=1}^{N}x_{i}\] (P-NS) subject to \[\Pr\left(\xi_{i}<x_{i}\right)\geq\alpha_{i}\quad\forall i\in[N].\] The optimal solution to Problem (P-NS) sets each driver's battery capacity according to the quantile of their daily energy distribution, i.e., \[x_{i}=F_{i}^{-1}(\alpha_{i}):=\inf\{x\in\mathbb{R}:F_{i}\left(x\right)\geq\alpha_{i}\}. \tag{1}\] where \(F_{i}\) denotes the cumulative distribution function associated with driver \(i\)'s energy requirement distribution \(\mathbb{P}_{i}\). In aggregate, the total amount of battery capacity required to meet everyone's driving needs is given by \[\mathsf{opt}_{\text{ns}}^{*}=\sum_{i=1}^{N}F_{i}^{-1}(\alpha_{i}). \tag{2}\]
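A minimal NumPy sketch of this non-shared sizing rule (Eqs. (1)-(2)) on synthetic data is shown below; the Gamma-distributed requirements are illustrative only, and `method="higher"` (available in NumPy 1.22 and later) matches the infimum definition in Eq. (1).

```python
import numpy as np

def nonshared_capacities(xi, alpha):
    """Empirical counterpart of Eqs. (1)-(2); xi has shape (num_days, N)."""
    # method="higher" picks the smallest observed value whose empirical CDF
    # is at least alpha, matching the infimum in Eq. (1)
    x = np.quantile(xi, alpha, axis=0, method="higher")   # x_i = F_i^{-1}(alpha_i)
    return x, x.sum()                                     # per-driver sizes, opt_ns

rng = np.random.default_rng(0)
xi = rng.gamma(shape=2.0, scale=5.0, size=(365, 25))      # synthetic daily needs, kWh
x, opt_ns = nonshared_capacities(xi, alpha=0.95)
```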
When the target reliability \(\alpha_{i}\) is high, driver batteries tend to be under-utilized in the sense that battery capacities are much larger than required to meet mobility needs on most days. Furthermore, when energy requirements are not strongly correlated across drivers, the probability that a large fraction of drivers have high energy requirements on the same day is small. Together, these observations suggest that under certain circumstances, diverting resources from personal batteries to a shared battery may allow for significant reductions in the total battery capacity needed to satisfy driver reliability constraints. ### _Battery sharing as a chance constrained problem_ As an alternative to the setting where drivers must rely exclusively on their personal batteries, we consider a setting in which drivers have access to a shared battery resource of capacity \(s\). We assume that this resource can be divided into units of arbitrary size and distributed to the individual drivers at no cost. In this setting, the decision maker must choose both the personal battery sizes \(x_{i}\) for \(i\in[N]\) and the size of the shared resource \(s\). In the presence of a shared resource, the probability with which a driver's needs are satisfied is a function of both their personal battery size and the portion of shared resource allocated to them. In this setting, the decision maker chooses shared and personal battery sizes as the solution to the following: \[\underset{s,\,\mathbf{x}}{\text{minimize}} s+\sum_{i=1}^{N}x_{i}\] (P-S) subject to \[\Pr\left(x_{i}+f_{i}(\mathbf{x},\boldsymbol{\xi},s)\geq\xi_{i}\right)\geq\alpha_{i}\quad\forall i\in[N],\] where \(f_{i}:\mathbb{R}^{N}\times\mathbb{R}^{N}\times\mathbb{R}\mapsto\mathbb{R}\) is an _allocation rule_ determining the quantity of shared resource allocated to driver \(i\) as a function of the personal battery capacities, shared resource capacity, and realized energy requirements of all drivers. At a slight abuse of terminology, we will refer to both \(f_{i}\) and the collection \(\{f_{i}\}_{i\in[N]}\) as the allocation rule. In what follows, we write \(\mathcal{E}:=\{\sum_{j=1}^{N}(\xi_{j}-x_{j})_{+}\leq s\}\) for the event that the shared capacity covers the aggregate shortfall, and \(\mathcal{E}_{i}:=\{x_{i}+f_{i}(\mathbf{x},\boldsymbol{\xi},s)\geq\xi_{i}\}\) for brevity. We have \[\Pr\left(\mathcal{E}_{i}\right) =\Pr\left(\mathcal{E}_{i}\cap\mathcal{E}\right)+\Pr\left(\mathcal{E}_{i}\cap\mathcal{E}^{c}\right)\] \[\geq\Pr\left(\mathcal{E}_{i}\cap\mathcal{E}\right)\] \[=\Pr\left(x_{i}+f_{i}(\mathbf{x},\boldsymbol{\xi},s)\geq\xi_{i}\mid\mathcal{E}\right)\cdot\Pr\left(\mathcal{E}\right) \tag{5}\] \[=\Pr\left(\mathcal{E}\right), \tag{6}\] using (3) in (4) to deduce that the conditional probability \(\Pr\left(\mathcal{E}_{i}\mid\mathcal{E}\right)=1\). Since \(\alpha\geq\alpha_{i}\) and the choice of index \(i\) was arbitrary, the desired claim follows. Although the approximation can be conservative relative to the original capacity planning problem, it provides a useful tool for assessing the potential benefits of resource sharing. That is, the aggregate battery capacity required to satisfy the constraints of Problem (P-SI) with a particular target reliability is often smaller than that required to achieve the same reliability level in the non-shared setting.
Moreover, because Problem (P-SI) is an inner approximation of Problem (P-S), analysis of Problem (P-SI) allows us to derive lower bounds on the reduction in deployed battery capacity achievable through resource sharing. For example, in Appendix B, we characterize the benefits of resource sharing in a setting where daily energy requirements are distributed according to independent Gaussians. In settings where direct analysis of the chance constraint is not possible, a solution to Problem (P-SI) can be provably approximated through scenario-based methods. Specifically, to produce an approximate solution to Problem (P-SI), we replace the chance constraint with a set of \(M_{\text{sc}}\) _sampled constraints_ to produce an approximated problem: \[\underset{\mathbf{x},s}{\text{minimize}} s+\sum_{i=1}^{N}x_{i}\] (7) subject to \[\sum_{i=1}^{N}\max(\xi_{i}^{(j)}-x_{i},0)\leq s,\;\;j\in[M_{\text{sc}}].\] where \(\mathbf{\xi}^{(j)}\sim\mathbb{P}\) for \(j=1,\ldots,M_{\text{sc}}\) are independent samples of \(\mathbf{\xi}\) drawn from the underlying distribution. A solution to Problem (7) is guaranteed to be feasible for Problem (P-SI) with high confidence given a sufficiently large sample size. Specifically, to produce a solution which is feasible with confidence \(1-\delta\), it is sufficient to choose a sample size of at least \(O\left(\frac{N}{1-\alpha}\ln\left(\frac{1}{\delta}\right)\right)\) [22].
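One possible way to solve the scenario problem (7), which the heuristic described next calls repeatedly, is as a linear-programming-representable convex program via CVXPY; the sketch below is illustrative and not the exact implementation used in our study.

```python
import cvxpy as cp
import numpy as np

def solve_scenario_problem(xi: np.ndarray):
    """Problem (7): xi is an (M_sc, N) array of sampled energy requirements."""
    M, N = xi.shape
    x = cp.Variable(N, nonneg=True)     # personal battery capacities
    s = cp.Variable(nonneg=True)        # shared battery capacity
    # One constraint per sample: total shortfall must fit in the shared battery
    constraints = [cp.sum(cp.pos(xi[j] - x)) <= s for j in range(M)]
    problem = cp.Problem(cp.Minimize(s + cp.sum(x)), constraints)
    problem.solve()
    return x.value, s.value
```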
### _A heuristic for reducing conservatism_ In practice, the solutions obtained through scenario approximations can be very conservative. Indeed, due to the high number of samples used in the approximation, Problem (7) will often produce battery configurations attaining a reliability level significantly greater than the target \(\alpha\). To reduce the conservatism of our approximations, we use a heuristic method for reducing the number of constraints involved in the solution of the problem. The conservatism reduction heuristic is implemented as follows. We start by solving Problem (7) using the sample size dictated by [22]. We evaluate the empirical reliability of the obtained candidate solution using an additional set of samples of size \(M_{\text{eval}}\), where the sample size requirement is chosen as described in Appendix Section C. If the empirical reliability level is close to the target, the algorithm terminates. If, however, the empirical reliability is larger than the target level, we reduce the number of samples used in the solution of Problem (7) and re-solve the problem to produce a less conservative solution. The resultant solution's empirical reliability is then evaluated, and the constraint number is either decreased or increased depending on whether the empirical reliability is greater or less than the target. The conservatism reduction heuristic thus performs a binary search over the number of samples used in the scenario approximation to produce a solution with reliability level close to the target \(\alpha\). The pseudocode for the conservatism reduction heuristic is given in Algorithm 1. ``` 1: Inputs: \(M_{\text{sc}}\) (number of scenario samples), \(M_{\text{eval}}\) (number of evaluation samples), \(T\) (number of trials) 2: for \(t=1,2,\ldots,T\) do 3: Draw \(\mathbf{\xi}^{(j)}_{\text{sc}}\sim\mathbb{P}\) for \(j=1,\ldots,M_{\text{sc}}\). 4: Draw \(\mathbf{\xi}^{(j)}_{\text{eval}}\sim\mathbb{P}\) for \(j=1,\ldots,M_{\text{eval}}\). 5: \(\mathbf{x}_{t},s_{t},\hat{\alpha}_{t}\leftarrow\texttt{BinarySearch}(\{\mathbf{\xi}^{(j)}_{\text{sc}}\}_{j=1}^{M_{\text{sc}}},\{\mathbf{\xi}^{(j)}_{\text{eval}}\}_{j=1}^{M_{\text{eval}}},\alpha)\). 6: end for 7: return \((\mathbf{x}_{t},s_{t})\), where \(t=\arg\min_{\tau\in[T]}\hat{\alpha}_{\tau}\). ``` **Algorithm 1** Conservatism reduction heuristic ## IV Empirical study ### _Data source and model_ To assess the potential of resource sharing to reduce deployed battery capacity in practice, we conduct an empirical study using real-world mobility data. The data was collected as part of a study by the Puget Sound Research Council on the driving behavior of approximately 400 vehicles located in the Seattle metropolitan area between November 2004 and April 2006, and is publicly available through the National Renewable Energy Laboratory's Transportation Secure Data Center [23]. During the study, GPS data loggers were installed in each vehicle and collected information on the timing and distance of every trip taken by the vehicle's driver. Figure 1 illustrates distributions over daily mileage for each driver and total daily mileage summed across all drivers. Notice that daily travel distances are short relative to typical EV battery ranges: 85% of daily trips are less than \(50\) miles long, and on 54% of days, the total daily mileage across customers is less than ten thousand miles. To simulate energy requirements from the daily mileage data, we assume that EVs have an energy efficiency of three miles per kWh [24]. Additionally, the distribution over each driver's daily energy consumption is modeled as a histogram with binwidth approximately two kWh. Driver energy requirements are then simulated by sampling from these distributions. ### _Study results_ In these empirical studies, we quantify the impacts of resource sharing by comparing the amounts of battery capacity required to achieve a given target reliability level in settings with and without a shared resource. Throughout the studies, we assume that all drivers have the same target reliability level, i.e., that \(\alpha_{i}=\alpha\) for all \(i\in[N]\). For a given battery _configuration_ (i.e., a choice of personal battery capacities \(\mathbf{x}\) and shared battery capacity \(s\)) we estimate the reliability level associated with the configuration through its empirical reliability in each setting. That is, in the non-shared setting, the battery capacity associated with a given reliability level \(\alpha\) is approximated by the sum of the drivers' empirical \(\alpha\)-quantiles. In the shared setting, we use the inner approximation Problem (P-SI) and Algorithm 1 to find a candidate battery configuration and evaluate the empirical reliability associated with the configuration under various allocation rules. Specifically, we consider a given configuration's reliability under the proportional, first-come first-served (FCFS), and utilitarian allocation rules described below.
Proportional allocation: Under the proportional allocation rule, every driver receives a fraction of the shared capacity that is proportional to their _shortfall_ (the positive part of the difference between their realized energy requirement and personal battery capacity): \[f_{i}^{\text{prop}}(\mathbf{x},\mathbf{\xi},s):=s\cdot\frac{(\xi_{i}-x_{i})_{+}}{\sum_{j=1}^{N}(\xi_{j}-x_{j})_{+}}.\] First-come first-served (FCFS): The FCFS rule assumes that drivers request a portion of the shared resource according to a certain order, given as a permutation \(\pi:[N]\rightarrow[N]\): \[f_{i}^{\text{FCFS}}(\mathbf{x},\mathbf{\xi},s):=\begin{cases}(\xi_{i}-x_{i})_{+},&\text{if }\sum_{j=1}^{\pi(i)}(\xi_{j}-x_{j})_{+}\leq s,\\ 0,&\text{otherwise}\end{cases}.\] For a fixed sample of realized driver energy requirements, we simulate the FCFS allocation rule by drawing a random permutation of \([N]\) and allocating available shared capacity to drivers in that order. Utilitarian: Under "utilitarian" rules, the objective of the decision maker distributing the shared resource is to maximize the number of drivers whose energy requirements are met through the shared resource. Under this allocation rule, the drivers are sorted in increasing order of shortfall, and resources are disbursed according to this ordering. That is, the utilitarian allocation rule disburses resources as follows: \[f_{i}^{\text{util}}(\mathbf{x},\mathbf{\xi},s) =\begin{cases}(\xi_{i}-x_{i})_{+},&\text{if }\sum_{j=1}^{\pi^{*}(i)}(\xi_{j}-x_{j})_{+}\leq s,\\ 0,&\text{otherwise}\end{cases},\] where \(\pi^{*}(i)\leq\pi^{*}(j)\Leftrightarrow(\xi_{i}-x_{i})_{+}\leq(\xi_{j}-x_{j})_{+}\). For a particular allocation rule and battery configuration, the associated empirical reliability is calculated as the minimum over customer-level empirical reliabilities (i.e., the largest value \(\hat{\alpha}\) such that all customers have empirical energy requirement satisfaction probability at least \(\hat{\alpha}\)). This calculation is described in detail in Appendix C.
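A NumPy sketch of the three allocation rules, following the formulas above, is given below; the cumulative-shortfall test implements the FCFS and utilitarian definitions, and the function names are illustrative.

```python
import numpy as np

def allocate(rule, x, xi, s, rng=None):
    """Disburse shared capacity s for one realization of requirements xi.

    x, xi: arrays of shape (N,); returns the per-driver allocation f_i.
    """
    short = np.maximum(xi - x, 0.0)                 # per-driver shortfalls
    if rule == "proportional":
        total = short.sum()
        return s * short / total if total > 0 else np.zeros_like(short)
    if rule == "fcfs":                              # random arrival order pi
        order = (rng or np.random.default_rng()).permutation(len(x))
    else:                                           # utilitarian: smallest first
        order = np.argsort(short)
    alloc, cum = np.zeros_like(short), 0.0
    for i in order:
        cum += short[i]
        if cum > s:                                 # cumulative shortfall exceeds s
            break
        alloc[i] = short[i]                         # grant the full shortfall
    return alloc
```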
#### Iv-B1 Effect of the target reliability level First, we demonstrate how battery capacity requirements scale with the target reliability level \(\alpha\) in the shared and non-shared settings. Figure 2 illustrates the empirical reliability level as a function of the average (per-driver) deployed battery capacity for systems with 25, 50, and 100 drivers and target reliabilities \(\alpha\in\{0.5,0.505,\ldots,0.995\}\). In each sub-figure, the purple line labeled "Non-shared" depicts the capacity-reliability frontier in the setting without resource sharing, and three scatter plots of differing colors show the reliability levels associated with various candidate battery configurations for each of the three allocation rules considered. Additionally, the 'efficient frontier' of each allocation rule (i.e., the largest reliability level associated with a particular capacity level) is depicted by solid lines. For target reliability levels higher than \(0.70\), the amount of battery capacity required to achieve a particular reliability is lower in the shared setting than in the non-shared setting for each system size and allocation rule considered. Furthermore, the benefits of sharing (as measured by a reduction in battery capacity requirements) increase as the target reliability level increases. For example, as depicted in Figure 2, for \(N=25\) the difference between the capacity required to achieve a reliability level of 0.75 with and without sharing ranges between about 5-10 kWh per driver (depending on the allocation rule used). By contrast, for a target reliability level of 0.85 the difference is larger, between about 10-20 kWh per driver. It is worth noting that, as evidenced by the scatter plots in Figure 2, the empirical level of reliability associated with each allocation rule can vary significantly. While the FCFS and utilitarian allocation rules are associated with similar reliability levels given a candidate configuration, the proportional allocation rule tends to be more conservative in the sense that the reliability level associated with a given configuration is lower than that of the other two rules. In fact, due to its conservatism, the proportional allocation rule is associated with larger capacity requirements than the non-shared setting for low reliability levels (e.g., below 0.70 in the \(N=25\) setting or below 0.67 in the \(N=100\) setting). #### Iv-B2 Effect of the number of drivers The benefits of resource sharing also increase with the number of drivers in the system. Notice from Figure 2 that as the number of customers increases from \(N=25\) to \(N=100\), the reliability associated with a given allocation rule and level of capacity increases. For example, the reliability associated with 35 kWh of battery capacity per driver under a proportional allocation rule is approximately 0.67 for \(N=25\), 0.89 for \(N=50\), and 0.91 for \(N=100\). To illustrate the benefits of resource sharing as a function of driver number in greater detail, Figure 3 shows the relative reduction in deployed battery capacity as a function of \(N\) for three different reliability levels \(\alpha\in\{0.75,0.85,0.95\}\). More precisely, for each of the three target reliabilities considered, we vary \(N\in\{5,25,\ldots,185\}\) and plot the relative reduction in battery capacity, \[1-\frac{\sum_{i=1}^{N}x_{i}^{\text{shared}}+s^{\text{shared}}}{\sum_{i=1}^{N}x_{i}^{\text{nonshared}}},\] where \(\boldsymbol{x}^{\text{shared}},s^{\text{shared}}\) are the shared battery configurations found by Algorithm 1 and \(\boldsymbol{x}^{\text{nonshared}}\) is the non-shared configuration determined by the empirical estimate of (2). For each \(N\) and target \(\alpha\), we then conduct 20 independent trials and plot the median, interquartile, and interdecile ranges of the relative reduction in total battery capacity. We find that the percentage reduction in capacity grows with the number of drivers \(N\). For example, for \(\alpha=0.85\), a system size of \(N=5\) is associated with a median capacity reduction of 7% while a system size of \(N=185\) is associated with a median reduction of over 20%. Moreover, for sufficiently large \(N\) and target reliability levels, the benefits of sharing can be large: for a target reliability of \(\alpha=0.95\), the availability of a shared resource enables at least a \(50\%\) reduction in deployed battery capacity relative to the non-shared setting.
## V Conclusion

We consider a setting in which electric vehicle drivers are given access to a shared battery resource which can be used to complement their personal vehicle batteries. From the perspective of a central decision maker, we formulate the problem of choosing the personal and shared battery capacities in order to minimize total deployed capacity while ensuring that driver mobility needs are met with high probability. The resultant capacity planning problem is a chance constrained optimization problem and can be challenging to solve directly, or even to approximate using sampling-based methods. We derive a tractable inner approximation to the original capacity planning problem which is amenable to approximation through scenario methods.

To assess the potential of resource sharing to reduce total deployed battery capacity (relative to a setting without sharing), we conduct an empirical study using longitudinal mobility data from drivers in Puget Sound, Washington. The empirical results demonstrate that resource sharing has the potential to greatly reduce the amount of battery capacity deployed, and that the benefits from sharing increase with the number of drivers in the system and the target reliability level desired. In particular, when driver target reliabilities are high (e.g., greater than 0.95), resource sharing can reduce the amount of deployed battery capacity by at least 40%, even for moderately sized systems of 25 drivers or more. Our results suggest resource sharing has significant potential to reduce deployed battery capacity, and merits further exploration.

There are several interesting directions for future work. First, the stylized model considered in this paper can be refined to reflect various practical considerations. For example, although we assume that the shared resource can be divided into units of arbitrary size, commercially available range extension resources are typically of fixed size, giving rise to a problem with integer constraints. Additionally, while we assume the perspective of a central decision maker who has control over the size of both the personal and shared battery capacity, it may be more reasonable to model each driver and the central planner as individual agents. Such an assumption would give rise to a game-theoretic model, in which one might attempt to characterize an allocation rule that induces socially optimal behavior from a mechanism design or cooperative game theory perspective.

Fig. 1: Distributions over daily driving distance in the Puget Sound mobility dataset. **Left**: Distribution over daily mileage for each driver and day when travel occurred. **Right**: Distribution over total daily mileage summed across drivers for each day of data collection.
2304.06203
LeafAI: query generator for clinical cohort discovery rivaling a human programmer
Objective: Identifying study-eligible patients within clinical databases is a critical step in clinical research. However, accurate query design typically requires extensive technical and biomedical expertise. We sought to create a system capable of generating data model-agnostic queries while also providing novel logical reasoning capabilities for complex clinical trial eligibility criteria. Materials and Methods: The task of query creation from eligibility criteria requires solving several text-processing problems, including named entity recognition and relation extraction, sequence-to-sequence transformation, normalization, and reasoning. We incorporated hybrid deep learning and rule-based modules for these, as well as a knowledge base of the Unified Medical Language System (UMLS) and linked ontologies. To enable data-model agnostic query creation, we introduce a novel method for tagging database schema elements using UMLS concepts. To evaluate our system, called LeafAI, we compared the capability of LeafAI to a human database programmer to identify patients who had been enrolled in 8 clinical trials conducted at our institution. We measured performance by the number of actual enrolled patients matched by generated queries. Results: LeafAI matched a mean 43% of enrolled patients with 27,225 eligible across 8 clinical trials, compared to 27% matched and 14,587 eligible in queries by a human database programmer. The human programmer spent 26 total hours crafting queries compared to several minutes by LeafAI. Conclusions: Our work contributes a state-of-the-art data model-agnostic query generation system capable of conditional reasoning using a knowledge base. We demonstrate that LeafAI can rival an experienced human programmer in finding patients eligible for clinical trials.
Nicholas J Dobbins, Bin Han, Weipeng Zhou, Kristine Lan, H. Nina Kim, Robert Harrington, Ozlem Uzuner, Meliha Yetisgen
2023-04-13T00:34:32Z
http://arxiv.org/abs/2304.06203v2
# LeafAI: query generator for clinical cohort discovery rivaling a human programmer

###### Abstract

**Objective:** Identifying study-eligible patients within clinical databases is a critical step in clinical research. However, accurate query design typically requires extensive technical and biomedical expertise. We sought to create a system capable of generating data model-agnostic queries while also providing novel logical reasoning capabilities for complex clinical trial eligibility criteria. **Materials and Methods:** The task of query creation from eligibility criteria requires solving several text-processing problems, including named entity recognition and relation extraction, sequence-to-sequence transformation, normalization, and reasoning. We incorporated hybrid deep learning and rule-based modules for these, as well as a knowledge base of the Unified Medical Language System (UMLS) and linked ontologies. To enable data-model agnostic query creation, we introduce a novel method for tagging database schema elements using UMLS concepts. To evaluate our system, called LeafAI, we compared the capability of LeafAI to a human database programmer to identify patients who had been enrolled in 8 clinical trials conducted at our institution. We measured performance by the number of actual enrolled patients matched by generated queries. **Results:** LeafAI matched a mean 43% of enrolled patients with 27,225 eligible across 8 clinical trials, compared to 27% matched and 14,587 eligible in queries by a human database programmer. The human programmer spent 26 total hours crafting queries compared to several minutes by LeafAI. **Conclusions:** Our work contributes a state-of-the-art data model-agnostic query generation system capable of conditional reasoning using a knowledge base. We demonstrate that LeafAI can rival a human programmer in finding patients eligible for clinical trials.

keywords: clinical trials, natural language processing, machine learning, electronic health records, cohort definition

## Introduction

Identifying groups of patients meeting a given set of eligibility criteria is a critical step for recruitment into randomized controlled trials (RCTs).
Often, clinical studies fall short of recruitment goals, leading to time and cost overruns or challenges in ensuring adequate statistical power [1; 2]. Failure to recruit research subjects may result from a variety of factors, but often stems from difficulties in translating complex eligibility criteria into effective queries that can sift through data in the electronic health record (EHR) [3]. Despite these difficulties, RCT investigators increasingly rely on EHR data queries to identify research candidates instead of labor-intensive manual chart or case report form review [4]. At the same time, the amount and variety of data contained in EHRs is increasing dramatically, creating both challenges and opportunities for patient recruitment [5]. While more granular and potentially useful data are captured and stored in EHRs now than in the past, the process of accessing and leveraging that data requires technical expertise and extensive knowledge of biomedical terminologies and data models.

Cohort discovery tools such as Leaf [6] and i2b2 [7] may be used in many situations, as they offer relatively simple drag-and-drop interfaces capable of querying EHR data to find patients meeting given criteria [8]. However, these tools have limitations, since their use often requires significant training and the tools have difficulty representing particularly complex nested or temporal eligibility criteria [9]. Moreover, existing cohort discovery tools lack functionality to dynamically reason upon non-specific criteria that frequently appear in real-world eligibility criteria. For example, a criterion may require patients "indicated for bariatric surgery", but translating such non-specific criteria into a query (e.g., patients with a diagnosis of morbid obesity or body mass index greater than 40) must be performed manually by a researcher, even in cases where constructing an exhaustive list of such criteria may be time-intensive, subjective, and error-prone.

In recent years, alternatives to web-based cohort discovery tools have been explored. In particular, various methods using Natural Language Processing (NLP) have been put forth by the research community [10; 11; 12; 13; 14; 15; 16; 17; 18]. NLP-based cohort discovery methods could be especially valuable since they can operate directly on eligibility criteria described in natural language, a medium that clinicians, researchers, and investigators already use.

## Background and Significance

Various methods for cohort discovery using NLP have been previously explored. Yuan _et al_ developed Criteria2Query [10], a hybrid information extraction (IE) pipeline and application which uses both rules and machine learning to generate database queries on an Observational Medical Outcomes Partnership (OMOP) [19] database, later expanded by Fang _et al_ [12]. Other research has used encoder-decoder neural architectures for transforming clinical natural language questions into SQL queries [20; 21; 22; 16; 23].
These studies include exploration of cross-domain transformations, where systems must generalize to unseen database schema [21], handling of typos and abbreviations [20], and the generation of intermediate representations between a natural language utterance and the final SQL database query [23]. Beyond database query generation, other cohort discovery methods explored include document ranking and classification [11; 14], where clinical notes are summarized, ranked, and classified as relevant to a given eligibility criterion, and embedding projections for entailment prediction [13; 16], where predicting that a patient can be inferred from a given eligibility criterion equates to eligibility. Other studies have explored the use of ontologies and OWL-based reasoning in determining eligibility [24; 25; 26; 27; 15].

**Gaps and opportunities**

To date, most programs capable of generating database queries do so for only a single database schema, such as OMOP or MIMIC-III [27]. This lack of flexibility limits their capability to accommodate real-world project needs [28; 29; 30; 31; 32; 33; 34], such as adding new database tables to OMOP for cancer staging [28]. Moreover, most methods, particularly those using direct text-to-SQL deep learning approaches, tend to generate relatively simple SQL statements with few JOINs or nested sub-queries, and typically no support for UNION operators and so on. This relative simplicity contrasts with the complexity of real-world EHR databases, which may contain dozens or even hundreds of tables using various vocabularies and mappings. Furthermore, direct text-to-SQL methods are bound to SQL syntax, and thus incapable of querying other systems such as Fast Healthcare Interoperability Resources (FHIR) [35]. Additionally, few of the methods described provide support for complex logic such as nested Boolean statements or temporal sequences, and none support reasoning on non-specific criteria (e.g., "diseases that affect respiratory function"), phenomena common to study eligibility criteria [3; 36]. Perhaps most importantly, to the best of our knowledge, only one previous work has been tested in terms of matching patients actually enrolled in clinical trials [13], and none have been directly compared to the capabilities of a human database programmer.

**Key Contributions**

We introduce the LeafAI query engine, an application capable of generating database queries for cohort discovery from free-text eligibility criteria. This work contributes the following:

1. A novel database annotation schema and mapping method to enable **data model-agnostic** query generation from natural language.
2. Methods for transforming and leveraging **intermediate logical representations** of eligibility criteria.
3. A **corpus of human-annotated logical forms of eligibility criteria** available to the research community1.
4. Methods for dynamically **reasoning upon non-specific criteria** using an integrated knowledge base (KB) of biomedical concepts.
5. An evaluation of system performance by **direct comparison to that of a human database programmer** on actual clinical trial enrollments.

Footnote 1: Will be made available upon article acceptance

## Materials and Methods

**System Architecture**

The LeafAI query engine was designed using a modular, microservice-based architecture with a central Application Program Interface (API) which orchestrates end-to-end query generation.
Inter-module communication is performed using gRPC [37], a robust open-source remote procedure call framework which enables language-agnostic service integration. This allows individual modules to be implemented (and substituted) in programming languages and using libraries well-suited to a given task. A diagram of the LeafAI query engine architecture is shown in Figure 1.

Figure 1: LeafAI query architecture. Inter-module communication is performed using the gRPC framework. Individual modules are deployed as Docker [38] containers and communicate solely with the central API, which orchestrates query generation and handles query generation requests.

At a high level, query generation is performed in the following steps:

1. A query request is received by the API in the form of inclusion and exclusion criteria as free-text strings.
2. The input texts are tokenized and named entity recognition is performed to determine spans of text representing conditions, procedures, and so on.
3. Relation extraction is performed to determine relations between the entities, such as _Caused-By_ or _Numeric-Filter_.
4. The input texts are transformed by replacing spans of "raw" text with logical form names. For example, "Diagnosed with diabetes" would become "Diagnosed with cond("diabetes")". The resulting input texts are in turn transformed into an output logical representation, in the form of a string, using a Sequence to Sequence (Seq2Seq) architecture.
5. A logical form interpreter module implemented as a recursive descent parser [39] reads the logical form string input and instantiates it as an abstract syntax tree (AST) of nested in-memory logical form objects.
6. "Named" logical form objects (i.e., those specified with quoted text, such as cond("diabetes")) are normalized into one or more corresponding UMLS concepts. UMLS child concepts are also added using our KB. For example, cond("type 2 diabetes") would also include concepts for type 2 diabetes with kidney complications (C2874072).
7. Working recursively inside-to-outside the AST structure, each logical form object calls a _Reason()_ method which executes various rules depending on context.
8. Each reasoning rule is performed as one or more pre-defined SPARQL queries to the KB, concept by concept.
9. The final normalized, reasoned, logical form AST is thus a nested structure of UMLS concepts. Each AST criterion is mapped to zero or more corresponding entries in the semantic metadata mapping (SMM), which in turn lists meanings, roles, and relations of a database schema in the form of UMLS concepts.
10. The final mapped AST object is transformed into a series of database queries, one per line of eligibility criteria text. The output SQL query can either be executed directly on a database or returned to the API caller.

Figure 2 illustrates an example of this process. In the following subsections we examine these steps in detail.
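Before examining each step, the ten-step flow above can be condensed into a single orchestration loop. The sketch below is our own schematic summary; every function name is an illustrative stub standing in for a call to the corresponding gRPC module, not an identifier from the LeafAI codebase.

```python
# Illustrative stubs: each would be a remote call to the corresponding gRPC module.
def ner(text):                  return [("cond", "diabetes")]                  # step 2
def relations(text, entities):  return []                                     # step 3
def to_logical_form(templated): return 'intersect(cond("diabetes"))'          # step 4 (Seq2Seq)
def parse_ast(lf_string):       return ("intersect", [("cond", "diabetes")])  # step 5
def normalize_and_reason(ast):  return ast                                    # steps 6-8 (UMLS + KB)
def map_with_smm(ast):          return "SELECT ... FROM condition_table ..."  # steps 9-10

def generate_queries(criteria_lines):
    """End-to-end orchestration: one database query per line of eligibility criteria."""
    queries = []
    for line in criteria_lines:              # step 1: free-text criteria arrive at the API
        entities = ner(line)
        relations(line, entities)
        ast = parse_ast(to_logical_form(line))
        queries.append(map_with_smm(normalize_and_reason(ast)))
    return queries

print(generate_queries(["Diagnosed with diabetes"]))
```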
#### Named entity recognition and relation extraction

Named entity recognition (NER) refers to the segmentation and identification of tokens within an input sentence as "entities", such as conditions or procedures. We used the Leaf Clinical Trials (LCT) corpus [40] to train two BERT-based [41] NER extractors, one each for LCT general- and fine-grained entities (see [40] for more information on LCT entity types). Next, we perform relation extraction between named entity pairs similarly using a BERT-based model also trained on the LCT corpus.

#### Logical form transformation

One of the core challenges of generating queries for eligibility criteria is the problem of logical representation. Generating queries directly based on named entities and relations alone, while practical, performs poorly in cases of nested or particularly complex logic. An alternative to this approach is to use a so-called intermediate representation (IR), which transforms the original natural language by removing "noise" unnecessary to a given task and which more logically represents the underlying semantics (see Herzig _et al_ [42] for an examination of IR-based SQL generation approaches). Similar to earlier work using Description Logics, Roberts and Demner-Fushman [43] proposed a representation of questions on EHR databases using a comparatively compact but flexible format of first-order logic expressions, for example, representing "Is she wheezing this morning?" as \[\delta(\lambda x.\,has\_problem(x,C0043144,status)\wedge time\_within(x,\text{"this morning"}))\] This style of representation is powerfully generalizable, but also difficult to translate directly into SQL, as multiple predicates (e.g., _has_problem_ and _time_within_) may actually correspond to one or many SQL statements, depending on context, complicating direct transformation into queries. We thus chose a similar intermediate representation (hereafter simply "logical forms") as proposed by Roberts and Demner-Fushman, but more closely resembling the nested functional structure of programming languages such as Python or JavaScript and more amenable to SQL generation. A criterion such as "Diabetic women and men over age 65" would be represented by our logical forms as \[intersect(cond("Diabetic"),\;union(female(),male()),\;age().num\_filter(eq(op(GT),val("65"))))\]

Figure 2: LeafAI query generation process.

A description of our logical forms annotation schema, corpus (called the Leaf Logical Forms (LLF) corpus), annotation process, and performance metrics can be found in Appendix A. After NER and relation extraction are performed, we leverage T5 [44], a state-of-the-art Seq2Seq architecture we fine-tuned for predicting logical forms on the LLF corpus. As inputs to the Seq2Seq model we use the original eligibility criteria with named entity spans replaced by logical form representations, since we found this improved performance compared to training with raw inputs. Thus the above example input would be transformed to _"cond("Diabetic") female() and male() over age() eq(op(GT), val("65"))"_. The returned logical form string is then instantiated into an abstract syntax tree (AST) of nested in-memory logical form objects using a recursive descent parser [39] within our API.
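To make the parsing step concrete, here is a minimal recursive descent parser for the subset of logical forms shown above (chained method calls such as age().num_filter(...) would require an additional postfix rule). The tokenizer, the dict-based AST, and all names are our own simplifications, not LeafAI's implementation.

```python
import re

TOKEN = re.compile(r'\s*([A-Za-z_][A-Za-z_0-9]*|\(|\)|,|"[^"]*")')

def tokenize(s):
    pos, toks = 0, []
    while pos < len(s):
        m = TOKEN.match(s, pos)
        if not m:
            raise ValueError(f"unexpected input at position {pos}: {s[pos:]!r}")
        toks.append(m.group(1))
        pos = m.end()
    return toks

def parse(toks, i=0):
    """Parse one logical form starting at toks[i]; return (AST node, next index)."""
    name = toks[i]
    assert toks[i + 1] == "(", "every logical form is a name(...) call"
    i += 2
    args = []
    while toks[i] != ")":
        if toks[i].startswith('"'):          # "named" argument, e.g. "Diabetic"
            args.append(toks[i].strip('"'))
            i += 1
        else:                                # nested logical form
            node, i = parse(toks, i)
            args.append(node)
        if toks[i] == ",":
            i += 1
    return {"form": name, "args": args}, i + 1

ast, _ = parse(tokenize('intersect(cond("Diabetic"), union(female(), male()))'))
print(ast)
```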
**Concept normalization**

Normalization refers to the mapping of free-text string values (e.g., "diabetes mellitus") to coded representations (e.g., UMLS, ICD-10, SNOMED, or LOINC). We normalize "named" logical forms to UMLS concepts using MetaMapLite [45; 46]. We consider a logical form "named" if it contains a free-text value surrounded by quotes. For example, _cond()_ is unnamed and refers to any condition or disease, while _cond("hypertension")_ is named, as it refers to a specific condition.

Normalization using MetaMapLite can often result in high recall but low precision, as MetaMapLite has no NER component and tends to return UMLS concepts which match a given phrase syntactically but refer to abstract concepts not of interest (e.g., a search for "BMI" may return "body mass index" (C1305855), but also the organic chemical "BMI 60" (C0910133)). To improve normalization precision, we employ two strategies. First, our NER component filters predicted UMLS concepts to only those of specific semantic types. For example, we limit condition concepts to only those related to semantic types of disease or syndrome (dsyn) and so on. Next, using term frequencies pre-computed across UMLS concept phrases, we compare term frequency-inverse document frequency (tf-idf) on MetaMapLite predictions, removing UMLS concepts whose summed matched spans have a tf-idf score lower than that of the unmatched spans in a given named entity. For example, for the string "covid-19 infection", MetaMapLite predicts both "COVID-19" (C5203670) as well as several concepts related to general infections. Our tf-idf strategy removes the general infection concepts because "infection" has a lower tf-idf score than the summed scores for "covid" + "-" + "19".

Laboratory values present a particular challenge, as LeafAI expects predicted lab concepts to have directly associated LOINC codes, while MetaMapLite typically normalizes lab test strings to UMLS concepts of semantic type "laboratory test or finding", which do not have direct mappings to LOINC codes. For example, a search for "platelet count" returns the concept "Platelet Count Measurement" (C0032181), but not the needed concept of "Platelet # Bld Auto" (C0362994) with LOINC code "777-3". Thus, similar to Lee and Uzuner's work with medications [47], we trained a BERT model for sequence classification to normalize lab tests. We trained this model to identify UMLS concepts associated with the LOINC codes most frequently used in eligibility criteria [48], with each CUI as a possible class.

**Reasoning using an integrated knowledge base**

For reasoning and derivation of ICD-10, LOINC, and other codes for UMLS concepts, we designed a KB accessible via SPARQL queries and stored as Resource Description Framework (RDF) [49] triples. The core of our KB is the UMLS, derived using a variation of techniques created for ontologies in BioPortal [50]. To further augment the UMLS, we mapped and integrated the Disease Ontology [51], Symptom Ontology [52], COVID-19 Ontology [53], Potential Drug-Drug Interactions [54], LOINC2HPO [55], and the Disease-Symptom Knowledge Base [56]. We then developed SPARQL queries parameterized by UMLS concepts for various scenarios which leveraged our KB, such as contraindications to treatments, symptoms of diseases, and so on. Using LOINC2HPO mappings further allows us to infer phenotypes from lab test results rather than using ICD-10 or SNOMED codes alone.

Our KB, nested logical forms, and inside-to-outside normalization methods enable "multi-hop" reasoning on eligibility criteria over several steps. For example, given the non-specific criterion "Contraindications to drugs for conditions which affect respiratory function", our system successfully reasons that (among other results),

1. **Asthma** causes changes to **respiratory function**,
2. **Methylprednisolone** can be used to treat **asthma**, and
3. **Mycosis** (fungal infection) is a contraindication to **methylprednisolone**.

These features allow LeafAI to reason upon fairly complex non-specific criteria.
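The multi-hop chain above amounts to following edges through the KB's triples. The toy sketch below reproduces that chain over an in-memory triple set; the predicate names and triples are illustrative stand-ins, not the RDF vocabulary of the actual KB (which is queried via SPARQL).

```python
# Illustrative triples; the real KB stores UMLS-derived RDF and is queried with SPARQL.
triples = {
    ("asthma", "affects", "respiratory function"),
    ("methylprednisolone", "may_treat", "asthma"),
    ("mycosis", "contraindication_to", "methylprednisolone"),
}

def subjects(pred, obj):
    """All subjects s such that (s, pred, obj) is in the KB."""
    return [s for s, p, o in triples if p == pred and o == obj]

# "Contraindications to drugs for conditions which affect respiratory function":
conditions = subjects("affects", "respiratory function")                    # hop 1
drugs = [d for c in conditions for d in subjects("may_treat", c)]           # hop 2
contras = [x for d in drugs for x in subjects("contraindication_to", d)]    # hop 3
print(contras)  # ['mycosis']
```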
**Query generation using semantic metadata mapping**

To enable data model-agnostic query generation, we leveraged a subset of codes within the UMLS in what we define as a semantic metadata mapping, or SMM. An SMM includes a listing of available databases, tables, columns, and so on within a given database schema. Critically, these database artifacts are "tagged" using UMLS concepts. An example of this can be seen in Figure 3, which shows strategies by which a given criterion can be used to generate schema-specific queries by leveraging different SMMs. In cases where the LeafAI query engine finds more than one means of querying a concept (e.g., two SQL tables for diagnosis codes), the queries are combined in a UNION statement.

Figure 3: The LeafAI query engine's SQL query generation process using two hypothetical database schema to generate queries for platelet counts (shown in logical form after normalization). This example illustrates the flexibility of LeafAI's semantic metadata mapping system (represented here in JSON format) in adapting to virtually any data model. On the left, "Tall Table Structure", platelet counts must be filtered from within a general-purpose "labs" table. The LeafAI KB recognizes that labs may be stored as LOINC codes, and the corresponding SMM indicates that records in this table can be filtered to LOINC values. On the right, "Pivoted Table Structure", platelet counts are stored as a specific column in a "complete_blood_counts" table, and thus can be directly queried without further filtering. Additional metadata, columns, tables, types, and so on needed in SMMs are omitted for brevity.
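To give a feel for the mapping, here is a hypothetical SMM fragment in the spirit of Figure 3, written as a Python dict for readability. All field names and tags are our own invention; the actual SMM format is only partially shown in the figure.

```python
# Hypothetical SMM fragment tagging two schema variants for platelet counts.
smm = {
    "tables": [
        {   # "tall" structure: one generic labs table, filtered by LOINC code
            "name": "labs",
            "concepts": ["<CUI for laboratory test>"],   # placeholder UMLS tag
            "code_column": {"name": "loinc_code", "codeset": "LOINC"},
            "value_column": "result_value",
        },
        {   # "pivoted" structure: one column per lab test, no further filtering needed
            "name": "complete_blood_counts",
            "columns": [
                {"name": "platelet_count",
                 "concepts": ["C0362994"]},              # Platelet # Bld Auto, LOINC 777-3
            ],
        },
    ],
}
```

If both entries matched a criterion, the engine would emit one sub-query per entry and combine them with a UNION, as described above.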
**Evaluation**

It is reasonable to expect that an NLP-based system for finding patients based on eligibility criteria would find many patients who actually enrolled in a real clinical trial -- assuming that patients enrolled in those trials met the necessary criteria as determined by study investigators. While there are caveats to this approach (for example, certain diagnosis codes may be missing for some patients, etc.), we suggest that tools such as LeafAI be evaluated by their ability to handle real-world eligibility criteria and clinical data. In this study we compared LeafAI's results to those of a human database programmer experienced in the use of clinical databases and data extraction. Our evaluation was performed as follows:

1. We extracted metadata on 168 clinical trials from our EHR between January 2017 and December 2021 where at least 10 patients were indicated as enrolled and not withdrawn, and the total number of raw lines within the eligibility criteria (besides the phrases "Inclusion Criteria" and "Exclusion Criteria") was less than or equal to 30.
2. By manual review, we excluded 22 trials with multiple sub-groups, as it would not be possible to know which eligibility criteria applied to which sub-group of enrolled patients.
3. To narrow the scope of our evaluation, we chose to evaluate only trials studying the following 7 disease groups: Cardiology, COVID-19, Crohn's Disease, Multiple Sclerosis (MS), Diabetes Mellitus, Hepatitis C, and Cancer. Using the "condition" field for each trial within metadata from [https://clinicaltrials.gov](https://clinicaltrials.gov), we filtered and grouped the remaining 146 trials into only those studying our diseases of interest.
4. We randomly chose 1 trial from each group, with the exception of Cancer, where given the large number of trials and variety of cancer types, we chose 2 trials. In total, 427 patients were enrolled across the chosen 8 clinical trials.
5. Both the LeafAI query engine and the human programmer created queries to find patients for each trial's eligibility criteria, which we executed on an OMOP database derived from the EHR of our institution's entire research-eligible patient population.
6. To ensure results returned would be limited to only data available during the time of each trial, we replaced references to the SQL function for generating a current timestamp (_GETDATE()_) with each trial's end date, and similarly replaced OMOP table references with SQL views filtering data to only that existing prior to the end of a trial.
7. To ensure queries would be comparable to LeafAI's, the human programmer was instructed to (1) ignore criteria which cannot be computed, (2) make a best effort to reason upon non-specific criteria (e.g., symptoms for a condition), (3) not check whether patients found by a human query enrolled within a trial, and (4) skip criteria which cause an overall query to find no eligible patients.

## Results

Results of the query generation experiment are shown in Table 1. Overall, LeafAI matched 212 of 427 (49%) total enrolled patients across the 8 clinical trials, compared to 180 (42%) found by queries of the human programmer. The mean per-trial percentage of patients matched was 43.5% for LeafAI and 27.2% for the human programmer. LeafAI had a greater number of patients deemed eligible across all 8 trials, for a total of 27,225 eligible compared to 14,587 found by the human programmer.

Table 2 shows the number of criteria which were skipped by LeafAI. Of the 103 total criteria across all 8 studies, LeafAI executed queries for 61 (59.3%), skipping 5 (4.8%) because they found no patients and 42 (40.7%) because no computable concepts were found. Figure 4 shows differences in query strategies between LeafAI and the human programmer for 4 trials.

| Condition | ID | # Crit. | Enrolled | LeafAI Matched | LeafAI Eligible | Human Matched | Human Eligible | Time (hrs) |
|---|---|---|---|---|---|---|---|---|
| CL Lymphoma | NCT04852822 | 4 | 83 | 80 (96%) | 3,252 | 77 (92%) | 2,382 | 1 |
| Hepatitis C | NCT02786537 | 8 | 42 | 33 (78%) | 9,529 | 32 (76%) | 9,372 | 4 |
| Crohn's Disease | NCT03782376 | 9 | 16 | 0 (0%) | 113 | 1 (6%) | 9 | 2 |
| Cardiac Arrest | NCT04217551 | 12 | 27 | 12 (44%) | 4,792 | 0 (0%) | 598 | 5 |
| COVID-19 | NCT04501952 | 13 | 41 | 0 (0%) | 0 | 0 (0%) | 98 | 2 |
| Multiple Sclerosis | NCT03621761 | 14 | 196 | 77 (39%) | 4,891 | 69 (35%) | 1,016 | 3 |
| Type 1 Diabetes | NCT03335371 | 18 | 11 | 0 (0%) | 1,006 | 1 (9%) | 1,104 | 4 |
| Ovarian Cancer | NCT03029611 | 25 | 11 | 10 (91%) | 1,667 | 0 (0%) | 8 | 5 |
| **Mean** | | | | 43.5% | | 27.2% | | |
| **Total** | | 103 | 427 | 212 (49%) | 27,225 | 180 (42%) | 14,587 | 26 |

Table 1: Statistics for each clinical trial evaluated by the LeafAI query engine and the human programmer. The numbers of enrolled and matched patients were determined by cross-matching enrollments listed within our EHR. The _Time_ column indicates the number of hours the human programmer spent developing queries for each trial.
| Condition | # Criteria | # No Patients | # Not Computable | # Fully Executed |
|---|---|---|---|---|
| CL Lymphoma | 4 | 0 (0%) | 0 (0%) | 4 (100%) |
| Hepatitis C | 8 | 0 (0%) | 4 (50%) | 4 (50%) |
| Crohn's Disease | 9 | 0 (0%) | 4 (44.4%) | 5 (55.5%) |
| Cardiac Arrest | 12 | 0 (0%) | 8 (66.6%) | 4 (33.3%) |
| COVID-19 | 13 | 0 (0%) | 6 (46.1%) | 7 (53.8%) |
| Multiple Sclerosis | 14 | 1 (7.1%) | 3 (21.4%) | 10 (71.4%) |
| Type 1 Diabetes | 18 | 2 (11.1%) | 8 (44.4%) | 8 (44.4%) |
| Ovarian Cancer | 25 | 2 (8%) | 9 (36%) | 14 (56%) |
| **Total** | 103 | 5 (4.8%) | 42 (40.7%) | 61 (59.3%) |

Table 2: The LeafAI query engine's handling of eligibility criteria for each trial. The column _No Patients_ indicates the count of criteria which would, if executed, cause no patients to be eligible. The column _Not Computable_ indicates the count of criteria which LeafAI could not generate a query for, for various reasons. Both of these types of criteria were ignored by the system.

Figure 4: Longitudinal results listing patients found at each step in the query process for four trials, illustrating data issues and differing query strategies between LeafAI and the human programmer. The blue line indicates recall for LeafAI and orange that of the human programmer. The X axis represents the line number within the free-text eligibility criteria. Dots indicate that a query was executed for a given line. On the right, boxes represent the text of a given eligibility criterion, with comments below discussing strategies and findings.

## Discussion

Our results demonstrate that LeafAI is capable of rivaling the ability of a human programmer in identifying patients who are potentially eligible for clinical trials. Indeed, in numerous cases we found LeafAI and the human programmer executing similar queries, such as for Hepatitis C (NCT02786537), Chronic Lymphocytic Leukemia (NCT04852822), MS (NCT03621761), and Diabetes Mellitus (NCT03335371), where both ultimately matched a similar number of patients.

One notable pattern we found is that LeafAI consistently finds a higher number of potentially eligible patients. We hypothesize that in many cases, LeafAI's KB played a key role in finding additional eligible patients. For example, in the MS trial, LeafAI searched for 11 different SNOMED codes related to MS (including MS of the spinal cord, MS of the brain stem, acute relapsing MS, etc.), while the human programmer searched for only one, and ultimately LeafAI found nearly 5 times the number of potentially eligible patients (4,891 versus 1,016). It is possible that the human programmer had a lower rate of false positives (and thus higher precision); this will be explored in a future analysis. On the other hand, in the same trial, as can be seen in Figure 4 (A), given the exclusion criterion "Current shift work sleep disorder, or narcolepsy diagnosed with polysomnography and multiple sleep latency", LeafAI's KB unnecessarily excluded otherwise eligible patients by removing those with diagnosis codes for drowsiness, snoring, etc., as within the UMLS those are child concepts of sleep disorder (C0851578). The exclusion of these patients likely resulted in an approximately 40% drop in recall at that stage compared to the human programmer, though ultimately both achieved similar recall (LeafAI: 39% versus Human: 35%).
Beyond performance as measured by recall, it is notable that the human programmer spent approximately 26 hours crafting queries for the 8 trials, while LeafAI took only several minutes running on a single laptop. The time saved by using automated means such as LeafAI for cohort discovery may save health organizations significant time and resources.

### Limitations

This project has a number of limitations. First, while the 8 clinical trials we evaluated were randomly selected, we specifically restricted the categories of diseases from which trials were chosen and limited selection to trials with 30 or fewer lines of eligibility criteria, and thus our results may not generalize to other kinds of trials. Next, we evaluated our queries using an OMOP-based extract which did not contain the full breadth of data within our EHR. Had our experiments instead been conducted using our enterprise data warehouse (populated by our EHR), it is possible the human programmer would have achieved better results than LeafAI due to knowledge and experience in utilizing granular source data. For example, in the Cardiac Arrest clinical trial, the human programmer noted that data for use of cooling blankets is available in our EHR, but not in OMOP. It is not clear how LeafAI would perform were such data available. We further did not directly compare LeafAI to other NLP systems. While we considered evaluating another notable system, Criteria2Query [10], as part of our baseline, we ultimately determined that it was inappropriate for our analysis, as we aimed to review results longitudinally (i.e., line by line of criteria), a function which Criteria2Query does not perform. Last, the number of truly eligible patients within our institution for each trial is unknown, which hampers our ability to measure system performance. We used each trial's known enrolled participants as our gold standard, but assume they are only a (possibly small) subset of those actually eligible.

**Future work**

We are actively developing a web-based user interface for LeafAI, shown in Figure 5. In future work, we will deploy a prototype of the tool and evaluate user feedback and system performance. The LeafAI web application will provide rapid feedback to users explaining its search strategies, and allow users to override system-reasoned concepts and edit or add their own. Additionally, we intend to explore the adaptation of our logical form-based query generation methods to general-purpose question answering and querying systems such as FHIR endpoints.

## Conclusion

This study introduced LeafAI, an NLP-based system leveraging deep learning and an integrated KB which can automatically generate queries for cohort discovery on virtually any clinical data model. Using an OMOP database representing the entire patient population of our institution, we demonstrated that LeafAI rivals the performance of a human programmer in identifying eligible research candidates. As future work, we will deploy LeafAI into the analytic toolbox for our research community, obtaining their feedback and iteratively improving the tool.

## Acknowledgements

This study was supported in part by the National Library of Medicine under Award Number R15LM013209 and by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR002319. Experiments were run on computational resources generously provided by the UW Department of Radiology.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Figure 5: Example screenshot of the LeafAI web application, which is currently in development.

## Author contributions statement

ND is the developer of LeafAI and wrote the majority of the manuscript. BH and WZ annotated the LLF dataset and contributed to the annotation schema. KL served as the human database programmer, and NK and RH advised on study design. OU and MY advised on strategies for query generation and NLP architectures. All authors contributed to the interpretation of the data, manuscript revisions, and intellectual value to the manuscript.

## Competing interests

The authors declare no competing interests.
2302.04942
A Superconducting Nanowire Binary Shift Register
We present a design for a superconducting nanowire binary shift register, which stores digital states in the form of circulating supercurrents in high-kinetic-inductance loops. Adjacent superconducting loops are connected with nanocryotrons, three terminal electrothermal switches, and fed with an alternating two-phase clock to synchronously transfer the digital state between the loops. A two-loop serial-input shift register was fabricated with thin-film NbN and achieved a bit error rate less than $10^{-4}$, operating at a maximum clock frequency of $83\,\mathrm{MHz}$ and in an out-of-plane magnetic field up to $6\,\mathrm{mT}$. A shift register based on this technology offers an integrated solution for low-power readout of superconducting nanowire single photon detector arrays, and is capable of interfacing directly with room-temperature electronics and operating unshielded in high magnetic field environments.
Reed A. Foster, Matteo Castellani, Alessandro Buzzi, Owen Medeiros, Marco Colangelo, Karl K. Berggren
2023-02-09T21:23:12Z
http://arxiv.org/abs/2302.04942v2
# A Superconducting Nanowire Binary Shift Register ###### Abstract We present a design for a superconducting nanowire binary shift register, which stores digital states in the form of circulating supercurrents in high-kinetic-inductance loops. Adjacent superconducting loops are connected with nanocryotrons, three-terminal electrothermal switches, and fed with an alternating two-phase clock to synchronously transfer the digital state between the loops. A two-loop serial-input shift register was fabricated with thin-film NbN and achieved a bit error rate less than \(10^{-4}\), operating at a maximum clock frequency of 83 MHz and in an out-of-plane magnetic field up to 6 mT. A shift register based on this technology offers an integrated solution for low-power readout of superconducting nanowire single photon detector arrays, and is capable of interfacing directly with room-temperature electronics and operating unshielded in high magnetic field environments. Superconducting nanowires are interesting candidates for cryogenic data processing and storage, particularly for readout of superconducting nanowire single photon detector (SNSPD) arrays. The high kinetic inductance of thin film superconductors allows them to store data in compact loops,[1] and the existence of nanocryotrons (nTrons), three-terminal electrothermal switches,[2] enables the creation of low-power digital logic and memory elements.[3] In addition, superconducting nanowires can operate in harsh environments. NbN is radiation hard,[4] and SNSPDs have been shown to operate under high magnetic fields: both in-plane up to 5 T and out-of-plane up to 500 mT.[5] This makes nanowires an interesting candidate for applications in which SNSPD readout electronics must be able to withstand strong ambient magnetic fields or radiation, such as high energy physics and space exploration. Furthermore, the shared technology platform with SNSPDs and ability to drive high-impedance loads[2] is a strong motivator for direct integration of nanowire electronics with SNSPD arrays. Dedicated readout electronics are necessary to address the thermal and mechanical challenges of scaling SNSPD imagers beyond 1 kilopixel,[6] and low-power electronic devices that operate in extreme environments and can be fabricated adjacent to superconducting detectors are an attractive choice over Josephson junction logic and CMOS. Previous work[7; 8] has used the high kinetic inductance of superconducting nanowires to make analog delay-line imagers, which offer high pixel counts and preserve the picosecond timing resolution of the SNSPDs. Row-column multiplexing has also been shown as an effective technique for reducing cable counts,[9] however, a more aggressive reduction in cable count will be required for megapixel arrays. Inspired by the operation of a semiconductor CCD, serial readout of SNSPD arrays could be performed by a superconducting nanowire binary shift register. Serial readout may enable higher count rates than delay-line techniques by shortening dead-time, but more importantly, it simplifies the interface to conventional CMOS readout electronics by removing the need for high resolution, low jitter time-to-digital converters. In this work, we demonstrate a proof-of-concept for a superconducting nanowire binary shift register, which encodes digital states with dissipationless circulating current in superconducting loops. As shown in Fig. 1, each loop is formed by a kinetic inductor L\({}_{\textrm{k}}\) and two nTrons, U\({}_{1}\) and U\({}_{2}\). 
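The intended digital behaviour is easy to capture in a toy model. The sketch below is our illustration, not the authors' model (their modelling was done in LTSpice and is analog): each loop holds one bit, clock phase \(p\) moves the bits of loops with index parity \(p\) one stage forward, and a serially fed bit appears at the readout one clock cycle later.

```python
# Toy digital model of the two-loop shift register (ours, not from the letter):
# loop i holds a bit, 1 = circulating current present.  Phase p transfers the
# bit of every loop with i % 2 == p into loop i + 1; all analog effects (bias
# margins, thermal reset, noise) are ignored entirely.

def clock_phase(loops, phase, data_in=0):
    new = loops[:]
    for i in range(len(loops) - 1, -1, -1):
        if i % 2 == phase:
            if i + 1 < len(loops):
                new[i + 1] = loops[i]  # nTron diverts the clock into the next loop
            new[i] = 0                 # the old circulating current is destroyed
    if phase == 0:
        new[0] = data_in               # serial data input loads the first loop
    return new

def shift_in(bits):
    loops = [0, 0]                     # two loops, as in the fabricated device
    out = []
    for b in bits:
        loops = clock_phase(loops, 0, data_in=b)
        out.append(loops[-1])          # readout nTron senses the final loop ...
        loops = clock_phase(loops, 1)  # ... which is then destructively reset
    return out

print(shift_in([1, 0, 1, 1, 0]))       # [0, 1, 0, 1, 1]: input delayed one cycle
```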
The presence of a circulating current flowing through \(L_{\textrm{k}}\) into the gate of U\({}_{2}\) encodes a binary "1", and the absence of current is used to represent a "0". The shift register is designed to use circulating currents on the order of 100 \(\mu\)A, therefore small (\(\mu\)A) fluctuations in loop current (_e.g._ due to thermally-activated phase slips which change the stored flux in each loop by \(\Phi_{0}\)) are not expected to impact the binary state. A substantial environmental disturbance that makes the film resistive (e.g. \(T>T_{\textrm{c}}\), \(H>H_{\textrm{c2}}\)) would be necessary to destroy the state stored in the shift register. The state of the shift register is only altered under the application of a clock, when the combination of the circulating current and clock pulse exceeds the critical current density in the nTron channel, causing it to switch from superconductive to resistive, diverting the clock pulse into the next loop. This process forms a new circulating current conditional on the presence of current in the previous loop. A two-phase clock is used to guarantee the diverted current always has a superconducting path to ground (as shown in Fig. 1c). In comparison to the original nTron design,[2] which acts like an amplifier, the loops are connected with wide-gate nTrons, where the width of the gate constriction is comparable or equal to that of the channel constriction. A wide-gate nTron is crucial for the shift register: because the output of one nTron becomes the input of another, the current levels for the input and output should be equal. The additional readout nTron shown in Fig. 1f uses a standard nTron with a small choke. It terminates the final loop of the shift register to destroy any circulating current present at the end of each clock cycle. The readout nTron serves two purposes: (1) to reset the final loop of the shift register, and (2) to generate an output voltage signal, which can be sent to off-chip readout electronics or cascaded through a resistor to other nTron logic. The shift register was fabricated on a 16 nm-thick layer of NbN, deposited with an AJA sputtering system onto an Si wafer with 300 nm-thick SiO\({}_{2}\) thermal oxide. The circuit geometry was patterned on the NbN layer with electron-beam lithography using ZEP530A resist and CF\({}_{4}\) reactive ion etching. The wide-gate nTron channel constriction widths were designed to be 270 nm (with an equal-sized gate choke), and the readout nTron channel width was designed to be 240 nm, with a gate choke width of 40 nm. Figure 2a shows an electron micrograph of a wide-gate nTron patterned on thin-film NbN. Figure 2b is an electron micrograph of the experimental two-loop shift register circuit, and the equivalent circuit model is shown in Fig. 2c. The loop kinetic inductors were designed to be 100 nH; the inductance was estimated to be 60 nH (30 pH per square) based on a room-temperature sheet resistance measurement of 194.2 \(\Omega\) per square. The finished chip was wirebonded to a printed circuit board with off-chip current bias and shunt resistors, which was mounted to a custom dip probe [10] and cooled to 4.2 K in a dewar of liquid helium. The 2 k\(\Omega\) bias resistors were used as approximate current sources to convert an applied voltage to a current through the nanowire. 
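As a back-of-the-envelope check on these numbers (our arithmetic, not from the letter; the critical temperature of roughly 8 K for 16 nm NbN is an assumed value), the dirty-limit BCS relation \(L_{\square}\approx\hbar R_{\square}/(\pi\Delta)\) with \(\Delta\approx 1.76\,k_{B}T_{c}\) connects sheet resistance to sheet kinetic inductance, here using the room-temperature measurement as a rough proxy for the normal-state resistance:

```python
import math

hbar, kB = 1.0545718e-34, 1.380649e-23  # J*s, J/K
Tc = 8.0       # K, ASSUMED for 16 nm NbN (not stated in the letter)
R_sq = 194.2   # ohm/square, room-temperature sheet resistance

delta = 1.76 * kB * Tc                        # BCS gap at T = 0
L_sq = hbar * R_sq / (math.pi * delta)        # dirty-limit sheet kinetic inductance
print(f"L_sq ~ {L_sq * 1e12:.0f} pH/square")  # ~30 pH/square, as quoted

print(f"60 nH needs ~ {60e-9 / L_sq:.0f} squares of meander")
print(f"bias: {0.2 / 2e3 * 1e6:.0f} uA per 0.2 V applied through 2 kohm")
```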
The hotspot resistance of the switching nTron is small compared to 2 k\(\Omega\), so the amount of current through the nanowire given some applied voltage stays roughly constant regardless of the nanowire state. The nTron dimensions, inductor sizes and resistor values were selected through LTSpice simulation,[11] the results of which are shown in Fig. 1g. The bit error rate of the shift register model under high levels of noise (_e.g._ \(\pm\)5% variation in clock amplitude) was used to guide selection of component properties. Eight different shift register circuits were fabricated on a single 1 cm\({}^{2}\) chip. Two circuits were tested: the circuit presented in this letter, which used a wide-gate nTron to connect adjacent loops, and a shift register with a different switch geometry. The alternative design used current summation into a single two-terminal constriction as a switch, which performed worse than the design based on the wide-gate nTron, likely due to leakage current that could flow between loops unimpeded regardless of the switch state. The results presented in this letter are from the circuit which used wide-gate nTrons. Figure 1: (a)-(f) Principle of operation of the shift register, which uses the presence or absence of circulating current to encode digital states. (g) Shows the results of a transient simulation in LTSpice of a four-loop shift register, including noise and parasitics from the packaging and experimental apparatus. (a) Shows a shift register with an initial circulating current in the loop formed by the kinetic inductor L\({}_{\text{k}}\) and nTrons U\({}_{1}\), U\({}_{2}\). The corresponding time \(a\) in the simulation is indicated in (g). A two phase clock (\(\phi_{1}\), \(\phi_{2}\)) is used to transfer the digital state between adjacent loops; the first phase \(\phi_{1}\) is applied in (b). In (c), the summation of the clock and circulating currents exceeds the switching current of U\({}_{2}\)’s channel, forming a resistive hotspot and diverting the clock into the loop formed by U\({}_{2}\) and U\({}_{3}\). The hotspot creates a voltage spike \(v_{1\to 2}\) shown in the lower panel of (g) at time \(c\). By the time the clock is turned off in (d), the channel of U\({}_{2}\) has healed and a circulating current is present in the loop between U\({}_{2}\) and U\({}_{3}\). The process continues in (e) when the second clock phase \(\phi_{2}\) is applied. Two clock phases are needed to ensure a zero resistance path to ground for the diverted clock, for example, the path through U\({}_{3}\) as shown in (c). The readout nTron U\({}_{\text{ro}}\) in (f) is used to reset the state of the final loop and generate an output voltage conditional on the presence of a circulating current. Figure 2: (a) and (b) show electron micrographs of the fabricated wide-gate nTron and two-loop shift register. The large meanders in (b) are 100 nH kinetic inductors. (c) Is an equivalent circuit model of the experimental circuit. The current pulses are provided by a voltage source in series with 2 k\(\Omega\) resistors mounted off-chip on a printed circuit board. The circuit was characterized with clock rates from 10 MHz to 100 MHz and under magnetic fields from \(\pm 1\) mT to \(\pm 6\) mT, applied orthogonal to the chip surface by a superconducting magnet mounted on the end of the dip probe. A Keysight PXIe M3202A (arbitrary waveform generator) and M3102A (digitizer) were used to verify correct operation of the shift register over a range of signal amplitudes. 
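At its core, this verification is a bit-error-rate measurement: threshold the output spikes, align them against the input delayed by one clock period, and count disagreements. The toy bookkeeping below is ours (the threshold and spike-drop rate are invented for the demonstration); the procedure described next applies the same idea to real waveforms.

```python
import random

def bit_error_rate(tx_bits, rx_amplitudes, threshold):
    """Compare thresholded output spikes with the input delayed by one period."""
    rx_bits = [1 if a > threshold else 0 for a in rx_amplitudes]
    errors = sum(t != r for t, r in zip(tx_bits[:-1], rx_bits[1:]))
    return errors / (len(tx_bits) - 1)

random.seed(0)
tx = [random.randint(0, 1) for _ in range(10_000)]           # pseudorandom input
rx = [0.0] + [b if random.random() > 1e-3 else 0.0 for b in tx[:-1]]
print(f"BER ~ {bit_error_rate(tx, rx, threshold=0.5):.1e}")  # ~5e-4 here
```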
This was done by generating multiple 10 kbit-long pseudorandom binary sequences of voltage pulses and measuring the circuit response. The data and clock input signals encoded digital "1"s with low-duty-cycle 2 ns FWHM voltage pulses, as can be seen in the top panel of Fig. 3c. The PXIe chassis controller swept the amplitude of the shift and readout clock pulses and measured the bit error rate in near real time for each set of clock amplitudes by comparing the device output with the 10 kbit input sequence. Each spike of the output waveform was thresholded and digitized, and the result was compared with a copy of the input signal delayed by a clock period -- for each instance where the input and digitized output differed, the total error count was incremented. A sample waveform used to calculate the bit error rate is shown in Fig. 3c. The plots in Fig. 3a are bias margin plots, which show the bit error rate as a function of clock pulse amplitude for various clock rates. The dark regions indicate no measured errors for the 10kbit sequence, and the width of the dark regions give the bias margins, defined as the amount of variation in clock amplitude that is acceptable before the circuit begins to function incorrectly. The device performed correctly up to a maximum clock rate of 83 MHz, with the bias margins steadily shrinking for increasing clock frequency. The bias margins of the shift clock were \(\pm 24\) % at \(f_{\mathrm{clk}}=10\) MHz, but only \(\pm 7\) % for \(f_{\mathrm{clk}}=83\) MHz. Margins for the readout clock shrank even more, from \(>\pm 45\) % at \(f_{\mathrm{clk}}=10\) MHz to \(\pm 5\) % at \(f_{\mathrm{clk}}=83\) MHz. As shown in Fig. 3b, the introduction of a \(\pm 1\) mT field did not dramatically hurt the margins of the shift clock: \(\pm 25\)% for \(+1\) mT and \(\pm 20\) % for \(-1\) mT. The readout clock margins were unimpacted. However, introduction of a \(+6\) mT field reduced the margins of the shift clock to \(\pm 4\) %, and a \(-6\) mT field (not shown) prevented the device from working with a bit error rate below \(10^{-3}\). The lower half of each bias margin plot exhibits a downwards slope due to the transfer characteristics of the readout nTron: for a larger gate current, the required channel current to switch the nTron is lower. Therefore, for a larger readout clock, the required loop current (and thus shift clock amplitude) is lower. The abrupt change in bit error rate for readout clock amplitudes below \(30\) \(\mu\)A occurred because the readout clock was not strong enough to switch the readout nTron. If the final loop current is left circulating, it prevents the middle nTron from switching again when a shift clock is applied. The optimal bias region slopes upwards for high readout clock currents, possibly because of current injection from the readout clock, which would create a reverse circulating current in the final shift register loop. This would require the amplitude of the shift clock to be larger to leave a net-forward circulating current in the final loop that was large enough for the readout nTron to switch when clocked. As the frequency of the clock increased, the bias margins for the shift clock shrank from both sides, and the maximum acceptable readout clock amplitude decreased dramatically. The \(L/R\) time constant to charge a loop with a circulating current depends on the loop kinetic inductance and the total shunt resistance. 
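Plugging in representative numbers shows the scale of this time constant. The estimate below is ours, and the shunt resistance is an assumed placeholder, since the letter does not quote its value:

```python
L_loop = 60e-9   # H, estimated loop kinetic inductance
R_shunt = 10.0   # ohm, ASSUMED total shunt resistance (not given in the letter)

tau = L_loop / R_shunt                    # loop charging time constant
for f_clk in (10e6, 83e6):
    half_period = 1 / (2 * f_clk)         # time between the two clock phases
    print(f"{f_clk/1e6:.0f} MHz: tau/half-period = {tau / half_period:.2f}")
```

With these assumed numbers the loop current barely settles within a half-period at 83 MHz, which is at least consistent with the observed collapse of the margins.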
It is plausible that, for higher clock frequencies, the circulating current does not reach a stable level in the half-period between the two clock phases, thus producing incorrect behavior. Further characterization with various shunt resistor and kinetic inductor sizes should be performed to verify that the decrease in margins is due to this electrical time constant, and not a thermal process or some other unconsidered effect. One possible explanation for the large decrease in the bias margins of the readout clock could be slow thermal reset of the readout nTron gate choke. The designed critical current of the choke was only \(30\) \(\mu\)A, and overdriving the readout clock significantly above that (e.g., \(100\) \(\mu\)A) would generate a considerable amount of heat. Residual heat from a readout clock with phase \(\phi_{1}\) would suppress the critical current of the channel, potentially causing the readout nTron to switch on phase \(\phi_{2}\) if it had not cooled sufficiently. Shunting the gate with a small resistor could limit the heating of the choke, potentially restoring the bias margin range of the readout clock for high clock frequencies. The observed shift in bias margins of \(15\) \(\mu\)A/mT due to the external magnetic field (Fig. 3b) agrees with the expected loop current induced by the Meissner effect. However, enhancement of current crowding around constrictions (such as the sharp corners in the nTron channel as can be seen in Fig. 2a) due to the Lorentz force is potentially a more plausible explanation, so further work must be done to understand the mechanism of the external field on the bias margins of the circuit. If the Meissner effect is the dominant mechanism, reducing the size of the loop inductor may help improve resilience against out-of-plane magnetic fields. Instead, if the mechanism is current crowding enhanced by the Lorentz force, then the nTron geometry would need to be modified to mitigate this effect. The total power consumption of any cryogenic electronics system will be dominated by the cryocooler, which can consume on the order of 1 kW to supply tens of milliwatts of cooling power at 4 K.[12] Unless the design of the shift register presented in this work is modified, SNSPD arrays using shift register readout are limited to the kilopixel regime by cryostat cooling power. The energy consumption of the shift register is estimated to be 80 fJ per shift operation, and is dominated by the clocking: each clock phase drives 100 \(\mu\)A through 2 k\(\Omega\) for 2 ns. When the shift register stores a "1", approximately 300 aJ of energy is stored (100 \(\mu\)A in a 60 nH loop). Each shifting operation destroys this circulating current, dissipating the stored energy through the resistive hotspot in the nTron channel. Shift register readout of a 1 kilopixel array clocked at 50 MHz would dissipate about 4 mW. Reduction of the clock impedance by a factor of 20 from 2 k\(\Omega\) to 100 \(\Omega\) and the operating current from 100 \(\mu\)A to 10 \(\mu\)A would reduce the power dissipation of the 1 kilopixel array to 2 \(\mu\)W, making a megapixel array feasible from a power perspective. Decreasing the size of the loop inductor will enable faster, more compact shift registers due to a reduced kinetic inductance and therefore smaller \(L/R\) loop current time constant. 
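The energy and power figures quoted above follow from elementary arithmetic; we reproduce them here as a check (ours, using only numbers stated in the text):

```python
I_clk, R_clk, t_pulse = 100e-6, 2e3, 2e-9    # clock current, impedance, width
E_shift = 2 * I_clk**2 * R_clk * t_pulse     # two clock phases per shift
print(f"E_shift ~ {E_shift * 1e15:.0f} fJ")  # ~80 fJ per shift operation

L_loop, I_loop = 60e-9, 100e-6
print(f"E_stored ~ {0.5 * L_loop * I_loop**2 * 1e18:.0f} aJ")  # ~300 aJ per "1"

f_clk, pixels = 50e6, 1000
print(f"1 kpx array: {E_shift * f_clk * pixels * 1e3:.1f} mW")         # ~4 mW
print(f"20x lower R, 10x lower I: "
      f"{E_shift / 20 / 100 * f_clk * pixels * 1e6:.0f} uW")           # ~2 uW
```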
The speed of the device is fundamentally limited by the hotspot thermal relaxation time, since the nTron channel must cool between the two clock phases, otherwise there will not be a superconducting path for the diverted clock if the previous shift register stage switches. For example, as shown in Fig. 1c, U\({}_{3}\) must be superconducting during the application of clock \(\phi_{1}\). An nTron fabricated with NbN on SiO\({}_{2}\) thermal oxide has achieved a thermally-limited switching speed of 615.4 MHz, with an estimated thermal relaxation time of 130 ps.[13] Based on this, a conservative estimate for the thermal-reset-limited clock period of the shift register is about 1 ns, allowing for a 500 MHz two-phase clock. At this clock rate, a 1 megapixel array could be read out on two wires at a frame rate of 1 kHz, for a maximum photon count rate of 1 Gcps. More thermally conductive substrates can speed up thermal relaxation,[14] potentially offering further speed improvements to nanowire logic. Due to the small feature size of the nTron constriction, fabrication variations may pose a challenge when drastically reducing feature sizes, especially for shift registers with many nTrons. In order to minimize cable count, the same clock signal must be shared between multiple nTrons for any practical shift register. Therefore, all nTrons will receive the same amplitude clock signal, so if there is substantial variation in the switching current of the nTrons, then some loops may not function correctly for a clock amplitude which works for other loops. The bias margins of each nTron in a large shift register will have roughly the same shape, with variations in the midpoint of the optimal bias region due to edge roughness altering the constriction widths. Film thickness also plays a role, but edge roughness should be the dominant factor in switching current variations. Based on Fig. 3a, the allowable variation in switching current is \(\pm\)7 % for a clock rate of 83 MHz. This is equivalent to \(\pm\)18 nm variation in nTron width for the 270 nm-wide nTrons. A nanowire fabrication process using ma-N demonstrated 36 nTrons with a mean gate width of 33.7 nm and standard deviation of 2.4 nm across a 1 cm\({}^{2}\) chip area.[15] With \(\pm\)7 standard deviations of allowable variation in width, a shift register with millions of nTrons should be feasible. However, scaling down to smaller nTron widths may still pose a challenge, as the relative variation in nTron switching current is larger. 
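The seven-standard-deviation headroom can be turned into a crude yield estimate, assuming independent Gaussian width variation with the \(\sigma=2.4\) nm reported for the ma-N process (our extrapolation, not a result from the letter):

```python
import math

def p_within(k):
    """Probability that a Gaussian sample lies within +/- k standard deviations."""
    return math.erf(k / math.sqrt(2))

sigma = 2.4      # nm, reported nTron gate-width standard deviation
tol = 18.0       # nm, allowable width variation at 83 MHz (about 7% of 270 nm)
k = tol / sigma  # ~7.5 sigma of headroom

for n in (1_000, 1_000_000):
    print(f"{n:>9} nTrons: P(all within margin) ~ {p_within(k) ** n:.8f}")
```

Under these assumptions even a million nTrons stay within the bias margin with near certainty, supporting the feasibility claim.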
Figure 3: (a) and (b) are bias margin plots, which show the bit error rate of the shift register (number of errors out of a 10 kbit random bit sequence) as a function of shift and readout clock amplitude. The black regions represent correct operation, with a bit error rate below \(10^{-4}\). The input clock amplitude was fixed at a level that gave optimal margins. (c) Shows an example trace of the transient response of the circuit with a 10 MHz clock. The voltage spikes on V\({}_{\text{shunt1}}\) indicate the storage of a circulating current in the first loop, and spikes on V\({}_{\text{shunt2}}\) indicate transfer of state between adjacent loops. Traces are vertically offset for clarity. Because the device we fabricated only accepts serial inputs, it would provide little practical benefit for large SNSPD arrays, as it is incapable of reducing wire count. However, modifications to the circuit design can be incorporated to load data from an entire row of pixels in parallel into the shift register, as shown in Fig. 4. This proposed modification was designed and simulated in LTSpice. A simple pixel and destructive-readout memory can be implemented with an inductively-shunted SNSPD and nTron. A second nTron is used to store a current in the shift register when the pixel is read out, conditional on the presence of a current in the pixel inductor. Using this technique, data from all pixels could be loaded simultaneously into the shift register. Although the readout of the pixels is destructive, the bias current through the SNSPD is subsequently restored, so the pixels can still detect photons after the pixel data is loaded into the shift register. There is still per-pixel dead time set by the frame rate of the imager, since each pixel can only detect a single photon before it is reset again, but there is no imager-wide dead time like in a delay-line readout approach. In addition to performing detector readout, the simplicity of a shift register makes it a useful test structure, which could be used to characterize process yield, as has been done in the past with SFQ logic to evaluate yield for Josephson junction processes.[16] More generally, the inherent ability of shift registers to serialize and deserialize data makes them a critical function of any large-scale digital system. A superconducting shift register could help increase the capacity of links between room temperature and superconducting electronics, and with the introduction of digital logic, push even more computing into the fridge and enable larger scale superconducting systems based on nanowires. The initial stages of this work were sponsored by the Army Research Office (ARO) under Cooperative Agreement Number W911NF-21-2-0041. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. The completion of the data analysis and presentation was funded by the DOE under the National Laboratory LAB 21-2491 Microelectronics grant. The authors would like to thank Kyle Richards and Teja Kothamasu for assistance with setting up and using the Keysight PXIe system. The data that support the findings of this study are available from the corresponding author upon reasonable request. The authors have no conflicts of interest to report.
2307.10131
On the work of dynamic constant-time parallel algorithms for regular tree languages and context-free languages
Previous work on Dynamic Complexity has established that there exist dynamic constant-time parallel algorithms for regular tree languages and context-free languages under label or symbol changes. However, these algorithms were not developed with the goal to minimise work (or, equivalently, the number of processors). In fact, their inspection yields the work bounds $O(n^2)$ and $O(n^7)$ per change operation, respectively. In this paper, dynamic algorithms for regular tree languages are proposed that generalise the previous algorithms in that they allow unbounded node rank and leaf insertions, while improving the work bound from $O(n^2)$ to $O(n^{\epsilon})$, for arbitrary $\epsilon > 0$. For context-free languages, algorithms with better work bounds (compared with $O(n^7)$) for restricted classes are proposed: for every $\epsilon > 0$ there are such algorithms for deterministic context-free languages with work bound $O(n^{3+\epsilon})$ and for visibly pushdown languages with work bound $O(n^{2+\epsilon})$.
Jonas Schmidt, Thomas Schwentick, Jennifer Todtenhoefer
2023-07-19T16:54:56Z
http://arxiv.org/abs/2307.10131v1
On the work of dynamic constant-time parallel algorithms for regular tree languages and context-free languages ###### Abstract Previous work on Dynamic Complexity has established that there exist dynamic constant-time parallel algorithms for regular tree languages and context-free languages under label or symbol changes. However, these algorithms were not developed with the goal to minimise work (or, equivalently, the number of processors). In fact, their inspection yields the work bounds \(\mathcal{O}(n^{2})\) and \(\mathcal{O}(n^{7})\) per change operation, respectively. In this paper, dynamic algorithms for regular tree languages are proposed that generalise the previous algorithms in that they allow unbounded node rank and leaf insertions, while improving the work bound from \(\mathcal{O}(n^{2})\) to \(\mathcal{O}(n^{\epsilon})\), for arbitrary \(\epsilon>0\). For context-free languages, algorithms with better work bounds (compared with \(\mathcal{O}(n^{7})\)) for restricted classes are proposed: for every \(\epsilon>0\) there are such algorithms for deterministic context-free languages with work bound \(\mathcal{O}(n^{3+\epsilon})\) and for visibly pushdown languages with work bound \(\mathcal{O}(n^{2+\epsilon})\). Keywords: dynamic complexity, work, parallel constant time. DOI: 10.4230/LIPIcs.MFCS.2023.83. ## 1 Introduction It has been known for many years that regular and context-free string languages and regular tree languages are maintainable under symbol changes by means of dynamic algorithms that are specified by formulas of first-order logic, that is, in the dynamic class DynFO[10, 7]. It is also well-known that such specifications can be turned into parallel algorithms for the CRCW PRAM model that require only constant time [8] and polynomially many processors. However, an "automatic" translation of a "dynamic program" of the DynFO setting usually yields a parallel algorithm with large work, i.e., the overall number of operations performed by all processors.1 In the case of regular languages, the dynamic program sketched in [10] has a polynomial work bound, in which the exponent of the polynomial depends on the number of states of a DFA for the language at hand. Footnote 1: We note that in the context of constant-time parallel algorithms work is within a constant factor of the number of processors. The dynamic program given in [7] has quadratic work. Only recently a line of research has started that tries to determine how efficient such constant-time dynamic algorithms can be made with respect to their work. It turned out that regular languages can be maintained with work \(\mathcal{O}(n^{\epsilon})\), for every \(\epsilon>0\)[11], even under polylogarithmic numbers of changes [12], and even with logarithmic work for star-free languages under single changes [11] and polylogarithmic work under polylogarithmic changes [12]. For context-free languages the situation is much less clear. The dynamic algorithms resulting from [7] have an \(\mathcal{O}(n^{7})\) upper work bound. In [11] it was shown that the Dyck-1 language, i.e., the set of well-bracketed strings with one bracket type, can be maintained with work \(\mathcal{O}((\log n)^{3})\) and that Dyck-\(k\) languages can be maintained with work \(\mathcal{O}(n\log n)\). Here, the factor \(n\) is due to the problem of testing equality of two substrings of a string. Most of these results also hold for the query that asks for membership of a substring in the given language. 
For Dyck languages the upper bounds for substring queries are worse than the bounds for membership queries: for every \(\epsilon>0\) there exist algorithms for Dyck-1 and Dyck-\(k\) languages with work bounds \(\mathcal{O}(n^{\epsilon})\) and \(\mathcal{O}(n^{1+\epsilon})\), respectively. It was also shown in [11] that there is some context-free language that can be maintained in constant time with work \(\mathcal{O}(n^{\omega-1-\epsilon})\), for any \(\epsilon>0\), only if the \(k\)-Clique conjecture [1] fails. Here, \(\omega\) is the matrix multiplication exponent, which is known to be smaller than \(2.373\) and conjectured by some to be exactly two according to [14]. In this paper, we pursue two natural research directions. Regular tree languages. We first extend the results on regular string languages to regular tree languages. On one hand, this requires adapting techniques from strings to trees. On the other hand, trees offer additional types of change operations beyond label changes that might change the structure of the tree. More concretely, besides label changes we study insertions of new leaves and show that the favourable bounds of [11] for regular string languages still hold. This is the main contribution of this paper. Our algorithms rely on a hierarchical partition of the tree of constant depth. The main technical challenge is to maintain such a partition hierarchy under insertion2 of leaves. Footnote 2: For simplicity, we only consider insertions of leaves, but deletions can be handled in a straightforward manner, as discussed in Section 2. Subclasses of context-free languages. We tried to improve on the \(\mathcal{O}(n^{7})\) upper work bound for context-free languages, but did not succeed yet. The other goal of this paper is thus to find better bounds for important subclasses of the context-free languages: deterministic context-free languages and visibly pushdown languages. We show that, for each \(\epsilon>0\), there are constant-time dynamic algorithms with work \(\mathcal{O}(n^{3+\epsilon})\) for deterministic context-free languages and \(\mathcal{O}(n^{2+\epsilon})\) for visibly pushdown languages. Here, the main challenge is to carefully apply the technique from [11] that allows storing information for only \(\mathcal{O}(n^{\epsilon})\) as opposed to \(n\) different values of some parameters. For more restricted change operations, the algorithm for regular tree languages yields an \(\mathcal{O}(n^{\epsilon})\) work algorithm for visibly pushdown languages. Structure of the paper. We explain the framework in Section 2, and present the results on regular tree languages and context-free string languages in Sections 3 and 4, respectively. Almost all proofs are delegated to the appendix. Related work. In [12], parallel dynamic algorithms for regular string languages under bulk changes were studied. It was shown that membership in a regular language can be maintained, for every \(\epsilon>0\), in constant time with work \(\mathcal{O}(n^{\epsilon})\), even if a polylogarithmic number of changes can be applied in one change operation. If the language is star-free, polylogarithmic work suffices. The paper also shows that for regular languages that are not star-free, polylogarithmic work does _not_ suffice. Maintaining regular languages of trees under label changes has also been studied in the context of enumeration algorithms (for non-Boolean queries) [3]. The dynamic parallel algorithms of [11] partially rely on dynamic sequential algorithms, especially [6]. 
Acknowledgements. We are grateful to Jens Keppeler and Christopher Spinrath for careful proof reading. ## 2 Preliminaries Trees and regular tree languages. We consider ordered, unranked trees \(t\), which we represent as tuples \((V,r,c,\text{label})\), where \(V\) is a finite set of nodes, \(r\in V\) is the root, \(c:V\times\mathbb{N}\to V\) is a function, such that \(c(u,i)\) yields the \(i\)-th child of \(u\), and label \(:V\to\Sigma\) is a function that assigns a label to every node. We denote the set of unranked trees over an alphabet \(\Sigma\) as \(T(\Sigma)\). The terms _subtree_, _subforest_, _sibling_, _ancestor_, _descendant_, _depth_ and _height_ of nodes are defined as usual. A node that has no child is called a _leaf_. A _forest_ is a sequence of trees. Let \(\preceq\) denote the order on siblings, i.e., \(u\prec v\) denotes that \(u\) is a sibling to the left of \(v\). We write \(u\preceq v\) if \(u\prec v\) or \(u=v\) holds. By \(t^{v}\) we denote the subtree of \(t\) induced by node \(v\). For sibling nodes \(u\prec v\), we write \({}^{u}t^{v}\) for the subforest of the tree \(t\), induced by the sequence \(u,\ldots,v\). If \(w\) is a node in \(t^{v}\), then \(t^{v}_{w}\) denotes the subtree consisting of \(t^{v}\) without \(t^{w}\). Analogously, for \({}^{u}t^{v}_{w}\). Our definition of tree automata is inspired by hedge automata in the TaTa book [5], slightly adapted for our needs. A deterministic finite (bottom-up) tree automaton (DTA) over an alphabet \(\Sigma\) is a tuple \(\mathcal{B}=(Q_{\mathcal{B}},\Sigma,Q_{f},\delta,\mathcal{A})\) where \(Q_{\mathcal{B}}\) is a finite set of states, \(Q_{f}\subseteq Q_{\mathcal{B}}\) is a set of final states, \(\mathcal{A}=(Q_{\mathcal{A}},Q_{\mathcal{B}},\delta_{\mathcal{A}},s)\) is a DFA over alphabet \(Q_{\mathcal{B}}\) (without final states) and \(\delta:Q_{\mathcal{A}}\times\Sigma\to Q_{\mathcal{B}}\) maps pairs \((p,\sigma)\), where \(p\) is a state of \(\mathcal{A}\) and \(\sigma\in\Sigma\), to states of \(\mathcal{B}\). We refer to states from \(Q_{\mathcal{B}}\) as \(\mathcal{B}\)_-states_ and typically denote them by the letter \(q\). Likewise states from \(Q_{\mathcal{A}}\) are called _\(\mathcal{A}\)-states_ and denoted by \(p\). We note that we do not need a set of accepting states for \(\mathcal{A}\), since its final states are fed into \(\delta\). The semantics of DTAs is defined as follows. For each tree \(t\in T(\Sigma)\), there is a unique _run_ of \(\mathcal{B}\) on \(t\), that is, a unary function \(\rho_{t}\) that assigns a \(\mathcal{B}\)-state to each node in \(V\). It can be defined in a bottom-up fashion, as follows. For each node \(v\in V\) with label \(\sigma\) and children \(u_{1},\ldots,u_{\ell}\), \(\rho_{t}(v)\) is the \(\mathcal{B}\)-state \(\delta(\delta^{\star}_{\mathcal{A}}(s,\rho_{t}(u_{1})\cdots\rho_{t}(u_{\ell})),\sigma)\). That is, the state of a node \(v\) with label \(\sigma\) is determined by \(\delta(p,\sigma)\), where \(p\) is the final \(\mathcal{A}\)-state that \(\mathcal{A}\) assumes when reading the sequence of states of \(v\)'s children, starting from the initial state \(s\). In particular, if \(v\) is a leaf with label \(\sigma\), its \(\mathcal{B}\)-state is \(\delta(s,\sigma)\). A tree \(t\) is accepted by the DTA \(\mathcal{B}\) if \(\rho_{t}(r)\in Q_{f}\) holds for the root \(r\) of \(t\). We denote the language of all trees accepted by \(\mathcal{B}\) as \(L(\mathcal{B})\). We call the languages decided by DTAs _regular_. 
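To make these semantics concrete, the following sketch (ours; the parity automaton is a toy example, not one from the paper) evaluates the run \(\rho_{t}\) bottom-up: the horizontal DFA \(\mathcal{A}\) consumes the children's \(\mathcal{B}\)-states, and \(\delta\) combines the resulting \(\mathcal{A}\)-state with the node label.

```python
def run_dta(tree, label, root, delta, delta_A, s):
    """tree: node -> list of children (in sibling order); returns node -> B-state."""
    rho = {}
    def visit(v):
        p = s                           # start state of the horizontal DFA A
        for c in tree.get(v, []):
            visit(c)
            p = delta_A(p, rho[c])      # A reads the children's B-states in order
        rho[v] = delta(p, label[v])     # combine the final A-state with the label
    visit(root)
    return rho

# Toy DTA: B-state = parity of the number of 'a'-labelled nodes in the subtree.
tree = {1: [2, 3], 3: [4, 5]}
label = {1: "a", 2: "b", 3: "a", 4: "a", 5: "a"}
rho = run_dta(tree, label, 1,
              delta=lambda p, sig: p ^ (sig == "a"),
              delta_A=lambda p, q: p ^ q, s=0)
print(rho[1])   # 0: the tree contains an even number (four) of 'a's
```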
Strings and context-free languages. Strings \(w\) are finite sequences of symbols from an alphabet \(\Sigma\). By \(w[i]\) we denote the \(i\)-th symbol of \(w\) and by \(w[i,j]\) we denote the substring from position \(i\) to \(j\). We denote the empty string by \(\lambda\), since \(\epsilon\) has a different purpose in this paper. We use standard notation for context-free languages and pushdown automata, to be found in the appendix. Dynamic algorithmic problems. In this paper, we view a dynamic (algorithmic) problem basically as the interface of a data type: that is, there is a collection of operations by which some object can be initialised, changed, and queried. A _dynamic algorithm_ is then a collection of algorithms, one for each operation. We consider two main dynamic problems in this paper, for regular tree languages and context-free languages. For each regular tree language \(L\), the algorithmic problem \(\textsc{RegTree}(L)\) maintains a labelled tree \(T\) and has the following operations. * \(\textsc{Init}(T,r,\sigma)\) yields an initial labelled tree object \(T\) and returns in \(r\) a node id for its root, which is labelled by \(\sigma\); * \(\textsc{Relabel}(T,u,\sigma)\) changes the label of node \(u\) in \(T\) into \(\sigma\); * \(\textsc{AddChild}(T,u,v,\sigma)\) adds a new child with label \(\sigma\) behind the last child of node \(u\) and returns its id in \(v\); * \(\textsc{Query}(T,v)\) returns true if and only if the subtree of \(T\) rooted at \(v\) is in \(L\). We refer to the restricted problem without the operation AddChild as \(\textsc{RegTree}^{-}\). For this data type, we assume that the computation starts from an initial non-trivial tree and that the auxiliary data for that tree is given initially. For each context-free language \(L\), the algorithmic problem \(\textsc{CFL}(L)\) maintains a string \(w\) and has the following operations. * \(\textsc{Init}(w)\) yields an initial string object \(w\) with an empty string; * \(\textsc{Relabel}(w,i,\sigma)\) changes the symbol at position \(i\) of \(w\) into \(\sigma\); * \(\textsc{InsertPositionBefore}(w,i,\sigma)\) and \(\textsc{InsertPositionAfter}(w,i,\sigma)\) insert a new position with symbol \(\sigma\) before or after the current position \(i\), respectively; * \(\textsc{Query}(w,i,j)\) returns true if and only if the substring \(w[i,j]\) is in \(L\). Readers may wonder why these dynamic problems do not have operations that delete nodes of a tree or positions in a string. This is partially to keep the setting simple and partially because node labels and symbols offer easy ways to simulate deletion by extending the alphabet with a symbol \(\sqcup\) that indicates an object that should be ignored. E.g., if \(\delta_{\mathcal{A}}(p,\sqcup)=p\), for every state \(p\) of the horizontal DFA of a DTA, then the label \(\sqcup\) at a node \(u\) effectively deletes the whole subtree induced by \(u\) for the purpose of membership in \(L(\mathcal{B})\). Similarly, a CFL might have a neutral symbol or even a pair \((_{\sqcup},)_{\sqcup}\) of "erasing" brackets that make the PDA ignore the substring between \((_{\sqcup}\) and \()_{\sqcup}\). For \(\textsc{RegTree}(L)\) and \(\textsc{CFL}(L)\), the Init operation is possible in constant sequential time and will not be considered in detail. Throughout this paper, \(n\) will denote an upper bound of the size of the structure at hand (number of nodes of a tree or positions of a string) that is linear in that size, but changes only infrequently. 
More precisely, the number of nodes of a tree or the length of the string will always be between \(\frac{1}{4}n\) and \(n\). Whenever the size of the structure grows beyond \(\frac{1}{2}n\), the data structure will be prepared for structures of size up to \(2n\) and, once this is done, \(n\) will be doubled. Since the size of the structure is always \(\Theta(n)\), all bounds in \(n\) also hold with respect to the size of the structure. Parallel Random Access Machines (PRAMs). A _parallel random access machine_ (PRAM) consists of a number of processors that work in parallel and use a shared memory. The memory is comprised of memory cells which can be accessed by a processor in \(\mathcal{O}(1)\) time. Furthermore, we assume that simple arithmetic and bitwise operations, including addition, can be done in \(\mathcal{O}(1)\) time by a processor. We mostly use the Concurrent-Read Concurrent-Write model (CRCW PRAM), i.e. processors are allowed to read and write concurrently from and to the same memory location. More precisely, we assume the _common_ PRAM model: several processors can concurrently write into the same memory location, only if all of them write the same value. We also mention the Exclusive-Read Exclusive-Write model (EREW PRAM), where concurrent access is not allowed. The work of a PRAM computation is the sum of the number of all computation steps of all processors made during the computation. We define the space \(s\) required by a PRAM computation as the maximal index of any memory cell accessed during the computation. We refer to [9] for more details on PRAMs and to [13, Section 2.2.3] for a discussion of alternative space measures. The main feature of the common CRCW model relevant for our algorithms that separates it from the EREW model is that it allows computing the minimum or maximum value of an array of size \(n\) in constant time (with work \(\mathcal{O}(n^{1+\epsilon})\)), which is shown in another paper at MFCS 2023.3 Footnote 3: Jonas Schmidt, Thomas Schwentick. Dynamic constant time parallel graph algorithms with sub-linear work. For simplicity, we assume that even if the size bound \(n\) grows, a number in the range \([0,n]\) can still be stored in one memory cell. This assumption is justified, since addition of larger numbers \(N\) can still be done in constant time and polylogarithmic work on a CRCW PRAM. Additionally, we assume that the number of processors always depends on the current size bound \(n\). Hence, the number of processors increases with growing \(n\), which allows us to use the PRAM model with growing structures. We describe our PRAM algorithms on an abstract level and do not exactly specify how processors are assigned to data. Whenever an algorithm does something in parallel for a set of objects, these objects can be assigned to a bunch of processors with the help of some underlying array. This is relatively straightforward for strings and substrings and the data structures used in Section 4. In Section 3, it is usually based on zone records and their underlying partition records. ## 3 Maintaining regular tree languages In this section, we present our results on maintaining regular tree languages under various change operations. We will first consider only operations that change node labels, but do not change the shape of the given tree. A very simple dynamic algorithm with work \(\mathcal{O}(n^{2})\) is presented in the appendix. 
We sketch its main idea and how it can be improved to \(\mathcal{O}(n^{\epsilon})\) work per change operation by using a _partition hierarchy_ in Subsection 3.1. These algorithms even work on the EREW PRAM model. Afterwards, in Subsection 3.2, we also consider an operation that can change the tree structure: adding a leaf to the tree. Here, the challenge is to maintain the hierarchical structure that we used before to achieve work \(\mathcal{O}(n^{\epsilon})\) per change operation. It turns out that maintaining this structure is possible without a significant increase of work, that is, maintaining membership under these additional operations is still possible with work \(\mathcal{O}(n^{\epsilon})\) per change operation. ### Label changes: a work-efficient dynamic program In this section, we describe how membership in a regular tree language can be maintained under label changes, in a work efficient way. For each \(\epsilon>0\) and each regular tree language \(L\), there is a parallel constant time dynamic algorithm for \(\textsc{RegTree}^{-}(L)\) with work \(\mathcal{O}(n^{\epsilon})\) on an EREW PRAM. The \(\mathtt{Query}\) operation can actually be answered with constant work. We start by briefly sketching the \(\mathcal{O}(n^{2})\) work algorithm that is given in the appendix. The algorithm basically combines the dynamic programs for regular string languages and binary regular tree languages from [7]. For regular string languages, the program from [7] stores the behaviour of a DFA for the input word \(w\) by maintaining information of the form "if the run of the DFA starts at position \(i\) of \(w\) and state \(p\), then it reaches state \(q\) at position \(j\)" for all states \(p,q\) and substrings \(w[i,j]\). After a label change at a position \(\ell\), this information can be constructed by combining the behaviour of the DFA on the intervals \(w[i,\ell-1]\) and \(w[\ell+1,j]\) with the transitions induced by the new label at position \(\ell\). The dynamic program for (binary) regular tree languages from [7] follows a similar idea and stores the behaviour of a (binary) bottom-up tree automaton by maintaining information of the form "if \(v\) gets assigned state \(q\), then \(u\) gets assigned state \(p\) by the tree automaton" for all states \(p,q\) and all nodes \(v,u\), where \(v\) is a descendant of \(u\). Both programs induce algorithms with \(\mathcal{O}(n^{2})\) work bounds. Towards a \(\mathcal{O}(n^{2})\) work algorithm for unranked tree languages, the two dynamic programs can be combined into an algorithm that mainly stores the following _automata functions_ for a fixed DTA \(\mathcal{B}=(Q_{\mathcal{B}},\Sigma,Q_{f},\delta,\mathcal{A})\) for \(L\), with DFA \(\mathcal{A}=(Q_{\mathcal{A}},Q_{\mathcal{B}},\delta_{\mathcal{A}},s)\): * The ternary function \(\mathcal{B}_{t}:Q_{\mathcal{B}}\times V\times V\mapsto Q_{\mathcal{B}}\) maps each triple \((q,u,v)\) of a state \(q\in Q_{\mathcal{B}}\) and nodes of \(t\), where \(u\) is a proper ancestor of \(v\), to the state that the run of \(\mathcal{B}\) on \(t_{v}^{u}\) takes at \(u\), with the provision that the state at \(v\) is \(q\). * The ternary function \(\mathcal{A}_{t}:Q_{\mathcal{A}}\times V\times V\mapsto Q_{\mathcal{A}}\) maps each triple \((p,u,v)\) of a state \(p\in Q_{\mathcal{A}}\) and nodes of \(t\), where \(u\prec v\) are siblings, to the state that the run of \(\mathcal{A}\) on \(u,\ldots,v\), starting from state \(p\), takes after \(v\). 
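The string analogue of these behaviour functions shows most plainly how stored values recombine. In the sketch below (ours), `beh[(p, i, j)]` stores the state reached from `p` on `w[i..j]`; after a relabelling at position `pos`, every affected triple is recomputed in constant time from two unaffected halves, which is exactly the kind of constant-time update stated in the lemma that follows.

```python
def build(beh, delta, w, states):
    for p in states:
        for i in range(len(w)):
            q = p
            for j in range(i, len(w)):
                q = delta(q, w[j])
                beh[(p, i, j)] = q      # state reached from p on w[i..j]

def relabel(beh, delta, w, states, pos, sigma):
    """Change w[pos]; recompute the O(n^2) triples whose interval contains pos."""
    w[pos] = sigma
    for p in states:                    # conceptually one processor per triple,
        for i in range(pos + 1):        # each doing O(1) work
            for j in range(pos, len(w)):
                left = beh[(p, i, pos - 1)] if i < pos else p
                mid = delta(left, w[pos])
                beh[(p, i, j)] = beh[(mid, pos + 1, j)] if pos < j else mid

delta = lambda q, c: q ^ (c == "a")     # parity-of-'a' DFA with states {0, 1}
w, beh = list("abab"), {}
build(beh, delta, w, [0, 1])
relabel(beh, delta, w, [0, 1], 1, "a")  # "abab" -> "aaab"
print(beh[(0, 0, 3)])                   # 1: "aaab" contains an odd number of 'a's
```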
Every single function value can be updated in constant sequential time, as stated in the following lemma. This leads to a quadratic work bound since there are quadratically many tuples to be updated in parallel. After a \(\mathtt{Relabel}\) operation, single values \(\mathcal{A}_{t}(p,u,x)\) and \(\mathcal{B}_{t}(q,u,x)\) can be updated by a sequential algorithm in constant time. Some information about the shape of the tree is required, which we refer to as _basic tree functions_. For more details we refer to the appendix. However, as label changes cannot change the shape of the tree, this information does not need to be updated and can be assumed as precomputed. To lower the work bound, the basic idea now is to store the automata functions not for _all_ possible arguments, but for a small subset of _special_ arguments that allow the computation of function values for _arbitrary_ arguments in constant time with constant work. In [11], this idea was applied to the \(\mathcal{O}(n^{2})\) work program for regular string languages. A constant-depth hierarchy of intervals was defined by repeatedly partitioning intervals into \(\mathcal{O}(n^{\theta})\) subintervals, for some \(\theta>0\). This hierarchy allowed the definition of _special_ intervals such that any update only affects \(\mathcal{O}(n^{\epsilon})\) intervals and function values of arbitrary intervals can be computed in constant time with constant work. We transfer this idea to the case of unranked tree languages by partitioning the tree into \(\mathcal{O}(n^{\theta})\)_zones_, each of which is partitioned into further \(\mathcal{O}(n^{\theta})\) zones and so on until, after a constant number of refinements, we arrive at zones of size \(\mathcal{O}(n^{\theta})\). Here, \(\theta>0\) is a constant that will be chosen later. It will always be chosen such that \(h=\frac{1}{\theta}\) is an integer. Before we define this partition hierarchy more precisely, we first define zones and show that they can always be partitioned in a way that guarantees certain number and size constraints. A _zone_ is a set \(S\) of nodes with the following properties: * \(S\) is a proper subforest of \(t\), * for every \(v\in S\) it holds that either no or all children are in \(S\), and * there exists at most one node \(v_{S}\) in \(S\), whose children are not in \(S\). The node \(v_{S}\) is called the _vertical connection node_ of \(S\). We call a zone a _tree zone_ if it consists of only one sub-tree of \(t\) and a _non-tree zone_ otherwise. We call a zone _incomplete_ if it has a vertical connection node and _complete_, otherwise. There are thus four different types of zones which can be written, with the notation introduced in Section 2, as follows: complete tree zones \(t^{v}\), complete non-tree zones \({}^{u}t^{v}\), incomplete tree zones \(t^{v}_{w}\), and incomplete non-tree zones \({}^{u}t^{v}_{w}\). Depending on the type, zones can therefore be represented by one to three "important nodes". The overall tree can be seen as the zone \(t^{r}\), where \(r\) is its root. From now on, we always assume that \(n\) is as in Section 2, some \(\theta>0\) is fixed, and that \(h=\frac{1}{\theta}\) is an integer. We call a zone of \(t\) with at most \(n^{\theta\ell}\) nodes an \(\ell\)_-zone_. The tree \(t\) itself constitutes an \(h\)-zone, to which we will refer as the _overall zone_. We next define _partition hierarchies_ formally. More precisely, for every \(\ell\geq 2\), we define partition hierarchies of height \(\ell\) for \(\ell\)-zones as follows. 
If \(S\) is a 2-zone and \(S_{1},\ldots,S_{k}\) are 1-zones that constitute a partition of \(S\), then \((S,\{S_{1},\ldots,S_{k}\})\) is a partition hierarchy of height 2 for \(S\). If \(S\) is an \((\ell+1)\)-zone, \(\{S_{1},\ldots,S_{k}\}\) is a partition of \(S\) into \(\ell\)-zones, and for each \(j\), \(H_{j}\) is a partition hierarchy of height \(\ell\) for \(S_{j}\), then \((S,\{H_{1},\ldots,H_{k}\})\) is a partition hierarchy of height \(\ell+1\) for \(S\). A partition hierarchy of height \(h\) of the zone consisting of \(t\) is called a partition hierarchy of \(t\). An example of a \((1,\frac{1}{3})\)-bounded partition hierarchy is given in Figure 1. Figure 1: Example of a \((1,\frac{1}{3})\)-bounded partition hierarchy. We often call a zone \(S^{\prime}\) that occurs at some level \(i<\ell\) within the partition hierarchy of a zone \(S\) of some level \(\ell\) a _component zone_. If \(S^{\prime}\) has level \(\ell-1\) we also call it a _sub-zone_ of \(S\). We call a partition hierarchy \(H\) _\((c,\theta)\)-bounded_, for constants \(c\) and \(\theta>0\), if each partition of a zone consists of at most \(cn^{\theta}\) zones. Our next aim is to prove that \((10,\theta)\)-bounded partition hierarchies actually exist. To this end, we prove the following lemma. It is similar to [4, Lemma 3], but adapted to our context, which requires a hierarchy of constant depth and a certain homogeneity regarding children of vertical connection nodes. Let \(m\geq 2\) be a number and \(S\) a zone with more than \(m\) nodes. Then \(S\) can be partitioned into at most five zones, one of which has at least \(\frac{1}{2}m\) and at most \(m\) nodes. This lemma immediately yields the existence of \((10,\theta)\)-bounded partition hierarchies. For each \(\theta>0\), each tree \(t\) has some \((10,\theta)\)-bounded partition hierarchy. We now explain in more detail which information about the behaviour of \(\mathcal{A}\) and \(\mathcal{B}\) is stored by the work-efficient algorithm. Function values for the ternary functions are stored only for so-called special pairs of nodes, which we define next. Special pairs of nodes are always defined in the context of some zone \(S\) of a partition hierarchy. In the following, we denote, for a zone \(S\) of a level \(\ell\geq 2\), its set of sub-zones of level \(\ell-1\) by \(T\). * Any pair of siblings \(u\prec v\) in a zone \(S\) of level \(1\) is a _special horizontal pair_. A pair of siblings \(u\prec v\) in a complete zone \(S\) of level \(\ell\geq 2\) is a _special horizontal pair_, if \(u\) is a left boundary of some zone in \(T\) and \(v\) is a right boundary of some zone in \(T\). However, if \(S\) is incomplete and there is an ancestor \(w^{\prime}\) of the lower boundary \(w\) with \(u\preceq w^{\prime}\preceq v\), then, instead of \((u,v)\), there are two special pairs: \((u,\textsc{left-sibling}(w^{\prime}))\) and \((\textsc{right-sibling}(w^{\prime}),v)\). * Any pair of nodes \(u,v\) in some zone \(S\) of level \(1\) is a _special vertical pair_, if \(v\) is an ancestor of \(u\). A pair of nodes \(u,v\) in some zone \(S\) of level \(\ell\geq 2\) is a _special vertical pair_, if \(v\) is an ancestor of \(u\), \(v\) is an upper or lower boundary of some zone in \(T\) and \(u\) is a lower boundary of some zone in \(T\). 
However, if \(S\) is incomplete with lower boundary \(w\) and \(w^{\prime}:=\textsc{lca}(w,u)\) is strictly above \(u\) and below or equal to \(v\), then, instead of \((u,v)\), there are two special pairs: \((u,\textsc{anc-child}(w^{\prime},u))\) and \((w^{\prime},v)\). Here lca determines the least common ancestor and anc-child the child of \(w^{\prime}\) that is an ancestor of \(u\). The algorithm stores \(\mathcal{A}_{t}(p,u,v)\) for each state \(p\) of \(\mathcal{A}\) and each special horizontal pair \(u,v\). Furthermore, it stores \(\mathcal{B}_{t}(q,u,v)\), for each state \(q\) of \(\mathcal{B}\) and each special vertical pair \(u,v\). We note that in all cases \(\mathcal{A}_{t}(p,u,v)\) and \(\mathcal{B}_{t}(q,u,v)\) only depend on the labels of the nodes in the zone, for which \((u,v)\) is special. From the stored values for functions \(\mathcal{A}_{t}\) and \(\mathcal{B}_{t}\) for special pairs, it is possible to compute \(\rho_{t}(v)\), for arbitrary nodes \(v\), \(\mathcal{A}_{t}(p,u,u^{\prime})\) for arbitrary pairs \(u\prec u^{\prime}\) of siblings of \(t\) and \(\mathcal{B}_{t}(q,u,u^{\prime})\) for arbitrary pairs \(u,u^{\prime}\) of nodes, where \(u^{\prime}\) is an ancestor of \(u\), sequentially in constant time. This enables us to show the \(\mathcal{O}(n^{\epsilon})\) work bound for label changes. Proof of Proposition 3.1. To achieve the stated bound, we use the above algorithm with work parameter \(\theta=\frac{\epsilon}{2}\). The algorithm uses a \((10,\theta)\)-bounded partition hierarchy, which exists thanks to Proposition 3.5. As indicated before, the algorithm stores \(\mathcal{A}_{t}(\cdot,u,v)\) and \(\mathcal{B}_{t}(\cdot,u,v)\), for all special pairs \((u,v)\). As already observed before, these values only depend on the labels of the nodes of the zone relative to which \((u,v)\) is special. Therefore, if a node label is changed for some node \(x\), values \(\mathcal{A}_{t}(\cdot,u,v)\) and \(\mathcal{B}_{t}(\cdot,u,v)\) need only be updated for special pairs of zones in which \(x\) occurs. Since each node occurs in exactly \(h\) zones and each zone has \(\mathcal{O}(n^{2\theta})=\mathcal{O}(n^{\epsilon})\) special pairs, \(h\cdot\mathcal{O}(n^{\epsilon})\) processors can be used, where every processor updates a single value in constant time and work, as is possible thanks to Lemma 3.2 and Lemma 3.6. Since the shape of the tree does not change, we can assume a mapping from the updated node and the processor number to the special tuple that the respective processor recomputes. ### Structural Changes In Proposition 3.1 only label changes were allowed, so the structure of the underlying tree did not change. In particular, there was no need to update any of the basic tree functions. In this subsection, we consider structural changes of the tree. We show that the work bounds of Proposition 3.1 can still be met for the full data types \(\textsc{RegTree}(L)\). For each regular tree language \(L\) and each \(\epsilon>0\), there is a dynamic constant time parallel algorithm for \(\textsc{RegTree}(L)\) that handles change operations with work \(\mathcal{O}(n^{\epsilon})\) and answers query operations with constant work. In the next subsection, we describe the general strategy of the algorithm, define some notions that will be used and present its proof. Then, in a second subsection, we give some more detailed information about the data that is stored and how it can be maintained. 
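Since structural changes will repeatedly trigger re-partitionings, it is worth making the constructive core of the splitting lemma above explicit before proceeding. The sketch below is ours: it descends into oversized subtrees and then returns either one medium subtree or a run of consecutive small sibling subtrees; the at most four remaining zones of the lemma are the leftover pieces around the returned one.

```python
def split_off(size, children, root, m):
    """Return roots of consecutive sibling subtrees with m/2..m nodes in total.
    size[v] = number of nodes in the subtree of v; requires size[root] > m."""
    v = root
    while True:
        big = [c for c in children.get(v, []) if size[c] > m]
        if not big:
            break
        v = big[0]                 # descend into an oversized child subtree
    chunk, total = [], 0
    for c in children.get(v, []):  # here no child subtree exceeds m nodes
        if size[c] >= m / 2:
            return [c]             # a single subtree already fits the window
        chunk.append(c)
        total += size[c]
        if total >= m / 2:         # small pieces only: total < m/2 + m/2 = m
            return chunk

# toy demonstration
children = {0: [1, 2, 3], 1: [4, 5], 2: [], 3: [6], 4: [], 5: [], 6: []}
size = {}
def calc(v):
    size[v] = 1 + sum(calc(c) for c in children[v])
    return size[v]
calc(0)
print(split_off(size, children, 0, m=3))   # [1]: node 1's subtree has 3 nodes
```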
#### High-level description of the dynamic algorithm Our approach generalises the algorithm of Subsection 3.1. It makes sure that, at any point in time, there is a valid partition hierarchy together with corresponding tree and automata functions. The general strategy of the dynamic algorithm is to add new leaves to their nearest zone. In principle, this is not hard to handle -- unless it leads to a violation of a size constraint of some zone. As soon as zones exceed a certain size bound the affected parts of the hierarchy will thus be recomputed to ensure the size constraints. For reasons that will become clearer below, we need to slightly modify the definition of partition hierarchies, basically by omitting the lowest two levels. To this end, we define \(3\)-pruned partition hierarchies just like we defined partition hierarchies, but the lowest level is at height \(3\). More precisely, _a \(3\)-pruned partition hierarchy of height \(3\)_ is just a \(3\)-zone, and _\(3\)-pruned partition hierarchies of height \(\ell>3\)_ are inductively defined just like partition hierarchies of height \(\ell\). It is clear that a \(3\)-pruned partition hierarchy exists for each tree, by omitting the two lowest levels in the partition hierarchy computed in Proposition 3.5. Moreover, using a \(3\)-pruned partition hierarchy as a basis for our efficient label change approach still ensures the sequential constant time computation of arbitrary automaton function values from the stored values for special pairs. However, zones on the lowest level have size \(\mathcal{O}(n^{3\theta})\), leading to a work bound of \(\mathcal{O}(n^{6\theta})\) per change operation. To ensure that at each point in time, a usable partition hierarchy is available, the general strategy is as follows: the algorithm starts from a _strong partition hierarchy_ in which zones at level \(\ell\) have size at most \(\frac{1}{4}n^{\ell\theta}\), well below the maximum allowed size of such a zone of \(n^{\ell\theta}\). As soon as the size of a zone \(S\) at level \(\ell\) reaches its _warning limit_ \(\frac{1}{2}n^{\ell\theta}\), the algorithm starts to compute a new partition hierarchy for the parent zone \(S^{\prime}\) of \(S\) at level \(\ell+1\). This computation is orchestrated in a way that makes sure that the new partition hierarchy for \(S^{\prime}\) is ready (together with all required function values) before \(S\) reaches its size limit \(n^{\ell\theta}\), at which point the old partition hierarchy for \(S^{\prime}\) becomes useless. Since a partition hierarchy of the whole tree together with the required function values has size \(\Omega(n)\), its computation inherently requires that amount of work and it can probably not be done in constant time. Furthermore, since we aim at work \(\mathcal{O}(n^{\epsilon})\) per operation, the algorithm cannot afford to do the re-computation "as fast as possible" but rather needs to stretch it over at least \(n^{1-\epsilon}\) steps. However, the fact that the tree can change during a re-computation poses a challenge: if many change operations happen with respect to a particular zone in a low level of the new partition hierarchy, this new zone might reach its warning limit and then its hard limit, before the overall re-computation of the hierarchy has finished. This challenge can be met by a careful orchestration of the re-computation. We will next describe the data structure that the dynamic algorithm uses to orchestrate re-computations of partition hierarchies. 
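Before turning to that data structure, the timing discipline itself deserves isolation: it is the classic global-rebuilding pattern, in which a constant number of rebuild rounds is performed per change operation so that the replacement structure is operable before the hard limit is reached. A stripped-down sketch of the pattern (ours, with a plain capacity-doubling container standing in for the partition hierarchy):

```python
class Rebuilding:
    """Capacity-doubling container rebuilt incrementally, 2 rounds per operation."""
    def __init__(self, items):
        self.items = list(items)
        self.cap = max(4, 2 * len(self.items))   # hard size limit ("n")
        self.rebuild = None                      # generator: one yield per round

    def _rounds(self):
        new_cap = 2 * self.cap
        for _ in range(len(self.items)):         # pretend each item costs one
            yield                                # round of re-computation work
        self.cap, self.rebuild = new_cap, None   # new structure becomes operable

    def insert(self, x):
        assert len(self.items) < self.cap, "hard limit reached: rebuilt too slowly"
        self.items.append(x)
        if self.rebuild is None and len(self.items) >= self.cap // 2:
            self.rebuild = self._rounds()        # warning limit: start rebuilding
        for _ in range(2):                       # constant work per operation
            if self.rebuild is not None:
                next(self.rebuild, None)

s = Rebuilding(range(4))
for i in range(1000):
    s.insert(i)                                  # the assertion never fires
print(len(s.items), s.cap)                       # 1004 2048
```

The same accounting underlies the algorithm: the warning limit leaves enough change operations before the hard limit to amortise all rebuild rounds at a constant number per operation.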
As mentioned before, there will always be a valid partition hierarchy. However, for some zones, re-computations might be underway. The algorithm will always manage to complete the re-computation of a partition hierarchy for a zone of level \(\ell\), before any of the subzones of level \((\ell-1)\) of the new partition reaches its warning limit. Therefore, for each zone within the data structure, there is always at most one partition hierarchy under construction, and therefore each zone has at any time at most two partition records. If a zone actually has two partition records, one of them contains a usable partition hierarchy. We formalise usability of a partition hierarchy by the term _operable_ and tie the whole data structure together through the following notion of zone records. It is defined in an inductive fashion, together with the concept of partition records. A _zone record_ of level \(3\) is a \(3\)-zone. A _zone record_ of level \(\ell>3\) consists of an \(\ell\)-zone \(S\) and up to two partition records \(P_{1},P_{2}\) of level \(\ell\) for \(S\). If it has two partition records then \(P_{1}\) is complete and \(P_{2}\) is incomplete. A _partition record_\((Z,M)\) of level \(\ell>3\) for an \(\ell\)-zone \(S\) consists of a set \(Z\) of zone records of level \(\ell-1\) and a set \(M\) of zones, such that the zones from \(Z\) and the zones from \(M\) together constitute a partition of \(S\). A partition record \(Z\) of level \(\ell\) is _valid_, if all zones of its zone records are actual \((\ell-1)\)-zones. A zone record of level \(3\) is _operable_. A partition record at level \(\ell>3\) is _operable_, if it is valid and all its zone records are operable. A zone record of level \(\ell>3\) is _operable_, if its first partition record is operable. We refer to the hierarchical structure constituting the overall zone record as the _extended partition hierarchy_. Within the extended partition hierarchy, we are particularly interested in "operable substructures". To this end, we associate with an operable zone record, the _primary partition hierarchy_ that results from recursively picking the operable partition record from each zone record. Altogether, the algorithm maintains an extended partition hierarchy for \(t\). Before we describe how the algorithm stores the extended partition hierarchy, we need two more concepts. For each zone record \(R\) of a level \(\ell\) there is a sequence \(R_{h},\ldots,R_{\ell}=R\) of zone records such that, for each \(i\geq\ell\), \(R_{i}\) is a zone record that occurs in a partition record of \(R_{i+1}\). This sequence can be viewed as the _address_ of \(R\) in the extended partition hierarchy. Furthermore, this address induces a _finger print_ for \(R\): the sequence \(\operatorname{status}(R_{h}),\ldots,\operatorname{status}(R_{\ell})\), where \(\operatorname{status}(R_{i})\) is either _operable_ or _in progress_. It is a simple but useful observation that if a tree node \(v\) occurs in two zones with zone records \(R\neq R^{\prime}\) within the extended partition hierarchy, then the finger prints of \(R\) and \(R^{\prime}\) are different. Consequently a tree node occurs in at most \(2^{h}\) and thus, a constant number of zones in the extended partition hierarchy. Now we can describe, how the algorithm stores \(t\) and the extended partition hierarchy. * A zone record of level 3 is represented as an array of \(\mathcal{O}(n^{\theta})\) nodes. 
* A zone record of a level \(\ell>3\) consists of up to four boundary nodes and up to two pointers to partition records. The operable partition record is flagged. * Each zone record of level \(\ell\geq 3\) with finger print \(pa\), also stores a pointer to its zone on level \(\ell+1\) with finger print \(p\), and three pointers to the zone records of its parent, first child and right sibling zones. * A partition record \((Z,M)\) is represented as an array of zone records (some of which may be zones of \(M\)). The zones records from \(Z\) are flagged. * The nodes of \(t\) are stored in an array (in no particular order) together with pointers for the functions parent, left-sibling, right-sibling, first-child, and last-child. * For each node \(v\), and each possible finger print \(p\), a pointer \(Z^{p}(v)\) to its zone with finger print \(p\). Now we are prepared to outline the proof of Theorem 3.7. Proof (of Theorem 3.7).: Let \(\mathcal{B}\) be a DTA for the regular tree language \(L\) and let \(\theta=\frac{\epsilon}{7}\). The dynamic algorithm stores \(t\) and an extended partition hierarchy as described above. It also stores some additional function values, including values for the automata functions, that will be specified in Subsubsection 3.2.2. Some functions are independent from zones and are stored for all nodes. Some other functions are independent from zones but are only stored for particular node tuples that are induced from zones (like it was already the case for the automata functions in Subsection 3.1) and some functions are actually defined for (tuples of) zones. After each change operation, the algorithm updates function values, pursues re-computations of hierarchies and computes function values that are needed for newly established zones. It starts a re-computation for a zone \(S\), whenever one of its subzones reaches its warning limit. It starts a re-computation of the overall zone, whenever the number of nodes of \(t\) reaches \(\frac{1}{2}n\). The algorithm has one thread for each zone with an ongoing re-computation, that is, for each zone whose zone record is not yet operable. A re-computation for a zone at level \(\ell\) requires the computation of \(\mathcal{O}(n^{\theta})\) zones of level \(\ell-1\), each of which yields re-computations of \(\mathcal{O}(n^{\theta})\) zones of level \(\ell-2\) and so forth, down to level 3. It is easy to see that the overall number of zones that needs to be computed during a re-computation of a zone at level \(\ell\) is bounded by \(\mathcal{O}(n^{(\ell-3)\theta})\). The re-computation of the overall zone requires the computation of at most \(\mathcal{O}(n^{1-3\theta})\) zones. We show in Lemma 3.9 that, in the presence of a primary partition hierarchy for the overall zone, the computation of a new zone is possible in constant time with work \(\mathcal{O}(n^{6\theta})\). The thread for the re-computation of a zone at level \(\ell\) thus (first) consists of \(\mathcal{O}(n^{(\ell-3)\theta})\) computations of component zones, each of which is carried out in constant time with work \(\mathcal{O}(n^{6\theta})\). We refer to such a re-computation as a round. A thread thus consists of \(\mathcal{O}(n^{(\ell-3)\theta})\) rounds of zone computations. The thread follows a breadth-first strategy, that is, it first computes all zones of level \(\ell-1\) then the sub-zones of those zones at level \(\ell-2\) and so forth. 
Once the zone record of a zone \(S\) is operable, the thread computes in its second phase all function values associated to \(S\). This can be done in constant time with work \(\mathcal{O}(n^{\prime\theta})\) per sub-zone of \(S\), as is shown in the appendix. That is, it requires at most \(\mathcal{O}(n^{(\ell-3)\theta})\) additional rounds. We note that it does not matter if the primary partition hierarchy \(H\) required for Lemma 3.9 changes during the computation of a thread, since \(H\) is only used to make the identification of a new zone more efficient. To address the above mentioned challenge, the algorithm starts a separate thread for each zone that is newly created during this process. That is, for each zone at level \(\ell-1\), an additional re-computation thread is started, as soon as the zone is created. Now we can state the orchestration strategy for re-computations. This strategy is actually very simple: **Re-computation strategy:** After each change operation affecting some node \(v\), the algorithm performs one computation round, for all threads of zones \(S\), at any level, with \(v\in S\). That is, thanks to the above observation, after a change operation, there are at most \(2^{h}\) threads for which one computation round is performed. Since \(2^{h}\) is a constant, these computations together require work at most \(\mathcal{O}(n^{7\theta})\). On the other hand, the whole re-computation for a zone \(S\) at level \(\ell\), including the computation of the relevant function values, is finished after at most \(\mathcal{O}(n^{(\ell-3)\theta})\) change operations that affect \(S\). Since \(\frac{1}{2}n^{(\ell-1)\theta}\) leaf additions are needed to let a sub-zone \(S^{\prime}\) grow from the warning limit \(\frac{1}{2}n^{(\ell-1)\theta}\) to the hard limit \(n^{(\ell-1)\theta}\), it is guaranteed that the re-computation thread for \(S\) is completed, before \(S^{\prime}\) grows too large. In fact, this is exactly, why partition hierarchies are \(3\)-pruned. When a re-computation of the overall zone was triggered by the size of \(t\), \(n\) is doubled as soon as this re-computation is completed. Thanks to Lemma 3.11 the overall work to update the stored function values for all affected zones (in constant time) after a change operation is \(\mathcal{O}(n^{3\theta})\). Altogether, the statement of the theorem follows by choosing \(\theta=\frac{\epsilon}{7}\). We state the lemma about the computation of new zones next. The partition hierarchy is used as a means to assign evenly distributed nodes to processors and to do parallel search for nodes with a particular property regarding the number of their descendants. Given a tree \(t\), a \(3\)-pruned partition hierarchy \(H\) of \(t\), and a zone \(S\) with at least \(m\) nodes, \(S\) can be partitioned into at most five zones, one of which has at least \(\frac{1}{2}m\) and at most \(m\) nodes, in constant time with work \(\mathcal{O}(n^{6\theta})\). #### Maintaining functions In Subsection 3.1, the tree functions were static and given by the initialisation. Only the automata functions needed to be updated. However, if leaf insertions are allowed, the tree functions can change. To keep the algorithm efficient, the special pairs need to be adapted to the evolution of the partition hierarchy, and tree functions can no longer be stored for all possible arguments. Furthermore, additional tree functions and functions defined for zones will be used. 
The stored information suffices to compute all required functions in constant time, and almost all of them with constant work. Given a tree \(t\), a \(3\)-pruned partition hierarchy \(H\) of \(t\), and the stored information as described above, for each \(\theta>0\), the child function can be evaluated in \(\mathcal{O}(1)\) time with work \(\mathcal{O}(n^{\theta})\). All other functions can be evaluated for all tuples with constant work. Furthermore, all stored information can be efficiently updated, with the help of and in accordance with the current primary partition hierarchy. Let \(\theta>0\) and \(H\) be a \(3\)-pruned partition hierarchy of \(t\) with automata and tree functions. The stored information described above can be maintained after each Relabel and AddChild operation in constant time with \(\mathcal{O}(n^{6\theta})\) work per operation. ## 4 Maintaining context-free languages As mentioned in the introduction, an analysis of the dynamic program that was used in [8] to show that context-free languages can be maintained in DynFO yields the following result. [[8, Proposition 5.3]] For each context-free language \(L\), there is a dynamic constant-time parallel algorithm on a CRCW PRAM for \(\operatorname{CFL}(L)\) with \(\mathcal{O}(n^{7})\) work. There is a huge gap between this upper bound and the conditional lower bound of \(\mathcal{O}(n^{\omega-1-\epsilon})\), for any \(\epsilon>0\), derived from the \(k\)-Clique conjecture [2], where \(\omega<2.373\)[12]. Our attempts to make this gap significantly smaller, have not been successful yet. However, for realtime deterministic context-free languages and visibly pushdown languages, more efficient dynamic algorithms are possible, as shown in the following two subsections. ### Deterministic context-free languages Realtime deterministic context-free languages are decided by deterministic PDAs without \(\lambda\)-transitions (RDPDAs). For each realtime deterministic context-free language \(L\) and each \(\epsilon>0\), there is a dynamic constant-time parallel algorithm on a CRCW PRAM for \(\operatorname{CFL}(L)\) with \(\mathcal{O}(n^{3+\epsilon})\) work. Given an RDPDA \(\mathcal{A}\) for \(L\), a configuration \(C=(p,u,s)\) consists of a state \(p\), a string \(u\) that is supposed to be read by \(\mathcal{A}\) and a string \(s\), the initial stack content. We use the following functions \(\delta_{\text{state}}\), \(\delta_{\text{stack}}\), and \(\delta_{\text{empty}}\) to describe the behaviour of \(\mathcal{A}\) on configurations. * \(\delta_{\text{state}}(C)\) yields the last state of \(\operatorname{run}(C)\). * \(\delta_{\text{stack}}(C)\) yields the stack content at the end of \(\operatorname{run}(C)\). * \(\delta_{\text{empty}}(C)\) is the position in \(u\), after which \(\operatorname{run}(C)\) empties its stack. It is zero, if this does not happen at all. The algorithm maintains the following information, for each simple configuration \(C=(p,u,\tau)\), where \(u=w[i,j]\), for some \(i\leq j\), for each suffix \(v=w[m,n]\) of \(w\), where \(j<m\), each state \(q\), and some \(k\leq n\). * \(\hat{\delta}(C)\) defined as the tuple \((\delta_{\text{state}}(C),|\delta_{\text{stack}}(C)|,\operatorname{top}_{1}( \delta_{\text{stack}}(C)),\delta_{\text{empty}}(C),)\), consisting of the state, the height of the stack, the top symbol of the stack, at the end of the run on \(C\) and the position where the run ends. 
If the run empties the stack prematurely or at the end of \(u\), then \(\operatorname{top}_{1}(\delta_{\text{stack}}(C))\) is undefined; * push-\(\operatorname{pos}(C,k)\), defined as the length of the longest prefix \(x\) of \(u\), such that \(|\delta_{\text{stack}}(p,x,\tau)|=k\). Informally this is the position of \(u\) at which the \(k\)-th symbol of \(\delta_{\text{stack}}(C)\), counted from the bottom, is written; * pop-\(\operatorname{pos}(C,q,v,k)\), defined as the pair \((o,r)\), where \(o\) is the length of the prefix \(v^{\prime}\) of \(v\), for which \(\operatorname{run}(q,v,\operatorname{top}_{k}(\delta_{\text{stack}}(C)))\) empties its stack at the last symbol of \(v^{\prime}\), and \(r\) is the state it enters. However, tuples for push-pos and pop-pos are only stored for values \(k\) of the form \(an^{b\theta}\), for integers \(b<\frac{1}{\theta}\) and \(a\leq n^{\theta}\), for some fixed \(\theta>0\). A more detailed account is given in the appendix. ### Visibly pushdown languages Visibly pushdown languages are a subclass of realtime deterministic CFLs. They use _pushdown alphabets_ of the form \(\hat{\Sigma}=(\Sigma_{c},\Sigma_{r},\Sigma_{\text{int}})\) and deterministic PDA that always push a symbol when reading a symbol from \(\Sigma_{c}\), pop a symbol when reading a symbol from \(\Sigma_{r}\) and leave the stack unchanged otherwise. We refer to [2] for more information. There is a correspondence between wellformed strings over a pushdown alphabet and labelled trees, where each matching pair \((a,b)\) of a call symbol from \(\Sigma_{c}\) and a return symbol from \(\Sigma_{r}\) is represented by an inner node with label \((a,b)\) and each other symbol by a leaf. From Theorem 3.2 and this correspondence the following can be concluded. For each visibly pushdown language \(L\) and each \(\epsilon>0\), there is a dynamic constant-time parallel algorithm on a CRCW PRAM for \(\mathrm{VPL}^{-}(L)\) with \(\mathcal{O}(n^{\epsilon})\) work. Here, \(\mathrm{VPL}^{-}(L)\) only allows the following change operations: * Replacement of a symbol by a symbol of the same type; * Insertion of an internal symbol from \(\Sigma_{\mathrm{int}}\) before a return symbol; * Replacement of an internal symbol by two symbols \(ab\), where \(a\in\Sigma_{c}\) and \(b\in\Sigma_{r}\). For arbitrary symbol replacements and insertions, there is a much less work-efficient algorithm which, however, is still considerably more efficient than the algorithm for DCFLs. For each visibly pushdown language \(L\) and each \(\epsilon>0\), there is a dynamic constant-time parallel algorithm on a CRCW PRAM for \(\mathrm{VPL}(L)\) with \(\mathcal{O}(n^{2+\epsilon})\) work. The work improvement mainly relies on the fact that how the height of the stack evolves during a computation only depends on the types of symbols. ## 5 Conclusion We have shown that the good work bounds for regular string languages from [11] carry over to regular tree languages, even under some structural changes of the tree. In turn they also hold for visibly pushdown languages under limited change operations. For realtime deterministic context-free languages and visibly pushdown languages under more general change operations better work bounds than for context-free languages could be shown. There are plenty of questions for further research, including the following: are there other relevant change operations for trees that can be handled with work \(\mathcal{O}(n^{\epsilon})\)? What are good bounds for further operations? 
Can the bounds for context-free languages be improved? Can the \(\mathcal{O}(n^{3+\epsilon})\) be shown for arbitrary (not necessarily realtime) DCFLs? And the most challenging: are there further lower bound results that complement our upper bounds?
2304.09130
${\mathsf D}^2={\mathsf H}+1/4$ with point interactions
Let ${\mathsf D}$ and ${\mathsf H}$ be the self-adjoint, one-dimensional Dirac and Schr\"odinger operators in $L^{2}(\mathbb{R};\mathbb{C}^{2})$ and $L^{2}(\mathbb{R};\mathbb{C})$ respectively. It is well known that, in absence of an external potential, the two operators are related through the equality ${\mathsf D}^2 = ({\mathsf H} + \frac{1}{4}){\mathbb 1}$. We show that such a kind of relation also holds in the case of $n$-point singular perturbations: given any self-adjoint realization $\widehat {\mathsf D}$ of the formal sum ${\mathsf D}+\sum_{k=1}^{n}\gamma_{k}\delta_{y_{k}}$, we explicitly determine the self-adjoint realization $\widehat{\mathsf H}$ of ${\mathsf H}{\mathbb 1}+\sum_{k=1}^{n}(\alpha_{k}\delta_{y_{k}}+\beta_{k}\delta'_{y_{k}})$ such that ${\widehat{\mathsf D}}^2 = \widehat{\mathsf H} + \frac{{\mathbb 1}}{4}$. The found correspondence preserves the subclasses of self-adjoint realizations corresponding to both the local and the separating boundary conditions. Some connections with supersymmetry are provided. The case of nonlocal boundary conditions allows the study of the relation ${\mathsf D}^{2}={\mathsf H}+\frac14$ for quantum graphs with (at most) two ends; in particular, the square of the extension corresponding to Kirchhoff-type boundary conditions for the Dirac operator on the graph gives the direct sum of two Schr\"odinger operators on the same graph, one with the usual Kirchhoff boundary conditions and the other with a sort of reversed Kirchhoff ones.
A. Posilicano, L. Reginato
2023-04-18T16:58:29Z
http://arxiv.org/abs/2304.09130v2
# \(\mathsf{D}^{2}\)=H+\(\frac{1}{4}\) with point interactions ###### Abstract. Let \(\mathsf{D}\) and \(\mathsf{H}\) be the self-adjoint, one-dimensional Dirac and Schrodinger operators in \(L^{2}(\mathbb{R};\mathbb{C}^{2})\) and \(L^{2}(\mathbb{R};\mathbb{C})\) respectively. It is well known that, in absence of an external potential, the two operators are related through the equality \(\mathsf{D}^{2}=(\mathsf{H}+\frac{1}{4})\mathbb{1}\). We show that such a kind of relation also holds in the case of \(n\)-point singular perturbations: given any self-adjoint realization \(\tilde{\mathsf{D}}\) of the formal sum \(\mathsf{D}+\sum_{k=1}^{n}\gamma_{k}\delta_{y_{k}}\), we explicitly determine the self-adjoint realization \(\tilde{\mathsf{H}}\) of \(\mathsf{H}\mathbb{1}+\sum_{k=1}^{n}(\alpha_{k}\delta_{y_{k}}+\beta_{k}\delta_{ y_{k}}^{\prime})\) such that \(\tilde{\mathsf{D}}^{2}=\tilde{\mathsf{H}}+\frac{2}{4}\). The found correspondence preserves the subclasses of self-adjoint realizations corresponding to both the local and the separating boundary conditions. The case on nonlocal boundary conditions allows the study of the relation \(\mathsf{D}^{2}=\mathsf{H}+\frac{1}{4}\) for quantum graphs with (at most) two ends; in particular, the square of the extension corresponding to Kirchhoff-type boundary conditions for the Dirac operator on the graph gives the direct sum of two Schrodinger operators on the same graph, one with the usual Kirchhoff boundary conditions and the other with a sort of reversed Kirchhoff ones. ## 1. Introduction Let \(L^{2}(\mathbb{R};\mathbb{C}^{d})\) be the Hilbert space of \(\mathbb{C}^{d}\)-valued square integrable functions with scalar product \(\langle\cdot,\cdot\rangle\) and norm \(\|\cdot\|\); likewise, \(H^{2}(\mathbb{R};\mathbb{C}^{d})\subset H^{1}(\mathbb{R};\mathbb{C}^{d}) \subset C_{b}(\mathbb{R};\mathbb{C}^{d})\) denote the Sobolev space on \(\mathbb{R}\) of order \(1\) and \(2\) and the space of bounded continuous functions with values in \(\mathbb{C}^{d}\) respectively. Whenever \(d=1\), we simply write \(L^{2}(\mathbb{R})\), \(H^{k}(\mathbb{R})\) and \(C_{b}(\mathbb{R})\). In \(L^{2}(\mathbb{R};\mathbb{C}^{2})\) we consider the free self-adjoint Dirac operator \(\mathsf{D}\) defined by \[\mathsf{D}:H^{1}(\mathbb{R};\mathbb{C}^{2})\subseteq L^{2}(\mathbb{R};\mathbb{ C}^{2})\to L^{2}(\mathbb{R};\mathbb{C}^{2})\,,\quad\mathsf{D}:=-i\,\frac{d}{dx}\, \sigma_{1}+\frac{1}{2}\,\sigma_{3}\] where \[\sigma_{1}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\quad\sigma_{2}=\begin{bmatrix}0&-i\\ i&0\end{bmatrix},\quad\sigma_{3}=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}\] are the Pauli matrices. Furthermore, we consider the free self-adjoint Schrodinger operator in \(L^{2}(\mathbb{R})\) \[\mathsf{H}:H^{2}(\mathbb{R})\subseteq L^{2}(\mathbb{R})\to L^{2}(\mathbb{R}), \quad\mathsf{H}:=-\frac{d^{2}}{dx^{2}}\,.\] It is well known and easy to check that in this free case there exists a relation between the two operators: \[\mathsf{D}^{2}=\left(\mathsf{H}+\frac{1}{4}\right)\mathbb{1}\,. \tag{1.1}\] Here and below, we use the isomorphism \(L^{2}(\mathbb{R};\mathbb{C}^{2})\simeq L^{2}(\mathbb{R})\oplus L^{2}(\mathbb{R})\) and the identification \(\mathsf{L}\mathbb{1}\equiv\mathsf{L}\oplus\mathsf{L}\), \(\mathsf{L}\) a linear operator in \(L^{2}(\mathbb{R})\). More generally, in the following we use the shorthand notation \(\mathsf{L}\mathbb{1}\equiv\mathsf{L}\oplus\mathsf{L}\) for a linear operator \(L:\mathrm{dom}(L)\subseteq H_{1}\to H_{2}\). 
Notice that (1.1) entails a relation between the resolvent operators: (1.2) \[(-\mathsf{D}+z)^{-1}=(\mathsf{D}+z)\left(-\mathsf{H}+z^{2}-\frac{1}{4}\right) \hskip-1.422638pt\raisebox{-0.86pt}{$\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! The aim of this paper is to extend this connection between Dirac's and Schrodinger's operators to the case where \(\mathsf{D}\) is perturbed by a sum of \(\delta\)'s potential, equivalently, given any self-adjoint extension \(\mathsf{D}_{\Pi,\Theta}\) of the symmetric operator \(\mathsf{D}|C^{\infty}_{comp}(\mathbb{R}\backslash\{y_{1},\ldots,y_{n}\};\mathbb{ C}^{2})\), we explicitly determine the couple \((\widehat{\Pi},\widehat{\Theta})\) such that \[(\mathsf{D}_{\Pi,\Theta})^{2}=\left(\widehat{\mathsf{H}}_{\widehat{\Pi}, \widehat{\Theta}}+\frac{1}{4}\right)\,. \tag{1.3}\] Here, we parametrize the self-adjoint extensions of \(\mathsf{D}|C^{\infty}_{comp}(\mathbb{R}\backslash\{y_{1},\ldots,y_{n}\}; \mathbb{C}^{2})\) by couples \((\Pi,\Theta)\), \(\Pi:\mathbb{C}^{2n}\to\mathbb{C}^{2n}\) an orthogonal projector, \(\Theta:\mathrm{ran}(\Pi)\to\mathrm{ran}(\Pi)\) a symmetric operator, and likewise \(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\) denotes the self-adjoint extension of \(\mathsf{H}\mathbb{1}|C^{\infty}_{comp}(\mathbb{R}\backslash\{y_{1},\ldots,y_{ n}\};\mathbb{C}^{2})\) corresponding to the couple \((\widehat{\Pi},\widehat{\Theta})\), \(\widehat{\Pi}:\mathbb{C}^{4n}\to\widehat{\mathbb{C}}^{4n}\) an orthogonal projector, \(\widehat{\Theta}:\mathrm{ran}(\widehat{\Pi})\to\mathrm{ran}(\widehat{\Pi})\) a symmetric operator. Any operator of the kind \(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\) is a self-adjoint realization of a singular perturbation of \(\mathsf{H}\mathbb{1}\) by a sum of \(\delta\)'s and \(\delta^{\prime}\)'s potentials. As in the free case, the relation (1.3) entails another one for the resolvents: \[(-\mathsf{D}_{\Pi,\Theta}+z)^{-1}=(\mathsf{D}_{\Pi,\Theta}+z)\left(-\widehat{ \mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}+z^{2}-\frac{1}{4}\right)^{\!\!- 1}\,,\] where \(\pm z\in\varrho(\mathsf{D}_{\Pi,\Theta})\) if and only if \((z^{2}-\frac{1}{4})\in\varrho(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{ \Theta}})\); here, \(\varrho(L)\) denotes the resolvent set of the closed operator \(L\). The specific case here considered is an example of solution of the problem concerning the representation of the square of a singular perturbation of a self-adjoint operator \(A\) by a singular perturbation of \(A^{2}\). This problem has been studied in [2]; however, in such a paper only the case \(A>0\) has been considered and the explicit examples there presented are limited to rank-one singular perturbations. 
The methods here used are different from the ones in [2], we do not use the resolvent formulae directly but instead use the self-adjointness domains. In more detail, the content of the paper is the following. In Section 2 we build the whole families of the self-adjoint extensions of \(\mathsf{D}|C^{\infty}_{comp}(\mathbb{R}\backslash\{y_{1},\ldots,y_{n}\}; \mathbb{C}^{2})\) and \(\mathsf{H}\mathbb{1}|C^{\infty}_{comp}(\mathbb{R}\backslash\{y_{1},\ldots,y_{ n}\};\mathbb{C}^{2})\). Instead of using the standard von Neumann theory (see, e.g., [1], [8], [16]), which gives a parametrization in terms of unitary operators between the defect spaces, we found more convenient to use the equivalent approach proposed in [21] and [22], which gives a parametrization in terms of couples \((\Pi,\Theta)\), where \(\Pi\) is an orthogonal projection and \(\Theta\) is a self-adjoint operator in \(\mathrm{ran}(\Pi)\); this allows for an easy writing of the corresponding resolvents. Then, in Section 3, by a comparison of the self-adjointness domains, we found the correspondence between the couple \((\Pi,\Theta)\) and \((\widehat{\Pi},\widehat{\Theta})\) such that (1.3) holds. In order to enhance the reader intuition, we start with simplest case, where \(n=1\) and \(\Pi=\mathbb{1}\) and then proceede step-by-step towards the most general case. Finally, in Section 4, we present various applications. In Subsection 4.1 we consider the subclass of self-adjoint extensions for the Dirac operator corresponding to local boundary conditions, i.e., to the ones which do not couple different points \(y_{k}\) and show that the corresponding extensions for the Schrodinger operator provide local boundary conditions as well. As a particular case of such a result, in Subsection 4.2 we consider the Gesztesy-Seba realizations; they are the self-adjoint realizations of the Dirac operator with local point interactions corresponding, in the non relativistic limit, to Schrodinger operators with local point interactions either of \(\delta\)-type or of \(\delta^{\prime}\)-type (see [17], [1, Appendix J], [14]). Then, in Subsection 4.3, we consider the subclass of self-adjoint extensions for the Dirac operator corresponding to separating boundary conditions, i.e., to the local ones for which, at any point, left limits are independent from right limits. This entails that the corresponding Dirac operator is the direct sum of self-adjoint Dirac operators \(\mathsf{D}_{k}\) in \(L^{2}(I_{k})\), where the \(I_{k}\)'s are either the half-lines \((-\infty,y_{1})\) and \((y_{n},+\infty)\) or the bounded intervals \((y_{k},y_{k+1})\); the same is true for the corresponding corresponding Schrodinger operator and \((\mathsf{D}_{k})^{2}=\widehat{\mathsf{H}}_{k}+\frac{1}{4}\). In Subsection 4.4, some connections with supersymmetry are discussed and a simple criterion of spontaneous supersymmetry breaking is provided (see [23], [3] and references therein for somehow different aspects of supersymmetry in presence of point interactions). In Subsection 4.5, we point out that our results, in the case of non local boundary conditions, allow the study of the connection between the square of the Dirac operator and the Schrodinger operator on quantum graphs with (at most) two ends. 
In particular, as an explicit example, we consider the Dirac operator on the eye graph with Kirchhoff-type boundary conditions at the vertices and show that its square is the direct sum of two Schrodinger operators on the same graph, one with Kirchhoff boundary conditions and the other with a sort of inverse Kirchhoff ones. These latter boundary conditions, like the Kirchhoff ones, reduce, in the case of the real line, to the free boundary conditions; this is consistent with (1.1). The procedure used for the eye graph can be extended, without substantial changes, to any kind of graph, thus showing that the property of conservation of Kirchhoff-like boundary conditions holds in general. We presume that the results here presented can be extended to the more involved cases corresponding to extensions of symmetric operators with infinite deficiency indices as the 1-dimensional Dirac and Schrodinger operators with singular perturbations on discrete sets (see [18] and [14]) and the 3-dimensional Dirac and Schrodinger operators with singular perturbations on 2-dimensional surfaces (see, e.g., [5], [6] and [7], [19]). ## 2. \(\mathsf{D}\) and \(\mathsf{H}\) with point interactions Given a finite set of points \(Y=\{y_{1},\cdots,y_{n}\}\), \(y_{1}<y_{2},\cdots<y_{n}\), we define \[H^{1}(\mathbb{R}\backslash Y;\mathbb{C}^{d}):=H^{1}(I_{0};\mathbb{C}^{d}) \oplus\cdots\oplus H^{1}(I_{n};\mathbb{C}^{d})\,, \tag{2.1}\] where, \[I_{0}:=(-\infty,y_{1})\,,\quad I_{1}:=(y_{1},y_{2})\,,\quad\ldots\ldots\,\quad I _{n-1}:=(y_{n-1},y_{n})\,,\quad I_{n}:=(y_{n},+\infty)\,, \tag{2.2}\] and \[H^{1}(I_{j};\mathbb{C}^{d}):=\{f\in L^{2}(I_{j};\mathbb{C}^{d}):f^{\prime}\in L ^{2}(I_{j};\mathbb{C}^{d})\}\,,\quad j=0,\ldots,n\,.\] Here and below, \(f^{\prime}\) denotes the (distributional) derivative of \(f\). Notice that the left and right limits \(f(y_{k}^{\pm})\) exists and are finite for any \(f\in H^{1}(\mathbb{R}\backslash Y;\mathbb{C}^{d})\). We define \[H^{2}(\mathbb{R}\backslash Y;\mathbb{C}^{d}):=H^{2}(I_{0};\mathbb{C}^{d}) \oplus\cdots\oplus H^{2}(I_{n};\mathbb{C}^{d})\,,\] where \[H^{2}(I_{j};\mathbb{C}^{d}):=\{f\in H^{1}(I_{j};\mathbb{C}^{d}):f^{\prime \prime}\in L^{2}(I_{j};\mathbb{C}^{d})\}\,,\quad j=0,\ldots,n\,.\] Obviously, \(H^{2}(\mathbb{R}\backslash Y;\mathbb{C}^{d})\subset H^{1}(\mathbb{R} \backslash Y;\mathbb{C}^{d})\) and \(f\in H^{2}(\mathbb{R}\backslash Y;\mathbb{C}^{d})\) implies \(f^{\prime}\in H^{1}(\mathbb{R}\backslash Y;\mathbb{C}^{d})\). We simply write \(H^{k}(\mathbb{R}\backslash Y)\), \(k=1,2\), whenever \(d=1\). Next, we introduce the two bounded operators \[\tau:H^{1}(\mathbb{R}\backslash Y;\mathbb{C}^{2})\to\mathbb{C}^{2n}\,,\quad \tau\Psi:=(\tau_{y_{1}}\Psi\,,\ldots,\tau_{y_{n}}\Psi)\,,\qquad\tau_{y}\Psi: =\langle\Psi\rangle_{y}\,, \tag{2.3}\] and \[\widehat{\tau}:H^{2}(\mathbb{R}\backslash Y)\to\mathbb{C}^{2n}\,,\qquad \widehat{\tau}\psi:=(\widehat{\tau}_{y_{1}}\psi\,,\ldots,\widehat{\tau}_{y_{ n}}\psi)\,,\qquad\widehat{\tau}_{y}\psi:=\langle\psi\rangle_{y}\oplus\langle\psi^{ \prime}\rangle_{y}\,, \tag{2.4}\] where \[\langle f\rangle_{y}:=\frac{1}{2}\left(f(y^{-})+f(y^{+})\right).\] Obviously, \(\langle f\rangle_{y_{k}}=f(y_{k})\) whenever \(f\in H^{1}(\mathbb{R};\mathbb{C}^{d})\subset C_{b}(\mathbb{R};\mathbb{C}^{d})\). In this section, following the scheme proposed in [22] (for the equivalent approaches which use either von Neuman's theory or Boundary Triples theory, see [16], [8] and [20, Sect. 
4.1], [18], [14] respectively), we review the construction of the self-adjoint extensions of the closed symmetric operators \[S:=\mathsf{D}|\ker(\tau|H^{1}(\mathbb{R};\mathbb{C}^{2}))\,,\qquad\widehat{S }:=\mathsf{H}|\ker(\widehat{\tau}|H^{2}(\mathbb{R}))\,.\] Both \(S\) and \(\widehat{S}\) have defect indices \((2n,2n)\); they are the closures of the symmetric operators \[S^{\circ}:={\sf D}|C^{\infty}_{comp}(\mathbb{R}\backslash Y;\mathbb{C}^{2})\,, \qquad\widehat{S}^{\circ}:={\sf H}|C^{\infty}_{comp}(\mathbb{R}\backslash Y)\,.\] Let \(\widehat{g}_{z}(x-y)\) be the kernel of the free Schrodinger resolvent \((-{\sf H}+z)^{-1}=\left(\frac{d^{2}}{dx^{2}}+z\right)^{-1}\), with \(z\in\varrho({\sf H})=\mathbb{C}\backslash[0,+\infty)\), i.e., \[\widehat{g}_{z}(x)=\frac{e^{i\sqrt{z}\,|x|}}{2i\sqrt{z}}\,,\qquad{\rm Im}( \sqrt{z})>0\,. \tag{2.5}\] By (1.2), setting \(w_{z}:=z^{2}-\frac{1}{4}\), one then obtains the kernel \(g_{z}(x-y)\) of the free Dirac resolvent \((-{\sf D}+z)^{-1}\), \(z\in\varrho({\sf D})=\mathbb{C}\backslash((-\infty,-1/2]\cup[1/2,+\infty))\), \[g_{z}(x)=({\sf D}+z)\widehat{g}_{w_{z}}\mathbb{1}=\frac{e^{i\sqrt{w_{z}}\,|x |}}{2i}\begin{bmatrix}\zeta_{z}&{\rm sgn}(x)\\ {\rm sgn}(x)&\zeta_{z}^{-1}\end{bmatrix}, \tag{2.6}\] where \(\zeta_{z}:=(\frac{1}{2}-z)/\sqrt{w_{z}}\) and \({\rm Im}(w_{z})>0\). By such kernels, one gets that the bounded operators \[G_{z}:\mathbb{C}^{2n}\to L^{2}(\mathbb{R};\mathbb{C}^{2})\,,\quad G_{z}:=(\tau (-{\sf D}+\bar{z})^{-1})^{*}\,,\quad z\in\mathbb{C}\backslash((-\infty,-1/2] \cup[1/2,+\infty))\,,\] and \[\widehat{G}_{z}:\mathbb{C}^{2n}\to L^{2}(\mathbb{R})\,,\quad\widehat{G}_{z}:=( \widehat{\tau}(-{\sf H}+\bar{z})^{-1})^{*}\,,\quad z\in\mathbb{C}\backslash(- \infty,0]\,,\] represents as \[[G_{z}\xi](x)=\sum_{k=1}^{n}g_{z}(y_{k}-x)\,\xi_{k}\,,\qquad\xi\equiv(\xi_{1},\ldots,\xi_{n})\,,\quad\xi_{k}\in\mathbb{C}^{2}\,.\] and \[[\widehat{G}_{z}\xi](x)=\sum_{k=1}^{n}(\widehat{g}_{z}(y_{k}-x)\,\xi_{k,1}+ \widehat{g}_{z}^{\,\prime}(y_{k}-x)\,\xi_{k,2})\,,\quad\xi\equiv((\xi_{1,1}, \xi_{1,2}),\ldots,(\xi_{n,1},\xi_{n,2}))\,.\] Their adjoints \[G_{\bar{z}}^{*}:L^{2}(\mathbb{R};\mathbb{C}^{2})\to\mathbb{C}^{2n}\,,\qquad \widehat{G}_{\bar{z}}^{*}:L^{2}(\mathbb{R})\to\mathbb{C}^{2n}\] are given by \[G_{\bar{z}}^{*}\Psi=\left((G_{z}^{*}\Psi)_{1},\ldots,(G_{z}^{*}\Psi)_{n} \right),\qquad(G_{\bar{z}}^{*}\Psi)_{k}:=\int_{\mathbb{R}}g_{z}(y_{k}-x)\Psi(x )\,dx\] and \[\widehat{G}_{\bar{z}}^{*}\psi=\left((\widehat{G}_{z}^{*}\Psi)_{1},\ldots,( \widehat{G}_{z}^{*}\Psi)_{n}\right),\qquad(\widehat{G}_{\bar{z}}^{*}\psi)_{k}: =\left(\int_{\mathbb{R}}\widehat{g}_{z}(y_{k}-x)\psi(x)\,dx,\int_{ \mathbb{R}}\widehat{g}_{z}^{\,\prime}(y_{k}-x)\psi(x)\,dx\right).\] Since \[G_{z}\xi\in H^{1}(\mathbb{R}\backslash Y;\mathbb{C}^{2})\quad\text{ and }\quad \widehat{G}_{z}\xi\in H^{2}(\mathbb{R}\backslash Y),\] both \[\tau G_{z}:\mathbb{C}^{2n}\to\mathbb{C}^{2n}\quad\text{ and }\quad\widehat{\tau} \widehat{G}_{z}:\mathbb{C}^{2n}\to\mathbb{C}^{2n}\] are well defined and are represented by the two \(n\times n\) block matrices with the \(2\times 2\) blocks \[[\tau G_{z}]_{jk}=\frac{e^{i\sqrt{w_{z}}\,|y_{k}-y_{j}|}}{2i} \begin{bmatrix}\zeta_{z}&{\rm sgn}(y_{k}-y_{j})\\ {\rm sgn}(y_{k}-y_{j})&\zeta_{z}^{-1}\end{bmatrix}, \tag{2.8}\] \[[\widehat{\tau}\widehat{G}_{z}]_{jk}=\frac{e^{i\sqrt{z}\,|y_{k}-y_{ j}|}}{2}\begin{bmatrix}(i\sqrt{z})^{-1}&{\rm sgn}(y_{k}-y_{j})\\ -{\rm sgn}(y_{k}-y_{j})&i\sqrt{z}\end{bmatrix}, \tag{2.7}\] where \[\operatorname{sgn}(x):=\begin{cases}-1&x<0\\ 0&x=0\\ 
+1&x>0\,.\end{cases}\] In the following, given an orthogonal projection \(P:\mathbb{C}^{d}\to\mathbb{C}^{d}\), by a slight abuse of notation, we use the same symbol to denote both the surjection \(P:\mathbb{C}^{d}\to\operatorname{ran}(P)\) and the injection \(P:\operatorname{ran}(P)\to\mathbb{C}^{d}\). **Theorem 2.1**.: _The sets of self-adjoint extensions of \(S\) and \(\widehat{S}\) are both parametrized by couples \((\Pi,\Theta)\), where \(\Pi:\mathbb{C}^{2n}\to\mathbb{C}^{2n}\) is an othogonal projector and \(\Theta:\operatorname{ran}(\Pi)\to\operatorname{ran}(\Pi)\) is symmetric. The extensions \(\mathsf{D}_{\Pi,\Theta}\) and \(\mathsf{H}_{\Pi,\Theta}\) have resolvents_ \[(-\mathsf{D}_{\Pi,\Theta}+z)^{-1} =(-\mathsf{D}+z)^{-1}+G_{z}\Pi(\Theta-\Pi\,\tau G_{z}\Pi)^{-1} \Pi G_{\widetilde{z}}^{*}\,, z\in\varrho(\mathsf{D}_{\Pi,\Theta})\cap\varrho(\mathsf{D})\] \[(-\mathsf{H}_{\Pi,\Theta}+z)^{-1} =(-\mathsf{H}+z)^{-1}+\widehat{G}_{z}\Pi(\Theta-\Pi\,\widehat{ \tau}\widehat{G}_{z}\Pi)^{-1}\Pi\widehat{G}_{\widetilde{z}}^{*}\,, z\in\varrho(\mathsf{H}_{\Pi,\Theta})\cap\varrho(\mathsf{H})\,.\] _Moreover,_ \[\operatorname{dom}(\mathsf{D}_{\Pi,\Theta})=\{\Psi\in L^{2}( \mathbb{R};\mathbb{C}^{2}):\Psi=\Psi_{z}+G_{z}\xi\,,\;\Psi_{z}\in H^{1}( \mathbb{R};\mathbb{C}^{2})\,,\;\xi\in\operatorname{ran}(\Pi)\,,\;\Pi\tau\Psi= \Theta\xi\}\] \[(-\mathsf{D}_{\Pi,\Theta}+z)\Psi =(-\mathsf{D}+z)\Psi_{z}\,,\] \[\operatorname{dom}(\mathsf{H}_{\Pi,\Theta})=\{\psi\in L^{2}( \mathbb{R}):\psi=\psi_{z}+\widehat{G}_{z}\xi\,,\;\psi_{z}\in H^{2}(\mathbb{R}) \,,\;\xi\in\operatorname{ran}(\Pi)\,,\;\Pi\widehat{\tau}\psi=\Theta\xi\},\] \[(-\mathsf{H}_{\Pi,\Theta}+z)\psi =(-\mathsf{H}+z)\psi_{z}\,;\] _such representations are \(z\)-independent and the decompositions of \(\Psi\) in \(\operatorname{dom}(\mathsf{D}_{\Pi,\Theta})\) and of \(\psi\) in \(\operatorname{dom}(\mathsf{H}_{\Pi,\Theta})\) are unique._ Proof.: The statements regarding the resolvents and the actions of the extensions follow from [22, Theorem 2.1] with \(\Gamma_{\Pi,\Theta}(z)\) there defined either as \(\Gamma_{\Pi,\Theta}(z):=\Theta-\Pi\tau G_{z}\Pi\) or as \(\Gamma_{\Pi,\Theta}(z):=\widehat{\Theta}-\Pi\widehat{\tau}\widehat{G}_{z}\Pi\). As regards the operators domains, we give the proof only for \(\mathsf{D}_{\Pi,\Theta}\), since the one for \(\mathsf{H}_{\Pi,\Theta}\) is of the same kind. By the resolvent formula, one has \[\operatorname{dom}(\mathsf{D}_{\Pi,\Theta})=\{\Psi\in L^{2}(\mathbb{R}; \mathbb{C}^{2}):\Psi=\Psi_{z}+G_{z}\Pi(\Theta-\Pi\tau G_{z}\Pi)^{-1}\Pi\tau\Psi _{z}\,,\;\Psi_{z}\in H^{1}(\mathbb{R};\mathbb{C}^{2})\}\,.\] Let us define \(\xi_{z}:=(\Theta-\Pi\,\tau G_{z}\Pi)^{-1}\Pi\tau\Psi_{z}\in\operatorname{ran} (\Pi)\); it is not difficult to check that \(\xi_{z}\) does not depend on \(z\) and so \(\Psi=\Psi_{z}+G_{z}\xi\). Then \[\Pi\tau\Psi-\Theta\xi=\Pi\tau\Psi_{z}+\Pi\tau G_{z}\xi-\Theta\xi=\Pi\tau\Psi_{ z}-(\Theta-\Pi\tau G_{z}\Pi)\xi=0\,.\] **Remark 2.2**.: Notice that the choice \(\Pi=\mathbb{0}\) gives the self-adjoint extensions \(\mathsf{D}\) and \(\mathsf{H}\). Therefore, in the following we always suppose \(\Pi\neq\mathbb{0}\) Since we want to extend the relation (1.1) to the case with point interactions, we also need to consider the self-adjoint extensions of \(\widehat{S}^{\circ}\mathbb{1}\). There are no essential changes with respect to the case of \(\mathbb{C}\)-valued functions, the only relevant one being that the defect indices increase to \((4n,4n)\). The result is of the same kind as in Theorem 2.1. 
**Theorem 2.3**.: _The set of the self-adjoint extensions of \(\widehat{S}\mathbb{1}\) is parametrized by couples \((\widehat{\Pi},\widehat{\Theta})\), where \(\widehat{\Pi}:\mathbb{C}^{4n}\to\mathbb{C}^{4n}\) is an othogonal projector and \(\widehat{\Theta}:\operatorname{ran}(\widehat{\Pi})\to\operatorname{ran}( \widehat{\Pi})\) is symmetric. The extension \(\widehat{\mathsf{H}}_{\Pi,\Theta}\) has resolvent_ \[(-\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}+z)^{-1}=(-\mathsf{H}+z)^ {-1}\mathbb{1}+(\widehat{G}_{z}\mathbb{1})\widehat{\Pi}(\widehat{\Theta}- \widehat{\Pi}(\widehat{\tau}\widehat{G}_{z}\mathbb{1})\widehat{\Pi})^{-1} \widehat{\Pi}(\widehat{G}_{\widetilde{z}}^{*}\mathbb{1}),\quad z\in\varrho( \widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}})\cap\varrho(\mathsf{H}).\] _Moreover,_ \[\mathrm{dom}(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}})=\{\Psi\in L^{ 2}(\mathbb{R};\mathbb{C}^{2}):\Psi=\Psi_{z}+(\widehat{G}_{z}\mathbb{1})\widehat {\xi},\ \Psi_{z}\in H^{2}(\mathbb{R};\mathbb{C}^{2}),\ \widehat{\xi}\in\mathrm{ran}(\widehat{\Pi}),\ \widehat{\Pi}(\widehat{\tau} \mathbb{1})\Psi=\widehat{\Theta}\widehat{\xi}\,\},\] \[(-\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}+z)\Psi=(-\mathsf{H}+z) \mathbb{1}\Psi_{z}\,;\] _such representation is \(z\)-independent and the decomposition of \(\Psi\) in \(\mathrm{dom}(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}})\) is unique._ **Remark 2.4**.: By Theorems 2.1 and 2.3, if both \(\widehat{\Pi}\) and \(\widehat{\Theta}\) are block diagonal, i.e., \(\widehat{\Pi}=\Pi_{1}\oplus\Pi_{2}\) and \(\widehat{\Theta}=\Theta_{1}\oplus\Theta_{1}\), then \[(-\widehat{\mathsf{H}}_{\Pi_{1}\oplus\Pi_{2},\Theta_{1}\oplus\Pi_{2}}+z)^{-1} =(-\mathsf{H}_{\Pi_{1},\Theta_{1}}+z)^{-1}\oplus(-\mathsf{H}_{\Pi_{2},\Theta_ {2}}+z)^{-1},\] equivalently, \[\widehat{\mathsf{H}}_{\Pi_{1}\oplus\Pi_{2},\Theta_{1}\oplus\Pi_{2}}=\mathsf{H }_{\Pi_{1},\Theta_{1}}\oplus\mathsf{H}_{\Pi_{2},\Theta_{2}}.\] In particular, \[\widehat{\mathsf{H}}_{\Pi\mathbb{1},\Theta\mathbb{1}}=\mathsf{H}_{\Pi,\Theta }\mathbb{1}\,.\] **Remark 2.5**.: Since \(g_{z}\) is the fundamental solution of \(-\mathsf{D}+z\), one has \[(-\mathsf{D}_{\Pi,\Theta}+z)\Psi=(-\mathsf{D}_{\Pi,\Theta}+z)(\Psi-G_{z}\xi)= (-\mathsf{D}+z)\Psi-\sum_{k=1}^{n}\xi_{k}\delta_{y_{k}}\,,\] i.e., \[\mathsf{D}_{\Pi,\Theta}\Psi=\mathsf{D}\Psi+\sum_{k=1}^{n}\xi_{k}\delta_{y_{k} }\,,\quad\xi\equiv(\xi_{1},\ldots,\xi_{n})\,,\] where the action of \(\mathsf{D}\) on \(\Psi\in L^{2}(\mathbb{R};\mathbb{C}^{2})\) is to be understood in a distributional sense. Analogously, \[\mathsf{H}_{\Pi,\Theta}\psi=\mathsf{H}\psi+\sum_{k=1}^{n}(\xi_{k,1}\delta_{y_ {k}}+\xi_{k,2}\delta^{\prime}_{y_{k}}),\quad\xi\equiv((\xi_{1,1},\xi_{1,2}), \ldots,(\xi_{n,1},\xi_{n,2}))\,,\] \[\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\Psi=\mathsf{H}\mathbb{1 }\Psi+\sum_{k=1}^{n}(\widehat{\xi}_{k,1}\delta_{y_{k}}+\widehat{\xi}_{k,2} \delta^{\prime}_{y_{k}}),\quad\widehat{\xi}\equiv((\widehat{\xi}_{1,1},\widehat {\xi}_{1,2}),\ldots,(\widehat{\xi}_{n,1},\widehat{\xi}_{n,2}))\,.\] In the following, we use the abbreviated notations \(\mathsf{D}_{\Theta}\equiv\mathsf{D}_{\mathbb{1},\Theta}\), \(\mathsf{H}_{\Theta}\equiv\mathsf{H}_{\mathbb{1},\Theta}\), \(\widehat{\mathsf{H}}_{\widehat{\Theta}}\equiv\widehat{\mathsf{H}}_{\mathbb{ 1},\widehat{\Theta}}\,.\) ## 3. 
\(\mathsf{D}^{2}=\mathsf{H}+\frac{1}{4}\) with point interactions We begin this section by providing an equivalent representation of the domains and actions of the self-adjoint operators we built in Section 2. **Theorem 3.1**.: _Let \(\mathsf{D}_{\Pi,\Theta}\), \(\mathsf{H}_{\Pi,\Theta}\) and \(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\) as in Section 2. Then_ \[\mathrm{dom}(\mathsf{D}_{\Pi,\Theta})=\{\Psi\in H^{1}(\mathbb{R}\backslash Y; \mathbb{C}^{2}):\rho\Psi\in\mathrm{ran}(\Pi),\ \Pi\tau\Psi=\Theta\rho\Psi\},\quad\mathsf{D}_{\Pi,\Theta}\Psi=\mathsf{D}_{ \mathbb{R}\backslash Y}\Psi\,,\] \[\mathrm{dom}(\mathsf{H}_{\Pi,\Theta})=\{\psi\in H^{2}(\mathbb{R}\backslash Y): \widehat{\rho}\psi\in\mathrm{ran}(\Pi),\ \Pi\widehat{\tau}\psi=\Theta\widehat{\rho}\psi\},\quad\mathsf{H}_{\Pi,\Theta} \psi=\mathsf{H}_{\mathbb{R}\backslash Y}\psi\,,\] \[\mathrm{dom}(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}})=\{\Psi\in H ^{2}(\mathbb{R}\backslash Y;\mathbb{C}^{2}):(\widehat{\rho}\mathbb{1})\Psi \in\mathrm{ran}(\widehat{\Pi}),\ \widehat{\Pi}(\widehat{\tau}\mathbb{1})\Psi=\widehat{\Theta}(\widehat{\rho} \mathbb{1})\Psi\},\quad\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\Psi=( \mathsf{H}_{\mathbb{R}\backslash Y}\mathbb{1})\Psi\,,\] _where \(\mathsf{D}_{\mathbb{R}\backslash Y}\) and \(\mathsf{H}_{\mathbb{R}\backslash Y}\) denote the free Dirac and Schrodinger operators acting on distributions supported in \(\mathbb{R}\backslash Y\),_ \[\rho:H^{1}(\mathbb{R}\backslash Y;\mathbb{C}^{2})\to\mathbb{C}^{2n}\,,\quad\rho \Psi:=\left(\rho_{y_{1}}\Psi\,,\ldots,\rho_{y_{n}}\Psi\right),\qquad\rho_{y} \Psi:=i\sigma_{1}[\Psi]_{y}\,,\] \[\widehat{\rho}:H^{2}(\mathbb{R}\backslash Y)\to\mathbb{C}^{2n}\,,\qquad\widehat {\rho}\psi:=\left(\widehat{\rho}_{y_{1}}\psi\,,\ldots,\widehat{\rho}_{y_{n}}\psi \right),\qquad\widehat{\rho}_{y}\psi:=\left[\psi^{\prime}\,\right]_{y}\oplus [-\psi]_{y}\,,\] \[[f]_{y}:=f(y^{+})-f(y^{-})\,.\] Proof.: Let \(\Psi=\Psi_{z}+G_{z}\xi\in\operatorname{dom}(\mathsf{D}_{\Pi,\Theta})\). One has \(\Psi_{z}\in H^{1}(\mathbb{R};\mathbb{C}^{2})\subset H^{1}(\mathbb{R}\backslash Y ;\mathbb{C}^{2})\) and \(G_{z}\xi\in H^{1}(\mathbb{R}\backslash Y;\mathbb{C}^{2})\); therefore, \(\Psi\in H^{1}(\mathbb{R}\backslash Y;\mathbb{C}^{2})\). By \([G_{z}\xi]_{y}=i\sigma_{1}\xi\), one gets \(\rho G_{z}\xi=\xi\); furthermore, by \(H^{1}(\mathbb{R}\backslash Y;\mathbb{C}^{2})\subset C_{b}(\mathbb{R};\mathbb{C }^{2})\), one gets \(\rho\Psi_{z}=0\). Therefore, \[\operatorname{dom}(\mathsf{D}_{\Pi,\Theta})\subseteq\mathscr{D}:=\{\Psi\in H ^{1}(\mathbb{R}\backslash Y;\mathbb{C}^{2}):\rho\Psi\in\operatorname{ran}(\Pi),\ \Pi\tau\Psi=\Theta\rho\Psi\}\,.\] By Remark 2.5, \(\mathsf{D}_{\Pi,\Theta}\Psi=\mathsf{D}_{\mathbb{R}\backslash Y}\Psi\) for any \(\Psi\in\operatorname{dom}(\mathsf{D}_{\Pi,\Theta})\), i.e., \(\mathsf{D}_{\Pi,\Theta}\subset\mathsf{D}_{\mathbb{R}\backslash Y}|\mathscr{D}\). Moreover, by integration by parts, \(\mathsf{D}_{\mathbb{R}\backslash Y}|\mathscr{D}\) is symmetric; hence, since \(\mathsf{D}_{\Pi,\Theta}\) is self-adjoint, one gets \(\mathsf{D}_{\Pi,\Theta}=\mathsf{D}_{\mathbb{R}\backslash Y}|\mathscr{D}\). The proofs for \(\mathsf{H}_{\Pi,\Theta}\) and \(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\) are of the same kind, using the relation \(\widehat{\rho}\widehat{G}_{z}\xi=\xi\). 
**Remark 3.2**.: Notice that \(\psi\in H^{1}(\mathbb{R}\backslash Y)\) belongs to \(H^{1}(\mathbb{R})\) if and only if \([\psi]_{y_{k}}=0\) for any \(k\) and consequently \(\psi\in H^{2}(f\mathbb{R}\backslash Y)\) belongs to \(H^{2}(\mathbb{R})\) if and only if \([\psi]_{y_{k}}=[\psi^{\prime}]_{y_{k}}=0\) for any \(k\). By Theorem 3.1 and by \[(\mathsf{D}_{\mathbb{R}\backslash Y})^{2}=\left(\mathsf{H}_{\mathbb{R} \backslash Y}+\frac{1}{4}\right)\mathbb{1}\,,\] given the couple \((\Pi,\Theta)\), one gets that the couple \((\widehat{\Pi},\widehat{\Theta})\) is such that \[(\mathsf{D}_{\Pi,\Theta})^{2}=\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{ \Theta}}+\frac{\mathbb{1}}{4}\,, \tag{3.1}\] if and only if \[\operatorname{dom}((\mathsf{D}_{\Pi,\Theta})^{2})=\operatorname{dom}(\widehat {\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}})\,. \tag{3.2}\] Therefore, exploiting the definitions of the operator domains in Theorem 3.1, there exists a couple \((\widehat{\Pi},\widehat{\Theta})\) for which (3.1) holds if and only if, given \((\Pi,\Theta)\), there exists \((\widehat{\Pi},\widehat{\Theta})\), \(\widehat{\Pi}\) an orthogonal projector in \(\mathbb{C}^{4n}\) and \(\widehat{\Theta}\) symmetric in \(\operatorname{ran}(\widehat{\Pi})\), such that \[\begin{cases}\rho\Psi\oplus\rho\mathsf{D}_{\mathbb{R}\backslash Y}\Psi\in \operatorname{ran}(\Pi\oplus\Pi)\\ (\Pi\oplus\Pi)\tau\Psi\oplus\tau\mathsf{D}_{\mathbb{R}\backslash Y}\Psi=( \Theta\oplus\Theta)\rho\Psi\oplus\rho\mathsf{D}_{\mathbb{R}\backslash Y}\Psi \end{cases}\iff\begin{cases}(\widehat{\rho}\mathbb{1})\Psi\in\operatorname{ ran}(\widehat{\Pi})\\ \widehat{\Pi}(\widehat{\tau}\mathbb{1})\Psi=\widehat{\Theta}(\widehat{\rho} \mathbb{1})\Psi\,.\end{cases} \tag{3.3}\] ### Spectral correspondence The relation (3.1) entails \(\pm z\in\varrho(\mathsf{D}_{\Pi,\Theta})\) if and only if \(z^{2}-\frac{1}{4}\in\varrho(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{ \Theta}})\), equivalently, \(\pm\lambda\in\sigma(\mathsf{D}_{\Pi,\Theta})\) if and only if \(\lambda^{2}-\frac{1}{4}\in\sigma(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{ \Theta}})\), and \[(-\mathsf{D}_{\Pi,\Theta}+z)^{-1}=(\mathsf{D}_{\Pi,\Theta}+z)\left(-\widehat{ \mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}+\left(z^{2}-\frac{1}{4}\right) \mathbb{1}\right)^{-1}\,. \tag{3.4}\] Furthermore, since, by the invariance of the essential spectrum by finite-rank perturbations, \[\sigma_{ess}(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}})=\sigma_{ ess}(\mathsf{H}\mathbb{1})=[0,\infty)\,,\qquad\sigma_{ess}(\mathsf{D}_{ \widehat{\Pi},\widehat{\Theta}})=\sigma_{ess}(\mathsf{D})=\left[-\infty,-\frac{1 }{2}\right)\cup\left[\frac{1}{2},+\infty\right)\,,\] one gets \[\lambda\in\sigma_{disc}(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}} )\cap\left[-\frac{1}{4},0\right)\quad\iff\quad\pm\left(\lambda+\frac{1}{4} \right)^{\frac{1}{2}}\in\sigma_{disc}(\mathsf{D}_{\Pi,\Theta})\,.\] By the resolvent formulae in Theorems 2.1 and 2.3, \[\lambda\in\sigma_{disc}(\mathsf{D}_{\Pi,\Theta})\quad\iff\quad \lambda\in(-1/2,1/2)\quad\text{ and }\quad\det(\Theta-\Pi\tau G_{\lambda}\Pi)=0\,, \tag{3.6}\] \[\lambda\in\sigma_{disc}(\widehat{\mathsf{H}}_{\widehat{\Pi}, \widehat{\Theta}})\quad\iff\quad\lambda\in(-\infty,0)\quad\text{ and }\quad\det(\widehat{\Theta}-\widehat{\Pi}(\widehat{\tau}\widehat{G}_{\lambda} \mathbb{1})\widehat{\Pi})=0\,. \tag{3.5}\] Now, we solve (3.3) starting from the simplest case \(n=1\), \(\Pi=\mathbb{1}\) and then proceeding step-by-step towards the most general case. 
### The case \(n=1\), \(\Pi=\mathbb{1}\) By (3.3), given the \(2\times 2\) Hermitian matrix \(\Theta\), we need to find the \(4\times 4\) Hermitian matrix \(\widehat{\Theta}\) such that \[\begin{bmatrix}\tau_{y}\Psi\\ \tau_{y}\mathsf{D}_{\mathbb{R}\setminus\{y\}}\Psi\end{bmatrix}=\begin{bmatrix} \Theta&\mathbb{0}\\ \mathbb{0}&\Theta\end{bmatrix}\begin{bmatrix}\rho_{y}\Psi\\ \rho_{y}\mathsf{D}_{\mathbb{R}\setminus\{y\}}\Psi\end{bmatrix}\quad \iff\quad(\widehat{\tau}_{y}\mathbb{1})\Psi=\widehat{\Theta}(\widehat{\rho }_{y}\mathbb{1})\Psi\,. \tag{3.7}\] To solve (3.7), at first we look for the two invertible matrices \(M_{1}\) and \(M_{2}\) such that \[(\widehat{\tau}_{y}\mathbb{1})\Psi=M_{1}\begin{bmatrix}\tau_{y}\Psi\\ \tau_{y}\mathsf{D}_{\mathbb{R}\setminus\{y\}}\Psi\end{bmatrix},\qquad( \widehat{\rho}_{y}\mathbb{1})\Psi=M_{2}\begin{bmatrix}\rho_{y}\Psi\\ \rho_{y}\mathsf{D}_{\mathbb{R}\setminus\{y\}}\Psi\end{bmatrix}. \tag{3.8}\] By direct calculations, one gets \[M_{1}=\begin{bmatrix}1&0&0&0\\ 0&\frac{i}{2}&0&i\\ 0&1&0&0\\ -\frac{i}{2}&0&i&0\end{bmatrix},\qquad M_{2}=\begin{bmatrix}\frac{1}{2}&0&1&0 \\ 0&i&0&0\\ 0&-\frac{1}{2}&0&1\\ i&0&0&0\end{bmatrix}. \tag{3.9}\] Therefore, (3.7) rewrites as \[M_{1}^{-1}(\widehat{\tau}_{y}\mathbb{1})\Psi=(\Theta\oplus\Theta)M_{2}^{-1}( \widehat{\rho}_{y}\mathbb{1})\Psi\quad\iff\quad(\widehat{\tau}_{y}\mathbb{1} )\Psi=\widehat{\Theta}(\widehat{\rho}_{y}\mathbb{1})\Psi\] and so the relation between \(\widehat{\Theta}\) and \(\Theta\) is given by \[\widehat{\Theta}= M_{1}(\Theta\oplus\Theta)M_{2}^{-1}\,. \tag{3.10}\] By \[\widehat{\Theta}=\widehat{\Theta}^{*}\quad\iff\quad M_{1}^{*}M_{2}(\Theta \oplus\Theta)=(\Theta\oplus\Theta)M_{2}^{*}M_{1},\] \(\widehat{\Theta}\) is symmetric by the relations \[M_{1}^{*}M_{2}=\begin{bmatrix}\mathbb{0}&\mathbb{1}\\ \mathbb{1}&\mathbb{0}\end{bmatrix}=M_{2}^{*}M_{1}\,. \tag{3.11}\] More explicitly, if \[\Theta=\begin{bmatrix}a&b\\ \bar{b}&d\end{bmatrix},\qquad a,d\in\mathbb{R},\,b\in\mathbb{C}\,,\] then \(\widehat{\Theta}\) is represented by the Hermitian matrix \[\widehat{\Theta}=\begin{bmatrix}0&-ib&0&-ia\\ i\bar{b}&d&id&0\\ 0&-id&0&-i\bar{b}\\ ia&0&ib&-a\end{bmatrix}\,.\] If \(a=d=0\) and \(b\in\mathbb{R}\), i.e., if \(\Theta=b\sigma_{1}\), then \(\widehat{\Theta}=b(\sigma_{2}\oplus\sigma_{2})\equiv b\sigma_{2}\mathbb{1}\) and, by Remark 2.4, the corresponding Schrodinger operator in \(L^{2}(\mathbb{R};\mathbb{C}^{2})\) is block diagonal: \[(\mathsf{D}_{b\sigma_{1}})^{2}=\left(\mathsf{H}_{b\sigma_{2}}+\frac{1}{4} \right)\mathbb{1}\,. \tag{3.12}\] ### The case \(n=1\), \(\Pi\neq\mathbb{1}\) Here we take \(\Pi:\mathbb{C}^{2}\to\mathbb{C}^{2}\) a not trivial orthogonal projection, i.e., \(\dim(\operatorname{ran}(\Pi))=1\), and \(\Theta\in\mathbb{R}\). 
By (3.8), (3.3) rewrites as \[\begin{cases}(\widehat{\rho}_{y}\mathbb{1})\Psi\in\operatorname{ran}(M_{2}(\Pi \oplus\Pi))\\ M_{2}(\Pi\oplus\Pi)M_{1}^{-1}(\widehat{\tau}_{y}\mathbb{1})\Psi=\Theta( \widehat{\rho}_{y}\mathbb{1})\Psi\end{cases}\qquad\Longleftrightarrow \qquad\begin{cases}(\widehat{\rho}_{y}\mathbb{1})\Psi\in\operatorname{ran}( \widehat{\Pi})\\ \widehat{\Pi}(\widehat{\tau}_{y}\mathbb{1})\Psi=\widehat{\Theta}(\widehat{\rho }_{y}\mathbb{1})\Psi\,.\end{cases} \tag{3.13}\] Therefore, \(\widehat{\Pi}:\mathbb{C}^{4}\to\mathbb{C}^{4}\) is the orthogonal projection onto the \(2\)-dimensional subspace \[\operatorname{ran}(\widehat{\Pi})=\operatorname{ran}(M_{2}(\Pi\oplus\Pi))= \operatorname{ran}(M_{2}(\Pi\oplus\Pi)M_{1}^{-1})\,,\] i.e., \[\widehat{\Pi}= M_{2}(\Pi\oplus\Pi)((\Pi\oplus\Pi)M_{2}^{*}M_{2}(\Pi\oplus\Pi))^ {-1}(\Pi\oplus\Pi)M_{2}^{*}\] \[= M_{2}(\Pi\oplus\Pi)(M_{2}^{*}M_{2})^{-1}(\Pi\oplus\Pi)M_{2}^{*}\] \[= (M_{2}(\Pi\oplus\Pi)M_{2}^{-1})(M_{2}(\Pi\oplus\Pi)M_{2}^{-1})^{* }\,.\] By (3.11), \(M_{2}(\Pi\oplus\Pi)M_{1}^{-1}\) is symmetric. Hence, \(\operatorname{ran}(\widehat{\Pi})=\ker(M_{2}(\Pi\oplus\Pi)M_{1}^{-1})^{\perp}\) and the symmetric operator \[M_{2}(\Pi\oplus\Pi)M_{1}^{-1}:\operatorname{ran}(\widehat{\Pi})\to \operatorname{ran}(\widehat{\Pi})\] is a bijection. Then, (3.13) gives \[\widehat{\Theta}:\operatorname{ran}(\widehat{\Pi})\to\operatorname{ran}( \widehat{\Pi}),\qquad\widehat{\Theta}:=\Theta(M_{2}(\Pi\oplus\Pi)M_{1}^{-1})^ {-1}.\] ### The case \(n>1\), \(\Pi=\mathbb{1}\) In order to exploit the results from the \(n=1\) case, we introduce the unitary operator \[U:\mathbb{C}^{4n}\to\mathbb{C}^{4n}\,,\qquad U(\xi_{1},\xi_{2},\dots,\xi_{2n}): =(\xi_{1},\xi_{n+1},\xi_{2},\xi_{n+2},\dots,\xi_{n},\xi_{2n})\,,\quad\xi_{k} \in\mathbb{C}^{2}\,. \tag{3.14}\] By such a definition, \[U(\tau\Psi\oplus\tau\mathsf{D}_{\mathbb{R}\setminus Y}\Psi)= \left(\begin{bmatrix}\tau_{y_{1}}\Psi\\ \tau_{y_{1}}\mathsf{D}_{\mathbb{R}\setminus Y}\Psi\end{bmatrix},\dots, \begin{bmatrix}\tau_{y_{n}}\Psi\\ \tau_{y_{n}}\mathsf{D}_{\mathbb{R}\setminus Y}\Psi\end{bmatrix}\right),\] \[U(\rho\Psi\oplus\rho\mathsf{D}_{\mathbb{R}\setminus Y}\Psi)= \left(\begin{bmatrix}\rho_{y_{1}}\Psi\\ \rho_{y_{1}}\mathsf{D}_{\mathbb{R}\setminus Y}\Psi\end{bmatrix},\dots, \begin{bmatrix}\rho_{y_{n}}\Psi\\ \rho_{y_{n}}\mathsf{D}_{\mathbb{R}\setminus Y}\Psi\end{bmatrix}\right)\,.\] Therefore, setting \[M_{1}^{\oplus}:\mathbb{C}^{4n}\to\mathbb{C}^{4n}\,,\qquad M_{1}^{\oplus}:=M_{ 1}\oplus\dots\oplus M_{1}\,,\] \[M_{2}^{\oplus}:\mathbb{C}^{4n}\to\mathbb{C}^{4n}\,,\qquad M_{2}^{\oplus}:=M_{ 2}\oplus\dots\oplus M_{2}\,,\] by (3.8), one gets \[M_{1}^{\oplus}U(\tau\Psi\oplus\tau\mathsf{D}_{\mathbb{R}\setminus Y}\Psi)= \left((\widehat{\tau}_{y_{1}}\mathbb{1})\Psi,\dots,(\widehat{\tau}_{y_{n}} \mathbb{1})\Psi\right)=U(\widehat{\tau}\mathbb{1})\Psi\,,\] \[M_{2}^{\oplus}U(\rho\Psi\oplus\rho\mathsf{D}_{\mathbb{R}\setminus Y}\Psi)= \left((\widehat{\rho}_{y_{1}}\mathbb{1})\Psi,\dots,(\widehat{\rho}_{y_{n}} \mathbb{1})\Psi\right)=U(\widehat{\rho}\mathbb{1})\Psi\] and so (3.3) rewrites as \[U^{*}(M_{1}^{\oplus})^{-1}U(\widehat{\tau}\mathbb{1})\Psi=(\Theta\oplus\Theta) U^{*}(M_{2}^{\oplus})^{-1}U(\widehat{\rho}\mathbb{1})\Psi\quad\Longleftrightarrow \quad(\widehat{\tau}\mathbb{1})\Psi=\widehat{\Theta}(\widehat{\rho}\mathbb{1}) \Psi\,.\] This gives \[\widehat{\Theta}=U^{*}M_{1}^{\oplus}U(\Theta\oplus\Theta)U^{*}(M_{2}^{\oplus})^ {-1}U\,. 
\tag{3.15}\] Such a operator \(\widehat{\Theta}\) is symmetric by \[U^{*}(M_{1}^{\oplus})^{*}M_{2}^{\oplus}U=\begin{bmatrix}\mathbb{0}&\mathbb{1} \\ \mathbb{1}&\mathbb{0}\end{bmatrix}=U^{*}(M_{2}^{\oplus})^{*}M_{1}^{\oplus}U\,. \tag{3.16}\] The relations (3.16) generalize (3.11), since \(U=\mathbb{1}\) whenever \(n=1\), and are a consequence of (3.11) itself and the definition (3.14). ### The case \(n>1\), \(\Pi\neq\mathbb{1}\) Finally, we consider the most general case. Using the unitary \(U:\mathbb{C}^{4n}\to\mathbb{C}^{4n}\) as in the previous section, (3.3) rewrites as \[\begin{cases}U^{*}(M_{2}^{\oplus})^{-1}U(\widehat{\rho}\mathbb{1})\Psi\in \operatorname{ran}(\Pi\oplus\Pi)&\iff\quad\begin{cases}(\widehat{\rho} \mathbb{1})\Psi\in\operatorname{ran}(\widehat{\Pi})\\ (\Pi\oplus\Pi)U^{*}(M_{1}^{\oplus})^{-1}U(\widehat{\tau}\mathbb{1})\Psi=( \Theta\oplus\Theta)U^{*}(M_{2}^{\oplus})^{-1}U(\widehat{\rho}\mathbb{1})\Psi \end{cases}\end{cases}\begin{cases}(\widehat{\rho}\mathbb{1})\Psi\in \operatorname{ran}(\widehat{\Pi})\\ \widehat{\Pi}(\widehat{\tau}\mathbb{1})\Psi=\widehat{\Theta}(\widehat{\rho} \mathbb{1})\Psi\,.\end{cases} \tag{3.17}\] This gives the orthogonal projector \(\widehat{\Pi}:\mathbb{C}^{4n}\to\mathbb{C}^{4n}\), with \(\dim(\operatorname{ran}(\widehat{\Pi}))=2\dim(\operatorname{ran}(\Pi))\), such that \[\operatorname{ran}(\widehat{\Pi})=\operatorname{ran}(U^{*}M_{2}^{\oplus}U(\Pi \oplus\Pi))=\operatorname{ran}(U^{*}M_{2}^{\oplus}U(\Pi\oplus\Pi)U^{*}(M_{1} ^{\oplus})^{-1}U)\,, \tag{3.18}\] i.e., \[\widehat{\Pi}= (U^{*}M_{2}^{\oplus}U(\Pi\oplus\Pi))\big{(}(U^{*}M_{2}^{\oplus}U (\Pi\oplus\Pi))^{*}(U^{*}M_{2}^{\oplus}U(\Pi\oplus\Pi))\big{)}^{-1}(U^{*}M_{2 }^{\oplus}U(\Pi\oplus\Pi))^{*}\] \[= U^{*}M_{2}^{\oplus}U(\Pi\oplus\Pi)((U^{*}M_{2}^{\oplus}U)^{*}U^{* }M_{2}^{\oplus}U)^{-1}U(\Pi\oplus\Pi)U^{*}(M_{2}^{\oplus})^{*}U\] \[= \big{(}U^{*}M_{2}^{\oplus}U(\Pi\oplus\Pi)U^{*}(M_{2}^{\oplus})^{- 1}\big{)}\big{(}U^{*}M_{2}^{\oplus}U(\Pi\oplus\Pi)U^{*}(M_{2}^{\oplus})^{-1} \big{)}^{*}\,,\] and \(\widehat{\Theta}:\operatorname{ran}(\widehat{\Pi})\to\operatorname{ran}( \widehat{\Pi})\), \[\widehat{\Theta}:=\big{(}U^{*}M_{2}^{\oplus}U(\Pi\oplus\Pi)U^{*}(M_{1}^{\oplus })^{-1}U\big{)}^{-1}U^{*}M_{2}^{\oplus}U(\Theta\oplus\Theta)U^{*}(M_{2}^{ \oplus})^{-1}U. \tag{3.19}\] By (3.16), \(U^{*}M_{2}^{\oplus}U(\Pi\oplus\Pi)U^{*}(M_{1}^{\oplus})^{-1}U\) is symmetric. Therefore, by \[\operatorname{ran}(\widehat{\Pi})=\operatorname{ran}(U^{*}M_{2}^{\oplus}U(\Pi \oplus\Pi)U^{*}(M_{1}^{\oplus})^{-1}U)=\ker(U^{*}M_{2}^{\oplus}U(\Pi\oplus\Pi )U^{*}(M_{1}^{\oplus})^{-1}U)^{\perp}\,,\] the operator \[U^{*}M_{2}^{\oplus}U(\Pi\oplus\Pi)U^{*}(M_{1}^{\oplus})^{-1}U:\operatorname{ ran}(\widehat{\Pi})\to\operatorname{ran}(\widehat{\Pi})\] is a bijection and \(\widehat{\Theta}\) is well defined. To conclude, we have to show that \(\widehat{\Theta}\) is symmetric. By (3.19) and by \(U^{*}U=\mathbb{1}\), \[\widehat{\Theta}\text{ is symmetric }\iff M_{2}^{\oplus}U(\Theta\oplus\Theta)(\Pi\oplus \Pi)U^{*}(M_{1}^{\oplus})^{-1}\text{ is symmetric}\] and so \(\widehat{\Theta}\) is symmetric by (3.16). **Remark 3.3**.: Let us point out that it is not necessary to determine \(\widehat{\Pi}\) and \(\widehat{\Theta}\) explicitly in order to write down the domain of \(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\). 
Indeed, by (3.3), \[\operatorname{dom}(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}) =\left\{\Psi\in H^{2}(\mathbb{R}\backslash Y;\mathbb{C}^{2}):\rho\Psi\oplus\rho\mathsf{D}_{\mathbb{R}\backslash Y}\Psi\in\operatorname{ran}(\Pi\oplus\Pi)\,,\right.\] \[\qquad\qquad\left.(\Pi\tau\Psi)\oplus(\Pi\tau\mathsf{D}_{\mathbb{R}\backslash Y}\Psi)=(\Theta\rho\Psi)\oplus(\Theta\rho\mathsf{D}_{\mathbb{R}\backslash Y}\Psi)\right\}.\] However, one needs to know \(\widehat{\Pi}\) and \(\widehat{\Theta}\) in order to write down the resolvent of \(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\), according to Theorem 2.3. The above representation of \(\operatorname{dom}(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}})\) suggests an alternative way to build the self-adjoint extensions of \(\widehat{S}^{\diamond}\mathbb{1}=\mathsf{H}\mathbb{1}|C^{\infty}_{comp}(\mathbb{R};\mathbb{C}^{2})\): one can apply the results in [21] and [22] to \(\mathsf{H}\mathbb{1}|\ker(\widetilde{\tau})\), where \[\widetilde{\tau}:H^{2}(\mathbb{R};\mathbb{C}^{2})\to\mathbb{C}^{2n}\oplus\mathbb{C}^{2n}\,,\qquad\widetilde{\tau}\Psi:=\tau\Psi\oplus\tau\mathsf{D}\Psi\,.\] In that case, the family of self-adjoint extensions of \(\widehat{S}^{\diamond}\mathbb{1}\) is represented by operators of the kind \(\widetilde{\mathsf{H}}_{\widetilde{\Pi},\widetilde{\Theta}}\), where \(\widetilde{\Pi}\) is an orthogonal projector in \(\mathbb{C}^{2n}\oplus\mathbb{C}^{2n}\) and \(\widetilde{\Theta}\) is a symmetric operator in \(\operatorname{ran}(\widetilde{\Pi})\). With respect to this parametrization, one has that \(\mathsf{D}^{2}_{\Pi,\Theta}=\widetilde{\mathsf{H}}_{\widetilde{\Pi},\widetilde{\Theta}}+\frac{\mathbb{1}}{4}\) if and only if \(\widetilde{\Pi}=\Pi\oplus\Pi\) and \(\widetilde{\Theta}=\Theta\oplus\Theta\). Even if such a correspondence is more explicit than the one which uses the couple \((\widehat{\Pi},\widehat{\Theta})\), it has the drawback that it works with a representation of the family of self-adjoint extensions of \(\widehat{S}^{\diamond}\mathbb{1}\) which is different from the usual one and which lacks the analogue of the property \(\widehat{\mathsf{H}}_{\Pi\mathbb{1},\Theta\mathbb{1}}=\mathsf{H}_{\Pi,\Theta}\mathbb{1}\). Therefore, in this paper we prefer to work with the family \(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\). **Remark 3.4**.: Suppose that for any \(\Psi\equiv(\psi_{1},\psi_{2})\in\mathrm{dom}(\mathsf{D}_{\Pi,\Theta})\) one has \[\begin{cases}\rho\Psi\in\mathrm{ran}(\Pi)\\ \Pi\tau\Psi=\Theta\rho\Psi\end{cases}\quad\iff\quad\begin{cases}B_{1}(\psi_{1})=0\\ B_{2}(\psi_{2})=0\,,\end{cases}\] with some linear operators \(B_{1}:H^{1}(\mathbb{R}\backslash Y)\to\mathbb{C}^{d_{1}}\) and \(B_{2}:H^{1}(\mathbb{R}\backslash Y)\to\mathbb{C}^{d_{2}}\).
Then, by the representation of \(\mathrm{dom}(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}})\) in Remark 3.3, it follows that the boundary conditions for \(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\) rewrite as \[\begin{cases}B_{1}(\psi_{1})=0\\ B_{1}\big{(}-i\psi_{2}^{\prime}+\frac{1}{2}\,\psi_{1}\big{)}=0\\ B_{2}(\psi_{2})=0\\ B_{2}\big{(}-i\psi_{1}^{\prime}-\frac{1}{2}\,\psi_{2}\big{)}=0\end{cases}\quad\equiv\quad\begin{cases}B_{1}(\psi_{1})=0\\ B_{2}(\psi_{1}^{\prime})=0\\ B_{2}(\psi_{2})=0\\ B_{1}(\psi_{2}^{\prime})=0\,.\end{cases}\] This entails \[\mathrm{dom}(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}})=\mathrm{dom}(\mathsf{H}_{1,2})\oplus\mathrm{dom}(\mathsf{H}_{2,1})\,,\qquad(\mathsf{D}_{\Pi,\Theta})^{2}=\left(\mathsf{H}_{1,2}+\frac{1}{4}\right)\oplus\left(\mathsf{H}_{2,1}+\frac{1}{4}\right)\,,\] where the self-adjoint operators \(\mathsf{H}_{j,k}:\mathrm{dom}(\mathsf{H}_{j,k})\subseteq L^{2}(\mathbb{R})\to L^{2}(\mathbb{R})\) are defined by \[\mathrm{dom}(\mathsf{H}_{1,2}):=\{\psi\in H^{2}(\mathbb{R}\backslash Y):B_{1}(\psi)=0,\ B_{2}(\psi^{\prime})=0\}\,,\quad\mathsf{H}_{1,2}\psi:=\mathsf{H}_{\mathbb{R}\backslash Y}\psi\,,\] \[\mathrm{dom}(\mathsf{H}_{2,1}):=\{\psi\in H^{2}(\mathbb{R}\backslash Y):B_{2}(\psi)=0,\ B_{1}(\psi^{\prime})=0\}\,,\quad\mathsf{H}_{2,1}\psi:=\mathsf{H}_{\mathbb{R}\backslash Y}\psi\,.\] ## 4. Applications ### Local boundary conditions Here we consider the case corresponding to local boundary conditions for the Dirac operator, i.e., boundary conditions which do not couple the values of \(\Psi\) at different points. That means \[\Pi=\Pi_{1}\oplus\cdots\oplus\Pi_{n}\,,\qquad\Pi_{k}:\mathbb{C}^{2}\to\mathbb{C}^{2}\,,\qquad 1\leq k\leq n\,,\] \[\Theta=\Theta_{1}\oplus\cdots\oplus\Theta_{n}\,,\qquad\Theta_{k}:\mathrm{ran}(\Pi_{k})\to\mathrm{ran}(\Pi_{k})\,,\qquad 1\leq k\leq n\,.\] In this case, by \[U((\Pi_{1}\oplus\cdots\oplus\Pi_{n})\oplus(\Pi_{1}\oplus\cdots\oplus\Pi_{n}))U^{*}=(\Pi_{1}\oplus\Pi_{1})\oplus\cdots\oplus(\Pi_{n}\oplus\Pi_{n})\,,\] one gets, by (3.18), \[\mathrm{ran}(\widehat{\Pi})= \mathrm{ran}(U^{*}((M_{2}(\Pi_{1}\oplus\Pi_{1})M_{1}^{-1})\oplus\cdots\oplus(M_{2}(\Pi_{n}\oplus\Pi_{n})M_{1}^{-1}))U)\] \[= \mathrm{ran}(U^{*}(\widehat{\Pi}_{1}\oplus\cdots\oplus\widehat{\Pi}_{n})U)\,,\] where, in the case \(\Pi_{k}\neq\mathbb{1}\), \(\widehat{\Pi}_{k}:\mathbb{C}^{4}\to\mathbb{C}^{4}\) is the orthogonal projector onto the \(2\)-dimensional subspace \[\mathrm{ran}(\widehat{\Pi}_{k})=\mathrm{ran}(M_{2}(\Pi_{k}\oplus\Pi_{k})M_{1}^{-1})\,, \tag{4.1}\] otherwise \(\widehat{\Pi}_{k}=\mathbb{1}\).
Then, by (3.19) and by \[U((\Pi_{1}\Theta_{1}\Pi_{1}\oplus\cdots\oplus\Pi_{n}\Theta_{n}\Pi_{n})\oplus(\Pi_{1}\Theta_{1}\Pi_{1}\oplus\cdots\oplus\Pi_{n}\Theta_{n}\Pi_{n}))U^{*}\] \[= (\Pi_{1}\Theta_{1}\Pi_{1}\oplus\Pi_{1}\Theta_{1}\Pi_{1})\oplus\cdots\oplus(\Pi_{n}\Theta_{n}\Pi_{n}\oplus\Pi_{n}\Theta_{n}\Pi_{n})\,,\] one obtains \[\widehat{\Theta}=U^{*}(\widehat{\Theta}_{1}\oplus\cdots\oplus\widehat{\Theta}_{n})U\,,\] where, in the case \(\Pi_{k}\neq\mathbb{1}\), \(\Theta_{k}\in\mathbb{R}\), \[\widehat{\Theta}_{k}:\operatorname{ran}(\widehat{\Pi}_{k})\to\operatorname{ran}(\widehat{\Pi}_{k})\,,\qquad\widehat{\Theta}_{k}=\Theta_{k}M_{1}(\Pi_{k}\oplus\Pi_{k})M_{2}^{-1}\,,\] otherwise, \[\widehat{\Theta}_{k}:\mathbb{C}^{4}\to\mathbb{C}^{4}\,,\qquad\widehat{\Theta}_{k}=M_{1}(\Theta_{k}\oplus\Theta_{k})M_{2}^{-1}\,.\] Therefore, the corresponding boundary conditions for \(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\) are \[\widehat{\rho}_{y_{k}}\Psi\in\operatorname{ran}(\widehat{\Pi}_{k})\,,\qquad\widehat{\Pi}_{k}\widehat{\tau}_{y_{k}}\Psi=\widehat{\Theta}_{k}\widehat{\rho}_{y_{k}}\Psi\,,\qquad 1\leq k\leq n\,,\] and so they are local as well. ### Gesztesy–Šeba realizations These are two families of self-adjoint realizations of the Dirac operator with local point interactions which correspond, in the nonrelativistic limit, to Schrödinger operators with local point interactions either of \(\delta\)-type or of \(\delta^{\prime}\)-type (see [17], [1, Appendix J], [14]). The operators in the \(\alpha\)-family have self-adjointness domains \[\operatorname{dom}(\mathsf{D}_{\alpha})=\{\Psi\equiv(\psi_{1},\psi_{2})\in H^{1}(\mathbb{R})\oplus H^{1}(\mathbb{R}\backslash Y):[\psi_{2}]_{y_{k}}=-i\alpha_{k}\psi_{1}(y_{k}),\ 1\leq k\leq n\},\quad\alpha_{k}\in\mathbb{R}\,, \tag{4.2}\] and the ones in the \(\beta\)-family have self-adjointness domains \[\operatorname{dom}(\mathsf{D}_{\beta})=\{\Psi\equiv(\psi_{1},\psi_{2})\in H^{1}(\mathbb{R}\backslash Y)\oplus H^{1}(\mathbb{R}):[\psi_{1}]_{y_{k}}=-i\beta_{k}\psi_{2}(y_{k}),\ 1\leq k\leq n\},\quad\beta_{k}\in\mathbb{R}\,. \tag{4.3}\] Since the cases where all the \(\alpha_{k}\)'s or all the \(\beta_{k}\)'s are equal to zero correspond to \(\mathsf{D}\), and the cases where \(0<m<n\) of the \(\alpha_{k}\)'s or \(\beta_{k}\)'s are zero reduce to the cases with \((n-m)\) point interactions, without loss of generality we can suppose that all the \(\alpha_{k}\)'s and \(\beta_{k}\)'s are different from zero.
By Theorem 3.1 and Remark 3.2, one has \[\mathsf{D}_{\alpha}=\mathsf{D}_{\Pi^{(\alpha)},\Theta^{(\alpha)}}\,,\quad\Pi^{(\alpha)}=\Pi^{(\alpha)}_{1}\oplus\cdots\oplus\Pi^{(\alpha)}_{n}\,,\quad\Theta^{(\alpha)}=\Theta^{(\alpha)}_{1}\oplus\cdots\oplus\Theta^{(\alpha)}_{n}\,,\] where \[\Pi^{(\alpha)}_{k}(\xi_{1},\xi_{2})=(\xi_{1},0)\,,\qquad\Theta^{(\alpha)}_{k}:\mathbb{C}\to\mathbb{C}\,,\quad\Theta^{(\alpha)}_{k}=\alpha_{k}^{-1}\] and \[\mathsf{D}_{\beta}=\mathsf{D}_{\Pi^{(\beta)},\Theta^{(\beta)}}\,,\quad\Pi^{(\beta)}=\Pi^{(\beta)}_{1}\oplus\cdots\oplus\Pi^{(\beta)}_{n}\,,\quad\Theta^{(\beta)}=\Theta^{(\beta)}_{1}\oplus\cdots\oplus\Theta^{(\beta)}_{n}\,,\] where \[\Pi^{(\beta)}_{k}(\xi_{1},\xi_{2})=(0,\xi_{2})\,,\qquad\Theta^{(\beta)}_{k}:\mathbb{C}\to\mathbb{C}\,,\quad\Theta^{(\beta)}_{k}=\beta_{k}^{-1}\,.\] Therefore, \[(\mathsf{D}_{\alpha})^{2}=\widehat{\mathsf{H}}_{\alpha}+\frac{1}{4}\,,\] where \[\widehat{\mathsf{H}}_{\alpha}=\widehat{\mathsf{H}}_{\widehat{\Pi}^{(\alpha)},\widehat{\Theta}^{(\alpha)}}\,,\] \[\operatorname{ran}\bigl{(}\widehat{\Pi}^{(\alpha)}\bigr{)}=\operatorname{ran}\bigl{(}U^{*}\bigl{(}\widehat{\Pi}^{(\alpha)}_{1}\oplus\cdots\oplus\widehat{\Pi}^{(\alpha)}_{n}\bigr{)}U\bigr{)}\,,\quad\widehat{\Theta}^{(\alpha)}=U^{*}\bigl{(}\widehat{\Theta}^{(\alpha)}_{1}\oplus\cdots\oplus\widehat{\Theta}^{(\alpha)}_{n}\bigr{)}U\,,\] \[\operatorname{ran}(\widehat{\Pi}^{(\alpha)}_{k})=\operatorname{ran}(M_{2}(\Pi^{(\alpha)}_{k}\oplus\Pi^{(\alpha)}_{k}))=\mathbb{C}\oplus\{0\}\oplus\{0\}\oplus\mathbb{C}\equiv\mathbb{C}^{2}\,,\] \[\widehat{\Theta}^{(\alpha)}_{k}=M_{1}(\Theta^{(\alpha)}_{k}\oplus\Theta^{(\alpha)}_{k})M_{2}^{-1}:\mathbb{C}^{2}\to\mathbb{C}^{2}\,,\quad\widehat{\Theta}^{(\alpha)}_{k}=\frac{1}{\alpha_{k}}\begin{bmatrix}0&-i\\ i&-1\end{bmatrix}\] and \[(\mathsf{D}_{\beta})^{2}=\widehat{\mathsf{H}}_{\beta}+\frac{1}{4}\,,\] where \[\widehat{\mathsf{H}}_{\beta}=\widehat{\mathsf{H}}_{\widehat{\Pi}^{(\beta)},\widehat{\Theta}^{(\beta)}}\,,\] \[\operatorname{ran}\bigl{(}\widehat{\Pi}^{(\beta)}\bigr{)}=\operatorname{ran}\bigl{(}U^{*}\bigl{(}\widehat{\Pi}^{(\beta)}_{1}\oplus\cdots\oplus\widehat{\Pi}^{(\beta)}_{n}\bigr{)}U\bigr{)}\,,\quad\widehat{\Theta}^{(\beta)}=U^{*}\bigl{(}\widehat{\Theta}^{(\beta)}_{1}\oplus\cdots\oplus\widehat{\Theta}^{(\beta)}_{n}\bigr{)}U\,,\] \[\operatorname{ran}(\widehat{\Pi}^{(\beta)}_{k})=\operatorname{ran}(M_{2}(\Pi^{(\beta)}_{k}\oplus\Pi^{(\beta)}_{k}))=\{0\}\oplus\mathbb{C}\oplus\mathbb{C}\oplus\{0\}\equiv\mathbb{C}^{2}\,,\] \[\widehat{\Theta}^{(\beta)}_{k}=M_{1}(\Theta^{(\beta)}_{k}\oplus\Theta^{(\beta)}_{k})M_{2}^{-1}:\mathbb{C}^{2}\to\mathbb{C}^{2}\,,\quad\widehat{\Theta}^{(\beta)}_{k}=\frac{1}{\beta_{k}}\begin{bmatrix}1&i\\ -i&0\end{bmatrix}.\] Hence, \[\operatorname{dom}(\widehat{\mathsf{H}}_{\alpha})=\{\Psi\equiv(\psi_{1},\psi_{2})\in H^{2}(\mathbb{R}\backslash Y):[\psi_{1}]_{y_{k}}=[\psi_{2}^{\prime}]_{y_{k}}=0,\] \[[\psi_{1}^{\prime}]_{y_{k}}=\alpha_{k}(\psi_{1}(y_{k})-i\psi_{2}^{\prime}(y_{k})),\ [\psi_{2}]_{y_{k}}=-i\alpha_{k}\psi_{1}(y_{k}),\ 1\leq k\leq n\}\,,\] and \[\operatorname{dom}(\widehat{\mathsf{H}}_{\beta})=\{\Psi\equiv(\psi_{1},\psi_{2})\in H^{2}(\mathbb{R}\backslash Y):[\psi_{1}^{\prime}]_{y_{k}}=[\psi_{2}]_{y_{k}}=0,\] \[[\psi_{1}]_{y_{k}}=-i\beta_{k}\psi_{2}(y_{k}),\ [\psi_{2}^{\prime}]_{y_{k}}=-\beta_{k}(\psi_{2}(y_{k})+i\psi_{1}^{\prime}(y_{k})),\ 1\leq k\leq n\}\,.\] ### Separating boundary conditions Let \(n=1\), and \(\Pi=\mathbb{1}\).
By \(2\tau\Psi=\Psi(y^{-})+\Psi(y^{+})\) and \(\rho\Psi=i\sigma_{1}(\Psi(y^{+})-\Psi(y^{-}))\), the boundary condition \(\tau\Psi=\Theta\rho\Psi\) rewrites as \[(2i\Theta\sigma_{1}+\mathbb{1})\Psi(y^{-})=(2i\Theta\sigma_{1}-\mathbb{1})\Psi(y^{+})\,.\] If \(\Theta\) is such that \[\operatorname{ran}(2i\Theta\sigma_{1}+\mathbb{1})\cap\operatorname{ran}(2i\Theta\sigma_{1}-\mathbb{1})=\{0\}\,, \tag{4.4}\] then \(\tau\Psi=\Theta\rho\Psi\) is equivalent to the separating boundary conditions \[(2i\Theta\sigma_{1}+\mathbb{1})\Psi(y^{-})=0 \tag{4.5}\] \[(2i\Theta\sigma_{1}-\mathbb{1})\Psi(y^{+})=0\,. \tag{4.6}\] By the equivalence of (4.4) with \[\det(2i\sigma_{1}\Theta-\mathbb{1})=0=\det(2i\sigma_{1}\Theta+\mathbb{1})\,, \tag{4.7}\] one gets that (4.4) holds if and only if \[\Theta=\Theta_{\omega,\alpha,\beta}:=\frac{1}{2}\begin{bmatrix}\alpha&i\omega\sqrt{1+\alpha\beta}\\ -i\omega\sqrt{1+\alpha\beta}&\beta\end{bmatrix},\qquad\omega=\pm 1\,,\ \alpha,\beta\in\mathbb{R}\,,\ \alpha\beta\geq-1\,, \tag{4.8}\] and (4.5), (4.6) rewrite, whenever \(\Psi\equiv(\psi_{1},\psi_{2})\), as \[\begin{cases}(\omega\sqrt{1+\alpha\beta}-1)\psi_{1}(y^{-})-i\alpha\psi_{2}(y^{-})=0\\ i\beta\psi_{1}(y^{-})+(\omega\sqrt{1+\alpha\beta}+1)\psi_{2}(y^{-})=0\,,\end{cases} \tag{4.9}\] \[\begin{cases}(\omega\sqrt{1+\alpha\beta}+1)\psi_{1}(y^{+})-i\alpha\psi_{2}(y^{+})=0\\ i\beta\psi_{1}(y^{+})+(\omega\sqrt{1+\alpha\beta}-1)\psi_{2}(y^{+})=0\,.\end{cases} \tag{4.10}\] Then \[\mathsf{D}_{\omega,\alpha,\beta}=\mathsf{D}_{\omega,\alpha,\beta}^{-}\oplus\mathsf{D}_{\omega,\alpha,\beta}^{+}\,,\] where \(\mathsf{D}_{\omega,\alpha,\beta}:=\mathsf{D}_{\Theta_{\omega,\alpha,\beta}}\) and the self-adjoint operators \(\mathsf{D}_{\omega,\alpha,\beta}^{-}\) and \(\mathsf{D}_{\omega,\alpha,\beta}^{+}\) denote the Dirac operators in \(L^{2}((-\infty,y);\mathbb{C}^{2})\) and \(L^{2}((y,+\infty);\mathbb{C}^{2})\), with boundary conditions (4.9) and (4.10) respectively.
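As a quick consistency check, every matrix of the form (4.8) does satisfy (4.7): setting \(s:=\sqrt{1+\alpha\beta}\), a direct computation gives \[2i\sigma_{1}\Theta_{\omega,\alpha,\beta}=\begin{bmatrix}\omega s&i\beta\\ i\alpha&-\omega s\end{bmatrix}\,,\qquad\det(2i\sigma_{1}\Theta_{\omega,\alpha,\beta}\mp\mathbb{1})=(1-s^{2})+\alpha\beta=0\] for both choices of sign.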
Rewriting the boundary condition \((\widehat{\tau}\mathbb{1})\Psi=\widehat{\Theta}_{\omega,\alpha,\beta}(\widehat{\rho}\mathbb{1})\Psi\) as \[\bigl{(}2i\widehat{\Theta}_{\omega,\alpha,\beta}(\sigma_{2}\oplus\sigma_{2})+\mathbb{1}\bigr{)}\widehat{\Psi}(y^{-})=\bigl{(}2i\widehat{\Theta}_{\omega,\alpha,\beta}(\sigma_{2}\oplus\sigma_{2})-\mathbb{1}\bigr{)}\widehat{\Psi}(y^{+})\,,\] where \(\widehat{\Psi}\equiv(\psi_{1},\psi_{1}^{\prime},\psi_{2},\psi_{2}^{\prime})\) and \(\widehat{\Theta}_{\omega,\alpha,\beta}\) is defined by (3.10), i.e., \[\widehat{\Theta}_{\omega,\alpha,\beta}=\begin{bmatrix}0&\omega\sqrt{1+\alpha\beta}&0&-i\alpha\\ \omega\sqrt{1+\alpha\beta}&\beta&i\beta&0\\ 0&-i\beta&0&-\omega\sqrt{1+\alpha\beta}\\ i\alpha&0&-\omega\sqrt{1+\alpha\beta}&-\alpha\end{bmatrix}\,,\] one can check that \[\det\big{(}2i\widehat{\Theta}_{\omega,\alpha,\beta}(\sigma_{2}\oplus\sigma_{2})-\mathbb{1}\big{)}=0=\det\big{(}2i\widehat{\Theta}_{\omega,\alpha,\beta}(\sigma_{2}\oplus\sigma_{2})+\mathbb{1}\big{)}\] and so, proceeding as above, \[\operatorname{ran}\!\left(2i\widehat{\Theta}_{\omega,\alpha,\beta}(\sigma_{2}\oplus\sigma_{2})+\mathbb{1}\right)\cap\operatorname{ran}\!\left(2i\widehat{\Theta}_{\omega,\alpha,\beta}(\sigma_{2}\oplus\sigma_{2})-\mathbb{1}\right)=\{0\}\,.\] Thus the separating boundary conditions \[\big{(}2i\widehat{\Theta}_{\omega,\alpha,\beta}(\sigma_{2}\oplus\sigma_{2})+\mathbb{1}\big{)}\widehat{\Psi}(y^{-})=0 \tag{4.11}\] \[\big{(}2i\widehat{\Theta}_{\omega,\alpha,\beta}(\sigma_{2}\oplus\sigma_{2})-\mathbb{1}\big{)}\widehat{\Psi}(y^{+})=0 \tag{4.12}\] hold for \(\widehat{\mathsf{H}}_{\omega,\alpha,\beta}:=\widehat{\mathsf{H}}_{\widehat{\Theta}_{\omega,\alpha,\beta}}\) and \[\widehat{\mathsf{H}}_{\omega,\alpha,\beta}=\widehat{\mathsf{H}}_{\omega,\alpha,\beta}^{-}\oplus\widehat{\mathsf{H}}_{\omega,\alpha,\beta}^{+}\,,\] where the self-adjoint operators \(\widehat{\mathsf{H}}_{\omega,\alpha,\beta}^{-}\) and \(\widehat{\mathsf{H}}_{\omega,\alpha,\beta}^{+}\) denote the Schrödinger operator \(-\frac{d^{2}}{dx^{2}}\,\mathbb{1}\) in \(L^{2}((-\infty,y);\mathbb{C}^{2})\) and \(L^{2}((y,+\infty);\mathbb{C}^{2})\), with boundary conditions (4.11) and (4.12) respectively. Furthermore, \[(\mathsf{D}_{\omega,\alpha,\beta}^{\pm})^{2}=\widehat{\mathsf{H}}_{\omega,\alpha,\beta}^{\pm}+\frac{\mathbb{1}}{4}\,.\] In the case \(n=1\), \(\Pi\neq\mathbb{1}\), there are no separating boundary conditions since the boundary conditions in \(\operatorname{dom}(\mathsf{D}_{\Pi,\Theta})\) rewrite as \[(\Pi-\mathbb{1})\sigma_{1}\Psi(y^{-})=(\Pi-\mathbb{1})\sigma_{1}\Psi(y^{+})\,,\qquad(2i\Theta\sigma_{1}+\Pi)\Psi(y^{-})=(2i\Theta\sigma_{1}-\Pi)\Psi(y^{+})\] and the linear operator in the relation on the left has a trivial kernel. By the \(n=1\) case, one immediately gets the family of separating and local boundary conditions: it suffices to take \(\Theta_{\underline{\omega},\underline{\alpha},\underline{\beta}}:=\Theta_{\omega_{1},\alpha_{1},\beta_{1}}\oplus\cdots\oplus\Theta_{\omega_{n},\alpha_{n},\beta_{n}}\). Then, using the abbreviated notations \(\mathsf{D}_{\underline{\omega},\underline{\alpha},\underline{\beta}}\equiv\mathsf{D}_{\Theta_{\underline{\omega},\underline{\alpha},\underline{\beta}}}\) and \(\widehat{\mathsf{H}}_{\underline{\omega},\underline{\alpha},\underline{\beta}}\equiv\widehat{\mathsf{H}}_{\widehat{\Theta}_{\underline{\omega},\underline{\alpha},\underline{\beta}}}\), where \(\widehat{\Theta}_{\underline{\omega},\underline{\alpha},\underline{\beta}}:=U^{*}\big{(}\widehat{\Theta}_{\omega_{1},\alpha_{1},\beta_{1}}\oplus\cdots\oplus\widehat{\Theta}_{\omega_{n},\alpha_{n},\beta_{n}}\big{)}U\) and \(\widehat{\Theta}_{\omega_{k},\alpha_{k},\beta_{k}}\) is defined by (3.10), i.e., \(\widehat{\Theta}_{\omega_{k},\alpha_{k},\beta_{k}}:=M_{1}(\Theta_{\omega_{k},\alpha_{k},\beta_{k}}\oplus\Theta_{\omega_{k},\alpha_{k},\beta_{k}})M_{2}^{-1}\), one obtains \[\mathsf{D}_{\underline{\omega},\underline{\alpha},\underline{\beta}}=\mathsf{D}_{\omega_{1},\alpha_{1},\beta_{1}}^{-}\oplus\mathsf{D}_{\omega_{1,2},\alpha_{1,2},\beta_{1,2}}\oplus\mathsf{D}_{\omega_{2,3},\alpha_{2,3},\beta_{2,3}}\oplus\cdots\oplus\mathsf{D}_{\omega_{n-1,n},\alpha_{n-1,n},\beta_{n-1,n}}\oplus\mathsf{D}_{\omega_{n},\alpha_{n},\beta_{n}}^{+}\] and \[\widehat{\mathsf{H}}_{\underline{\omega},\underline{\alpha},\underline{\beta}}=\widehat{\mathsf{H}}_{\omega_{1},\alpha_{1},\beta_{1}}^{-}\oplus\widehat{\mathsf{H}}_{\omega_{1,2},\alpha_{1,2},\beta_{1,2}}\oplus\widehat{\mathsf{H}}_{\omega_{2,3},\alpha_{2,3},\beta_{2,3}}\oplus\cdots\oplus\widehat{\mathsf{H}}_{\omega_{n-1,n},\alpha_{n-1,n},\beta_{n-1,n}}\oplus\widehat{\mathsf{H}}_{\omega_{n},\alpha_{n},\beta_{n}}^{+}\,.\] Here \(\mathsf{D}_{\omega_{k-1,k},\alpha_{k-1,k},\beta_{k-1,k}}\) denotes the self-adjoint Dirac operator in \(L^{2}((y_{k-1},y_{k});\mathbb{C}^{2})\) with boundary conditions of the kind (4.6) at \(y_{k-1}\) (with parameters \(\omega_{k-1},\alpha_{k-1},\beta_{k-1}\)) and of the kind (4.5) at \(y_{k}\) (with parameters \(\omega_{k},\alpha_{k},\beta_{k}\));
\(\widehat{\mathsf{H}}_{\omega_{k-1,k},\alpha_{k-1,k},\beta_{k-1,k}}\) is defined in a similar way, using the boundary conditions (4.11) and (4.12). Furthermore, \[(\mathsf{D}_{\omega_{k-1,k},\alpha_{k-1,k},\beta_{k-1,k}})^{2}=\widehat{\mathsf{H}}_{\omega_{k-1,k},\alpha_{k-1,k},\beta_{k-1,k}}+\frac{\mathbb{1}}{4}\,,\qquad 1\leq k\leq n\,.\] ### Supersymmetry Since \[\sigma_{1}\sigma_{2}+\sigma_{2}\sigma_{1}=0=\sigma_{3}\sigma_{2}+\sigma_{2}\sigma_{3}\,,\] one has \[\sigma_{2}\mathsf{D}_{\mathbb{R}\setminus Y}+\mathsf{D}_{\mathbb{R}\setminus Y}\sigma_{2}=0\,.\] Therefore, if \((\Pi,\Theta)\) is such that \[\begin{cases}\rho\Psi\in\operatorname{ran}(\Pi)\\ \Pi\tau\Psi=\Theta\rho\Psi\end{cases}\qquad\Longrightarrow\quad\begin{cases}\rho\sigma_{2}\Psi\in\operatorname{ran}(\Pi)\\ \Pi\tau\sigma_{2}\Psi=\Theta\rho\sigma_{2}\Psi\,,\end{cases} \tag{4.13}\] then \(\sigma_{2}\) anti-commutes with \(\mathsf{D}_{\Pi,\Theta}\) and so, by (3.1), the system \[\left(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}+\frac{\mathbb{1}}{4},\sigma_{2},\mathsf{D}_{\Pi,\Theta}\right) \tag{4.14}\] has supersymmetry (see, e.g., [4, Chapter 1], [15, Section 6.3]). By \[\langle\sigma_{2}\Psi\rangle_{y}=\sigma_{2}\langle\Psi\rangle_{y}\,,\qquad[\sigma_{2}\Psi]_{y}=\sigma_{2}[\Psi]_{y}\,,\] and by \(\sigma_{1}\sigma_{2}=-\sigma_{2}\sigma_{1}\), one gets \[\tau\sigma_{2}\Psi=\sigma_{2}^{\oplus}\tau\Psi\,,\qquad\rho\sigma_{2}\Psi=-\sigma_{2}^{\oplus}\rho\Psi\,,\qquad\sigma_{2}^{\oplus}:=\sigma_{2}\oplus\cdots\oplus\sigma_{2}\,.\] Therefore, (4.13) holds whenever \[\Pi\sigma_{2}^{\oplus}-\sigma_{2}^{\oplus}\Pi=0=\Theta\sigma_{2}^{\oplus}+\sigma_{2}^{\oplus}\Theta\,. \tag{4.15}\] Given a couple \((\Pi,\Theta)\) which satisfies (4.15), let us further suppose that \[\det(\Theta+\Pi\tau G_{0}\Pi)\neq 0\,. \tag{4.16}\] Then, by (3.5), zero is not an eigenvalue of \(\mathsf{D}_{\Pi,\Theta}\), i.e., the system (4.14) has no supersymmetric state and there is a spontaneous supersymmetry breaking (see, e.g., [4, Section 1.8]). In the case \(n=1\), the solutions of (4.15) are found immediately. If \(\Pi\neq\mathbb{1}\), then \(\Pi=\Pi_{\pm}:=v_{\pm}\otimes v_{\pm}\) and \(\Theta=0\), where \(v_{\pm}\), \(|v_{\pm}|=1\), solves \(\sigma_{2}v_{\pm}=\pm v_{\pm}\). If \(\Pi=\mathbb{1}\), then \(\Theta=\Theta_{a,b}:=b\sigma_{1}+a\sigma_{3}\), \(a,b\in\mathbb{R}\). Since, by (2.7), \(\tau G_{0}=-\frac{1}{2}\,\sigma_{3}\), one has \(\det(\Theta_{a,b}+\tau G_{0})=0\) if and only if \(b=0\) and \(a=\frac{1}{2}\). Therefore, for any \((a,b)\in\mathbb{R}^{2}\backslash\{(\frac{1}{2},0)\}\) the system (4.14) with \(\Pi=\mathbb{1}\) and \(\Theta=\Theta_{a,b}\) has no supersymmetric state and there is a spontaneous supersymmetry breaking. Notice that once the solution of (4.15) is known in the \(n=1\) case, then the set of solutions for the case of \(n>1\) local point interactions is readily obtained: \(\Pi=\Pi_{1}\oplus\cdots\oplus\Pi_{n}\) and \(\Theta=\Theta_{1}\oplus\cdots\oplus\Theta_{n}\), where \((\Pi_{k},\Theta_{k})\) is equal either to \((\Pi_{\pm},\mathbb{0})\) or to \((\mathbb{1},\Theta_{a_{k},b_{k}})\).
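The determinant condition used above can be checked in one line: \[\Theta_{a,b}+\tau G_{0}=b\sigma_{1}+\big{(}a-\tfrac{1}{2}\big{)}\sigma_{3}=\begin{bmatrix}a-\frac{1}{2}&b\\ b&\frac{1}{2}-a\end{bmatrix}\,,\qquad\det(\Theta_{a,b}+\tau G_{0})=-\big{(}a-\tfrac{1}{2}\big{)}^{2}-b^{2}\,,\] which vanishes if and only if \(b=0\) and \(a=\frac{1}{2}\).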
### Quantum Graphs Since \(\mathsf{D}_{\Pi,\Theta}\) is a generic self-adjoint extension of \(S=\mathsf{D}|C^{\infty}_{comp}(\mathbb{R}\setminus Y;\mathbb{C}^{2})=(\mathsf{D}|C^{\infty}_{comp}(I_{0};\mathbb{C}^{2}))\oplus\cdots\oplus(\mathsf{D}|C^{\infty}_{comp}(I_{n};\mathbb{C}^{2}))\), the nonlocal extensions of \(S\) provide the self-adjoint realizations of the Dirac operator on a quantum graph with the two ends \(\overline{I}_{0}=(-\infty,y_{1}]\) and \(\overline{I}_{n}=[y_{n},+\infty)\) and the \((n-1)\) edges \(\overline{I}_{1}=[y_{1},y_{2}],\ldots,\overline{I}_{n-1}=[y_{n-1},y_{n}]\); the boundary conditions corresponding to the couple \((\Pi,\Theta)\) specify the connectivity of the graph. The case of a compact graph can be obtained by imposing separating boundary conditions at the two ends. Likewise, the nonlocal extensions of \(\mathsf{H}\mathbb{1}|C^{\infty}_{comp}(\mathbb{R}\setminus Y;\mathbb{C}^{2})\) provide self-adjoint realizations of the Schrödinger operator on a quantum graph with two ends and \((n-1)\) edges. For an introduction to the theory of quantum graphs we refer to the book [9] and the many references there; however, let us point out that our way of building the self-adjoint realizations on the graph is not the standard one. As an explicit example, let us consider the Dirac operator on the eye graph (see [13, Section III.D]). Therefore, we choose the subclass of boundary conditions for the Dirac operator in \(L^{2}(G;\mathbb{C}^{2})\), \(G=(-\infty,y_{1}]\sqcup[y_{1},y_{2}]\sqcup[y_{2},y_{3}]\sqcup[y_{3},+\infty)\), connecting \(\Psi(y_{1}^{-})\) with both \(\Psi(y_{1}^{+})\) and \(\Psi(y_{2}^{+})\) and connecting \(\Psi(y_{3}^{+})\) with both \(\Psi(y_{2}^{-})\) and \(\Psi(y_{3}^{-})\). Boundary conditions of this kind give to \(G\) the topology of a circle with two ends. [Figure: the eye graph, with vertices at \(y_{1}\), \(y_{2}\), \(y_{3}\).] The corresponding Dirac operator \(\mathsf{D}_{\mathrm{K}}\) is fixed by the conditions (4.17)–(4.20), which specify Kirchhoff-type boundary conditions at the two vertices.
Then, by Remark 3.4 and by (4.17), the Schrödinger operator \(\widehat{\mathsf{H}}_{\widehat{\Pi},\widehat{\Theta}}\) satisfies both the boundary conditions \(\mathrm{K}\) and \(\mathrm{K}_{*}\), where \[\mathrm{K}\equiv\begin{cases}\psi_{1}(y_{1}^{-})=\psi_{1}(y_{1}^{+})=\psi_{1}(y_{2}^{+})\\ \psi_{1}^{\prime}(y_{1}^{-})-\psi_{1}^{\prime}(y_{1}^{+})-\psi_{1}^{\prime}(y_{2}^{+})=0\\ \psi_{1}(y_{2}^{-})=\psi_{1}(y_{3}^{-})=\psi_{1}(y_{3}^{+})\\ \psi_{1}^{\prime}(y_{2}^{-})+\psi_{1}^{\prime}(y_{3}^{-})-\psi_{1}^{\prime}(y_{3}^{+})=0\end{cases}\qquad\mathrm{K}_{*}\equiv\begin{cases}\psi_{2}^{\prime}(y_{1}^{-})=\psi_{2}^{\prime}(y_{1}^{+})=\psi_{2}^{\prime}(y_{2}^{+})\\ \psi_{2}(y_{1}^{-})-\psi_{2}(y_{1}^{+})-\psi_{2}(y_{2}^{+})=0\\ \psi_{2}^{\prime}(y_{2}^{-})=\psi_{2}^{\prime}(y_{3}^{-})=\psi_{2}^{\prime}(y_{3}^{+})\\ \psi_{2}(y_{2}^{-})+\psi_{2}(y_{3}^{-})-\psi_{2}(y_{3}^{+})=0\,.\end{cases} \tag{4.21}\] This gives \[(\mathsf{D}_{\mathrm{K}})^{2}=\left(\mathsf{H}_{\mathrm{K}}+\frac{1}{4}\right)\oplus\left(\mathsf{H}_{\mathrm{K}_{*}}+\frac{1}{4}\right)\,, \tag{4.22}\] where \(\mathsf{H}_{\mathrm{K}}\) is the Schrödinger operator in \(L^{2}(G)\) with the boundary conditions \(\mathrm{K}\) and \(\mathsf{H}_{\mathrm{K}_{*}}\) is the Schrödinger operator in \(L^{2}(G)\) with the boundary conditions \(\mathrm{K}_{*}\). The boundary conditions \(\mathrm{K}\) coincide with the usual Kirchhoff ones (see [9, eq. (1.4.4)]) while the boundary conditions \(\mathrm{K}_{*}\) are a sort of reversed Kirchhoff ones (named "homogeneous \(\delta^{\prime}\) vertex conditions" in [11]) given by the exchange \(\psi\leftrightarrow\psi^{\prime}\). The boundary conditions \(\mathrm{K}_{*}\), like the \(\mathrm{K}\) ones, give, in the case of the real line, the free Schrödinger operator; thus, (4.22) is consistent with (1.1). Furthermore, the Schrödinger operator \(\mathsf{H}_{\mathrm{K}}\oplus\mathsf{H}_{\mathrm{K}_{*}}\) appears in the nonrelativistic limit of \(\mathsf{D}_{\mathrm{K}}\), see [11, Proposition 1.3]. The arguments in the previous example extend to any graph: by Remark 3.4, to the Kirchhoff-type boundary conditions for the Dirac operator \(\mathsf{D}_{\mathrm{K}}\) on the graph, i.e., to \[\begin{cases}\psi_{1}\text{ continuous at any vertex }v\\ \sum_{v}^{\pm}\psi_{2}(v)=0\text{ for any vertex }v,\end{cases}\] correspond, for the Schrödinger operators \(\mathsf{H}_{\mathrm{K}}\) and \(\mathsf{H}_{\mathrm{K}_{*}}\) such that (4.22) holds, the boundary conditions \[\mathrm{K}\equiv\begin{cases}\psi\text{ continuous at any vertex }v\\ \sum_{v}^{\pm}\psi^{\prime}(v)=0\,\text{ for any vertex }v\end{cases}\qquad\mathrm{K}_{*}\equiv\begin{cases}\psi^{\prime}\text{ continuous at any vertex }v\\ \sum_{v}^{\pm}\psi(v)=0\,\text{ for any vertex }v.\end{cases}\] Here, \(\sum_{v}^{\pm}f(v)\) means the sum over all the points \(y_{k}\in Y\) corresponding to the vertex \(v\) with the sign convention \[f(v):=\begin{cases}-f(y_{k}^{+})&y_{k}\text{ is at the left end of the interval/half-line}\\ +f(y_{k}^{-})&y_{k}\text{ is at the right end of the interval/half-line.}\end{cases}\]
2310.05695
Hierarchical Reinforcement Learning for Temporal Pattern Prediction
In this work, we explore the use of hierarchical reinforcement learning (HRL) for the task of temporal sequence prediction. Using a combination of deep learning and HRL, we develop a stock agent to predict temporal price sequences from historical stock price data and a vehicle agent to predict steering angles from first person, dash cam images. Our results in both domains indicate that a type of HRL, called feudal reinforcement learning, provides significant improvements to training speed and stability and prediction accuracy over standard RL. A key component to this success is the multi-resolution structure that introduces both temporal and spatial abstraction into the network hierarchy.
Faith Johnson, Kristin Dana
2023-10-09T13:15:57Z
http://arxiv.org/abs/2310.05695v1
# Hierarchical Reinforcement Learning for Temporal Pattern Prediction ###### Abstract In this work, we explore the use of hierarchical reinforcement learning (HRL) for the task of temporal sequence prediction. Using a combination of deep learning and HRL, we develop a stock agent to predict temporal price sequences from historical stock price data and a vehicle agent to predict steering angles from first person, dash cam images. Our results in both domains indicate that a type of HRL, called feudal reinforcement learning, provides significant improvements to training speed and stability and prediction accuracy over standard RL. A key component to this success is the multi-resolution structure that introduces both temporal and spatial abstraction into the network hierarchy. ## 1 Introduction Reinforcement learning (RL) has made major strides over the past decade, from learning to play Atari games [8] to mastering chess and Go [10]. However, RL algorithms tend to work in a specific, controlled environment and are often difficult to train. In response to this brittleness, hierarchical reinforcement learning (HRL) is growing in popularity. We combine deep learning and HRL for temporal sequence prediction in two application domains where publicly available data is abundant. First, we develop stock agents to execute trades in a market environment. The training data consists of historical stock prices from 1995 to 2018. Second, we develop a vehicle agent to predict steering angles given visual input. We use the Udacity dataset[11] as training data, which consists of five videos with a total duration of 1694 seconds. In HRL, a manager network operates at a lower temporal resolution and produces goals that it passes to the worker network. The worker network uses this goal to produce a policy over micro-actions at a higher temporal resolution than the manager[12]. Within the stock market, there are two natural hierarchies. The first hierarchy involves temporal scales. A trader can consider how a stock's price fluctuates over the course of an hour, but also over the course of a week, month, or year. The second hierarchy is the separation of the different market sectors, each containing a multitude of stocks. In order to trade in the market effectively, stock brokers must consider both the relationships between sectors and the relationships between the stocks in each sector. In the same vein, the task of autonomous navigation is complicated because, at all times, human drivers have two levels of things they are paying attention to. The first level is on a fine grain: don't immediately crash the vehicle by hitting obstacles. The second level is on a coarser grain: plan actions a few steps ahead to keep the vehicle going in the correct direction as efficiently as possible. In both domains, financial and vehicular, we implement agents with both RL and HRL. We show that HRL provides improved training stability and prediction performance. ## 2 Methods ### LSTM Stock Predictions First, we set up a baseline for future result comparison using a simple LSTM network. Using stock market data gathered from Kaggle[7], we predict the closing price for a single day given the open price of that day. We build a simple LSTM model in Keras[2] with ten neurons and a ReLU activation function followed by a fully connected layer. This network is trained for ten epochs using the mean squared error loss function and the Adam optimizer.
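For concreteness, a minimal sketch of this baseline is given below. The data arrays and shapes are illustrative placeholders, since only the layer sizes, loss, optimizer, and epoch count are specified above.

```python
# Minimal sketch of the baseline: one 10-unit LSTM with ReLU, a dense output
# layer, MSE loss, Adam optimizer, ten epochs. Data shapes are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.LSTM(10, activation="relu", input_shape=(1, 1)),  # one open price per sample
    layers.Dense(1),                                         # predicted closing price
])
model.compile(optimizer="adam", loss="mse")

# Placeholder arrays standing in for the Kaggle open/close columns.
opens = np.random.rand(256, 1, 1).astype("float32")
closes = (opens[:, 0, :] + 0.01 * np.random.randn(256, 1)).astype("float32")
model.fit(opens, closes, epochs=10, verbose=0)
```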
Next, we predict a sequence of open prices for a particular stock given a sequence of previous open prices. For this, we use a slightly larger LSTM network with three layers of LSTMs, each with ten neurons and ReLU activation functions followed by a fully connected layer. The loss function and optimizer for this experiment are also mean squared error and Adam respectively. However, this network is trained for twenty epochs. We conduct this experiment with several sequence pairings: 1 previous price to predict the next price, 3 previous prices to predict the next 3 prices, and 5 previous prices to predict the next 5 prices. Finally, we implement a reinforcement learning stock agent to predict open stock prices by learning a multiplier to transform the previous open price to the next open price. It takes a sequence of historical open prices for a single stock as input and passes them through several layers of LSTMs with ReLU activation functions and a fully connected layer. The new price is computed by multiplying the current open price by the output to produce the predicted open price for the next day. ### Stock Environment After the multiplier agent, we rethink our approach to stock price prediction by moving away from predicting the price itself, and thus away from computing the answer to a regression problem. The next set of experiments involves executing trades in a stock market environment with the goal of doubling the value in a given portfolio. The rationale behind this change is that reinforcement learning excels at learning a policy over a given set of actions. In the regression problem of learning a multiplier, the action space is essentially infinite. In addition, the state space is infinite because it consists of all possible stock prices. Learning a policy over an infinite set of possibilities is almost impossible. To create our stock environment for these new reinforcement learning agents, we use Quandl[9] to collect market information for a small subset of stocks from six sectors. The six stock sectors are technology, energy, finance, healthcare, utilities, and transportation. The goal is to have a diverse selection of stocks and sectors from the market in order for the agent to be able to glean the relationships between the sectors along with the relationship between stocks within each sector. We define the action space to consist of three actions: buy, sell, and hold. All agents buy or sell only one share per action, unless otherwise stated. If an agent does not have enough money to execute the buy action, it is forced to hold. The same is true if it does not have enough shares to execute the sell action. The environment keeps track of an agent's current balance of cash and portfolio value, and gives the agent a reward for its actions, which is generally the change in the agent's total portfolio value. ### Hard Coded Stock Agent The baseline for performance in the stock environment comes from a hard coded stock agent whose aim is to double the value of a given portfolio. First, it defines two thresholds, a selling threshold and a buying threshold. When two consecutive open prices differ by less than the selling threshold, the agent decides to sell shares in the stock. When the two prices differ by more than the buying threshold, the agent decides to buy a share in the stock. The idea is that the buying threshold is positive and the selling threshold is negative so that the agent will sell when the price goes down and buy when the price goes up. When the price difference lies between the two thresholds, the agent takes the hold action.
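A minimal sketch of this hard coded baseline follows; the concrete threshold values are illustrative assumptions, as the exact numbers are not stated above.

```python
# Hard coded baseline: buy on a rise above the buying threshold, sell on a
# drop below the selling threshold, hold in between. Thresholds are assumed.
BUY_THRESHOLD = 0.5    # positive: buy when the price rises by more than this
SELL_THRESHOLD = -0.5  # negative: sell when the price falls by more than this

def hard_coded_action(prev_open: float, curr_open: float) -> str:
    diff = curr_open - prev_open
    if diff > BUY_THRESHOLD:
        return "buy"
    if diff < SELL_THRESHOLD:
        return "sell"
    return "hold"  # price difference lies between the two thresholds
```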
### Reinforcement Learning Stock Agents #### 2.4.1 Q Learning Agent For the Q learning agent, the state space is limited to the combinations of whether or not the price of a stock has gone up or down and whether or not the agent currently possesses shares in said stock. In q learning, the agent keeps track of the policy over actions and states using a q table. The q table is indexed by the actions and the states, as in Table 1, and is initially filled with zeros. As an agent takes actions in the environment, it receives a reward that it uses to update the table to reflect the utility of each action given a certain state. An agent decides which action to take by referencing the portion of this q table corresponding to its current state and choosing the action associated with the highest q value. #### 2.4.2 Deep Q Network (DQN) Agent This stock agent builds upon the same ideas as the previous q learning agent, but it chooses its actions differently. Instead of using the reward from the environment to update the q table, it uses a deep q network, or DQN, to approximate the q value of a certain action given a state. This network is made up of three LSTMs with ReLU activation functions in sequence, followed by a fully connected layer. The first LSTM has 32 hidden units, and the last two have 64 units each. In the case of this network, the state is the previous three open prices for the stocks in each of the sectors. However, the action space remains the same. With the q learning agent, having an infinite state space would not be ideal for optimal policy convergence, but the DQN is still able to converge on a solution. The reward from the environment plays a role in the loss back-propagation of the network. For each action taken by the agent, a tuple containing the initial state, \(s_{0}\), the final state, \(s\), the action taken, \(a\), and the corresponding reward, \(r\), is saved in a replay buffer. During training, a random tuple is sampled from this buffer, and the loss, which in this case is the reward, is back-propagated through the network as in [8]. \begin{table} \begin{tabular}{|c||c|c|c|} \hline & Buy & Hold & Sell \\ \hline \hline Price Increases \& Have Shares & & & \\ \hline Price Increases \& Have No Shares & & & \\ \hline Price Decreases \& Have Shares & & & \\ \hline Price Decreases \& Have No Shares & & & \\ \hline \end{tabular} \end{table} Table 1: Example q table for the q learning stock agent. It is indexed by the three actions (buy, hold, sell) and the combination of price fluctuation and share possession.
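The tabular update behind the q learning stock agent can be sketched as follows; the learning rate, discount factor, and exploration rate are illustrative assumptions.

```python
# Tabular q learning for the stock agent: 4 states (price up/down x shares
# held or not, as in Table 1) and 3 actions (buy, hold, sell).
import numpy as np

N_STATES, N_ACTIONS = 4, 3
q_table = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1  # assumed hyperparameters

def choose_action(state: int) -> int:
    if np.random.rand() < eps:              # occasional exploration
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(q_table[state]))   # otherwise act greedily

def update(state: int, action: int, reward: float, next_state: int) -> None:
    # Move the stored q value toward the bootstrapped target.
    target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (target - q_table[state, action])
```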
### Feudal Reinforcement Learning In feudal reinforcement learning, the manager network operates at a lower temporal resolution than the worker network. It receives state input from the environment and communicates with the worker network through a goal vector. This goal vector encapsulates a temporally extended action that the manager thinks will receive the highest reward from the environment. The worker executes atomic actions in the environment based on this goal vector and its own state information. This process of manager/worker communication through temporal abstraction helps to break down a problem into more easily digestible pieces. To explain the concept of temporal abstraction further, take the case of an agent attempting to leave a room through a door. When a person thinks of completing this action, they don't do it at the low level of straight, straight, left, straight, right, etc. In other words, they do not consciously think of each atomic action required to exit the room. Instead, they think in terms of temporal abstraction. Find the door. Approach it. Pass through it. Each of those actions encapsulates multiple atomic actions that need to be executed in a specific order for the agent to complete the task. For a feudal network to solve the room example, the manager would create goal vectors for the "find the door", "approach it", and "pass through it" operations. Then, the worker would only have to focus on executing atomic actions to complete one of these smaller tasks at a time, which is much simpler than the original task of exiting the room as a whole. This makes it easier to generate an ideal policy. Additionally, the idea of temporal abstraction can be applied to space. Incorporating different spatial resolutions into a feudal network can break down problems into spatial abstractions which make them easier to solve in the same way. ### Maze Environment To test the performance of feudal reinforcement learning, we use a maze environment as proposed in Dayan et al.[3]. In this environment, there are multiple levels of the same maze, each at a lower spatial resolution than the previous level. The agent at the highest spatial resolution is the worker, who receives goal vectors from the agent at the next lowest resolution, who is its manager. This manager becomes the worker for the agent at the next lowest resolution, and so on, until you reach the level with the lowest spatial resolution, where the ultimate manager resides. For example, the worker on the level with the highest spatial resolution will operate in a 16x16 grid and have a manager who operates in an 8x8 grid. This manager will be the worker for the agent in the 4x4 grid, etc., until you get to the final manager in the 1x1 grid. We create this maze by editing the gym-maze[1] GitHub repository code. In our maze, there are only two levels. The worker operates in a 4x4 version of the maze, and the manager operates in a 2x2 version, as in Figure 1. Each square in the manager's 2x2 rendition of the maze corresponds to a 2x2 section of the worker's maze. We omit the 1x1 manager from the Dayan et al. experiment because it is computationally irrelevant for this task. The state space of each agent comprises the squares of the grid at its respective maze resolution. The objective of the agents is to reach some goal square in the maze. This goal is mapped to the same equivalent location in all maze levels and can be specified at run-time. The action space comprises moving north/south/east/west or declaring that the goal is the current space. There is a base reward of \[\frac{-0.1}{X_{dim}\cdot Y_{dim}}\] applied to every movement for all agents, where \(X_{dim}\) and \(Y_{dim}\) are the x and y dimensions of the maze. However, if an agent finds the goal, it receives a reward of 1.
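The reward schedule above translates directly into code; a minimal sketch:

```python
# Base step penalty scaled by the maze size, and a reward of 1 at the goal.
def maze_reward(x_dim: int, y_dim: int, found_goal: bool) -> float:
    if found_goal:
        return 1.0
    return -0.1 / (x_dim * y_dim)

# In the worker's 4x4 maze every non-goal move costs -0.1/16 = -0.00625,
# while in the manager's 2x2 maze it costs -0.1/4 = -0.025.
assert maze_reward(4, 4, False) == -0.1 / 16
assert maze_reward(2, 2, True) == 1.0
```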
### Maze Agents We test several agents in our maze environment, starting with a reinforcement learning model as a baseline, before moving to a feudal reinforcement learning implementation. #### 2.7.1 Q Learning Agent The first agent we built for the maze environment uses q learning to navigate a single, 4x4 level of the maze in search of a goal square. When it reaches this goal, the experiment ends. The q table of the agent is the same size as the maze with each square in the maze corresponding to one entry in the table. The values in this table are updated based on the reward received from the environment. #### 2.7.2 Feudal Q Learning Agent The feudal network solves the maze using q learning as well. The manager network receives its location in the maze as its current state. It uses this to choose and execute the best action from its q table. The manager is able to move in any of the four directions, and this direction is the basis for the goal vector that tells the worker what quadrant to move to. If the manager declares that the goal is in its current space, it is telling the worker that it should look for its own goal in a specific quadrant of the maze. When the worker receives a goal vector from the manager, it moves to the specified quadrant and waits for more instructions. If it is indicated that the worker should look for the goal in a quadrant, it continues to move until the goal is found. Both the manager and the worker receive a base negative reward for every action that doesn't result in finding the goal. In this way, the spaces furthest away from the goal will have a more negative q value than those closest to the goal. In addition, the manager receives a negative reward if the worker finds the goal space without following the instructions from the goal vector. While the individual reward values resulting from exploring the maze may be the same, the manager and worker do not receive the same reward signals. The worker takes many more steps in the environment due to the difference in spatial resolution, so it will receive a reward more often than the manager. In this way, the spatial abstraction of the maze results in a temporal abstraction of the reward signals. Figure 1: (a) Manager's 2x2 view of the maze. (b) Worker's corresponding 4x4 view of the same maze. ### Feudal Reinforcement Learning Stock Agents Once we discovered the performance improvements of feudal reinforcement learning, we decided to return to the stock market portfolio experiments with feudal reinforcement learning agents. #### 2.8.1 Feudal Q Learning Agent Our feudal q learning stock agent operates in the same environment as the previous reinforcement learning agents and has the same goal of doubling the value of some portfolio. Its input is a sequence of open prices for each stock in the six predetermined sectors. The q table structure and state space are also the same. The main difference is the division of labor between the manager and worker networks. The manager receives the price input from the environment and determines whether or not each of the six sectors should be traded. This decision is passed to the worker in the goal vector. The worker then decides whether to buy or sell the stocks in the sectors specified by the manager. For each goal vector from the manager, the worker acts a fixed number of times to introduce temporal abstraction to the problem in addition to the spatial abstraction already present. The reward of the manager is the overall portfolio value change, while the worker receives a reward for the portfolio value change of each sector after each action it executes.
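A minimal sketch of this manager/worker division of labor is given below; the goal-vector encoding, the number of worker steps per goal, and the helpers `worker_policy` and `env.step` are illustrative assumptions rather than the exact implementation.

```python
# Feudal split for the stock task: the manager flags sectors to trade, the
# worker then takes a fixed number of buy/sell/hold actions per goal vector.
import numpy as np

N_SECTORS = 6
WORKER_STEPS_PER_GOAL = 5  # assumed; the text only says "a fixed number"

def manager_goal(manager_q: np.ndarray, state: int) -> np.ndarray:
    # One trade/ignore flag per sector, read off the manager's q values.
    return (manager_q[state] > 0).astype(int)

def worker_rollout(goal: np.ndarray, worker_policy, env) -> float:
    total = 0.0
    for _ in range(WORKER_STEPS_PER_GOAL):
        for sector in np.flatnonzero(goal):      # only the flagged sectors
            action = worker_policy(sector)       # "buy", "sell", or "hold"
            total += env.step(sector, action)    # per-sector value change
    return total  # the manager is rewarded with the overall portfolio change
```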
#### 2.8.2 Feudal Networks with Multiple Workers The feudal reinforcement learning problem can be extended to a vertical hierarchy with multiple managers and workers in sequence, as we've already explored, but this concept can also be extended horizontally to one manager with multiple workers. To this end, we have implemented two different experiments, both using q learning. The first involved a manager network with a set of three different worker networks, where each worker makes a different number of transactions in the environment. The manager's action is choosing which of the workers will act in the environment at a given time. The second involved a manager network with a set of three workers, where each of these workers has a different hard coded behavior. The first buys when a stock's price increases and sells when it decreases, the second sells when a stock's price increases and buys when it decreases, and the third executes a random action. The manager's action set consists of choosing which worker will interact with the environment. ### Driving Environment We also test feudal reinforcement learning in the domain of autonomous vehicles. For that, we use the Udacity driving dataset[11]. They provide steering angles, first-person dash cam images, braking, and throttle pressure data. We augment this dataset to increase its size and improve model training by performing several transformations on the image and angle data. First, we implement a horizontal flip to effectively double the size of the dataset. For this change, we negate the angles associated with the flipped images. As an additional option, we use the horizontal and vertical optical flow images. The horizontal optical flow image, \(i_{x}\), is obtained by convolving the image with the row vector \([1,0,-1]\), while the vertical optical flow image is obtained by convolving the image with the column vector \[\begin{bmatrix}1\\ 0\\ -1\end{bmatrix}\,.\] Finally, all images are scaled and normalized so that their pixel values lie in the range \([-1,1]\). ### Steering Angle Experiments #### 2.10.1 Steering Angle Prediction We started simple, so our first task was to predict steering angles based on visual input. After some initial difficulty with our model, we found a network[4] from the Udacity challenge that accurately predicts steering angles. It has a convolutional layer with a ReLU activation function followed by a dropout layer. The output of this is saved to use for a skip connection later on in the network. This is repeated four times before the output is fed through some fully connected layers, also with ReLU activation functions. At this point, the output and the intermediary representations are added together, passed through an ELU layer, and normalized. Then, the previous steering angle and the output of the ELU layer are passed through an LSTM. Finally, the output of the LSTM is passed through a fully connected layer to produce the steering angle. Note that this network takes in a sequence of images as well as the previous angle in order to make its predictions. #### 2.10.2 Subroutine ID Prediction Being able to predict steering angles is useful, but for feudal reinforcement learning we also need to classify the steering angles into their temporally abstracted categories (such as go right, go left, go straight). This can be done by hand, but it would be a lengthy process. Instead, we take inspiration from Kumar et al.[5] to learn these subroutines, otherwise called options or macro-actions, using a neural network. To do this, we jointly train two networks. The first takes in a sequence of angles and predicts the subroutine ID. The second takes in the subroutine ID, a sequence of images, and the previously predicted angle and predicts the next steering angle in the sequence.
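A minimal sketch of this jointly trained pair is shown below; all layer sizes and input shapes are illustrative assumptions, and the two networks are wired into a single trainable Keras graph so they can be optimized together.

```python
# Joint training sketch: net 1 maps an angle sequence to a subroutine ID;
# net 2 maps (subroutine ID, image cube, previous predicted angle) to the
# next steering angle. Trained end to end with an MSE loss on the angle.
from tensorflow import keras
from tensorflow.keras import layers

angle_seq = keras.Input(shape=(10, 1))                 # preceding steering angles
sub_id = layers.Dense(1, activation="tanh")(layers.LSTM(32)(angle_seq))

images = keras.Input(shape=(10, 64, 64, 3))            # image cube (assumed size)
prev_angle = keras.Input(shape=(1,))
feats = layers.Conv3D(8, 3, activation="relu")(images)
feats = layers.GlobalAveragePooling3D()(feats)
merged = layers.Concatenate()([feats, sub_id, prev_angle])
next_angle = layers.Dense(1)(layers.Dense(64, activation="relu")(merged))

model = keras.Model([angle_seq, images, prev_angle], next_angle)
model.compile(optimizer="adam", loss="mse")
```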
A problem we encountered with the steering angle prediction from the previous section is that it appears as if the network is simply predicting that the previous angle will be the next steering angle. To circumvent this, we give the second network the previously predicted angle instead of the ground truth angle. Additionally, during training, the sequence of angles fed into the first network contains the angle it is trying to predict. However, during testing, we only use a sequence of angles preceding the angle we aim to predict in order to avoid this conflict. #### 2.10.3 t-SNE Prediction Ideally, we want an angle prediction network that does not take in the previous steering angle at all. To accomplish this, we explored using t-SNE[6] as an embedding space for our driving data and as the subroutine IDs themselves. To do this, we arranged the steering angle, braking, and throttle pressure data into vectors of length ten. Then, the vectors from each category that correspond to the same time steps are concatenated together to make vectors of length thirty. The collection of these vectors is passed through the unsupervised t-SNE algorithm to create a coordinate space for the driving data. Each vector of length thirty is given one x and y coordinate pair, as illustrated in Figure 2. The greater collection of all of the generated points is in Figure 3. The coloring of the points in this figure is hard coded. The points corresponding to vectors with primarily negative steering angles are in blue. The points corresponding to vectors with positive steering angles are in green. The orange points correspond to vectors with steering angles that are relatively close to zero. Figure 2: Steering, braking, and throttle data are concatenated every m time steps to make a vector of length 3m. Each 3m vector corresponds to one set of coordinates in the 2D t-SNE space. The t-SNE coordinates act like a manager for the steering angle prediction and operate at a lower temporal scale. In our experiments, m=10. \(t\) and \(\tau\) correspond to the final time step for the driving data and t-SNE coordinates respectively. Figure 3: Total plot of the t-SNE coordinates for the Udacity data. The colors correspond to the average sign of the angles in each length 3m vector used to generate the points. Figure 4: K-Means clustering (k=20) of the t-SNE coordinates of the Udacity data with the centroids pictured in red. Not only do distinct clusters form in the data, but each cluster corresponds to a unique action of the vehicle. Once we have the t-SNE embedding of the data, we use K-Means clustering on the coordinates and take the centroids of the clusters as our new subroutine IDs, as shown in Figure 4. We vary k from ten to twenty to determine if different numbers of clusters improve prediction performance. Then, we train a network to take in the centroids as the subroutine ID, as well as a sequence of images, in order to predict the next steering angle. In order to ensure that no data pertaining to the predicted steering angle is used as input to this network, we use the t-SNE centroid corresponding to the previous 3m steering, braking, and throttle values as input to the network.
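A minimal sketch of this embedding pipeline, assuming placeholder signals and the scikit-learn implementations of t-SNE and k-means:

```python
# Stack m=10 steering/brake/throttle values into length-30 vectors, embed
# them with t-SNE, then cluster the 2D coordinates with k-means and use the
# cluster centroids as subroutine IDs.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

m, T = 10, 1000
steer, brake, throttle = np.random.rand(3, T)          # placeholder signals
chunks = np.stack([np.concatenate([steer[i:i + m], brake[i:i + m], throttle[i:i + m]])
                   for i in range(0, T - m + 1, m)])   # length-30 vectors
coords = TSNE(n_components=2).fit_transform(chunks)    # one (x, y) per vector

kmeans = KMeans(n_clusters=20).fit(coords)             # k varied from 10 to 20
subroutine_ids = kmeans.cluster_centers_[kmeans.labels_]  # centroid per chunk
```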
To illustrate, refer back to Figure 2. If we are predicting an angle from the range \(t\in[2m,3m]\), then the t-SNE centroid used for the subroutine ID input to the angle prediction network will be the centroid at \(\tau=2\), which was made with the steering, braking, and throttle data from \(t\in[m,2m]\). In this way, the angle we are attempting to predict will not be used to compute the t-SNE centroid used as the subroutine ID. This shift also incorporates an extra level of temporal abstraction into our network. Additionally, we create a tool that displays the visual data corresponding to the different t-SNE coordinates, allowing the user to visually inspect that neighboring points in the embedding space correspond to similar driving behaviors. Figure 5 attempts to replicate this by showing example training images that correspond to some of the t-SNE centroids. Notice that the bottom right of the figure contains sharp right turns. Moving diagonally upwards, the right turns get less sharp until the vehicle begins to go straight. Then, this straight motion gradually begins to become a left turn until, by the top left of the figure, the vehicle is making sharp left turns. Figure 5: Example training images are shown with their corresponding t-SNE centroids. Notice that the bottom right of the figure contains sharp right turns. As you move upwards, the right turn gets less sharp until the vehicle begins to go straight. By the top left of the figure, the vehicle is making sharp left turns. ## 3 Results ### LSTM Experiments Our first experiment used LSTMs to predict open prices of stocks. We varied the input and output window sizes from one to fifteen and compared the results. A subset of the prediction graphs is available in Figure 8 for predicting two, four, ten, and twelve prices out. Also included in each graph is a line representing the average of the last two, four, ten, or twelve prices as a comparison to the prediction. The prediction with a window of two works extremely well, as evidenced by the fact that the three lines (the real, prediction, and average price) are almost directly on top of each other. However, as the window length increases, there is a clear divergence of the prediction from the real price. There is a trade-off between accuracy and the length of the prediction, which is expected because data farther out in time will have less of a dependence on the input to the LSTM. Also, the predictions become much noisier, which is to be expected for the same reason. We also tested LSTM prediction on smoothed data. After training the LSTM on this smoothed data, we tested the model on new smoothed and non-smoothed data and compared the loss values, computed through mean squared error, in Figure 6. The blue line is the loss associated with the smoothed data and the orange line corresponds to the regular data. The loss value is on the y-axis, and the x-axis shows how much smoothing was applied to the training data. We smoothed using a moving average filter, so the x-axis points represent how many data points were used in the average. The graph shows that feeding smoothed data into an LSTM increases the accuracy of its predictions and leads to a quicker decay of the loss. The results for our first stock agent, which learns a multiplier to predict the next open price based on the previous stock price, can be found in Figure 7 (a). The blue line represents the real stock open price, and the green line is the prediction. The y-axis is the price in dollars, and the x-axis is the time step.
To get a better idea of exactly how accurate the predictions are, Figure 7 (b) also shows the difference between the real open prices and the predicted prices for each time step. Most of the predictions differ by less than a dollar, which is an order of magnitude less than the prices themselves. The largest differences between the real and predicted prices occur during drastic changes to the stock price. We then compare these predictions with those from the LSTM. In general, the reinforcement agent predictions, as pictured in Figure 9 (a), are much more accurate than the LSTM predictions, as pictured in Figure 9 (b). It seems that LSTMs do a decent job at predicting smaller, local changes, but their performance falls short when a large change in price occurs. The reinforcement learning agent is more robust and better able to handle these changes. Additionally, the LSTM predictions are much more noisy. Figure 6: LSTM loss comparison for predictions on regular data and smoothed data. The more smoothed the data, the more quickly the loss decays. Figure 7: Results for the multiplier stock agent. (a) shows that the predictions match very closely, and (b) shows that the areas where the predicted and real price differ the most occur during drastic price changes. We also compare the LSTM and multiplication stock agents' abilities to predict open prices for multiple stocks at once in Figure 10. Once again, we see that the predictions of the reinforcement learning stock agent are much better than those of the LSTM. For this experiment, we used stocks from the same sector in order to increase the likelihood that there would be correlations in the stocks' behavior. Reinforcement learning is better able to find and exploit these relationships to help make predictions than the LSTM. With this in mind, we shift our focus towards reinforcement learning and other techniques that can be derived from it. ### Maze Experiments Moving to the maze experiments, we compare the relative performance of reinforcement learning and feudal reinforcement learning. We had three different agents navigate a maze until they reached some goal space. For this experiment, we used a fixed maze with a fixed goal space, but it is possible to randomize both the maze structure and the goal location at run time. The standard q learning agent's performance is documented in Figure 11 and Figure 12 in blue. The next two agents are different variations of feudal q learning networks. In the first, the goal vector the worker receives from the manager tells it which direction to take in the maze. The results for this agent are in red in Figures 11 and 12. In the second feudal q learning agent, the goal vector received by the worker tells it which quadrant to go to in the maze. This is different from the previous agent because the worker is not explicitly told which direction to take to reach this goal quadrant. This agent's results are pictured in Figures 11 and 12 in green. Figure 8: Prediction results for an LSTM when the sizes of the input and output prediction windows are two, four, ten, and twelve respectively. The LSTM predictions match more closely with smaller windows than larger windows and provide less noisy results. However, the larger windows allow for longer term predictions. In Figure 11, we compare the number of time steps per episode for each of the three agents. The q learning agent takes the most time overall, closely followed by the feudal network with a direction as the goal vector.
In Figure 11, we compare the number of time steps per episode for each of the three agents. The q learning agent takes the most time overall, closely followed by the feudal network with a direction as the goal vector. These two methods also solve the maze in approximately the same amount of time once they have found the optimal path through it. However, the feudal network with a quadrant as the goal vector is both significantly faster during training and finds a better solution to the maze, as evidenced by the fact that its solution navigates the maze in less time than those of the other two agents.

Figure 11: Comparison of the time steps required per episode for three agents in the maze environment. Feudal reinforcement learning (green) takes less time to solve the maze overall and explores the maze more efficiently.

Figure 12 shows the reward per episode for each of the three maze agents. The rewards of the three agents converge to the same value by design, but it is clear from the graph that the feudal agent with a quadrant as the goal vector performs best. Its reward reaches the convergence value much faster than that of either of the other two agents. However, both feudal agents show a large dip in reward early in training that is not present for the q learning agent, indicating that the feudal networks do much more of their exploration in the earlier stages of training than the q learning network does.

Figure 12: Comparison of the reward per episode for three agents in the maze environment. The reward for feudal reinforcement learning (green) converges faster than for the other two methods.

### Portfolio Stock Experiments

Having seen the power of feudal reinforcement learning, we revisit the problem of predicting stock prices. However, instead of attempting to solve a regression problem, we shift our focus to learning a policy over actions. We compare the performance of a hard coded agent, a q learning agent, a DQN agent, and a feudal q learning agent at the task of doubling the value of a given portfolio in the stock market in Figure 13. Pay attention to the x-axes in this figure to note the differences between the presented methods. The hard coded agent takes the longest to accomplish this task, as expected, with a duration of 1453 time steps. Also unsurprisingly, the q learning agent has the next longest duration at 802 time steps. The DQN agent doubles its portfolio value in 679 time steps, and the feudal q learning agent achieves this goal in 680 time steps. The main takeaway from this result is that feudal q learning, a relatively simple method, achieves results comparable to those of the DQN, a relatively complicated deep learning method.

Figure 13: Comparison of hard coded, q learning, DQN, and feudal q learning agent performance, respectively, at the task of doubling the value of a portfolio in the stock market. The hard coded agent is the slowest, followed by the q learning agent. The DQN agent and the feudal reinforcement learning agent perform comparably. Note the difference in the x-axis scales.

We wanted to verify that this holds in general, so we repeated the portfolio doubling experiment multiple times for each agent and recorded the durations in the histograms in Figure 14. The y-axis is the number of trials in each bin, and the x-axis is the duration of each trial. Extra attention should be paid to the x-axes of the graphs in Figure 14: the hard coded and q learning agents have an x-axis from 0 to 2500, while the other two agents' x-axes are capped at 1000. The q learning agent's values are skewed more towards zero than the hard coded agent's, so we expect a faster average duration from that agent. The average duration for the hard coded agent was 1521 time steps, and the q learning agent's average duration was 1326 time steps, confirming this expectation.

Figure 14: We run the experiment from Figure 13 multiple times and record the optimal solution durations from each trial in histograms. Notice the difference in time scales between the first two and the last two histograms. The ranking from the previous figure still holds. However, feudal reinforcement learning has cemented itself as the fastest method, as evidenced by the fact that its histogram is skewed further to the left than the DQN's.

In the same way, the feudal q learning agent's histogram is skewed more towards zero than the DQN agent's, so we expect it to be the faster method. The DQN agent took an average of 651 time steps, and the feudal q learning agent took an average of 573 time steps. Therefore, our original result was an understatement: feudal q learning is, on average, considerably faster than a DQN at doubling a portfolio's value in the stock market. A minimal sketch of the doubling task itself follows.
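The trading environment is not described above, so the sketch below is only one plausible reading of the doubling task: a policy buys, holds, or sells at each step, and the episode's duration is the number of steps until the portfolio's value reaches twice its starting value. The all-in buy/sell action set, the hard coded rule, and the synthetic price series are assumptions; the learning agents would be trained and timed in the same loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def doubling_duration(prices, policy, start_cash=1000.0):
    """Run `policy` until the portfolio value (cash plus holdings) doubles;
    return the number of time steps taken. Actions: -1 sell, 0 hold, +1 buy."""
    cash, shares = start_cash, 0
    for t in range(1, len(prices)):
        a = policy(prices[:t], cash, shares)
        if a == 1 and cash >= prices[t]:         # buy as many shares as affordable
            n = int(cash // prices[t])
            cash -= n * prices[t]
            shares += n
        elif a == -1 and shares > 0:             # liquidate the position
            cash += shares * prices[t]
            shares = 0
        if cash + shares * prices[t] >= 2.0 * start_cash:
            return t
    return len(prices)                           # never doubled within the data

def hard_coded(history, cash, shares):
    """Toy baseline rule: buy after the price dips, sell after it rises."""
    if len(history) < 2:
        return 0
    return 1 if history[-1] < history[-2] else -1

# Synthetic random-walk prices with a slight upward drift (illustrative only).
prices = 100.0 + np.cumsum(rng.normal(0.05, 1.0, 50000))
print(doubling_duration(prices, hard_coded))
```

Measured this way, duration is a natural common yardstick for all four agents, since each one simply swaps in a different `policy` function.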
### Steering Angle Experiments

In the stock portfolio experiments, we demonstrated the effectiveness of feudal reinforcement learning. In this section, we explore the boundary of its abilities in the driving domain. Our first experiment involves predicting steering angles from image input. We create an image cube of ten sequential frames and feed it into our modified Udacity challenge network [4], along with the previous steering angle, to predict the next steering angle. Figure 15 shows a subset of real steering angles from the Udacity [11] dataset, in blue, and the corresponding predicted angles, in orange. The predictions follow the real angles very closely except during drastic changes in the steering angle, where the network tends to over- or underestimate the angle, the same issue we encountered in the LSTM stock experiments.

Figure 15: Steering angle predictions on the Udacity dataset from the modified Udacity steering challenge winner network. Note that this network takes the previous angle as input when predicting the next angle, which gives it an unfair advantage.

Our ultimate goal, however, is to use feudal networks to predict steering angles. To do this, we first need to label subroutines within the data in order to have data with which to train the manager network. Instead of doing this by hand, we jointly train two networks: one that takes in a sequence of angles and predicts their subroutine ID, and another that takes in this subroutine ID, an image cube, and the previously predicted angle and predicts the next angle in the sequence. Figure 16 shows these prediction results. The left graph contains the steering angle predictions, with the real angles in blue and the predicted angles in orange. The right graph shows the predicted subroutine IDs: the blue line is the raw prediction values, and the orange line shows the binned values. For the binning, we map each predicted subroutine ID to its closest value in the set \(\{-1,0,1\}\) (sketched below). In this way, we have three discrete subroutine IDs corresponding to left turns, right turns, and going straight.

Figure 16: Angle and subroutine ID prediction results on the Udacity dataset. Notice that the subroutine ID's behavior mimics the real angle behavior.

However, there are two problems with this solution. The first is that there could well be more than three subroutines represented in the driving data. Driving is a complex task that involves a lot of minutiae; for instance, left turns could be split into turning slightly, moderately, or sharply left, and the same could be done for right turns and even for going straight. Constraining the subroutine IDs to three discrete categories, as inspired by [5], may therefore not represent an agent's actions thoroughly enough. The second problem is that, ideally, we want a network that predicts steering angles without explicitly taking in information about the previous angle, because this gives the network an unfair advantage.
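For completeness, the binning step above is just a nearest-value snap. A minimal sketch follows, assuming the convention that \(-1\), \(0\), and \(+1\) denote left turns, going straight, and right turns; which sign corresponds to which direction is not stated above.

```python
import numpy as np

# Assumed convention: -1 = left turn, 0 = straight, +1 = right turn.
SUBROUTINES = np.array([-1.0, 0.0, 1.0])

def bin_subroutine_ids(raw_predictions):
    """Snap each raw network output to the nearest discrete subroutine ID."""
    raw = np.asarray(raw_predictions, dtype=float)
    idx = np.abs(raw[:, None] - SUBROUTINES[None, :]).argmin(axis=1)
    return SUBROUTINES[idx]

print(bin_subroutine_ids([-0.8, 0.1, 0.6, -0.2]))   # [-1.  0.  1.  0.]
```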
To this end, we shift our focus from handcrafting our subroutine ID definitions to using t-SNE to define them automatically. We embed the data into 2D space and use the resulting coordinate pairs as the subroutine IDs. However, before attempting to predict the t-SNE coordinates from image data, we run an experiment to determine whether the t-SNE coordinates will work as subroutine IDs at all. We use the ground truth values of the t-SNE centroids as the subroutine ID in our angle prediction network, along with an image cube of size ten, to determine whether it would be worthwhile to attempt to predict the centroids. If using t-SNE as the subroutine ID produced inaccurate results, we would need to explore other avenues. The results are shown in Figure 17, where the blue lines are the real steering angles and the orange lines are the predicted angles. While the results in this figure are less accurate than our other prediction results, the predictions are more relevant to real-world applications because they are computed using only visual input.

Figure 17: Results of steering angle prediction when the t-SNE coordinates of the input data are used as the subroutine IDs. Notice that, for these results, we use a network that does not take the previous angle as input.

## 4 Discussion

In this work, we show that feudal reinforcement learning is more effective than standard reinforcement learning at the tasks of stock price prediction and steering angle prediction. We originally considered using generative adversarial networks (GANs) for sequence prediction, but our early experiments pointed to the effectiveness of feudal reinforcement learning instead.

With our maze experiments, we find that feudal reinforcement learning trains faster than standard reinforcement learning and reaches the maximum reward more quickly. Both of these effects are due to feudal reinforcement learning's temporal abstraction. Breaking the problem down into more easily digestible pieces narrows the focus of the worker agent and allows the optimal policy to be found more quickly. Temporal abstraction also helps alleviate the problems of long-term credit assignment and sparse reward signals. The lower temporal resolution of the manager shortens the period of time between rewards overall. In addition to the original sparse reward, the worker network also receives a reward for obeying the goals from the manager, as sketched below. This feedback can be much more frequent than the sparse reward, allowing for more consistent network updates.
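In the spirit of feudal networks, that obedience signal can be written as a dense bonus added to the sparse environment reward. The cosine-similarity form and the weighting `alpha` below are assumptions for illustration, not a formula stated above.

```python
import numpy as np

def worker_reward(extrinsic, goal_vec, state_delta, alpha=0.5):
    """Dense worker reward: the sparse environment reward plus an
    'obedience' bonus for moving in the direction of the manager's goal
    vector, measured by cosine similarity."""
    g = np.asarray(goal_vec, dtype=float)
    d = np.asarray(state_delta, dtype=float)
    denom = np.linalg.norm(g) * np.linalg.norm(d)
    obedience = float(g @ d) / denom if denom > 0 else 0.0
    return extrinsic + alpha * obedience

# Example: the manager asks for movement up and to the right; the worker
# moved right, so it earns a partial obedience bonus even though the
# environment reward is zero this step.
print(worker_reward(extrinsic=0.0, goal_vec=[1, 1], state_delta=[1, 0]))  # ~0.354
```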
In this way, we were able to achieve better results with feudal q learning in our stock portfolio experiments than with a DQN. Finally, we find that a t-SNE embedding space can be useful as the goal space for the manager in feudal reinforcement learning, as in our steering angle prediction experiment. We use the centroid corresponding to the steering angle, braking, and throttle data from the previous ten time steps as the subroutine ID in our angle prediction network, and we were able to predict future steering angles without directly using the steering angle from the previous time step. The temporal abstraction inherent in the t-SNE centroid creation mimics the role of the manager network and allows the worker to predict steering angles more accurately than it could on its own. A minimal end-to-end sketch of this centroid pipeline is given below.
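The sketch strings the pieces together: windows of steering, braking, and throttle data are embedded into 2D with t-SNE, the embedding is clustered so that cluster centers can stand in for the t-SNE centroids, and each window's subroutine ID is the centroid of the window before it. The use of k-means for the clustering, the non-overlapping window slicing, and the stand-in telemetry are assumptions; only the embedding, the centroids, and the one-window lag come from the description above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

m = 10                                      # window length (ten time steps)
# Stand-in telemetry: columns are steering, braking, and throttle values.
telemetry = rng.normal(size=(5000, 3))

# 1. Slice the telemetry into non-overlapping windows and flatten each one.
n_win = len(telemetry) // m
windows = telemetry[:n_win * m].reshape(n_win, m * 3)

# 2. Embed the windows into 2D with t-SNE.
coords = TSNE(n_components=2, random_state=0).fit_transform(windows)

# 3. Cluster the embedding; each window's cluster center is its "t-SNE centroid".
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(coords)
centroids = kmeans.cluster_centers_[kmeans.labels_]

# 4. Lag by one window: when predicting angles for window tau, the subroutine
#    ID is the centroid computed from window tau - 1 (the one-window temporal
#    shift described earlier).
subroutine_ids = np.roll(centroids, shift=1, axis=0)
subroutine_ids[0] = centroids[0]            # the first window has no predecessor
```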